I am developing software for an embedded microcontroller (STN1110). It has no FPU or software floating-point (SFP) emulation.

There is an SDK function which returns the uptime of the controller in microseconds, split into two variables: a uint16 (msw) and a uint32 (lsw).

I want to convert this "pair" into a single uint32 with seconds precision (`/ 1000 / 1000`).

The SDK API docs state the following:

```
uint32 TimeGet48 (uint16 * time_msw)
-----------------------------------------
Read the current full 48-bit system time.
Returns all 48 bits of the current value of the system's 1MHz clock. The resolution of the returned 32 bit value is one microsecond, with the most significant word returned through the pointer passed as a parameter. The clock therefore wraps after approximately 8.9 years. That should be enough for most purposes.
Parameters:
- time_msw: A pointer to store the most significant word of the current system time
Returns:
- Lower 32 bits of the current system time.
```

The main problem here is that the controller supports only up to `uint32`, so no `uint64`.

I "glued" together some code that I thought might be the solution. It is not: there are errors as soon as the uint16 value is > 0, and I'm not able to find them.

```
uint32 divide(uint32 time_in_us_lsb, uint16 time_in_us_msb, uint32 divisor)
{
    uint32 value1 = (uint32)time_in_us_msb & 0xFFFF;
    uint32 value2 = (uint32)(time_in_us_lsb >> 16) & 0xFFFF;
    uint32 value3 = (uint32)time_in_us_lsb & 0xFFFF;
    value2 += (value1 % divisor) << 16;
    value3 += (value2 % divisor) << 16;
    return (((value2 / divisor) << 16) & 0xFFFF0000) | (value3 / divisor);
}

uint32 getUptimeSeconds() {
    uint32 time_in_us_lsb;
    uint16 time_in_us_msb;
    time_in_us_lsb = TimeGet48(&time_in_us_msb);
    uint32 result = divide(time_in_us_lsb, time_in_us_msb, 100000);
    time_in_us_lsb = (uint32)result & 0xFFFFFFFF;
    time_in_us_msb = 0;
    return divide(time_in_us_lsb, time_in_us_msb, 10);
}
```

Note: Speed is not an issue here, as the function will not be called often during the application's lifetime.

Edit: Added the `uint32` capability limit.

> .. there are errors as soon as the uint16 value is > 0 and I'm not able to find it.

At least one error, given "uptime of the controller in microseconds":

```
// uint32 result = divide(time_in_ms_lsb, time_in_ms_msb, 100000);
uint32 result = divide(time_in_ms_lsb, time_in_ms_msb, 1000000);
```

Makes more sense to simply use `uint64_t`:

```
uint32 divide(uint32 time_in_ms_lsb, uint16 time_in_ms_msb, uint32 divisor) {
    uint64_t sum = ((uint64_t)time_in_ms_msb << 32) | time_in_ms_lsb;
    return (uint32)(sum / divisor);
}
```
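For reference, a minimal host-side sketch of this 64-bit approach, assuming the SDK's `uint32`/`uint16` map onto the `<stdint.h>` fixed-width types:

```c
#include <assert.h>
#include <stdint.h>

typedef uint32_t uint32;  /* assumption: SDK types match <stdint.h> */
typedef uint16_t uint16;

/* Recombine the 48-bit microsecond count and divide using a 64-bit
   intermediate; the compiler emulates the 64-bit math in software. */
uint32 divide48(uint32 lsw, uint16 msw, uint32 divisor)
{
    uint64_t value = ((uint64_t)msw << 32) | lsw;
    return (uint32)(value / divisor);
}
```

Whether this is viable on the target depends on the toolchain shipping 64-bit division helpers in its runtime library (the libgcc-style `__udivdi3` and friends); `uint64_t` support needs only the compiler, not an FPU.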

Recommend: using `ms` in code for *microseconds* looks like *milliseconds*. Suggest `us` for "μSeconds".

Given the CPU you target, you probably want to optimize this for a fixed divisor, in your case a division by 1000000.

For that you can use a/1000000 = a * (2^64 / 1000000) / 2^64. (2^64 / 1000000) isn't an integer but an approximation is enough. The multiplication can be done as a series of shifts and adds. You can add a bit of error to the (2^64 / 1000000) to get fewer shifts and adds. The result will be slightly off but not much.

Done correctly your result will be no more than 1 off the right answer. If that is important you can multiply the result by 1000000 again, compare and then adjust the result by 1 as needed.

Note: You don't have to use 2^64 in the above approximation. Try 2^32 or 2^48 or other number to see if that gives you fewer shifts and adds.
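A sketch of this reciprocal trick with 2^48 as the scaling factor; the 64-bit intermediate here is only to illustrate the math on a host, since on the target the multiply would be expanded into shifts and adds:

```c
#include <assert.h>
#include <stdint.h>

/* Divide by 1000000 via multiplication by the fixed-point
   reciprocal floor(2^48 / 1000000) = 281474976. */
static uint32_t div1000000(uint32_t a)
{
    const uint64_t recip = (1ULL << 48) / 1000000u;
    uint32_t q = (uint32_t)(((uint64_t)a * recip) >> 48);
    /* The truncated reciprocal can leave q one too small; multiply
       back and adjust, as described above. */
    if ((uint64_t)(q + 1) * 1000000u <= a)
        q++;
    return q;
}
```

For example, `a = 1000000` first yields `q = 0` from the multiply-shift (the truncation error lands exactly on the boundary) and the adjustment step then corrects it to 1.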

PS: Check out what the compiler does for `(a * ((1ULL << 48) / 1000000)) >> 48` (the constant needs a 64-bit type, since `1 << 48` overflows a plain int).

answered on Stack Overflow Apr 4, 2019 by Goswin von Brederlow

Should be the following: each divisor must fit in 16 bits (10000 × 100 = 1000000), otherwise `(value2 % divisor) << 16` overflows a `uint32`:

```
uint32 getUptimeSeconds() {
    uint32 time_in_us_lsb;
    uint16 time_in_us_msb;
    time_in_us_lsb = TimeGet48(&time_in_us_msb);
    uint32 result = divide(time_in_us_lsb, time_in_us_msb, 10000);
    time_in_us_lsb = (uint32)result & 0xFFFFFFFF;
    time_in_us_msb = 0;
    return divide(time_in_us_lsb, time_in_us_msb, 100);
}
```
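A host-side check of this two-stage version against 64-bit reference values, assuming `<stdint.h>` stand-ins for the SDK types and using the `divide` routine from the question unchanged:

```c
#include <assert.h>
#include <stdint.h>

typedef uint32_t uint32;  /* assumption: SDK types match <stdint.h> */
typedef uint16_t uint16;

/* Base-2^16 long division from the question, unchanged. */
uint32 divide(uint32 time_in_us_lsb, uint16 time_in_us_msb, uint32 divisor)
{
    uint32 value1 = (uint32)time_in_us_msb & 0xFFFF;
    uint32 value2 = (uint32)(time_in_us_lsb >> 16) & 0xFFFF;
    uint32 value3 = (uint32)time_in_us_lsb & 0xFFFF;
    value2 += (value1 % divisor) << 16;
    value3 += (value2 % divisor) << 16;
    return (((value2 / divisor) << 16) & 0xFFFF0000) | (value3 / divisor);
}

/* Two stages with 16-bit-safe divisors: 10000 * 100 = 1000000. */
uint32 usToSeconds(uint32 lsw, uint16 msw)
{
    uint32 result = divide(lsw, msw, 10000);
    return divide(result, 0, 100);
}
```

One caveat: the intermediate quotient of the first stage must itself fit in 32 bits, which holds for uptimes below roughly 2^32 × 10^4 μs (about 497 days); beyond that, the dropped high word of the quotient makes the result wrong.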

answered on Stack Overflow Apr 4, 2019 by 1resu

User contributions licensed under CC BY-SA 3.0