k_uptime_get/k_uptime_delta on 32-bit devices


yshragai.firmware@...
 

Hi,
I want to use k_uptime_get and k_uptime_delta.
These are defined as returning int64_t, and k_uptime_delta expects an argument of type int64_t *.
However, it seems that there is garbage in the upper 32 bits of the values returned by these functions.
Or at least, when I print the value with LOG_INF (using the %lld format), the upper 32 bits contain garbage data, and it sometimes seems to corrupt other values printed in the same LOG_INF line.
I know it's garbage (rather than part of the actual time value) because (1) my system hasn't been up nearly long enough for the uptime to exceed 32 bits, and (2) the value in the upper 32 bits depends on context - in one function it's one value, in another function it's another - which makes me strongly suspect it's arbitrary data from other variables in the stack space.
If I cast it to uint32_t before printing with LOG_INF, it prints fine.
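
For concreteness, this is roughly the shape of the code I'm talking about (names are illustrative, and I'm assuming a recent tree with the zephyr/-prefixed include paths):

#include <zephyr/kernel.h>
#include <zephyr/logging/log.h>

LOG_MODULE_REGISTER(uptime_demo, LOG_LEVEL_INF);

void show_uptime(void)
{
	static int64_t ref;                   /* reference time for k_uptime_delta() */
	int64_t now = k_uptime_get();         /* milliseconds since boot */
	int64_t delta = k_uptime_delta(&ref);

	/* The upper 32 bits of both values come out as garbage on my target. */
	LOG_INF("uptime: %lld ms, delta: %lld ms", now, delta);

	/* Truncating to 32 bits prints the expected number. */
	LOG_INF("uptime (truncated): %u ms", (uint32_t)now);
}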

What's the solution? Do I cast to uint32_t every time I want to use values from these functions for any purpose other than as parameters to k_uptime_delta?

Thanks!


Andy Ross
 

This is definitely supposed to be working. Your second note seems to rule out my first guess (a word-size mistake in the printf format). Can you submit a bug on GitHub (I'm "andyross" there) with code to exercise it, along with details about your platform and Kconfig?

Andy


Manu R
 

I remember something like this happening to me a long time ago, and the reason was that the datatype was not aligned to a 64-bit boundary. Could you try capturing the result in an int64_t that is aligned to an 8-byte boundary, and then printing it?
M


yshragai.firmware@...
 

How do you make sure that an int64_t is 8-byte aligned?


chrisduf
 

If I understand your question correctly, the answer actually depends on your toolchain.

With GCC, you might be looking for the "aligned" variable attribute; see https://gcc.gnu.org/onlinedocs/gcc/Common-Variable-Attributes.html.
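
A minimal sketch of what that might look like (untested; assumes GCC or a GCC-compatible compiler such as Clang):

#include <stdint.h>
#include <zephyr/kernel.h>
#include <zephyr/logging/log.h>

LOG_MODULE_REGISTER(uptime_aligned, LOG_LEVEL_INF);

void log_uptime_aligned(void)
{
	/* The "aligned" attribute forces this variable onto an 8-byte boundary. */
	int64_t uptime __attribute__((aligned(8))) = k_uptime_get();

	LOG_INF("uptime: %lld ms", uptime);
}

If I remember correctly, Zephyr also ships an __aligned() helper macro that wraps the same attribute on GCC-based toolchains.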