Daniel Thompson <daniel.thompson@...>
On 23/03/17 14:49, Benjamin Walsh wrote:
We've started moving our project from Zephyr 1.5 to Zephyr 1.7. One big
change (aside from the really big ones like the unified kernel and all
that) is that the APIs for timers and others seem to have changed
from taking values in ticks to taking their arguments in
milliseconds. For most applications this is probably fine, but in our
case it's really unfortunate. We want to have a tick granularity of
500us (0.5ms) for our system so we can do operations at more precise
timings (yes, at the cost of extra timer interrupts), and with the old
API that wasn't an issue. You'd just change your units to microseconds
and do things like:

to get a 1.5ms timer. Now, there are K_MSEC and K_SECONDS macros to
convert from "kernel time measure" as before (replacing the old MSEC
and SECONDS), but these are just direct translations to milliseconds,
and the function itself does the transformation into ticks as the
first thing it does. Is there a strong reason the API was changed to
prevent sub-millisecond ticks from being easy to achieve, especially
considering users are expected to use K_MSEC and K_SECONDS anyway? I
don't expect any system to have a 1us tick, but even just a 0.5ms tick
is now impossible (without modifying the kernel code, at least).

When prototyping the new API, I started with timeouts as an int64_t in
nanoseconds, to allow the same precision as struct timespec but without
the awkwardness of having to create an instance of a struct variable
every time. However, using it (mostly when writing test code), I
disliked it for two reasons: 1. often having to pass in large numbers
(a minor issue) and 2. having to deal with 64-bit math. So I polled
people, on the mailing list IIRC, to find out whether int32_t timeouts
in milliseconds would be reasonable for Zephyr. I received one or two
responses, both positive, so I went with this.
However, with timeouts as int32_t instead of uint32_t, there is
nothing preventing us from using negative values to represent other
units if we want to, in a backwards-compatible way. The only negative
value currently in use is K_FOREVER (0xffffffff). So, if we wanted to
implement better granularity, we could allow a higher-rate system
clock and add macros to the API, e.g.:
/* Keep the two upper bits as control bits, just in case: '10' would
 * mean 'microseconds', '11' could mean something else.
 */
#define US_TIMEOUT(us) \
	(int32_t)((((uint32_t)(us)) & 0x3fffffff) | 0x80000000)
Regarding reserving upper bits, perhaps think about this the other way
around: what is the largest sane microsecond sleep? We only need to
reserve enough bits to accommodate that.
For example, once you are sleeping >100ms you should, perhaps, start
questioning why microsecond precision is needed.
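To put rough numbers on that point: 100ms is 100000us, which needs only 17 bits, and even a full second in microseconds fits in 20, so the 30 bits left after reserving two control bits (about 17.9 minutes) are far more than any sane microsecond sleep. A quick sketch, where the helper `bits_needed` is mine and not anything from the thread:

```c
#include <stdint.h>

/* Count how many bits are required to represent a microsecond value,
 * i.e. the position of its highest set bit plus one.
 */
static unsigned int bits_needed(uint32_t max_us)
{
	unsigned int bits = 0;

	while (max_us != 0) {
		bits++;
		max_us >>= 1;
	}
	return bits;
}
```

So bits_needed(100000) is 17 and bits_needed(1000000) is 20, while the 30 low bits of the proposed encoding span values up to 0x3fffffff us.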
rc = sem_take(&my_sem, US_TIMEOUT(500));
and have the kernel timeout code decode this as the number of ticks
corresponding to 500us.
This is of course not implemented, but should be somewhat easy to do.
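For what it's worth, the decoding side could look roughly like the sketch below. The name `timeout_to_ticks` and the 2000 ticks-per-second rate (a 500us tick) are assumptions for illustration, not actual Zephyr code:

```c
#include <stdint.h>

/* Assumed configuration: 500us per tick => 2000 ticks per second. */
#define TICKS_PER_SEC 2000u

#define US_TIMEOUT(us) \
	(int32_t)((((uint32_t)(us)) & 0x3fffffff) | 0x80000000)

/* Decode a timeout into ticks.  Plain non-negative values are
 * milliseconds, as in the current API; values whose two upper bits
 * are '10' carry a microsecond count in the low 30 bits.  K_FOREVER
 * (upper bits '11') would be filtered out before reaching this point.
 */
static int32_t timeout_to_ticks(int32_t timeout)
{
	uint32_t raw = (uint32_t)timeout;

	if ((raw & 0xc0000000) == 0x80000000) {
		uint64_t us = raw & 0x3fffffff;

		/* Round up so a partial tick still waits at least
		 * the requested time.
		 */
		return (int32_t)((us * TICKS_PER_SEC + 999999) / 1000000);
	}

	/* Default path: milliseconds, also rounded up. */
	return (int32_t)(((uint64_t)timeout * TICKS_PER_SEC + 999) / 1000);
}
```

With a 500us tick, US_TIMEOUT(500) would decode to one tick and a plain millisecond value like 5 would decode to ten.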