How to overcome timer delay


Dinh, Kien T
 

Hi,

I’m developing an app which requires a fast sampling rate (~500 samples per second) via the ADC on the Arduino 101. It was working fine using the nano_timer_init/start/test APIs up to version 1.5. However, after upgrading to version 1.6, a noticeable time delay has been observed. To rule out effects from other drivers, I used the following code to test the delay and got the results below.

It seems that the amount of delay is inversely proportional to the interval. For interval = 1000 ms, the delay is just 10 ms, but for an interval as short as 10 ms, the delay becomes 1000 ms, making it impossible to use for a high-sampling-rate app. Is there any Kconfig option that needs to be set, or any other way to minimize such delay?

=====

#include <zephyr.h>
#include <misc/printk.h>

#define INTERVAL 1

static int count;
static int t;

void timer_handler(struct k_timer *a_timer)
{
        count += INTERVAL;
        if (count % 1000 == 0) {
                printk("Count %d, delta = %d\n", count,
                       k_uptime_get_32() - t);
                t = k_uptime_get_32();
        }
}

void main(void)
{
        struct k_timer my_timer;

        printk("Hello World! %s\n", CONFIG_ARCH);
        k_timer_init(&my_timer, timer_handler, NULL);
        t = k_uptime_get_32();
        while (1) {
                k_timer_start(&my_timer, INTERVAL, K_FOREVER);
                k_timer_status_sync(&my_timer);
        }
}
====

I got the same outputs on both x86 QEMU and the Arduino 101 (x86):

* INTERVAL = 1000 (one second)
Count 1000, delta = 1010
Count 2000, delta = 1010
Count 3000, delta = 1010


* INTERVAL = 100 (one hundred millisecs)
Count 1000, delta = 1100
Count 2000, delta = 1100
Count 3000, delta = 1100


* INTERVAL = 10 (ten millisecs)
Count 1000, delta = 2000
Count 2000, delta = 2000
Count 3000, delta = 2000


* INTERVAL = 1 (one millisec)
Count 1000, delta = 20000
Count 2000, delta = 20000
Count 3000, delta = 20000


Thanks,
Kien


Benjamin Walsh <benjamin.walsh@...>
 

Hi Kien,

> I’m developing an app which requires a fast sampling rate (~500
> samples per second) via the ADC on the Arduino 101. It was working
> fine using the nano_timer_init/start/test APIs up to version 1.5.
> However, after upgrading to version 1.6, a noticeable time delay has
> been observed. To rule out effects from other drivers, I used the
> following code to test the delay and got the results below.
>
> It seems that the amount of delay is inversely proportional to the
> interval. For interval = 1000 ms, the delay is just 10 ms, but for an
> interval as short as 10 ms, the delay becomes 1000 ms, making it
> impossible to use for a high-sampling-rate app. Is there any Kconfig
> option that needs to be set, or any other way to minimize such delay?
When we changed the new API to take ms instead of kernel ticks for
timeouts, we also decided the timeouts mean "wait for at least this
time" instead of "wait for at most this time".

The system is still tick-based though. So we convert ms to ticks
internally.

If you want to wait "at most" an amount of time, you have to ask for
one tick less. So if you know your tick rate is 100Hz, and you want to
wait at most 20ms, you have to ask for 10ms (that would give you two
ticks).

Now, you say your sampling rate is 500Hz; however, the default tick rate
is 100Hz, so you have to change CONFIG_SYS_CLOCK_TICKS_PER_SEC to 500.
However (again), with a tick frequency of 500Hz, since waiting for 2ms
means waiting for "at least" 2ms, you might wait for up to 4ms. So what
you probably want is a CONFIG_SYS_CLOCK_TICKS_PER_SEC of 1000 and to
wait for 1ms, which will make you wait for at most 2ms.
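
To make the arithmetic concrete, the effective wait works out roughly
like this (just an illustration of the rule above, not the actual kernel
conversion code, which works in ticks internally):

/* Rough model of the "at least" rule: the request is rounded up to whole
 * ticks, plus one extra tick because the current tick is already
 * partially elapsed.
 */
static int actual_wait_us(int requested_ms)
{
        int tick_us = 1000000 / CONFIG_SYS_CLOCK_TICKS_PER_SEC;
        int ticks = (requested_ms * 1000 + tick_us - 1) / tick_us + 1;

        return ticks * tick_us;  /* 1 ms requested -> 2 ticks: 20000 us at
                                  * 100 Hz, 2000 us at 1000 Hz */
}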

I'm starting to wonder if we should have macros for this in the API,
e.g. AT_MOST()/AT_LEAST(), where you could do:

k_timer_start(&my_timer, AT_MOST(INTERVAL), 0);

This is all because the kernel is still tick-based. We would like to
move to a tickless kernel, where these would not be an issue anymore.

> =====
>
> #include <zephyr.h>
> #include <misc/printk.h>
>
> #define INTERVAL 1
>
> static int count;
> static int t;
>
> void timer_handler(struct k_timer *a_timer)
> {
>         count += INTERVAL;
>         if (count % 1000 == 0) {
>                 printk("Count %d, delta = %d\n", count,
>                        k_uptime_get_32() - t);
>                 t = k_uptime_get_32();
>         }
> }
>
> void main(void)
> {
>         struct k_timer my_timer;
>
>         printk("Hello World! %s\n", CONFIG_ARCH);
>         k_timer_init(&my_timer, timer_handler, NULL);
>         t = k_uptime_get_32();
>         while (1) {
>                 k_timer_start(&my_timer, INTERVAL, K_FOREVER);
                                                     ^^^^^^^^^
You cannot use K_FOREVER in this API: if you do not want periodic
repetition, you have to use 0.

I'm surprised this did not blow up. Actually, if you ran with
CONFIG_ASSERT=y, you would have hit the one at the top of
_add_timeout():

__ASSERT(timeout_in_ticks > 0, "");

>                 k_timer_status_sync(&my_timer);
>         }
> }
> ====
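
For reference, the one-shot version of that loop, with the period
argument set to 0 as noted above, would be something like:

        while (1) {
                /* duration in ms, period 0 = one-shot (K_FOREVER is not valid here) */
                k_timer_start(&my_timer, INTERVAL, 0);
                k_timer_status_sync(&my_timer);
        }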

> I got the same outputs on both x86 QEMU and the Arduino 101 (x86):
>
> * INTERVAL = 1000 (one second)
> Count 1000, delta = 1010
> Count 2000, delta = 1010
> Count 3000, delta = 1010
>
> * INTERVAL = 100 (one hundred millisecs)
> Count 1000, delta = 1100
> Count 2000, delta = 1100
> Count 3000, delta = 1100
>
> * INTERVAL = 10 (ten millisecs)
> Count 1000, delta = 2000
> Count 2000, delta = 2000
> Count 3000, delta = 2000
>
> * INTERVAL = 1 (one millisec)
> Count 1000, delta = 20000
> Count 2000, delta = 20000
> Count 3000, delta = 20000
You're getting these numbers because your tick rate is probably 100.
With 1000 you would probably get:

* INTERVAL = 1000 (one second)
Count 1000, delta = 1001
Count 2000, delta = 1001
Count 3000, delta = 1001


* INTERVAL = 100 (one hundred millisecs)
Count 1000, delta = 1010
Count 2000, delta = 1010
Count 3000, delta = 1010


* INTERVAL = 10 (ten millisecs)
Count 1000, delta = 1100
Count 2000, delta = 1100
Count 3000, delta = 1100


* INTERVAL = 1 (one millisec)
Count 1000, delta = 2000
Count 2000, delta = 2000
Count 3000, delta = 2000

Cheers,
Ben



> Thanks,
> Kien
--
Benjamin Walsh, SMTS
WR VxWorks Virtualization Profile
www.windriver.com
Zephyr kernel maintainer
www.zephyrproject.org


Dinh, Kien T
 

Hi Benjamin,

Thank you for your reply and advice. Setting CONFIG_SYS_CLOCK_TICKS_PER_SEC=1000
does improve the results as you said. Increasing the parameter further also shortens the delay:

With interval=1ms:
* CONFIG_SYS_CLOCK_TICKS_PER_SEC=1000
Count 1000, delta = 2000
Count 2000, delta = 2000
Count 3000, delta = 2000


* CONFIG_SYS_CLOCK_TICKS_PER_SEC=2000
Count 1000, delta = 1500
Count 2000, delta = 1500
Count 3000, delta = 1500


* CONFIG_SYS_CLOCK_TICKS_PER_SEC=10000
Count 1000, delta = 1100
Count 2000, delta = 1100
Count 3000, delta = 1100


* CONFIG_SYS_CLOCK_TICKS_PER_SEC=100000
Count 1000, delta = 1010
Count 2000, delta = 1010
Count 3000, delta = 1010
main-loop: WARNING: I/O thread spun for 1000 iterations

So, although increasing it improves the delay, there is a limit. And for TICKS_PER_SEC as high
as 10000, there is still a 100 ms delay over 1000 counts (10%). I think that in practice a mechanism
to compensate for the delay would be needed to make it more precise. Is there any better way?
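
The kind of compensation I have in mind would be something like the
rough, untested sketch below (same my_timer and INTERVAL as in the test
code): track an absolute deadline and shorten each request by however
late the previous expiry was.

        int next = k_uptime_get_32();

        while (1) {
                next += INTERVAL;                          /* ideal next deadline, in ms */
                int wait = next - (int)k_uptime_get_32();  /* time left until that deadline */

                if (wait < 1) {
                        wait = 1;                          /* already late: fire as soon as possible */
                }
                k_timer_start(&my_timer, wait, 0);
                k_timer_status_sync(&my_timer);
        }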

Thanks,
Kien

PS: It would be fun blowing up the HW with some code. I once blew up a
circuit with a capacitor soldered the wrong way around. I'll follow your
advice and not use K_FOREVER in this case.

On 2017/01/24 1:16, "Benjamin Walsh" <benjamin.walsh(a)windriver.com> wrote:

[..snip..]


Benjamin Walsh <benjamin.walsh@...>
 

[..snip..]

> Thank you for your reply and advice. Setting
> CONFIG_SYS_CLOCK_TICKS_PER_SEC=1000 does improve the results as you
> said. Increasing the parameter further also shortens the delay:
>
> With interval=1ms:
> * CONFIG_SYS_CLOCK_TICKS_PER_SEC=1000
> Count 1000, delta = 2000
> Count 2000, delta = 2000
> Count 3000, delta = 2000
>
> * CONFIG_SYS_CLOCK_TICKS_PER_SEC=2000
> Count 1000, delta = 1500
> Count 2000, delta = 1500
> Count 3000, delta = 1500
>
> * CONFIG_SYS_CLOCK_TICKS_PER_SEC=10000
> Count 1000, delta = 1100
> Count 2000, delta = 1100
> Count 3000, delta = 1100
>
> * CONFIG_SYS_CLOCK_TICKS_PER_SEC=100000
> Count 1000, delta = 1010
> Count 2000, delta = 1010
> Count 3000, delta = 1010
> main-loop: WARNING: I/O thread spun for 1000 iterations
You probably should not use tick rates that high, or you'll spend all
your time in the timer interrupt handler (unless you also enable
tickless idle). :)

> So, although increasing it improves the delay, there is a limit. And
> for TICKS_PER_SEC as high as 10000, there is still a 100 ms delay over
> 1000 counts (10%). I think that in practice a mechanism to compensate
> for the delay would be needed to make it more precise. Is there any
> better way?
For your case, that is what I was saying above: if you set the tick rate
to 1000, and you ask for a delay of 1ms, this means at least 1ms, so the
system will wait for 2 ticks (the partial current one + the next one),
so it will wait for around 2ms. That is with the system clock timer,
which has a finite granularity.

If you want more precision, I would check whether your board has a
second timer that your application could take ownership of, so that you
could program it to fire periodically every 2ms.

Cheers,
Ben


Dinh, Kien T
 

Thank you so much for the explanations, Ben.

Kien

On 2017/01/25 1:23, "Benjamin Walsh" <benjamin.walsh(a)windriver.com> wrote:

[..snip..]


Benjamin Walsh <benjamin.walsh@...>
 

[..snip..]

> void main(void)
> {
>         struct k_timer my_timer;
>
>         printk("Hello World! %s\n", CONFIG_ARCH);
>         k_timer_init(&my_timer, timer_handler, NULL);
>         t = k_uptime_get_32();
>         while (1) {
>                 k_timer_start(&my_timer, INTERVAL, K_FOREVER);
>                 k_timer_status_sync(&my_timer);
>         }
> }
[..snip..]

> Thank you so much for the explanations, Ben.
Well, that was an in-depth explanation of what is happening internally,
but I focused so much on what your code was doing that I kinda
overlooked a better way of doing what you were trying to do. :-/

Instead of stopping and starting the timer with a <duration> of 1ms and
a <period> of 0 every iteration, you should just use the periodic
feature of the timer! If you want it to fire every 2ms, do this:

k_timer_start(&my_timer, 2, 2);
while (1) {
        k_timer_status_sync(&my_timer);
}

The <period> value is not aligned on the next tick boundary, since when
the timer is added back to the timeout queue, this happens in the
timer's timeout expiration handler, which is called when already aligned
on a tick boundary. Thus, the value of the <duration> parameter of
k_timer_start, when converted to ticks, _will be_ pushed to the next
tick past the requested duration, while the value of the <period>
parameter, when converted to ticks, _will not be_ pushed to the next
tick. Look in timer.c:_timer_expiration_handler():

/*
 * if the timer is periodic, start it again; don't add _TICK_ALIGN
 * since we're already aligned to a tick boundary
 */
if (timer->period > 0) {
        key = irq_lock();
        _add_timeout(NULL, &timer->timeout, &timer->wait_q,
                     timer->period);
        irq_unlock(key);
}

So, with my example code above, the timer will fire in 2.<something> ms
the first time, and at a fixed interval of 2ms every subsequent time. If
you want the timer to fire within 2ms the first time, call k_timer_start
with a <duration> of 1ms instead, like this:

k_timer_start(&my_timer, 1, 2);
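
Putting that together with your existing handler, the whole test program
could look something like the sketch below (my own untested assembly of
the pieces above, assuming CONFIG_SYS_CLOCK_TICKS_PER_SEC=1000 and a 2ms
sampling period):

#include <zephyr.h>
#include <misc/printk.h>

#define INTERVAL 2   /* ms: ~500 samples per second */

static int count;
static int t;

void timer_handler(struct k_timer *a_timer)
{
        count += INTERVAL;
        if (count % 1000 == 0) {
                printk("Count %d, delta = %d\n", count,
                       k_uptime_get_32() - t);
                t = k_uptime_get_32();
        }
}

void main(void)
{
        struct k_timer my_timer;

        k_timer_init(&my_timer, timer_handler, NULL);
        t = k_uptime_get_32();

        /* first expiry after ~2.<something> ms, then every 2ms, tick-aligned */
        k_timer_start(&my_timer, 2, 2);

        while (1) {
                /* block until the next expiry; the sampling work would go here */
                k_timer_status_sync(&my_timer);
        }
}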

Hope this helps,
Ben