Re: How to overcome timer delay


Dinh, Kien T

Thank you so much for the explanations, Ben.

Kien

On 2017/01/25 1:23, "Benjamin Walsh" <benjamin.walsh(a)windriver.com> wrote:

> > > I’m developing an app which requires a fast sampling rate (~500 times
> > > per sec) via the ADC on the Arduino 101. It worked fine using the
> > > nano_timer_init/start/test APIs up to version 1.5. However, after
> > > upgrading to version 1.6, a noticeable time delay has been observed. To
> > > rule out possible effects from other drivers, I’ve used the following
> > > code to test the time delay and got the results below.
> > >
> > > It seems that the amount of delay is inversely proportional to the
> > > interval. For interval = 1000 ms, the delay is just 10 ms. But for
> > > an interval as low as 10 ms, the delay becomes 1000 ms, making it
> > > impossible to use for a high-sampling-rate app. Is there any Kconfig
> > > option that needs to be set, or any other way to minimize such delay?
> >
> > When we changed the API to take timeouts in ms instead of kernel
> > ticks, we also decided that a timeout means "wait for at least this
> > time" instead of "wait for at most this time".
> >
> > The system is still tick-based though. So we convert ms to ticks
> > internally.
> >
> > If you want to wait "at most" an amount of time, you have to ask for
> > one tick less. So if you know your tick rate is 100Hz and you want to
> > wait at most 20ms, you have to ask for 10ms (internally that becomes
> > two ticks: the partially elapsed current tick plus one full tick, so
> > the wait lands somewhere between 10ms and 20ms).
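> >
> > (To illustrate the conversion, a rough sketch of the rounding in C --
> > not the actual kernel internals:)
> >
> > /* Hypothetical helper showing the "at least" rounding described
> >  * above: round the requested ms up to whole ticks, then add one
> >  * tick to cover the partially elapsed current tick.
> >  */
> > #define MS_PER_TICK (1000 / CONFIG_SYS_CLOCK_TICKS_PER_SEC)
> >
> > static int ms_to_wait_ticks(int ms)
> > {
> >         return (ms + MS_PER_TICK - 1) / MS_PER_TICK + 1;
> > }
> >
> > /* With a 100Hz tick rate (10ms/tick), ms_to_wait_ticks(10) == 2,
> >  * i.e. a wait of anywhere between 10ms and 20ms.
> >  */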
> >
> > Now, you say your sampling rate is 500Hz; however, the default tick
> > rate is 100Hz, so you have to change CONFIG_SYS_CLOCK_TICKS_PER_SEC to
> > 500. However (again), with a tick frequency of 500Hz, if you ask for
> > 2ms you'll wait for "at least" 2ms, which means you might wait for up
> > to 4ms. So what you probably want is a CONFIG_SYS_CLOCK_TICKS_PER_SEC
> > of 1000 and a wait of 1ms, which will make you wait at most 2ms.
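> >
> > For example (a sketch; the option goes in your project configuration):
> >
> > /* prj.conf: CONFIG_SYS_CLOCK_TICKS_PER_SEC=1000 */
> >
> > /* Asking for 1ms becomes one tick plus the partial current tick,
> >  * so the actual wait is between 1ms and 2ms.
> >  */
> > k_timer_start(&my_timer, 1, 0);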
> >
> > I'm starting to wonder if we should have macros for this in the API,
> > e.g. AT_MOST()/AT_LEAST(), where you could do:
> >
> > k_timer_start(&my_timer, AT_MOST(INTERVAL), 0);
> >
> > This is all because the kernel is still tick-based. We would like to
> > move to a tickless kernel, where these would not be an issue anymore.
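> >
> > They could be thin wrappers over the tick math, something like this
> > (purely hypothetical -- these macros do not exist in the API today):
> >
> > /* Hypothetical: subtract one tick period, so that the kernel's
> >  * "at least" semantics give "at most" the requested time.
> >  */
> > #define AT_MOST(ms)  ((ms) - (1000 / CONFIG_SYS_CLOCK_TICKS_PER_SEC))
> > #define AT_LEAST(ms) (ms)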
> >
> > > =====
> > >
> > > #include <zephyr.h>
> > > #include <misc/printk.h>
> > >
> > > #define INTERVAL 1
> > >
> > > static int count;
> > > static int t;
> > >
> > > void timer_handler(struct k_timer *a_timer)
> > > {
> > >         count += INTERVAL;
> > >         if (count % 1000 == 0) {
> > >                 printk("Count %d, delta = %d\n", count,
> > >                        k_uptime_get_32() - t);
> > >                 t = k_uptime_get_32();
> > >         }
> > > }
> > >
> > > void main(void)
> > > {
> > >         struct k_timer my_timer;
> > >
> > >         printk("Hello World! %s\n", CONFIG_ARCH);
> > >         k_timer_init(&my_timer, timer_handler, NULL);
> > >         t = k_uptime_get_32();
> > >         while (1) {
> > >                 k_timer_start(&my_timer, INTERVAL, K_FOREVER);
> >                                                      ^^^^^^^^^
> > You cannot use K_FOREVER in this API: if you do not want periodic
> > repetition, you have to use 0.
> >
> > I'm surprised this did not blow up. Actually, if you ran with
> > CONFIG_ASSERT=y, you would have hit the one at the top of
> > _add_timeout():
> >
> > __ASSERT(timeout_in_ticks > 0, "");
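> >
> > The fixed call, per the above, is simply:
> >
> >         k_timer_start(&my_timer, INTERVAL, 0);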
> >
> >
> > >                 k_timer_status_sync(&my_timer);
> > >         }
> > > }
> > > =====
> > >
> > > I got the following outputs, identical for both x86 QEMU and the
> > > Arduino 101 (x86):
> > >
> > > * INTERVAL = 1000 (one second)
> > > Count 1000, delta = 1010
> > > Count 2000, delta = 1010
> > > Count 3000, delta = 1010
> > > …
> > >
> > > * INTERVAL = 100 (one hundred millisecs)
> > > Count 1000, delta = 1100
> > > Count 2000, delta = 1100
> > > Count 3000, delta = 1100
> > > …
> > >
> > > * INTERVAL = 10 (ten millisecs)
> > > Count 1000, delta = 2000
> > > Count 2000, delta = 2000
> > > Count 3000, delta = 2000
> > > …
> > >
> > > * INTERVAL = 1 (one millisec)
> > > Count 1000, delta = 20000
> > > Count 2000, delta = 20000
> > > Count 3000, delta = 20000
> >
> > You're getting these numbers because your tick rate is probably 100.
> > With 1000 you would probably get:
> >
> > * INTERVAL = 1000 (one second)
> > Count 1000, delta = 1001
> > Count 2000, delta = 1001
> > Count 3000, delta = 1001
> > …
> >
> > * INTERVAL = 100 (one hundred millisecs)
> > Count 1000, delta = 1010
> > Count 2000, delta = 1010
> > Count 3000, delta = 1010
> > …
> >
> > * INTERVAL = 10 (ten millisecs)
> > Count 1000, delta = 1100
> > Count 2000, delta = 1100
> > Count 3000, delta = 1100
> > …
> >
> > * INTERVAL = 1 (one millisec)
> > Count 1000, delta = 2000
> > Count 2000, delta = 2000
> > Count 3000, delta = 2000
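> >
> > (The arithmetic behind these predictions: at a 1000Hz tick rate, a
> > request for N ms waits N ticks plus the partial current tick, so
> > INTERVAL=1 waits ~2ms per iteration and 1000 iterations take ~2000ms,
> > while INTERVAL=1000 waits ~1001ms per iteration.)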
>
> Thank you for your reply and advice. Setting
> CONFIG_SYS_CLOCK_TICKS_PER_SEC=1000 does improve the results as you
> said. Increasing the parameter further also shortens the delay:
>
> With interval=1ms:
> * CONFIG_SYS_CLOCK_TICKS_PER_SEC=1000
> Count 1000, delta = 2000
> Count 2000, delta = 2000
> Count 3000, delta = 2000
> …
>
> * CONFIG_SYS_CLOCK_TICKS_PER_SEC=2000
> Count 1000, delta = 1500
> Count 2000, delta = 1500
> Count 3000, delta = 1500
> …
>
> * CONFIG_SYS_CLOCK_TICKS_PER_SEC=10000
> Count 1000, delta = 1100
> Count 2000, delta = 1100
> Count 3000, delta = 1100
> …
>
> * CONFIG_SYS_CLOCK_TICKS_PER_SEC=100000
> Count 1000, delta = 1010
> Count 2000, delta = 1010
> Count 3000, delta = 1010
> main-loop: WARNING: I/O thread spun for 1000 iterations

You probably should not use tick rates that high, or you'll spend all
your time in the timer interrupt handler (unless you also enable
tickless idle). :)
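
For example (a sketch, assuming your platform supports tickless idle):

    # prj.conf
    CONFIG_SYS_CLOCK_TICKS_PER_SEC=1000
    CONFIG_TICKLESS_IDLE=y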

> So, although increasing it improves the delay, there is a limit. And
> with TICKS_PER_SEC as high as 10000, there is still a 100ms delay
> over 1000 counts (10%). I think that in practice a mechanism to
> compensate for the delay is needed to make it more precise. Is there
> any better way?

For your case, that is what I was saying above: if you set the tick
rate to 1000 and ask for a delay of 1ms, that means "at least" 1ms, so
the system will wait for 2 ticks (the partial current one + the next
one), i.e. around 2ms. That is inherent to the system clock timer,
which has a finite granularity.

If you want more precision, I would look at whether your board has a
second timer that your application could take ownership of, so that
you could program it to fire periodically every 2ms.
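
If you stay with the system clock timer instead, you could also
compensate in the application: track an absolute deadline with
k_uptime_get_32() and only wait for whatever remains of it, so the
rounding error does not accumulate. A rough sketch (untested; sample()
is a hypothetical placeholder for the ADC work):

    /* Keep an absolute schedule: each iteration waits only for the
     * time remaining until the next deadline, and skips the wait
     * entirely if the deadline has already passed.
     */
    uint32_t next = k_uptime_get_32() + INTERVAL;

    while (1) {
            int32_t remaining = (int32_t)(next - k_uptime_get_32());

            if (remaining > 0) {
                    k_timer_start(&my_timer, remaining, 0);
                    k_timer_status_sync(&my_timer);
            }
            next += INTERVAL;
            sample(); /* hypothetical: do the sampling work here */
    }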

Cheers,
Ben
