Re: [PATCH 2/4] nanokernel: Introduce workqueue API

Benjamin Walsh <benjamin.walsh@...>

On Fri, May 06, 2016 at 05:17:54PM +0300, Vlad Dogaru wrote:
Hi Luiz,

On Fri, May 06, 2016 at 04:35:48PM +0300, Luiz Augusto von Dentz wrote:
On Thu, May 5, 2016 at 6:04 PM, Vlad Dogaru <vlad.dogaru(a)> wrote:
On Thu, May 05, 2016 at 03:53:54PM +0300, Johan Hedberg wrote:
On Thu, May 05, 2016, Vlad Dogaru wrote:
+struct nano_work {
+ work_handler_t work_handler;
+ void *work_arg;
Instead of storing and passing work_arg to the callback, did you
consider passing just the nano_work struct itself and then using
CONTAINER_OF() in the callback to get the context its part of? That
would save four bytes for every nano_work struct.
Sounds good. I'll do this in the next iteration.

+/**
+ * @brief Initialize a @ref nano_work structure.
+ *
+ * @param work The structure to initialize.
+ * @param handler The function to run when @ref work is run.
+ * @param arg The argument to pass to the handler.
+ */
+static inline void nano_work_init(struct nano_work *work,
+				  work_handler_t handler,
+				  void *arg)
+{
+	work->work_handler = handler;
+	work->work_arg = arg;
+}
+
+/**
+ * @brief Submit a work item to a workqueue.
+ */
+static inline void nano_work_submit(struct nano_workqueue *wq,
+				    struct nano_work *work)
+{
+	nano_fifo_put(&wq->wq_fifo, work);
+}
Maybe I'm missing something here, but don't you need to reserve four
bytes in the beginning of nano_work for the nano_fifo internal use (the
"next" pointer)? To me it looks like the nano_fifo_put() above would
overwrite the first four bytes of your work struct (i.e. the
work_handler pointer).
One of us is certainly missing something, that's for sure. And it's
likely me :)

I was under the impression that nano_fifos did their own bookkeeping.
After reading the code and docs, it's clear that they do not. Now the
question remains how on earth my code works. I suspect it's because,
when an item is added to a fifo, a fiber is always waiting. So we never
actually call enqueue_data; we just context-switch to the waiting fiber.

I'll fix this silliness in v2, along with your suggestion above. The
overall delta is 0. We save 4 bytes from removing the explicit
argument, but expend them on the fifo reserved field. The negligible
effect being that we have properly working fifos, of course :)

Thanks for your input,
Would there be a global workqueue as in Linux? I'm asking because, if
there is, perhaps it is better to introduce a delayed-work version
instead of introducing the callout API, but in order to share the
stack there should be a global workqueue.
In the current patchset there is a global workqueue, but it's labeled as
sensor-specific. I don't see a problem with making it generic. If I
do, sensors could use it, as well as any other subsystem that needs to.
Using the same fiber for both callout and workqueue is probably a good
idea for saving memory.

I'll also take a stab at implementing delayed work, let's see what the
results look like.
That's what the proposed callout API is doing though. Take a look at it
before duplicating work...

Btw, I did ask about
having a workqueue API in the past while discussing about the
callout/timer API which seems to not be under the radar here, but I
guess this is pretty common with projects using gerrit since we
stopped discussing the APIs on the mailing list probably only people
involved directly with the review knows what is going.

I also strongly suggest adding SYS_LOG calls to log uses of the API,
and creating a unit test for work queues to verify that submission
order is maintained, etc.
I'll add a unit test in the next iteration. Not sure what one can log
about workqueues, though.

There also seems to be no way to cancel work. That would probably
force us to use a list, since apparently you cannot remove items from
the middle of a fifo. Or perhaps we should extend the fifo API to
allow that, since fifos are actually better for this usage: you can
wake up the worker fiber with one.
I did think of canceling work, but I thought we'd add it later, when
there's a user for it. Otherwise it would just be an API waiting for a
user.
