
Daily Gerrit Digest

donotreply@...
 

NEW within last 24 hours:
- https://gerrit.zephyrproject.org/r/4966 : samples: remove unused MDEF file
- https://gerrit.zephyrproject.org/r/4962 : irq: Use lowest priority not a hard-coded priority 2
- https://gerrit.zephyrproject.org/r/4965 : mvic: fixed printk format
- https://gerrit.zephyrproject.org/r/4964 : usb: do not assert on a variable we do not have
- https://gerrit.zephyrproject.org/r/4961 : irq: _ARC_V2_DEF_IRQ_LEVEL should be set to last legal priority
- https://gerrit.zephyrproject.org/r/4959 : x86: interrupts: optimize and simplify IRQ stubs
- https://gerrit.zephyrproject.org/r/4963 : samples: pwm: use new API interface
- https://gerrit.zephyrproject.org/r/4960 : x86: don't unconditionally run ISRs with interrupts enabled
- https://gerrit.zephyrproject.org/r/4955 : segmentation.h: fix get_gdt/get_idt
- https://gerrit.zephyrproject.org/r/4953 : Bluetooth: tester: Fix advertising data
- https://gerrit.zephyrproject.org/r/4952 : Bluetooth: HFP HF: Fix build failed if BT_DBG is defined
- https://gerrit.zephyrproject.org/r/4951 : Bluetooth: HFP HF: Enforce Kconfig's HFP_HF relation to RFCOMM

UPDATED within last 24 hours:
- https://gerrit.zephyrproject.org/r/4919 : Bluetooth: Controller: Use net_buf for evt and ACL RX
- https://gerrit.zephyrproject.org/r/4933 : pwm: qmsi_shim: implement pwm driver required by new APIs
- https://gerrit.zephyrproject.org/r/4883 : sanity: enable sanity multiple configuration
- https://gerrit.zephyrproject.org/r/4858 : drivers: pwm: re-design pwm API interfaces
- https://gerrit.zephyrproject.org/r/4882 : util: adds downstream failure check script
- https://gerrit.zephyrproject.org/r/4844 : win-doc: Add recommendation for regex library configuration
- https://gerrit.zephyrproject.org/r/4855 : win-doc: Adds the dependency with the pthread library
- https://gerrit.zephyrproject.org/r/4896 : Dining philosophers demo for unified kernel.
- https://gerrit.zephyrproject.org/r/4892 : unified: Invoke kernel object initialization with SYS_INIT macro
- https://gerrit.zephyrproject.org/r/3311 : include/crypto: Crypto abstraction header
- https://gerrit.zephyrproject.org/r/4893 : unified: Build kernel objects as a static library
- https://gerrit.zephyrproject.org/r/4881 : nano_work: Don't assert if work is pending on submit
- https://gerrit.zephyrproject.org/r/4890 : unified: Enable tickless idle test
- https://gerrit.zephyrproject.org/r/4868 : unified: Add tickless idle support for x86
- https://gerrit.zephyrproject.org/r/4889 : unified: Add tickless idle support for ARM
- https://gerrit.zephyrproject.org/r/4948 : Bluetooth: SMP: Remove unused static const
- https://gerrit.zephyrproject.org/r/4950 : Bluetooth: Add debug keys support to HCI ECC emulation code
- https://gerrit.zephyrproject.org/r/4949 : Bluetooth: SMP: Fix unused static variable

MERGED within last 24 hours:
- https://gerrit.zephyrproject.org/r/4954 : k_timer: Don't allocate dynamic timers by default
- https://gerrit.zephyrproject.org/r/4932 : x86: add _init_irq_gate and use it in gen_idt
- https://gerrit.zephyrproject.org/r/4867 : unified: Eliminate useless check in idle thread
- https://gerrit.zephyrproject.org/r/4895 : unified: implement k_uptime_{get,delta}()
- https://gerrit.zephyrproject.org/r/4863 : unified: Remove references to obsolete task_timeout
- https://gerrit.zephyrproject.org/r/4864 : unified: Remove obsolete wait_q.h macros
- https://gerrit.zephyrproject.org/r/4865 : unified: Remove #if 0 code block from wait_q.h
- https://gerrit.zephyrproject.org/r/4894 : unified: move basic ticks-to-ms conversion to kernel.h
- https://gerrit.zephyrproject.org/r/4862 : unified: Replace _nano_get_earliest_deadline()
- https://gerrit.zephyrproject.org/r/4866 : unified: Remove unused _nano_get_earliest_deadline()
- https://gerrit.zephyrproject.org/r/4926 : unified: Remove check in _reschedule_threads()
- https://gerrit.zephyrproject.org/r/4929 : apic: set initial PM state at build time
- https://gerrit.zephyrproject.org/r/4924 : ioapic: make init-time RTE masking optional
- https://gerrit.zephyrproject.org/r/4870 : unified: Make test_pend unified capable.
- https://gerrit.zephyrproject.org/r/4869 : unified: Add legacy task_offload_to_fiber() routine
- https://gerrit.zephyrproject.org/r/4928 : kernel: remove lingering irq_connect_dynamic() references
- https://gerrit.zephyrproject.org/r/4927 : init.h: use a counter when naming system devices
- https://gerrit.zephyrproject.org/r/4937 : net: yaip: Improve net_context_connect documentation
- https://gerrit.zephyrproject.org/r/4935 : net: yaip: cc2520: Let's provide ll addr in LE already
- https://gerrit.zephyrproject.org/r/4942 : net: yaip: Fix creating neighbour without l2addr
- https://gerrit.zephyrproject.org/r/4941 : net: yaip: Fix link address length calculation
- https://gerrit.zephyrproject.org/r/4944 : net: yaip: Fix handling ra_neighbor
- https://gerrit.zephyrproject.org/r/4945 : net: yaip: Fix handling onlink prefix
- https://gerrit.zephyrproject.org/r/4446 : rfc: ksdk: Add KSDK ENET driver.


Nanokernel stack border protection

Tidy(ChunHua) Jiang <tidyjiang@...>
 

Hi All,

The nanokernel uses an array as its stack memory space, but there is no border protection when pushing data onto the stack. When the array is already full, a push causes an array overflow, leading to unpredictable behavior.

Why not add border protection? When the array is full, the push would return an error code to the user.

Is it necessary?

Thanks.


tidyjiang(a)163.com
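
A minimal sketch of what such a bounds-checked push could look like (the structure layout and function names here are illustrative, not the actual nanokernel stack code):

/* Illustrative sketch only -- not the real nanokernel implementation.
 * The idea is simply to check against the end of the backing array and
 * report an error instead of writing past it.
 */
#include <errno.h>
#include <stdint.h>

struct demo_stack {
	uint32_t *base;   /* start of the backing array            */
	uint32_t *next;   /* next free slot                        */
	uint32_t *limit;  /* one past the end of the backing array */
};

static inline int demo_stack_push(struct demo_stack *s, uint32_t data)
{
	if (s->next >= s->limit) {
		return -ENOSPC;   /* full: report instead of overflowing */
	}
	*s->next++ = data;
	return 0;
}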


Re: [RFC] Ring buffers

Boie, Andrew P
 

On Fri, 2016-09-23 at 14:56 -0700, Andy Ross wrote:
Benjamin Walsh wrote (on Friday, September 23, 2016 2:36PM):

I think that we should still have the code under kernel/ though,
and rename APIs to k_ring_buf_<whatever>.
Naming isn't a big deal, but I'll reiterate my previous point: a ring
buffer is a data structure, not an OS abstraction provided by a
kernel.  It's equally useful to application or subsystem code, or I
dunno, Windows C++ apps.  It's not meaningfully a "Zephyr" thing.

I mean, the dlist implementation is used pervasively in the kernel,
but I don't think anyone would argue it belongs in kernel.h instead
of include/misc/dlist.h.
I agree with Andy for the reasons above; plus, if you *did* move it, it
would be subject to our deprecation policy (maintain both APIs for two
releases with the old one marked __deprecated).

Andrew
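
As a rough illustration of that deprecation window (the function names are made up; __deprecated is assumed to expand to __attribute__((deprecated)), as in Zephyr's toolchain headers):

/* Sketch of keeping a renamed API's old name alive for two releases. */
#ifndef __deprecated
#define __deprecated __attribute__((deprecated))
#endif

int k_example_put(int value);        /* new, renamed public API */

/* Old name kept temporarily; callers get a compile-time warning. */
static inline __deprecated int sys_example_put(int value)
{
	return k_example_put(value);
}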


Re: [RFC] Ring buffers

Andy Ross
 

Benjamin Walsh wrote (on Friday, September 23, 2016 2:36PM):
I think that we should still have the code under kernel/ though,
and rename APIs to k_ring_buf_<whatever>.
Naming isn't a big deal, but I'll reiterate my previous point: a ring
buffer is a data structure, not an OS abstraction provided by a
kernel. It's equally useful to application or subsystem code, or I
dunno, Windows C++ apps. It's not meaningfully a "Zephyr" thing.

I mean, the dlist implementation is used pervasively in the kernel,
but I don't think anyone would argue it belongs in kernel.h instead
of include/misc/dlist.h.

Andy


Re: [RFC] Ring buffers

Benjamin Walsh <benjamin.walsh@...>
 

On Fri, Sep 23, 2016 at 09:24:46PM +0000, Boie, Andrew P wrote:
On Fri, 2016-09-23 at 16:46 -0400, Dmitriy Korovkin wrote:
Next, when I talked about making the reading thread wait if the buffer
is empty and the writing thread wait if there is no free space: this does
not resolve, and does not intend to resolve, the problem of several
concurrent threads trying to access the ring buffer - such a problem may
be resolved with mutexes.
I contend this doesn't belong in the base ring buffer implementation
either. I feel you are assuming that this policy of waiting is
appropriate for all situations and I disagree.

Ring buffers aren't just used in thread context. If an ISR wants to
stick something in the ring buffer, we can't wait here.

Also, what if the thread would rather do something else than block if the
buffer is full? This is why the API returns -ENOSPC.

So I maintain that if you want to build something on top of the base
ring_buffer.h for the common cases, that is fine, but don't assume they
are what's appropriate for all uses; the base API needs to stay as it
is.
Point taken, and entirely valid. We won't touch the functionality of the
ring buffers.

I think that we should still have the code under kernel/ though, and
rename APIs to k_ring_buf_<whatever>.


Re: [RFC] Ring buffers

Boie, Andrew P
 

On Fri, 2016-09-23 at 16:46 -0400, Dmitriy Korovkin wrote:
Next, when I talked about making the reading thread wait if the buffer
is empty and the writing thread wait if there is no free space: this does
not resolve, and does not intend to resolve, the problem of several
concurrent threads trying to access the ring buffer - such a problem may
be resolved with mutexes.
I contend this doesn't belong in the base ring buffer implementation
either. I feel you are assuming that this policy of waiting is
appropriate for all situations and I disagree.

Ring buffers aren't just used in thread context. If an ISR wants to
stick something in the ring buffer, we can't wait here.

Also, what if the thread would rather do something else than block if the
buffer is full? This is why the API returns -ENOSPC.

So I maintain that if you want to build something on top of the base
ring_buffer.h for the common cases, that is fine, but don't assume they
are what's appropriate for all uses; the base API needs to stay as it
is.

Andrew


Re: [RFC] Ring buffers

Dmitriy Korovkin
 

On Fri, 23 Sep 2016, Mitsis, Peter wrote:

It occurs to me that a buffered pipe may be leveraged as a generic
ring buffer with concurrency control--though not a light-weight one.
Peter, this is a very good point. The question is: what do we get from a
ring buffer that we cannot get from a FIFO?

Thanks!
/Dmitriy


Re: [RFC] Ring buffers

Dmitriy Korovkin
 

On Fri, 23 Sep 2016, Boie, Andrew P wrote:

On Fri, 2016-09-23 at 13:27 -0700, Boie, Andrew P wrote:
NACK, for reasons below.
Just to qualify this -- if you wanted to create a general purpose
kernel object type that builds on what is in ring_buffer.h, I would not
be opposed.

But I want the existing definition left as-is as it is intentionally
lightweight and for scenarios where you have a single producer and
single consumer you don't need extra concurrency control.
Just for clarification: I never proposed a concurrency protocol. As I
stated previously, the idea discussed here is to make ring buffers
available to users the same way as other kernel objects, which for the
unified kernel means placing the object declaration into kernel.h. Not
that having the header in another place makes it unavailable; it is a
matter of convenience.

Next, when I talked about making the reading thread wait if the buffer
is empty and the writing thread wait if there is no free space: this does
not resolve, and does not intend to resolve, the problem of several
concurrent threads trying to access the ring buffer - such a problem may
be resolved with mutexes.

My apologies if I did not make it clear enough; I hope I have done so now
to avoid further confusion.

Regards,
/Dmitriy.


Re: [RFC] Ring buffers

Mitsis, Peter <Peter.Mitsis@...>
 

It occurs to me that a buffered pipe may be leveraged as a generic ring buffer with concurrency control--though not a light-weight one.

Peter

-----Original Message-----
From: Boie, Andrew P [mailto:andrew.p.boie(a)intel.com]
Sent: September-23-16 4:32 PM
To: devel(a)lists.zephyrproject.org; Korovkin, Dmitriy
Subject: [devel] Re: [RFC] Ring buffers

On Fri, 2016-09-23 at 13:27 -0700, Boie, Andrew P wrote:
NACK, for reasons below.
Just to qualify this -- if you wanted to create a general purpose kernel
object type that builds on what is in ring_buffer.h, I would not be
opposed.

But I want the existing definition left as-is as it is intentionally
lightweight and for scenarios where you have a single producer and single
consumer you don't need extra concurrency control.

Andrew


Re: [RFC] Ring buffers

Boie, Andrew P
 

On Fri, 2016-09-23 at 13:27 -0700, Boie, Andrew P wrote:
NACK, for reasons below.
Just to qualify this -- if you wanted to create a general purpose
kernel object type that builds on what is in ring_buffer.h, I would not
be opposed.

But I want the existing definition left as-is as it is intentionally
lightweight and for scenarios where you have a single producer and
single consumer you don't need extra concurrency control.

Andrew


Re: [RFC] Ring buffers

Boie, Andrew P
 

On Fri, 2016-09-23 at 15:33 -0400, Dmitriy Korovkin wrote:
Colleagues, any comment on this?

In its current state, the ring buffers appear as internal kernel objects
used by the event logger and the IPM console.

The proposal here is to make them kernel objects available to application
developers, which includes the following steps:
NACK, for reasons below.


- move the ring buffers declaration to include/kernel.h and change routine
names according to the public API naming convention;
The header is currently in misc/ring_buffer.h. This is available to
applications; why do you want to move it? Why do you think the names need
to be changed?


- pend writing threads for a certain timeout if no space available;

- pend reading threads (with timeout) if the buffer is empty.
No, don't bake this stuff into the basic ring buffer structure. If you
want to implement this kind of protocol you can build on top of the
existing ring buffer code.

It says as much in the documentation:

/**
 * @brief Place an entry into the ring buffer
 *
 * Concurrency control is not implemented, however no synchronization is
 * needed between put() and get() operations as they independently work on
 * the tail and head values, respectively.
 * Any use-cases involving multiple producers will need to synchronize use
 * of this function, by either disabling preemption or using a mutex.
 */

There is no one-size-fits-all concurrency model that makes everyone
happy; these considerations are application specific. The fact that
concurrency control is not included is definitely by design.

Andrew
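
To make the single-producer/single-consumer property concrete, here is a stripped-down sketch (not the actual misc/ring_buffer.h code) in which put() only advances the tail and get() only advances the head, so the two sides need no lock against each other on a single core; multiple producers (or consumers) would still have to synchronize among themselves:

#include <errno.h>
#include <stdint.h>

#define SPSC_SIZE 16                      /* must be a power of two */

struct spsc_buf {
	uint32_t head;                    /* read index, written only by the consumer  */
	uint32_t tail;                    /* write index, written only by the producer */
	uint32_t data[SPSC_SIZE];
};

static struct spsc_buf buf;               /* zero-initialized: empty */

static int spsc_put(struct spsc_buf *b, uint32_t value)
{
	if (b->tail - b->head == SPSC_SIZE) {
		return -ENOSPC;           /* full */
	}
	b->data[b->tail % SPSC_SIZE] = value;
	b->tail++;
	return 0;
}

static int spsc_get(struct spsc_buf *b, uint32_t *value)
{
	if (b->head == b->tail) {
		return -EAGAIN;           /* empty */
	}
	*value = b->data[b->head % SPSC_SIZE];
	b->head++;
	return 0;
}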


Re: [RFC] Ring buffers

Dmitriy Korovkin
 

On Fri, 23 Sep 2016, Andy Ross wrote:

Korovkin, Dmitriy (Wind River) wrote (on Friday, September 23, 2016 12:33PM):
- move the ring buffers declaration to include/kernel.h and change routine
names according to the public API naming convention;

- pend writing threads for a certain timeout if no space available;

- pend reading threads (with timeout) if the buffer is empty.
That sounds more like a hybrid between a ring buffer (a data
structure) and a fifo (an IPC mechanism). I think there's value in a
generic buffer for bytewise, stream-like data which isn't tied to a
particular idea of synchronization. Maybe split them up and have a
"stream" IPC abstraction that the user can wrap around a ring buffer
they provide?
The user can always add a semaphore, or specify K_NO_WAIT as a timeout value.

/Dmitriy

Andy
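
As a rough sketch of the "build on top" and "add a semaphore" ideas combined, a blocking (or K_NO_WAIT) read can be layered over a non-blocking ring buffer along these lines. This assumes the unified-kernel semaphore API with its 2016-era millisecond timeout argument; rb_put()/rb_get() are placeholders for whatever non-blocking ring buffer is being wrapped, not the real misc/ring_buffer.h calls:

#include <errno.h>
#include <stdint.h>
#include <kernel.h>               /* k_sem_*, K_NO_WAIT, K_FOREVER */

#define STREAM_CAPACITY 32

extern int rb_put(uint32_t value);    /* returns -ENOSPC when full  */
extern int rb_get(uint32_t *value);   /* returns -EAGAIN when empty */

static struct k_sem data_avail;       /* counts items currently buffered */

void stream_init(void)
{
	k_sem_init(&data_avail, 0, STREAM_CAPACITY);
}

int stream_put(uint32_t value)
{
	int ret = rb_put(value);

	if (ret == 0) {
		k_sem_give(&data_avail);   /* wake one waiting reader */
	}
	return ret;    /* on -ENOSPC the caller decides whether to retry */
}

/* timeout: K_NO_WAIT to poll, K_FOREVER to block until data is available */
int stream_get(uint32_t *value, int32_t timeout)
{
	if (k_sem_take(&data_avail, timeout) != 0) {
		return -EAGAIN;
	}
	return rb_get(value);
}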


Re: [RFC] Ring buffers

Andy Ross
 

Korovkin, Dmitriy (Wind River) wrote (on Friday, September 23, 2016 12:33PM):
- move the ring buffers declaration to include/kernel.h and change routine
names according to the public API naming convention;

- pend writing threads for a certain timeout if no space available;

- pend reading threads (with timeout) if the buffer is empty.
That sounds more like a hybrid between a ring buffer (a data
structure) and a fifo (an IPC mechanism). I think there's value in a
generic buffer for bytewise, stream-like data which isn't tied to a
particular idea of synchronization. Maybe split them up and have a
"stream" IPC abstraction that the user can wrap around a ring buffer
they provide?

Andy


[RFC] Ring buffers

Dmitriy Korovkin
 

Colleagues, any comment on this?

In its current state, the ring buffers appear as internal kernel objects
used by the event logger and the IPM console.

The proposal here is to make them kernel objects available to application
developers, which includes the following steps:

- move the ring buffers declaration to include/kernel.h and change routine
names according to the public API naming convention;

- pend writing threads for a certain timeout if no space available;

- pend reading threads (with timeout) if the buffer is empty.

Is there a need for this?

Regards,
Dmitriy Korovkin


Re: Timer utility function to use single timer

Boie, Andrew P
 

On Fri, 2016-09-23 at 09:41 -0700, Andy Ross wrote:
Would anyone object to a patch that set CONFIG_NUM_DYNAMIC_TIMERS to
zero by default and disabled the API unless it was 1 or greater?  Or
maybe deprecating it for future removal?
+1 on deprecation!

Andrew


Re: Timer utility function to use single timer

Andy Ross
 

Korovkin, Dmitriy wrote:
On Fri, 23 Sep 2016, Andy Ross wrote:
Would anyone object to a patch that set CONFIG_NUM_DYNAMIC_TIMERS to
zero by default and disabled the API unless it was 1 or greater? Or
maybe deprecating it for future removal?
I was also thinking of setting this to 0 to save some space.
/Dmitriy
https://gerrit.zephyrproject.org/r/#/c/4954/


Re: Timer utility function to use single timer

Dmitriy Korovkin
 

On Fri, 23 Sep 2016, Andy Ross wrote:

Bag, Amit K wrote:
In Zephyr there is a chance of running out of timers at times, and we
will not get a timer handle to use for our modules.

This means that whenever we require a timer for our module we can call a
timer API defined in Zephyr
(i.e. *task_timer_alloc(), task_timer_start(), task_timer_stop(), etc.*)
I'm a little curious about this API design too. AFAICT, it's always
legal to statically allocate a k_timer struct of your own and
initialize and use it at runtime. If you need to know you won't run
out of timers, you can be guaranteed not to lose this one.

The allocate/free API looks like it's just a convenience wrapper to
allow sharing of timer objects between usages if you know they aren't
all going to be needed simultaneously.

The problem, though, is that to get this facility Zephyr allocates 10
(by default) k_timer objects in a pool and shares only those.
Obviously Zephyr has no heap, so it can't share the memory as anything
but timers. And these aren't tiny objects. My quick manual count
says that they're 64 bytes apiece, so that's half a KB of RAM that
we're allocating in every default-configured app just to save a
handful of bytes in apps that want to use the "sharing" convenience
API. That seems like a bad trade to me.

Would anyone object to a patch that set CONFIG_NUM_DYNAMIC_TIMERS to
zero by default and disabled the API unless it was 1 or greater? Or
maybe deprecating it for future removal?
I was also thinking of setting this to 0 to save some space.
/Dmitriy

Andy


Re: RFC: Initializing and placement of thread/fiber/task stacks

Mitsis, Peter <Peter.Mitsis@...>
 

Apparently, redefining '__stack' for both a thread's stack alignment and stack section placement will not work, as we have many instances where those stacks are embedded within other structures.

Thus, I would like to modify the original proposal to the following:


1. Introduce a new tag (__stack_section) that will put thread stacks into the appropriate linker section. If CONFIG_INIT_STACKS is enabled, this stack section will be initialized to 0xaa's at boot time.

2. Rename the existing '__stack' tag to '__stack_align'. This is to make it clear that the tag is strictly for ensuring proper alignment, and that it does not become confused with '__stack_section'.

3. Update the thread initialization code to initialize its stack to 0xaa's only if ...

a. CONFIG_INIT_STACKS is enabled

b. AND the stack is not in the stack section (for when the stack is embedded in another structure).

Thoughts, comments, objections?

Peter Mitsis

From: Mitsis, Peter [mailto:Peter.Mitsis(a)windriver.com]
Sent: September-21-16 4:21 PM
To: devel(a)lists.zephyrproject.org
Subject: [devel] RFC: Initializing and placement of thread/fiber/task stacks

Any objections/comments to the following?

Thread stacks (as defined by the __stack attribute) should be placed into their own section. By default this section should be a part of the .noinit section so that they do not impact run time performance. However, if the Kconfig option INIT_STACKS is enabled, then the stacks should be placed into their own unique section and their stack space should be initialized at boot time to a known value (0xaa).

Although this will result in a longer boot time (when INIT_STACKS is enabled), it means that aside from the potential boot time penalty, there is never a stack initialization run time penalty when creating a thread. This may be particularly beneficial should a thread be restarted or thread-space be reused, as the stack will not be re-initialized. Note that this changes the semantics of measured thread stack usage: the reported usage becomes that of all threads that have used that stack.

Peter Mitsis
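
A rough sketch of what the two tags proposed above might expand to (the section name, alignment value, and the linker-script glue are illustrative; the real definitions belong in the arch and linker headers):

#define DEMO_STACK_ALIGN 4

/* Places a stack in a dedicated linker section; the linker script would
 * map this into .noinit, or into its own output section when
 * CONFIG_INIT_STACKS is enabled so it can be filled with 0xaa at boot.
 */
#define __stack_section __attribute__((section(".stacks")))

/* Only guarantees alignment; safe for stacks embedded in other structures. */
#define __stack_align __attribute__((aligned(DEMO_STACK_ALIGN)))

/* Example uses: */
static char my_thread_stack[1024] __stack_section __stack_align;

struct my_driver_data {
	int state;
	char worker_stack[512] __stack_align;  /* embedded: alignment only */
};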


Re: Timer utility function to use single timer

Andy Ross
 

Bag, Amit K wrote:
In Zephyr there is a chance of running out of timers at times, and we
will not get a timer handle to use for our modules.

This means that whenever we require a timer for our module we can call a
timer API defined in Zephyr
(i.e. *task_timer_alloc(), task_timer_start(), task_timer_stop(), etc.*)
I'm a little curious about this API design too. AFAICT, it's always
legal to statically allocate a k_timer struct of your own and
initialize and use it at runtime. If you need to know you won't run
out of timers, you can be guaranteed not to lose this one.

The allocate/free API looks like it's just a convenience wrapper to
allow sharing of timer objects between usages if you know they aren't
all going to be needed simultaneously.

The problem, though, is that to get this facility Zephyr allocates 10
(by default) k_timer objects in a pool and shares only those.
Obviously Zephyr has no heap, so it can't share the memory as anything
but timers. And these aren't tiny objects. My quick manual count
says that they're 64 bytes apiece, so that's half a KB of RAM that
we're allocating in every default-configured app just to save a
handful of bytes in apps that want to use the "sharing" convenience
API. That seems like a bad trade to me.

Would anyone object to a patch that set CONFIG_NUM_DYNAMIC_TIMERS to
zero by default and disabled the API unless it was 1 or greater? Or
maybe deprecating it for future removal?

Andy
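
A minimal sketch of the statically allocated alternative Andy describes, using the unified-kernel k_timer API (the duration/period arguments are shown as milliseconds, as in the 2016 unified kernel; later Zephyr versions changed the timeout argument type):

#include <kernel.h>

static struct k_timer my_timer;    /* statically allocated: never "runs out" */

static void my_timer_expired(struct k_timer *timer)
{
	/* handle expiry, e.g. give a semaphore or submit deferred work */
}

void my_module_start(void)
{
	k_timer_init(&my_timer, my_timer_expired, NULL);
	k_timer_start(&my_timer, 100, 100);   /* fire every 100 ms */
}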


Re: RFC: Thread abort handler in unified kernel

Andy Ross
 

Dmitriy Korovkin wrote:
Zephyr microkernel tasks allow the user to set the task abort handler by
invoking task_abort_handler_set(). The abort handler is executed by the
microkernel server fiber when it aborts the task.

As we move to the unified kernel, it may be a good time to decide whether
this functionality is needed in the future (for now it is needed for
compatibility, of course).
What's the use case? We don't really have a working exception/longjmp
framework that could clean up things after a stack unwind, so what
value is an "abort handler" going to provide anyway? A quick grep
says that we don't even have a test case for this (it gets linked in
explicitly by the footprint test, but never called).

I agree we should junk it on general principle.

Andy
