
Re: RFC: BSD Socket (like) API

Jukka Rissanen
 

Hi Paul,

On Sun, 2017-03-26 at 19:06 +0300, Paul Sokolovsky wrote:
Hello,

Support for the BSD Sockets API in Zephyr is one of the features most
frequently asked for by different parties. The possibility of such support
was a topic of discussion at the recent OpenIoT Summit Portland and Zephyr
Mini-summit Budapest. I'm happy to report that I didn't hear a single
"con" vote; most people I heard or talked with were positive, and they
were either interested in it themselves, or at least OK with it if it's
properly layered and doesn't bloat the existing networking API.

So, based on that, Linaro Zephyr team would like to proceed with
bootstrapping work on this, collecting initial requirements, and
starting prototyping. I submitted a Jira Epic
https://jira.zephyrproject.org/browse/ZEP-1921 for this feature,
which
has a detailed, even if maybe somewhat unstructured discussion of
initial ideas/requirements.

I won't paste it here, instead inviting interested parties to read it
there. Instead, here's a summary of the initial ideas:

1. There's no talk about implementing a complete, 100% (or 99.9%)
POSIX-compliant sockets API. We may get there eventually, but that would
require stakeholders for each expensive or hard-to-implement feature.
The current approach is that we value the lightweight nature of Zephyr,
and are looking towards finding a minimal set of changes (additions) to
provide a BSD Sockets *like* API for Zephyr.
The definition of what a BSD Socket *like* system is seems to differ from
person to person. To me, the current net_context API in Zephyr is quite
BSD socket like, meaning that the API provides functions similar to those
found in the BSD socket API, such as open, close, bind, connect, accept,
etc. So porting an application is quite easy in this respect.

The bigger differences between the BSD socket API and the Zephyr
net_context API are:
* The net_context API uses net_buf to pass data. A net_buf does not
provide linear memory; data needs to be partitioned when sending and
read in chunks when receiving. We have helpers defined in nbuf.h for
reading/writing data in this case. The issue with the linear-memory
approach is that it uses much more memory, as we need to be prepared to
receive chunks of at least 1280 bytes (the IPv6 minimum packet size).

* The net_context API is asynchronous and the caller needs to have
callbacks defined, whereas the BSD socket API is synchronous. The
net_context API can be used in a synchronous way, so this is a smaller
issue imho.

Having a BSD socket API on top of net_context will use more memory, so
if one is concerned about memory consumption, the native API should be
preferred.
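
To make the trade-off concrete, here is a rough, untested sketch of how a
blocking, BSD-style receive could be layered on the asynchronous
net_context API (the callback signature and the nbuf helpers are recalled
from memory, so treat the details as assumptions rather than a concrete
proposal): the receive callback queues incoming net_bufs on a k_fifo, and
the pull side copies the fragment chain into the caller's linear buffer.

#include <kernel.h>
#include <string.h>
#include <net/net_context.h>
#include <net/nbuf.h>

static struct k_fifo rx_fifo;   /* k_fifo_init()'d at socket creation */

/* Registered with net_context_recv(ctx, recv_cb, K_NO_WAIT, NULL);
 * the stack pushes data to us, we just park it in the FIFO. */
static void recv_cb(struct net_context *ctx, struct net_buf *buf,
                    int status, void *user_data)
{
	if (buf) {
		k_fifo_put(&rx_fifo, buf);
	}
}

/* Pull-style, blocking read in the spirit of BSD recv(). For brevity it
 * copies the whole fragment chain; a real implementation would start at
 * net_nbuf_appdata()/net_nbuf_appdatalen() to skip protocol headers. */
static int sock_recv(void *data, size_t len)
{
	struct net_buf *buf = k_fifo_get(&rx_fifo, K_FOREVER);
	size_t copied = 0;

	for (struct net_buf *frag = buf->frags; frag && copied < len;
	     frag = frag->frags) {
		size_t chunk = frag->len;

		if (chunk > len - copied) {
			chunk = len - copied;
		}
		memcpy((uint8_t *)data + copied, frag->data, chunk);
		copied += chunk;
	}
	net_nbuf_unref(buf);
	return (int)copied;
}

This is also where the extra memory mentioned above comes from: the caller
has to supply a linear buffer large enough for whatever may arrive, in
addition to the net_buf fragments already held by the stack.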


2. The specific feature set of initial interest is as follows. The
current Zephyr networking API is push-based: the OS pushes data into an
application (via a callback). The BSD Sockets API, in contrast, is
pull-based: an application asks the OS for new data to process whenever
it chooses. Implementing this pull-style API is the initial focus of the
effort.

3. The work is going to be guided by the porting effort of an actual
application which needs the BSD Sockets API (MicroPython). That would
serve as an initial (out-of-tree) prototype, with the relevant code to be
optimized and merged in-tree for wider reuse.


Consequently, questions to the Zephyr community:

1. Are you interested in a BSD Sockets like API?
2. Does the plan above sound good? Any important points to take
care of right from the beginning?
3. Or would you do something differently?
4. Any other feedback is appreciated.



Thanks,
Paul

Cheers,
Jukka


Re: RFC: BSD Socket (like) API

Tomasz Bursztyka
 

Hi Hughes,


https://github.com/apache/incubator-mynewt-core/tree/develop/net/ip/mn_socket/include/mn_socket

I’d like to point to Mynewt’s socket APIs as a reference here, a couple of things we considered:

1- Don’t keep POSIX names unless you are going to be POSIX compliant. When running the system simulated, it’s often helpful to use system sockets to communicate and do things; if you use POSIX names, you have conflicts.
1a- This also allows you to have a “true” POSIX mapping layer on top, for people who have more memory and truly want socket-level compatibility.

2- FDs just waste memory, add locking and make things harder to debug, use socket structures.
I don't know Mynewt, but if I want a BSD socket API in Zephyr, I would like to see no prefix in front of types and functions.
Sure, it's not going to be 100% POSIX compliant, and that's not the point here anyway.

But I really want to use socket, bind, listen, etc. as I mostly would in any other OS.

For the FD, I hope we can make any struct pointer look like one.
Maybe there are limitations, however; I'm not sure.
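
As a sketch of that idea (every name below is hypothetical, not an
existing Zephyr API), an opaque struct pointer could take the place of the
integer file descriptor while keeping the familiar call shapes:

struct zsock;                        /* opaque; could wrap a net_context */
typedef struct zsock *zsock_t;

zsock_t zsock_socket(int family, int type, int proto);
int     zsock_bind(zsock_t s, const struct sockaddr *addr, socklen_t len);
int     zsock_listen(zsock_t s, int backlog);
zsock_t zsock_accept(zsock_t s, struct sockaddr *addr, socklen_t *len);
ssize_t zsock_recv(zsock_t s, void *buf, size_t len, int flags);
int     zsock_close(zsock_t s);

A thin POSIX layer could then be a set of small wrappers mapping
socket()/bind()/... onto these, for those who want the unprefixed names
and can afford a file descriptor table.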


3- Allow for multiple socket providers (https://github.com/apache/incubator-mynewt-core/blob/develop/net/ip/mn_socket/include/mn_socket/mn_socket_ops.h), that way it should be easy to “offload” the IP stack for cases (e.g. WINC1500) where the IP stack is not on-chip, or where somebody wants to use an existing commercial/industrial IP stack.
Offloading is already handled through net_context, so it will already work seamlessly.

Br,

Tomasz


Re: RFC: BSD Socket (like) API

Sterling Hughes <sterling@...>
 

+1

https://github.com/apache/incubator-mynewt-core/tree/develop/net/ip/mn_socket/include/mn_socket

I’d like to point to Mynewt’s socket APIs as a reference here, a couple of things we considered:

1- Don’t keep POSIX names unless you are going to be POSIX compliant. When running the system simulated, it’s often helpful to use system sockets to communicate and do things; if you use POSIX names, you have conflicts.
1a- This also allows you to have a “true” POSIX mapping layer on top, for people who have more memory and truly want socket-level compatibility.

2- FDs just waste memory, add locking and make things harder to debug, use socket structures.

3- Allow for multiple socket providers (https://github.com/apache/incubator-mynewt-core/blob/develop/net/ip/mn_socket/include/mn_socket/mn_socket_ops.h), that way it should be easy to “offload” the IP stack for cases (e.g. WINC1500) where the IP stack is not on-chip, or where somebody wants to use an existing commercial/industrial IP stack. (A generic sketch of this provider-ops idea follows after this list.)

4- If you are interested in unifying this API with Mynewt, we’d be happy to talk about agreeing on a unified API for sub-embedded sockets.
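
To illustrate point 3, a provider ops table could look roughly like the
following; this is a generic sketch with made-up names, not the actual
mn_socket_ops definition:

struct sock;    /* per-socket state owned by whichever provider created it */

struct sock_provider_ops {
	int (*open)(struct sock *s, int family, int type, int proto);
	int (*bind)(struct sock *s, const struct sockaddr *addr, socklen_t len);
	int (*connect)(struct sock *s, const struct sockaddr *addr, socklen_t len);
	int (*sendto)(struct sock *s, const void *buf, size_t len,
		      const struct sockaddr *to, socklen_t tolen);
	int (*recvfrom)(struct sock *s, void *buf, size_t len,
			struct sockaddr *from, socklen_t *fromlen);
	int (*close)(struct sock *s);
};

The native IP stack registers one such table; an offload part like the
WINC1500 registers another that forwards the calls to its firmware.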

Best,

Sterling

On 26 Mar 2017, at 9:06, Paul Sokolovsky wrote:

Hello,

Support for the BSD Sockets API in Zephyr is one of the features most
frequently asked for by different parties. The possibility of such support
was a topic of discussion at the recent OpenIoT Summit Portland and Zephyr
Mini-summit Budapest. I'm happy to report that I didn't hear a single
"con" vote; most people I heard or talked with were positive, and they
were either interested in it themselves, or at least OK with it if it's
properly layered and doesn't bloat the existing networking API.

So, based on that, Linaro Zephyr team would like to proceed with
bootstrapping work on this, collecting initial requirements, and
starting prototyping. I submitted a Jira Epic
https://jira.zephyrproject.org/browse/ZEP-1921 for this feature, which
has a detailed, even if maybe somewhat unstructured discussion of
initial ideas/requirements.

I won't paste it here, instead inviting interested parties to read it
there. Instead, here's a summary of the initial ideas:

1. There's no talk about implementing a complete, 100% (or 99.9%)
POSIX-compliant sockets API. We may get there eventually, but that would
require stakeholders for each expensive or hard-to-implement feature.
The current approach is that we value the lightweight nature of Zephyr,
and are looking towards finding a minimal set of changes (additions) to
provide a BSD Sockets *like* API for Zephyr.

2. The specific feature set of initial interest is as follows. The
current Zephyr networking API is push-based: the OS pushes data into an
application (via a callback). The BSD Sockets API, in contrast, is
pull-based: an application asks the OS for new data to process whenever
it chooses. Implementing this pull-style API is the initial focus of the
effort.

3. The work is going to be guided by the porting effort of an actual
application which needs the BSD Sockets API (MicroPython). That would
serve as an initial (out-of-tree) prototype, with the relevant code to be
optimized and merged in-tree for wider reuse.


Consequently, questions to the Zephyr community:

1. Are you interested in a BSD Sockets like API?
2. Does the plan above sound good? Any important points to take
care of right from the beginning?
3. Or would you do something differently?
4. Any other feedback is appreciated.



Thanks,
Paul

Linaro.org | Open source software for ARM SoCs
Follow Linaro: http://www.facebook.com/pages/Linaro
http://twitter.com/#!/linaroorg - http://www.linaro.org/linaro-blog
_______________________________________________
Zephyr-devel mailing list
Zephyr-devel@lists.zephyrproject.org
https://lists.zephyrproject.org/mailman/listinfo/zephyr-devel


RFC: BSD Socket (like) API

Paul Sokolovsky
 

Hello,

Support for the BSD Sockets API in Zephyr is one of the features most
frequently asked for by different parties. The possibility of such support
was a topic of discussion at the recent OpenIoT Summit Portland and Zephyr
Mini-summit Budapest. I'm happy to report that I didn't hear a single
"con" vote; most people I heard or talked with were positive, and they
were either interested in it themselves, or at least OK with it if it's
properly layered and doesn't bloat the existing networking API.

So, based on that, Linaro Zephyr team would like to proceed with
bootstrapping work on this, collecting initial requirements, and
starting prototyping. I submitted a Jira Epic
https://jira.zephyrproject.org/browse/ZEP-1921 for this feature, which
has a detailed, even if maybe somewhat unstructured discussion of
initial ideas/requirements.

I won't paste it here, instead inviting interested parties to read it
there. Instead, here's a summary of the initial ideas:

1. There's no talk about implementing a complete, 100% (or 99.9%)
POSIX-compliant sockets API. We may get there eventually, but that would
require stakeholders for each expensive or hard-to-implement feature.
The current approach is that we value the lightweight nature of Zephyr,
and are looking towards finding a minimal set of changes (additions) to
provide a BSD Sockets *like* API for Zephyr.

2. The specific feature set of initial interest is as follows. The
current Zephyr networking API is push-based: the OS pushes data into an
application (via a callback). The BSD Sockets API, in contrast, is
pull-based: an application asks the OS for new data to process whenever
it chooses. Implementing this pull-style API is the initial focus of the
effort.

3. The work is going to be guided by the porting effort of an actual
application which needs the BSD Sockets API (MicroPython). That would
serve as an initial (out-of-tree) prototype, with the relevant code to be
optimized and merged in-tree for wider reuse.


Consequently, questions to the Zephyr community:

1. Are you interested in a BSD Sockets like API?
2. Does the plan above sound good? Any important points to take
care of right from the beginning?
3. Or would you do something differently?
4. Any other feedback is appreciated.



Thanks,
Paul

Linaro.org | Open source software for ARM SoCs
Follow Linaro: http://www.facebook.com/pages/Linaro
http://twitter.com/#!/linaroorg - http://www.linaro.org/linaro-blog


Re: NXP FRDM-K64F and OpenOCD from 0.9.0 SDK issue

Piotr Król <piotr.krol@...>
 

On 03/22/2017 03:42 PM, Piotr Król wrote:


On 03/21/2017 09:24 PM, Maureen Helm wrote:
Hi Piotr,
Hi Maureen,
Hi all,


I will describe this better in next email and probably contact OpenOCD
community meanwhile.
I did more research and found two configurations that work for me. The first uses CMSIS-DAP 0226, but it generates the previously mentioned errors and is a little bit slow. The second uses the Segger JLink v2 firmware from:

http://www.nxp.com/products/software-and-tools/run-time-software/kinetis-software-and-tools/ides-for-kinetis-mcus/opensda-serial-and-debug-adapter:OPENSDA?tid=vanOpenSDA#FRDM-K64F

It requires Segger binaries provided with KDS, but is very fast and reliable.

One other mistake I made was not using the `load` instruction before starting the debugging session; that's why breakpoints didn't work.

I described my experience on my blog for people having this kind of problem in the future:

https://blog.3mdeb.com/2017/03/18/development-environment-for-zephyros-on-nxp-frdm-k64f/

Best Regards,
--
Piotr Król
Embedded Systems Consultant
http://3mdeb.com | @3mdeb_com


Re: LWM2M: Call for proposals

Joakim Eriksson <joakim.eriksson@...>
 

Yes, that is correct - that is the one we are working with and is most current.

Best regards,
— Joakim



On 24 Mar 2017, at 21:46, Anas Nashif <nashif@...> wrote:

Hi,


On Tue, Mar 21, 2017 at 6:47 PM, Ricardo Salveti de Araujo <ricardo.salveti@...> wrote:
On Tue, Mar 21, 2017 at 5:22 PM, Joakim Eriksson <joakim.eriksson@...> wrote:
> Hello Anas,
>
> I would like to try our LWM2M client-only implementation in Zephyr, and as I have recently been working hard
> to get it more portable and independent of Contiki, it should be fairly easy, I hope.

Is your implementation available anywhere public already?

I might be mistaken, but I think this is the standalone version:


Joakim, is this correct?

Anas

_______________________________________________
Zephyr-devel mailing list
Zephyr-devel@...
https://lists.zephyrproject.org/mailman/listinfo/zephyr-devel


Re: LWM2M: Call for proposals

Anas Nashif
 

Hi,


On Tue, Mar 21, 2017 at 6:47 PM, Ricardo Salveti de Araujo <ricardo.salveti@...> wrote:
On Tue, Mar 21, 2017 at 5:22 PM, Joakim Eriksson <joakim.eriksson@...> wrote:
> Hello Anas,
>
> I would like to try our LWM2M client-only implementation in Zephyr, and as I have recently been working hard
> to get it more portable and independent of Contiki, it should be fairly easy, I hope.

Is your implementation available anywhere public already?

I might be mistaken, but I think this is the standalone version:


Joakim, is this correct?

Anas


Re: Kernel MS Precision

Marti Bolivar <marti.bolivar@...>
 

On 24 March 2017 at 10:16, Marti Bolivar <marti.bolivar@linaro.org> wrote:
On 24 March 2017 at 10:05, Benjamin Walsh <benjamin.walsh@windriver.com> wrote:
On Fri, Mar 24, 2017 at 08:24:17AM +0000, Andreas Lenz wrote:
Hi Ben,

#define US_TIMEOUT(us) \
(int32_t)((((uint32_t)(us)) & 0x3fffffff) | 0x80000000)
// ^^^^^^^^^^^^^^^^^^^^^^^^
// keep the two upper bits as control bits just in
// case '10' would mean 'microseconds', '11' could
// mean something else
You could also use the full bits and add one additional byte to
specify the unit of the number.

Timers store their unit together with duration and period. For example
k_timer_start(timer, 100, 0, K_MSECONDS)
k_timer_start(timer, 100, 0, K_USECONDS)
Yeah, but that is not backwards-compatible with the API. And that only
works for timers, not the other APIs that take timeouts. Although, that
might be irrelevant.

For the "mean something else", I have a use case for low-priority, or
lazy timers.

They don't prevent the kernel from going into idle; they expire later
when the system wakes up again.
Interesting idea. That could be a new API for timers though, it doesn't
have to modify an already existing one.
In a similar vein, could you add a new timer API that takes units, and
(conditioned on a config option to avoid 64 bit math on targets that
want to avoid it) implement the existing timer API on top of it for
compatibility?

void k_timer_start_prec(timer, duration, period, units);
int64_t k_timer_remaining_get(timer, units);
Meant k_timer_remaining_get_prec above. Sorry.


Marti


Re: Kernel MS Precision

Marti Bolivar <marti.bolivar@...>
 

On 24 March 2017 at 10:05, Benjamin Walsh <benjamin.walsh@windriver.com> wrote:
On Fri, Mar 24, 2017 at 08:24:17AM +0000, Andreas Lenz wrote:
Hi Ben,

#define US_TIMEOUT(us) \
(int32_t)((((uint32_t)(us)) & 0x3fffffff) | 0x80000000)
// ^^^^^^^^^^^^^^^^^^^^^^^^
// keep the two upper bits as control bits just in
// case '10' would mean 'microseconds', '11' could
// mean something else
You could also use the full bits and add one additional byte to
specify the unit of the number.

Timers store their unit together with duration and period. For example
k_timer_start(timer, 100, 0, K_MSECONDS)
k_timer_start(timer, 100, 0, K_USECONDS)
Yeah, but that is not backwards-compatible with the API. And that only
works for timers, not the other APIs that take timeouts. Although, that
might be irrelevant.

For the "mean something else", I have a use case for low-priority, or
lazy timers.

They don't prevent the kernel from going into idle; they expire later
when the system wakes up again.
Interesting idea. That could be a new API for timers though, it doesn't
have to modify an already existing one.
In a similar vein, could you add a new timer API that takes units, and
(conditioned on a config option to avoid 64 bit math on targets that
want to avoid it) implement the existing timer API on top of it for
compatibility?

void k_timer_start_prec(timer, duration, period, units);
int64_t k_timer_remaining_get(timer, units);
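
For illustration only, the compatibility wrapper could look like the
sketch below; K_MSECONDS/K_USECONDS and the _prec name are just the
proposals from this thread, and CONFIG_TIMER_UNITS is a made-up Kconfig
symbol gating the 64-bit math:

enum k_time_unit { K_MSECONDS, K_USECONDS };

void k_timer_start_prec(struct k_timer *timer, int64_t duration,
                        int64_t period, enum k_time_unit unit);

#ifdef CONFIG_TIMER_UNITS
/* The existing ms-based call keeps its name and semantics, but becomes a
 * thin wrapper over the unit-aware API. */
static inline void k_timer_start(struct k_timer *timer,
                                 int32_t duration, int32_t period)
{
	k_timer_start_prec(timer, duration, period, K_MSECONDS);
}
#endif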

Marti


Re: RFC: Random numbers

Luiz Augusto von Dentz
 

Hi,

On Wed, Mar 22, 2017 at 3:40 PM, Luiz Augusto von Dentz
<luiz.dentz@gmail.com> wrote:
Hi Marcus,

On Wed, Mar 22, 2017 at 2:34 PM, Marcus Shawcroft
<marcus.shawcroft@gmail.com> wrote:
Hi Luiz

On 22 March 2017 at 11:26, Luiz Augusto von Dentz <luiz.dentz@gmail.com> wrote:
Hi Marcus,

Let's move the discussion of
https://gerrit.zephyrproject.org/r/#/c/12341 here, since it is
quite important to get it right if we intend Zephyr to be a somewhat
secure OS.
My last set of comments in gerrit and this RFC crossed, I'll repost my
comments here in the thread:

> Maybe sys_urand32_get in addition to sys_rand32_get so we mimic
> /dev/urandom and /dev/random. sys_urand32_get might be PRNG based
> and should be considerably faster considering sys_rand32_get can
> block if it doesn't have enough entropy.
This seems reasonable. It would be good to choose names that more
clearly articulate the TRNG / PRNG aspect of their behaviour, its an
important distinction. In my mind the 'u' distinction is not
'obvious' enough. I would also advocate that any new interfaces we
add should drop the uint32_t chunks of entropy and instead adopt a
more flexible interface along the lines of:
For a developer without much expertise in what TRNG/PRNG really means,
myself included, I'm not sure how using these terms would improve the
situation; in fact, I think it would confuse people. Also, after reading
a bit more about TRNG, there doesn't seem to be a solution that wouldn't
involve dedicated hardware; perhaps that is why the Linux /dev/random
and /dev/urandom manpage only talks about CPRNG.

To me it is much more important that we define these in terms of
behavior, which should then translate into caring or not caring about
entropy quality. With that in mind, we may decide to add a timeout
parameter to the random number generator and then use that to decide the
quality of the entropy to use: if the user cannot wait, then perhaps
HMAC_PRNG shall be sufficient; otherwise it shall read from the entropy
pool directly.

int some_function_that_gets_entropy(uint8_t *buffer, uint16_t length);
I'd suggest something like this:

int sys_random_get(uint8_t *buffer, uint16_t length, uint32_t timeout);
int sys_random_put(const uint8_t *buffer, uint16_t length, uint32_t timeout);

I was intending to use a k_msgq to implement the entropy pool, but if
we put and get byte by byte I think I might have to reconsider, or
perhaps handle the chunks internally by calling k_msgq_put and
k_msgq_get multiple times. I'm not sure I would be able to honor the
timeout properly that way, so perhaps it would be a better idea to
define a minimal entropy size; if the caller needs more than that, it
should call the function multiple times.
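
A minimal sketch of that per-byte k_msgq approach (sys_random_get/put are
the names proposed above, not an existing API; the pool size is arbitrary
and the timeout is applied per byte, which is exactly the
"honoring the timeout" problem mentioned):

#include <kernel.h>

/* One message == one byte of entropy. */
K_MSGQ_DEFINE(entropy_pool, sizeof(uint8_t), 256, 1);

/* Drivers and other entropy sources push harvested bytes here. */
int sys_random_put(const uint8_t *buffer, uint16_t length, int32_t timeout)
{
	for (uint16_t i = 0; i < length; i++) {
		int ret = k_msgq_put(&entropy_pool, (void *)&buffer[i], timeout);

		if (ret != 0) {
			return ret;     /* pool full and timeout expired */
		}
	}
	return 0;
}

/* Consumers pull random bytes, blocking up to 'timeout' for each one. */
int sys_random_get(uint8_t *buffer, uint16_t length, int32_t timeout)
{
	for (uint16_t i = 0; i < length; i++) {
		int ret = k_msgq_get(&entropy_pool, &buffer[i], timeout);

		if (ret != 0) {
			return ret;     /* pool empty and timeout expired */
		}
	}
	return 0;
}

Defining a minimal entropy size, as suggested above, would let a whole
request be a single k_msgq message and make the timeout apply only once.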

> > On systems with copious, low cost HW entropy we could simply wire
> > sys_prng_get() to the hw entropy source and bypass the prng
> > completely.
>
> Btw, isn't depending on one source of entropy alone bad/broken? I
> understand it is currently like this because the did not exist any
> way to collect entropy from other sources, but now we are talking
> about introducing one so we might as well switch from the driver
> given the random number to the driver working as a source of
> entropy which is then collected by random subsystem.
Fair point, if there are multiple sources available then best practice
would be to mix all the sources. I think that this therefore implies
the legacy/existing sys_rand32_get() function should be rewired to
pull entropy from a pool and the pool should be fed by all available
sources. However, I am aware that finding other sources of entropy in
a system is a really hard problem since most if not all can be
externally biased. The interface between a pool and the sources of
entropy is likely to be slightly awkward. On the one hand we have the
"random" drivers that can just be called to produce entropy on demand
(although perhaps with limited bandwidth); in this case a pull
interface works. On the other hand, harvesting entropy from other
parts of the system will likely need to be structured as a push
interface.
I guess we can have both pull and push; for the most part it should be
a push interface feeding the entropy pool, but as soon as the pool runs
out or we need a new seed we should attempt to pull. Obviously the
pull method shall only be used in case the user has provided a
timeout; that way the driver can go ahead and take that time to
generate more entropy and, when it is done, wake up the thread waiting
for it.

We may also add a k_work to request more entropy from the driver in
case we are short of entropy in the pool; that should prevent errors
when users need a random number immediately that could otherwise be
provided by e.g. HMAC_PRNG but fails since it needs to be reseeded.
It turns out I was wrong in guessing how it works in Linux; in fact
both /dev/random and /dev/urandom use a PRNG. The only difference is
how they are read: random reads from a pool which collects entropy
_after_ the PRNG gets reseeded, while urandom just reads directly from
the PRNG generator:

http://www.2uo.de/myths-about-urandom/

We also probably need entropy estimation, de-biasing and whitening
before reseeding, or we trust the sources to do that properly, but I'm
afraid we might need some form of whitening anyway.


> Btw, regarding the implementation sys_urand32_get, if you agree
> with that, that might use sys_rand32_get to seed.
This structure seems reasonable to me.

Cheers
/Marcus


--
Luiz Augusto von Dentz


--
Luiz Augusto von Dentz


Re: Kernel MS Precision

Benjamin Walsh <benjamin.walsh@...>
 

On Fri, Mar 24, 2017 at 08:24:17AM +0000, Andreas Lenz wrote:
Hi Ben,

#define US_TIMEOUT(us) \
(int32_t)((((uint32_t)(us)) & 0x3fffffff) | 0x80000000)
// ^^^^^^^^^^^^^^^^^^^^^^^^
// keep the two upper bits as control bits just in
// case '10' would mean 'microseconds', '11' could
// mean something else
You could also use the full bits and add one additional byte to
specify the unit of the number.

Timers store their unit together with duration and period. For example
k_timer_start(timer, 100, 0, K_MSECONDS)
k_timer_start(timer, 100, 0, K_USECONDS)
Yeah, but that is not backwards-compatible with the API. And that only
works for timers, not the other APIs that take timeouts. Although, that
might be irrelevant.

For the "mean something else", I have a use case for low-priority, or
lazy timers.

They don't prevent the kernel from going into idle; they expire later
when the system wakes up again.
Interesting idea. That could be a new API for timers though, it doesn't
have to modify an already existing one.

k_timer_start_lazy(timer, <timeout>);

Actually, it would probably have to be handled differently as well,
since the current implementation of timeouts does not handle having more
expired ticks than the next timer to expire, and this condition would
happen with this new feature when the kernel is in tickless idle.

What I have in mind is battery monitoring where checks should be done
about once every hour, but only when the system is active.

However, K_FOREVER might be problematic as the time can wrap.

Best regards,
Andreas


Re: dhcp integration into the platform

Luiz Augusto von Dentz
 

Hi Marcus,

On Fri, Mar 24, 2017 at 11:59 AM, Marcus Shawcroft
<marcus.shawcroft@gmail.com> wrote:
Hi,


On 23 March 2017 at 19:26, Luiz Augusto von Dentz <luiz.dentz@gmail.com> wrote:
Hi Marcus,

On Thu, Mar 23, 2017 at 6:51 PM, Marcus Shawcroft
<marcus.shawcroft@gmail.com> wrote:
Hi,

The network interface patches proposed as a result of this thread have
generated a fair amount of discussion both in patch reviews and in
IRC. Now would seem like a good time to summarize where we are and
pull together some of the various discussion points that have been
raised.

Current status:

A bunch of preparatory patches to dhcpv4 have been merged. Notable changes:
- Public interface now provides net_dhcpv4_start(iface) and
net_dhcpv4_stop(iface).
- Various initialization issues that would prevent concurrent dhcpv4
operation on multiple ifaces are resolved.
- dhcpv4 will now remove leased resources from the network stack on
lease lapse/release.

There is one more small but significant dhcpv4 patch outstanding that
catches L2 up/down events and kicks the appropriate dhcpv4 machinery
per interface. This patch is currently blocked pending necessary
support in net_if (see below). Once this patch is in place an
application will be able to start (and stop dhcpv4) per interface as
now. Once started dhcpv4 will catch L2 up/down events and acquire,
renew and release leases as required. Eventually the responsibility
to call net_dhcpv4_start/stop() may be moved from an application
to a 'connection manager'... but that is for the future.

The 'net_if' patches are in their third iteration and have generated
by far the most discussion.

The objective of the net_if patches is to arrange for L2 up/down
network management events to be raised when a functional L2 iface
becomes available for use, or conversely becomes unavailable. These
events can then be caught by dhcpv4 in order for dhcpv4 to manage
IP/L3 configuration.

In the current net_if implementation there are two significant
functions: net_if_up and net_if_down(). These functions call the
underlying L2 enable() callback, set and clear the net_if
NET_IF_ENABLED flag and raise NET_EVENT_IF_UP/DOWN network management
events.

After re-reading various comments and discussion on the existing patch
set I've come to the conclusion that there are two different world
views of the conceptual purpose of net_if_up() and net_if_down().

View 1:
net_if_up/down provide an interface for a higher/management layer to
communicate downwards and mark an iface as enabled or disabled
(irrespective of the state of the underlying L2)

This world view is supported by:
- these functions call down to the enable() callback in the underlying
L2 (i.e. they directly call L2, telling it whether to enable or disable).
- in the absence of a connection manager the network stack hardwires a
call to net_if_up() for every iface at system boot (net_if_post_init).

View 2:
net_if_up/down provide an interface for an underlying L2 to
communicate upwards that an iface is up/working.

This world view is supported by:
- the bluetooth stack calls net_if_up/down on ipsp connect/disconnect
- the net_if_up/down terminology suggests this behaviour (as opposed
to being explicitly called enable/disable)

Conceptually there are four APIs here: enable/disable and up/down.
The former two provide a management interface that allows a higher
layer to request that an iface is enabled or disabled, likely called
by a connection manager or equivalent. The latter two allow the stack
below the iface to report upwards whether or not an enabled iface
actually has a link up or not.

The l2 enable callback conceptually belongs with the enable/disable
interface. The network management event up/down signalling
conceptually belongs with the up/down interface.

In the current tree I think we have a slightly odd merge of the two
concepts where some code treats net_if_up/down() as if they implement
enable/disable semantics, while other code treats
net_if_up()/net_if_down() as if they implement up/down semantics.
Notably we have the network stack initialization code hardwiring
net_if_up() on all interfaces and we have L2 enable hung on
net_if_up/down() both of these behaviours associated with an
enable/disable semantic yet we also have the BT stack using
net_if_up/down() as a notification mechanism that L2 is up/down. (It
appears to me that for an iface associated with BT, the iface will be
up'd at system boot and then re-up'd on ipsp connect )
Can anyone add some insight on this issue? The code looks to me as if
net_if_up/down() were originally added to provide an enable/disable
semantic, but we have BT using them in what appears to be an up/down
semantic. In the context of BT ifaces' , why do we have net_if_up()
being called twice on the iface. Initially at system boot and then
again on ipsp_connect ?
Enable/disable was added as an interface to net_if_up and net_if_down,
but since these functions are only supposed to be called from the L2
driver, we could actually remove the enable/disable interface, since it
just calls the L2 driver, which is what is calling net_if_up in the
first place.
I guess we ended up with this design because we wanted net_if_up to turn
on NET_IF_UP during the init procedure; that way drivers that don't
have link detection (those that don't implement the enable callback)
just bypass it. Is this that hard to figure out from the code?

Various points that have come up in discussion (please correct me if I
misrepresent or miss some point of view):

1) Should we have enable/disable. The general view seems to be that
we don't have a solid use case for enable/disable therefore we should
not have them.

2) BT advertise should be disabled if a network interface is
disabled(). IMO this is actually the use case that suggests we should
keep enable/disable.

3) Should we have 1 or 2 net_if flags. The general view seems to be
that we should have only 1, I think in practice this is driven by
whether we keep or remove an enable/disable API.

4) Physical interfaces should not power up power down as a result of
L2 enable/disable, that should all be handled via a separate power
management API.


There are (at least) two ways forward:

1) We drop the enable/disable semantic. This implies:
- We remove the L2 enable() callback completely.
- We remove hardwired net_if_up() calls when the network stack boots.
- Every network device needs to call net_if_up() once it has a link
established (and net_if_down when it drops).
This may be racy since the net_if_post_init may not have been called
yet which means the RX and TX thread may not be ready.

- BT stays as it is (advertise hardwired on)
Note that the enable/disable semantic was introduced for L2 link
detection, which is why it is an L2/LL API. From the discussion we had
on IRC, what we seem to be really missing is an L3/IP interface to tell
when that layer is available so the application can start sending
packets. We did agree that we need to make samples that react to L3/IP
being up, not L2/LL, which should probably remain just to start the
procedure to acquire an IP address, etc. So given this option, it either
means we did not understand each other or you did not agree after all
the discussions we had.
I agree, there is a need for a mechanism for applications to
synchronize with L3. However, that is not related to the issue of how
dhcp synchronizes with L2, hence I've deliberately not addressed it in
this thread. That can wait until we have L2 sorted.
dhcp shall subscribe to NET_EVENT_IF_UP; this is in fact already used
in some samples, which should probably be changed to wait for an IP
event instead. Perhaps you want to change the semantics of
NET_EVENT_IF_UP to signal when both L2 and L3 are up; that is of course
possible, but then we need a new event to signal that L2 is up, e.g.
NET_EVENT_L2_UP. dhcp would subscribe to that and later emit
NET_EVENT_IF_UP when IP is up. That is all possible, but it doesn't
have anything to do with l2->enable/disable, as these callbacks are for
L2, as the name suggests.
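
As a rough sketch of the subscription side (the net_mgmt callback API is
existing Zephyr code, but the header names and exact handler signature
are from memory, and NET_EVENT_L2_UP above is only a proposal, so this
sticks to NET_EVENT_IF_UP):

#include <net/net_if.h>
#include <net/net_mgmt.h>
#include <net/dhcpv4.h>

static struct net_mgmt_event_callback mgmt_cb;

/* Start DHCPv4 on an interface whenever it reports coming up. */
static void iface_up_handler(struct net_mgmt_event_callback *cb,
			     uint32_t mgmt_event, struct net_if *iface)
{
	if (mgmt_event == NET_EVENT_IF_UP) {
		net_dhcpv4_start(iface);
	}
}

static void register_iface_events(void)
{
	net_mgmt_init_event_callback(&mgmt_cb, iface_up_handler,
				     NET_EVENT_IF_UP);
	net_mgmt_add_event_callback(&mgmt_cb);
}

If a separate NET_EVENT_L2_UP were added, dhcp would subscribe to that
instead and emit NET_EVENT_IF_UP itself once an address is configured.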

2) We keep the enable/disable semantic. This implies:
- We split net_if_up/down into net_if_enable/disable() and
net_if_up/down() such that net_if_enable calls l2->enable() while
net_if_up/down deals with NET_IF_UP and raising net_event_if_up/down
- The hardwired net_if_up() calls at network stack boot are switched
to call net_if_enable()
- The BT L2 enable callback is used to turn advertising on/off
- Every network device needs to call net_if_up() once it has a link
established (and net_if_down when it drops). (BT already does this)
Either this is mixing layer L3/IP states with L2/LL, or you do want to
introduce runtime RFKILL concept, which is it? If this is for L3/IP
then that should not mess up with L2 API, for runtime RFKILL this
should be done in the L1 driver so we disable everything, including
interrupts, and could possibly power down the radio. Thoughts?
Neither. I don't believe there is any L3 in either proposal here, if
you think there is then can you be specific about where you see it?

My understanding of rfkill is that it is a mechanism to allow some
management agent or the user to kill all RF output in a device. The
reference to BT advertising above is actually in response to your IRC
comment that if the interface is disabled we should not be advertising
in bluetooth, I interpreted which I've interpreted as BT should not
advertise IPSP rather than as "RFKILL everything", did i miss
understand?

My intention here is not to change the BT behaviour. My interest is
in finding a way forward to split the current mixed net_if/L2
enable/disable/up/down behaviour embedded in net_if_up() and
net_if_down() into a distinct enable/disable and up/down, specifically
splitting out the up/down semantic such that network devices can
communicate link up/down upwards through the stack and dhcp can catch
those notifications.

Cheers
/Marcus


--
Luiz Augusto von Dentz


Re: dhcp integration into the platform

Marcus Shawcroft <marcus.shawcroft@...>
 

Hi,


On 23 March 2017 at 19:26, Luiz Augusto von Dentz <luiz.dentz@gmail.com> wrote:
Hi Marcus,

On Thu, Mar 23, 2017 at 6:51 PM, Marcus Shawcroft
<marcus.shawcroft@gmail.com> wrote:
Hi,

The network interface patches proposed as a result of this thread have
generated a fair amount of discussion both in patch reviews and in
IRC. Now would seem like a good time to summarize where we are and
pull together some of the various discussion points that have been
raised.

Current status:

A bunch of preparatory patches to dhcpv4 have been merged. Notable changes:
- Public interface now provides net_dhcpv4_start(iface) and
net_dhcpv4_stop(iface).
- Various initialization issues that would prevent concurrent dhcpv4
operation on multiple ifaces are resolved.
- dhcpv4 will now remove leased resources from the network stack on
lease lapse/release.

There is one more small but significant dhcpv4 patch outstanding that
catches L2 up/down events and kicks the appropriate dhcpv4 machinery
per interface. This patch is currently blocked pending necessary
support in net_if (see below). Once this patch is in place an
application will be able to start (and stop dhcpv4) per interface as
now. Once started dhcpv4 will catch L2 up/down events and acquire,
renew and release leases as required. Eventually the responsibility
to call net_dhcpv4_start/stop() may be moved from an application
to a 'connection manager'... but that is for the future.

The 'net_if' patches are in their third iteration and have generated
by far the most discussion.

The objective of the net_if patches is to arrange for L2 up/down
network management events to be raised when a functional L2 iface
becomes available for use, or conversely becomes unavailable. These
events can then be caught by dhcpv4 in order for dhcpv4 to manage
IP/L3 configuration.

In the current net_if implementation there are two significant
functions: net_if_up and net_if_down(). These functions call the
underlying L2 enable() callback, set and clear the net_if
NET_IF_ENABLED flag and raise NET_EVENT_IF_UP/DOWN network management
events.

After re-reading various comments and discussion on the existing patch
set I've come to the conclusion that there are two different world
views of the conceptual purpose of net_if_up() and net_if_down().

View 1:
net_if_up/down provide an interface for a higher/management layer to
communicate downwards and mark an iface as enabled or disabled
(irrespective of the state of the underlying L2)

This world view is supported by:
- these functions call down to the enable() callback in the underlying
L2 (i.e. they directly call L2, telling it whether to enable or disable).
- in the absence of a connection manager the network stack hardwires a
call to net_if_up() for every iface at system boot (net_if_post_init).

View 2:
net_if_up/down provide an interface for an underlying L2 to
communicate upwards that an iface is up/working.

This world view is supported by:
- the bluetooth stack calls net_if_up/down on ipsp connect/disconnect
- the net_if_up/down terminology suggests this behaviour (as opposed
to being explicitly called enable/disable)

Conceptually there are four APIs here: enable/disable and up/down.
The former two provide a management interface that allows a higher
layer to request that an iface is enabled or disabled, likely called
by a connection manager or equivalent. The latter two allow the stack
below the iface to report upwards whether or not an enabled iface
actually has a link up or not.

The l2 enable callback conceptually belongs with the enable/disable
interface. The network management event up/down signalling
conceptually belongs with the up/down interface.

In the current tree I think we have a slightly odd merge of the two
concepts where some code treats net_if_up/down() as if they implement
enable/disable semantics, while other code treats
net_if_up()/net_if_down() as if they implement up/down semantics.
Notably we have the network stack initialization code hardwiring
net_if_up() on all interfaces and we have L2 enable hung on
net_if_up/down() both of these behaviours associated with an
enable/disable semantic yet we also have the BT stack using
net_if_up/down() as a notification mechanism that L2 is up/down. (It
appears to me that for an iface associated with BT, the iface will be
up'd at system boot and then re-up'd on ipsp connect )
Can anyone add some insight on this issue? The code looks to me as if
net_if_up/down() were originally added to provide an enable/disable
semantic, but we have BT using them in what appears to be an up/down
semantic. In the context of BT ifaces' , why do we have net_if_up()
being called twice on the iface. Initially at system boot and then
again on ipsp_connect ?

Various points that have come up in discussion (please correct me if I
misrepresent or miss some point of view):

1) Should we have enable/disable. The general view seems to be that
we don't have a solid use case for enable/disable therefore we should
not have them.

2) BT advertise should be disabled if a network interface is
disabled(). IMO this is actually the use case that suggests we should
keep enable/disable.

3) Should we have 1 or 2 net_if flags. The general view seems to be
that we should have only 1, I think in practice this is driven by
whether we keep or remove an enable/disable API.

4) Physical interfaces should not power up power down as a result of
L2 enable/disable, that should all be handled via a separate power
management API.


There are (at least) two ways forward:

1) We drop the enable/disable semantic. This implies:
- We remove the L2 enable() callback completely.
- We remove hardwired net_if_up() calls when the network stack boots.
- Every network device needs to call net_if_up() once it has a link
established (and net_if_down when it drops).
- BT stays as it is (advertise hardwired on)
Note that the enable/disable semantic was introduced for L2 link
detection, which is why it is an L2/LL API. From the discussion we had
on IRC, what we seem to be really missing is an L3/IP interface to tell
when that layer is available so the application can start sending
packets. We did agree that we need to make samples that react to L3/IP
being up, not L2/LL, which should probably remain just to start the
procedure to acquire an IP address, etc. So given this option, it either
means we did not understand each other or you did not agree after all
the discussions we had.
I agree, there is a need for a mechanism for applications to
synchronize with L3. However, that is not related to the issue of how
dhcp synchronizes with L2, hence I've deliberately not addressed it in
this thread. That can wait until we have L2 sorted.

2) We keep the enable/disable semantic. This implies:
- We split net_if_up/down into net_if_enable/disable() and
net_if_up/down() such that net_if_enable calls l2->enable() while
net_if_up/down deals with NET_IF_UP and raising net_event_if_up/down
- The hardwired net_if_up() calls at network stack boot are switched
to call net_if_enable()
- The BT L2 enable callback is used to turn advertising on/off
- Every network device needs to call net_if_up() once it has a link
established (and net_if_down when it drops). (BT already does this)
Either this is mixing layer L3/IP states with L2/LL, or you do want to
introduce runtime RFKILL concept, which is it? If this is for L3/IP
then that should not mess up with L2 API, for runtime RFKILL this
should be done in the L1 driver so we disable everything, including
interrupts, and could possibly power down the radio. Thoughts?
Neither. I don't believe there is any L3 in either proposal here, if
you think there is then can you be specific about where you see it?

My understanding of rfkill is that it is a mechanism to allow some
management agent or the user to kill all RF output in a device. The
reference to BT advertising above is actually in response to your IRC
comment that if the interface is disabled we should not be advertising
in bluetooth, which I've interpreted as BT should not
advertise IPSP rather than as "RFKILL everything"; did I
misunderstand?

My intention here is not to change the BT behaviour. My interest is
in finding a way forward to split the current mixed net_if/L2
enable/disable/up/down behaviour embedded in net_if_up() and
net_if_down() into a distinct enable/disable and up/down, specifically
splitting out the up/down semantic such that network devices can
communicate link up/down upwards through the stack and dhcp can catch
those notifications.

Cheers
/Marcus


Newlib c Library

Parka <patka@...>
 

Hello,
The SDK has a patched newlib C library. I'm compiling my own cross-compiler and
I want to ask if it's an official patch or a patch made by you? If you made
the patch, can I get it or the source code of the newlib C library?

Karmazyn Patrick


Re: Kernel MS Precision

Andreas Lenz
 

Hi Ben,

#define US_TIMEOUT(us) \
(int32_t)((((uint32_t)(us)) & 0x3fffffff) | 0x80000000)
// ^^^^^^^^^^^^^^^^^^^^^^^^
// keep the two upper bits as control bits just in
// case '10' would mean 'microseconds', '11' could
// mean something else
You could also use the full bits and add one additional byte to specify the unit of the number.
Timers store their unit together with duration and period. For example
k_timer_start(timer, 100, 0, K_MSECONDS)
k_timer_start(timer, 100, 0, K_USECONDS)

For the "mean something else", I have a use case for low-priority, or lazy timers.
They don't prevent the kernel from going into idle; they expire later when the system wakes up again.
What I have in mind is battery monitoring where checks should be done about once every hour, but only when the system is active.
However, K_FOREVER might be problematic as the time can wrap.

Best regards,
Andreas


Re: dhcp integration into the platform

Gil Pitney
 

Option 1) would be ideal for the upcoming WiFi offload devices (like
TI CC3220), which do full TCP/IP offload onto a co-processor,
essentially bypassing the L2 layer.

In that case, there is no need for an l2->enable() call.

It seems that Option 1) makes the most sense for the offload use case:
- Every network device needs to call net_if_up() once it has a link
established (and net_if_down when it drops).
The DHCP client is also offloaded onto the network coprocessor, but
could be configured off by default to use the Zephyr DHCP. Still, it
would be nice to allow DHCP to be offloaded in the future as well
(saving code space, power).


On 23 March 2017 at 12:26, Luiz Augusto von Dentz <luiz.dentz@gmail.com> wrote:
Hi Marcus,

On Thu, Mar 23, 2017 at 6:51 PM, Marcus Shawcroft
<marcus.shawcroft@gmail.com> wrote:
Hi,

The network interface patches proposed as a result of this thread have
generated a fair amount of discussion both in patch reviews and in
IRC. Now would seem like a good time to summarize where we are and
pull together some of the various discussion points that have been
raised.

Current status:

A bunch of preparatory patches to dhcpv4 have been merged. Notable changes:
- Public interface now provides net_dhcpv4_start(iface) and
net_dhcpv4_stop(iface).
- Various initialization issues that would prevent concurrent dhcpv4
operation on multiple ifaces are resolved.
- dhcpv4 will now remove leased resources from the network stack on
lease lapse/release.

There is one more small but significant dhcpv4 patch outstanding that
catches L2 up/down events and kicks the appropriate dhcpv4 machinery
per interface. This patch is currently blocked pending necessary
support in net_if (see below). Once this patch is in place an
application will be able to start (and stop dhcpv4) per interface as
now. Once started dhcpv4 will catch L2 up/down events and acquire,
renew and release leases as required. Eventually the responsibility
to call net_dhcpv4_start/stop() may be moved from an application
to a 'connection manager'... but that is for the future.

The 'net_if' patches are in their third iteration and have generated
by far the most discussion.

The objective of the net_if patches is to arrange for L2 up/down
network management events to be raised when a functional L2 iface
becomes available for use, or conversely becomes unavailable. These
events can then be caught by dhcpv4 in order for dhcpv4 to manage
IP/L3 configuration.

In the current net_if implementation there are two significant
functions: net_if_up and net_if_down(). These functions call the
underlying L2 enable() callback, set and clear the net_if
NET_IF_ENABLED flag and raise NET_EVENT_IF_UP/DOWN network management
events.

After re-reading various comments and discussion on the existing patch
set I've come to the conclusion that there are two different world
views of the conceptual purpose of net_if_up() and net_if_down().

View 1:
net_if_up/down provide an interface for a higher/management layer to
communicate downwards and mark an iface as enabled or disabled
(irrespective of the state of the underlying L2)

This world view is supported by:
- these functions call down to the enable() callback in the underlying
L2 (i.e. they directly call L2, telling it whether to enable or disable).
- in the absence of a connection manager the network stack hardwires a
call to net_if_up() for every iface at system boot (net_if_post_init).

View 2:
net_if_up/down provide an interface for an underlying L2 to
communicate upwards that an iface is up/working.

This world view is supported by:
- the bluetooth stack calls net_if_up/down on ipsp connect/disconnect
- the net_if_up/down terminology suggests this behaviour (as opposed
to being explicitly called enable/disable)

Conceptually there are four APIs here: enable/disable and up/down.
The former two provide a management interface that allows a higher
layer to request that an iface is enabled or disabled, likely called
by a connection manager or equivalent. The latter two allow the stack
below the iface to report upwards whether or not an enabled iface
actually has a link up or not.

The l2 enable callback conceptually belongs with the enable/disable
interface. The network management event up/down signalling
conceptually belongs with the up/down interface.

In the current tree I think we have a slightly odd merge of the two
concepts where some code treats net_if_up/down() as if they implement
enable/disable semantics, while other code treats
net_if_up()/net_if_down() as if they implement up/down semantics.
Notably we have the network stack initialization code hardwiring
net_if_up() on all interfaces and we have L2 enable hung on
net_if_up/down() both of these behaviours associated with an
enable/disable semantic yet we also have the BT stack using
net_if_up/down() as a notification mechanism that L2 is up/down. (It
appears to me that for an iface associated with BT, the iface will be
up'd at system boot and then re-up'd on ipsp connect )

Various points that have come up in discussion (please correct me if I
misrepresent or miss some point of view):

1) Should we have enable/disable. The general view seems to be that
we don't have a solid use case for enable/disable therefore we should
not have them.

2) BT advertise should be disabled if a network interface is
disabled(). IMO this is actually the use case that suggests we should
keep enable/disable.

3) Should we have 1 or 2 net_if flags. The general view seems to be
that we should have only 1, I think in practice this is driven by
whether we keep or remove an enable/disable API.

4) Physical interfaces should not power up power down as a result of
L2 enable/disable, that should all be handled via a separate power
management API.


There are (at least) two ways forward:

1) We drop the enable/disable semantic. This implies:
- We remove the L2 enable() callback completely.
- We remove hardwired net_if_up() calls when the network stack boots.
- Every network device needs to call net_if_up() once it has a link
established (and net_if_down when it drops).
- BT stays as it is (advertise hardwired on)
Note that the enable/disable semantic was introduced for L2 link
detection, which is why it is an L2/LL API. From the discussion we had
on IRC, what we seem to be really missing is an L3/IP interface to tell
when that layer is available so the application can start sending
packets. We did agree that we need to make samples that react to L3/IP
being up, not L2/LL, which should probably remain just to start the
procedure to acquire an IP address, etc. So given this option, it either
means we did not understand each other or you did not agree after all
the discussions we had.

2) We keep the enable/disable semantic. This implies:
- We split net_if_up/down into net_if_enable/disable() and
net_if_up/down() such that net_if_enable calls l2->enable() while
net_if_up/down deals with NET_IF_UP and raising net_event_if_up/down
- The hardwired net_if_up() calls at network stack boot are switched
to call net_if_enable()
- The BT L2 enable callback is used to turn advertising on/off
- Every network device needs to call net_if_up() once it has a link
established (and net_if_down when it drops). (BT already does this)
Either this is mixing layer L3/IP states with L2/LL, or you do want to
introduce runtime RFKILL concept, which is it? If this is for L3/IP
then that should not mess up with L2 API, for runtime RFKILL this
should be done in the L1 driver so we disable everything, including
interrupts, and could possibly power down the radio. Thoughts?

In option 1 we remove the mechanism we have to communicate to the BT
stack that advertising should be on/off.

IMHO route 2 is a better way forward.

Thoughts?

/Marcus


--
Luiz Augusto von Dentz


Re: dhcp integration into the platform

Luiz Augusto von Dentz
 

Hi Marcus,

On Thu, Mar 23, 2017 at 6:51 PM, Marcus Shawcroft
<marcus.shawcroft@gmail.com> wrote:
Hi,

The network interface patches proposed as a result of this thread have
generated a fair amount of discussion both in patch reviews and in
IRC. Now would seem like a good time to summarize where we are and
pull together some of the various discussion points that have been
raised.

Current status:

A bunch of preparatory patches to dhcpv4 have been merged. Notable changes:
- Public interface now provides net_dhcpv4_start(iface) and
net_dhcpv4_stop(iface).
- Various initialization issues that would prevent concurrent dhcpv4
operation on multiple ifaces are resolved.
- dhcpv4 will now remove leased resources from the network stack on
lease lapse/release.

There is one more small but significant dhcpv4 patch outstanding that
catches L2 up/down events and kicks the appropriate dhcpv4 machinery
per interface. This patch is currently blocked pending necessary
support in net_if (see below). Once this patch is in place an
application will be able to start (and stop dhcpv4) per interface as
now. Once started dhcpv4 will catch L2 up/down events and acquire,
renew and release leases as required. Eventually the responsibility
to call net_dhcpv4_start/stop() may be moved from an application
to a 'connection manager'... but that is for the future.

The 'net_if' patches are in their third iteration and have generated
by far the most discussion.

The objective of the net_if patches is to arrange for L2 up/down
network management events to be raised when a functional L2 iface
becomes available for use, or conversely becomes unavailable. These
events can then be caught by dhcpv4 in order for dhcpv4 to manage
IP/L3 configuration.

In the current net_if implementation there are two significant
functions: net_if_up and net_if_down(). These functions call the
underlying L2 enable() callback, set and clear the net_if
NET_IF_ENABLED flag and raise NET_EVENT_IF_UP/DOWN network management
events.

After re-reading various comments and discussion on the existing patch
set I've come to the conclusion that there are two different world
views of the conceptual purpose of net_if_up() and net_if_down().

View 1:
net_if_up/down provide an interface for a higher/management layer to
communicate downwards and mark an iface as enabled or disabled
(irrespective of the state of the underlying L2)

This world view is supported by:
- these functions call down to the enable() callback in the underlying
L2 (i.e. they directly call L2, telling it whether to enable or disable).
- in the absence of a connection manager the network stack hardwires a
call to net_if_up() for every iface at system boot (net_if_post_init).

View 2:
net_if_up/down provide an interface for an underlying L2 to
communicate upwards that an iface is up/working.

This world view is supported by:
- the bluetooth stack calls net_if_up/down on ipsp connect/disconnect
- the net_if_up/down terminology suggests this behaviour (as opposed
to being explicitly called enable/disable)

Conceptually there are four APIs here: enable/disable and up/down.
The former two provide a management interface that allows a higher
layer to request that an iface is enabled or disabled, likely called
by a connection manager or equivalent. The latter two allow the stack
below the iface to report upwards whether or not an enabled iface
actually has a link up or not.

The l2 enable callback conceptually belongs with the enable/disable
interface. The network management event up/down signalling
conceptually belongs with the up/down interface.

In the current tree I think we have a slightly odd merge of the two
concepts where some code treats net_if_up/down() as if they implement
enable/disable semantics, while other code treats
net_if_up()/net_if_down() as if they implement up/down semantics.
Notably we have the network stack initialization code hardwiring
net_if_up() on all interfaces and we have L2 enable hung on
net_if_up/down() both of these behaviours associated with an
enable/disable semantic yet we also have the BT stack using
net_if_up/down() as a notification mechanism that L2 is up/down. (It
appears to me that for an iface associated with BT, the iface will be
up'd at system boot and then re-up'd on ipsp connect )

Various points that have come up in discussion (please correct me if I
misrepresent or miss some point of view):

1) Should we have enable/disable. The general view seems to be that
we don't have a solid use case for enable/disable therefore we should
not have them.

2) BT advertising should be disabled if a network interface is
disabled. IMO this is actually the use case that suggests we should
keep enable/disable.

3) Should we have 1 or 2 net_if flags? The general view seems to be
that we should have only 1; I think in practice this is driven by
whether we keep or remove an enable/disable API.

4) Physical interfaces should not power up or power down as a result
of L2 enable/disable; that should all be handled via a separate power
management API.


There are (at least) two ways forward:

1) We drop the enable/disable semantic. This implies:
- We remove the L2 enable() callback completely.
- We remove hardwired net_if_up() calls when the network stack boots.
- Every network device needs to call net_if_up() once it has a link
established (and net_if_down when it drops).
- BT stays as it is (advertise hardwired on)
Note that the enable/disable semantic was introduced for L2 link
detection, which is why it is an L2/LL API. Now, from the discussion we
had on IRC, what we seem to be really missing is an L3/IP interface
to tell when that layer is available, so the application can start
sending packets. We did agree that we need to make the samples react
to L3/IP being up, not L2/LL, which should probably remain just to
start the procedure to acquire an IP address, etc. So given this
option, either we did not understand each other or you did not agree
after all the discussions we had.
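
As a concrete (and only illustrative) example of reacting to L3 rather
than L2, a sample could register for an IPv4 address being configured
on the iface instead of the L2 up event; the sketch below assumes the
NET_EVENT_IPV4_ADDR_ADD management event (or whichever dedicated L3
"up" event we end up adding) plus the net_mgmt callback API:

#include <net/net_if.h>
#include <net/net_mgmt.h>

static struct net_mgmt_event_callback ipv4_cb;

static void ipv4_addr_handler(struct net_mgmt_event_callback *cb,
			      u32_t mgmt_event, struct net_if *iface)
{
	if (mgmt_event != NET_EVENT_IPV4_ADDR_ADD) {
		return;
	}

	/* An IPv4 address is now configured (e.g. a DHCPv4 lease was
	 * acquired), so L3 is usable and the sample can start sending.
	 */
}

static void wait_for_l3(void)
{
	net_mgmt_init_event_callback(&ipv4_cb, ipv4_addr_handler,
				     NET_EVENT_IPV4_ADDR_ADD);
	net_mgmt_add_event_callback(&ipv4_cb);
}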

2) We keep the enable/disable semantic. This implies:
- We split net_if_up/down into net_if_enable/disable() and
net_if_up/down() such that net_if_enable calls l2->enable() while
net_if_up/down deals with NET_IF_UP and raising net_event_if_up/down
- The hardwired net_if_up() calls at network stack boot are switched
to call net_if_enable()
- The BT L2 enable callback is used to turn advertising on/off
- Every network device needs to call net_if_up() once it has a link
established (and net_if_down when it drops). (BT already does this)
Either this is mixing L3/IP states with L2/LL, or you do want to
introduce a runtime RFKILL concept; which is it? If this is for L3/IP,
then it should not mess with the L2 API; for runtime RFKILL, it should
be done in the L1 driver so we disable everything, including
interrupts, and could possibly power down the radio. Thoughts?

In option 1 we remove the mechanism we have to communicate to the BT
stack that advertising should be on/off.

IMHO route 2 is a better way forward.

Thoughts?

/Marcus


--
Luiz Augusto von Dentz


subject

VISHWANATH REDDY <vishwanathreddy1503@...>
 

Sir, I am doing weather forecasting using Zephyr OS. I am deploying an Arduino 101 and a BME280 sensor to collect data for forecasting. I found few resources about the programming part of Zephyr OS on the internet as well as on the Zephyr Project site. I hope you will guide me on the further steps.

--
Thank you.
Regards,
Vishwanath. Reddy


hello sir

VISHWANATH REDDY <vishwanathreddy1503@...>
 


Glad to see your reply to my message. May I know, sir, which area of Zephyr OS development you work in?
--
Thank you.
Regards,
Vishwanath. Reddy


Re: dhcp integration into the platform

Marcus Shawcroft <marcus.shawcroft@...>
 

Hi,

The network interface patches proposed as a result of this thread have
generated a fair amount of discussion both in patch reviews and in
IRC. Now would seem like a good time to summarize where we are and
pull together some of the various discussion points that have been
raised.

Current status:

A bunch of preparatory patches to dhcpv4 have been merged. Notable changes:
- Public interface now provides net_dhcpv4_start(iface) and
net_dhcpv4_stop(iface).
- Various initialization issues that would prevent concurrent dhcpv4
operation on multiple ifaces are resolved.
- dhcpv4 will now remove leased resources from the network stack on
lease lapse/release.

There is one more small but significant dhcpv4 patch outstanding that
catches L2 up/down events and kicks the appropriate dhcpv4 machinery
per interface. This patch is currently blocked pending necessary
support in net_if (see below). Once this patch is in place an
application will be able to start (and stop) dhcpv4 per interface as
now. Once started, dhcpv4 will catch L2 up/down events and acquire,
renew and release leases as required. Eventually the responsibility
to call net_dhcpv4_start/stop() may be moved from an application
to a 'connection manager'... but that is for the future.

The 'net_if' patches are in their third iteration and have generated
by far the most discussion.

The objective of the net_if patches is to arrange for L2 up/down
network management events to be raised when a functional L2 iface
becomes available for use, or conversely becomes unavailable. These
events can then be caught by dhcpv4 in order for dhcpv4 to manage
IP/L3 configuration.
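
For reference, a consumer of these events (dhcpv4 here, but equally an
application) would catch them with the existing net_mgmt event callback
machinery along the lines of the sketch below; this only shows the
pattern, not the actual dhcpv4 patch, and the exact types and headers
may differ:

#include <net/net_if.h>
#include <net/net_mgmt.h>

static struct net_mgmt_event_callback iface_up_cb;

static void iface_up_handler(struct net_mgmt_event_callback *cb,
			     u32_t mgmt_event, struct net_if *iface)
{
	if (mgmt_event != NET_EVENT_IF_UP) {
		return;
	}

	/* L2 reports the iface as usable: (re)start DHCPv4 for it. A
	 * second callback registered for NET_EVENT_IF_DOWN would stop
	 * it and drop any leased addresses.
	 */
}

static void register_iface_events(void)
{
	net_mgmt_init_event_callback(&iface_up_cb, iface_up_handler,
				     NET_EVENT_IF_UP);
	net_mgmt_add_event_callback(&iface_up_cb);
}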

In the current net_if implementation there are two significant
functions: net_if_up and net_if_down(). These functions call the
underlying L2 enable() callback, set and clear the net_if
NET_IF_ENABLED flag and raise NET_EVENT_IF_UP/DOWN network management
events.

After re-reading various comments and discussion on the existing patch
set I've come to the conclusion that there are two different world
views of the conceptual purpose of net_if_up() and net_if_down().

View 1:
net_if_up/down provide an interface for a higher/management layer to
communicate downwards and mark an iface as enabled or disabled
(irrespective of the state of the underlying L2)

This world view is supported by:
- these functions call down to the enable() callback in the underlying
L2 (i.e. they directly call L2, telling it whether to enable or disable).
- in the absence of a connection manager the network stack hardwires a
call to net_if_up() for every iface at system boot (net_if_post_init).

View 2:
net_if_up/down provide an interface for an underlying L2 to
communicate upwards that an iface is up/working.

This world view is supported by:
- the bluetooth stack calls net_if_up/down on ipsp connect/disconnect
- the net_if_up/down terminology suggests this behaviour (as opposed
to being explicitly called enable/disable)

Conceptually there are four APIs here: enable/disable and up/down.
The former two provide a management interface that allows a higher
layer to request that an iface is enabled or disabled, likely called
by a connection manager or equivalent. The latter two allow the stack
below the iface to report upwards whether or not an enabled iface
actually has a link up.

The l2 enable callback conceptually belongs with the enable/disable
interface. The network management event up/down signalling
conceptually belongs with the up/down interface.
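
To make the distinction concrete, a hypothetical split (names and
signatures are illustrative only, not a concrete API proposal) might
look like:

/* Downward management call: a connection manager (or, today, the
 * hardwired boot-time call) asks the L2 to enable or disable itself.
 * No network management event is raised here.
 */
int net_if_enable(struct net_if *iface)
{
	if (iface->l2->enable) {
		return iface->l2->enable(iface, true);
	}

	return 0;
}

/* Upward link notification: the driver or L2 reports that a link is
 * actually up; this is where NET_IF_UP is set and NET_EVENT_IF_UP is
 * raised so that dhcpv4 and friends can react.
 */
int net_if_up(struct net_if *iface)
{
	atomic_set_bit(iface->flags, NET_IF_UP);
	net_mgmt_event_notify(NET_EVENT_IF_UP, iface);

	return 0;
}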

In the current tree I think we have a slightly odd merge of the two
concepts where some code treats net_if_up/down() as if they implement
enable/disable semantics, while other code treats
net_if_up()/net_if_down() as if they implement up/down semantics.
Notably, we have the network stack initialization code hardwiring
net_if_up() on all interfaces, and we have L2 enable hung on
net_if_up/down(); both of these behaviours are associated with an
enable/disable semantic. Yet we also have the BT stack using
net_if_up/down() as a notification mechanism that L2 is up/down. (It
appears to me that for an iface associated with BT, the iface will be
up'd at system boot and then re-up'd on ipsp connect.)

Various points that have come up in discussion (please correct me if I
misrepresent or miss some point of view):

1) Should we have enable/disable? The general view seems to be that
we don't have a solid use case for enable/disable; therefore we should
not have them.

2) BT advertising should be disabled if a network interface is
disabled. IMO this is actually the use case that suggests we should
keep enable/disable.

3) Should we have 1 or 2 net_if flags? The general view seems to be
that we should have only 1; I think in practice this is driven by
whether we keep or remove an enable/disable API.

4) Physical interfaces should not power up or power down as a result
of L2 enable/disable; that should all be handled via a separate power
management API.


There are (at least) two ways forward:

1) We drop the enable/disable semantic. This implies:
- We remove the L2 enable() callback completely.
- We remove hardwired net_if_up() calls when the network stack boots.
- Every network device needs to call net_if_up() once it has a link
established (and net_if_down when it drops).
- BT stays as it is (advertise hardwired on)

2) We keep the enable/disable semantic. This implies:
- We split net_if_up/down into net_if_enable/disable() and
net_if_up/down() such that net_if_enable calls l2->enable() while
net_if_up/down deals with NET_IF_UP and raising net_event_if_up/down
- The hardwired net_if_up() calls at network stack boot are switched
to call net_if_enable()
- The BT L2 enable callback is used to turn advertising on/off
- Every network device needs to call net_if_up() once it has a link
established (and net_if_down when it drops). (BT already does this)

In option 1 we remove the mechanism we have to communicate to the BT
stack that advertising should be on/off.

IMHO route 2 is a better way forward.

Thoughts?

/Marcus
