
Re: Is there tutorials for Zephyr ticker/mayfly?

Chettimada, Vinayak Kariappa
 

Hi biwa,

Please be aware that ticker/mayfly are not Zephyr OS public interfaces; applications and samples shall not use them, and they do not follow the API deprecation rules.
The implementation and its functions are private to the controller subsystem and subject to change.

Comments are inline below...

On 11 Oct 2017, at 16:19, biwa <sjbiwa@gmail.com> wrote:

Thanks for all.

I cannot understand ticker/mayfly yet.

My unclear points are:
:ticker
What is node?
A ticker node is a timeout or expiry object. This is a node as in a node in a linked list.
What is user?
A ticker user is an identification number of the execution context that is calling the ticker APIs.
As mentioned before, for historical reasons the implementation is barebones;
calls to the APIs are identified using user ids for context safety.
What is slot?
A slot represents the desired time-space (duration) occupancy in the timeline by the requesting
ticker node’s timeout callback.
What is TRIGGER/WORKER/JOB?
These are the ticker's execution contexts.
Trigger is the execution context asserted/signalled/ready to process a timeout; a trigger could be a timer ISR.
The worker execution context is the one calling the timeout callbacks of the expired ticker nodes.
The job execution context handles the ticker’s scheduling operations; all API calls are processed in
this context.
By “execution context” I am referring to the likes of ISRs, threads, tasks, tasklets, work queues, and so on.
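To make node, user, and slot concrete, below is a simplified, paraphrased sketch of how a ticker node is started. It is modeled from memory on the controller-internal ticker.h of the time; the parameter names and the exact signature are assumptions for illustration, not a stable API.

#include <zephyr/types.h>   /* u8_t, u16_t, u32_t */

/* Paraphrased, simplified declaration; details are assumptions. */
u32_t ticker_start(u8_t instance_index,  /* ticker instance */
                   u8_t user_id,         /* "user": id of the calling context */
                   u8_t ticker_id,       /* "node": id of this timeout object */
                   u32_t ticks_anchor,   /* absolute anchor time */
                   u32_t ticks_first,    /* first expiry, relative to anchor */
                   u32_t ticks_periodic, /* period of subsequent expiries */
                   u16_t ticks_slot,     /* "slot": duration reserved in the
                                          * timeline for the timeout
                                          * callback's activity */
                   void (*timeout_func)(u32_t ticks_at_expire, void *context),
                   void *context);       /* passed back to timeout_func */

The timeout_func runs in the worker context described above; the ticker_start call itself is marshalled through the job context.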

:mayfly
What is CALLEE/CALLER?
Callee as in the callee function; caller as in the caller of the callee function. The caller requests, from its own execution context, that the callee be run in another context.
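To illustrate, here is a minimal sketch of a caller deferring work to a callee in another execution context. The struct layout and the mayfly_enqueue() signature follow the controller-internal mayfly.h of the time from memory, so treat the field order and the id constants as assumptions, not a public API.

#include <zephyr/types.h>

/* Simplified controller-internal mayfly object (details assumed). */
struct mayfly {
        u8_t volatile _req;  /* internal: enqueue request count */
        u8_t _ack;           /* internal: processed count */
        void *_link;         /* internal: memq linkage */
        void *param;         /* argument passed to fp */
        void (*fp)(void *);  /* the callee function */
};

/* Assumed signature of the enqueue call. */
u32_t mayfly_enqueue(u8_t caller_id, u8_t callee_id, u8_t chain,
                     struct mayfly *m);

/* Illustrative context ids; the real constants live in the controller. */
#define ID_WORKER 0
#define ID_JOB    1

/* The callee: runs later, in the job execution context. */
static void callee_fn(void *param)
{
        /* deferred work goes here */
}

static void *link;
static struct mayfly mfy = {0, 0, &link, NULL, callee_fn};

/* The caller: from the worker context, defer callee_fn to the job context. */
static void caller_fn(void)
{
        mayfly_enqueue(ID_WORKER, ID_JOB, 0, &mfy);
}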

Regards,
Vinayak


Re: BSD Sockets in mainline, and how that affects design decisions for the rest of IP stack (e.g. send MTU handling)

Boie, Andrew P
 

There has not been any public talk on the mailing list about
userspace/kernel separation and how it affects the IP stack etc., so it is
a bit difficult to say anything about this.
That's true, but we can/should think about how it may be affected, or we'll be caught
out by such changes, and may come up with random on-the-spot designs to
address such future requirements, instead of something well thought out.
The userspace work has progressed to the point where we have enough confidence in the API design to open up the design discussion to a larger audience; until now enough things have been in flux (or uncertain) such that we've kept the discussion to the working group we established for it.

What we are trying to do is get something feature-complete into the tree for the upcoming 1.10 release, with partial test case coverage and initial documentation, on an 'experimental' basis; i.e. APIs and policies are subject to change. Then polish everything up for the 1.11 release, which would be the official debut of this feature.

I have to admit my knowledge of the network stack is quite poor, but broadly speaking, below is a set of recently drafted slides which go into some detail about what sort of kernel objects are accessible from user threads and the sort of restrictions they have. We expect to expose a subset of existing kernel APIs to user threads, and all driver APIs which don't involve registration of callbacks. Please feel free to leave comments in the document, or on this list.

https://docs.google.com/presentation/d/195ciwFwv7s0MX4AvL0KFB1iHm1_gRoXmn54mjS5fki8/edit?usp=sharing

I suspect the biggest implication for the network stack is that it uses registration of callbacks heavily, and it's forbidden to allow user threads to directly register callbacks that run in supervisor mode. But you can get around this (for example) by having the callback do minimal processing of the incoming data and signal a semaphore to have a user mode worker thread do the rest of the work. We are also looking into supporting user-mode workqueues. We also don't (yet) have a clear picture on what support for k_poll APIs we will have for userspace.
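As a minimal sketch of the callback-plus-semaphore pattern mentioned above, using the standard kernel semaphore calls (the callback shape and all names here are hypothetical):

#include <zephyr.h>

K_SEM_DEFINE(rx_sem, 0, 1);

/* Hypothetical callback registered by supervisor-mode setup code; it
 * runs in supervisor mode, does only minimal work, and signals. */
static void rx_callback(void)
{
        /* ... stash the incoming data somewhere the user thread can read ... */
        k_sem_give(&rx_sem);
}

/* User-mode worker thread: does the rest of the processing. */
static void user_worker(void *p1, void *p2, void *p3)
{
        while (1) {
                k_sem_take(&rx_sem, K_FOREVER);
                /* ... process the previously stashed data ... */
        }
}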

There's also the question of memory buffers: some care would need to be taken that any buffers used by the stack that are exposed to the application contain purely data and no internal data structures private to the kernel. This constraint is why we don't provide system call interfaces to the k_queue APIs.

Ideally in the fullness of time, we could migrate some parts of the network protocol stack to run in user mode, which I think would enhance the security of the system.

At the moment, current implementation effort is centered around getting our test cases running in user mode, and getting started on the formal documentation.

HTH,
Andrew


Re: BSD Sockets in mainline, and how that affects design decisions for the rest of IP stack (e.g. send MTU handling)

Paul Sokolovsky
 

Hello Jukka,

On Wed, 11 Oct 2017 13:06:25 +0300
Jukka Rissanen <jukka.rissanen@linux.intel.com> wrote:

[]
A solution originally proposed was that the mentioned API functions
should take an MTU into account, and not allow a user to add more
data
than MTU allows (accounting also for protocol headers). This
solution is rooted in the well-known POSIX semantics of "short
writes" - an application can request an arbitrary amount of data to
be written, but
a system is free to process less data, based on system resource
availability. Amount of processed data is returned, and an
application
is expected to retry the operation for the remaining data. It was
posted as https://github.com/zephyrproject-rtos/zephyr/pull/119 .
Again,
at that time, there was no consensus about way to solve it, so it
was implemented only for BSD Sockets API.
We can certainly implement something like this for the net_context
APIs. There is at least one issue with this: it is currently not
easy to pass information to the application about how much data we were able to
send, so currently we could either send all the data
or none of it.
To clarify, there's no need "to pass information to the application" per
se. However, the IP stack itself has to know the size of the packet headers
at the time of packet creation. IIRC, that was the concern raised for
#119. So, I propose to work towards resolving that issue, and there are
at least two approaches:

1. To actually make the stack work like that ("plan ahead"), which might
require some non-trivial refactoring.
2. Or, to just conservatively reserve the highest value for the header
size, even if that may mean that some packets will contain less
payload than the maximum (sketched below).

[]

Note that currently we do not have IPv4 fragmentation support
implemented, and IPv6 fragmentation is also disabled by default.
The reason for this is that fragmentation requires a lot of extra
memory, which might not be necessary in usual cases. Splitting
TCP segments needs much less memory.
Perhaps. But the POSIX "short write" approach would require roughly zero extra
memory.

I would like to raise an additional argument why the POSIX-inspired
approach may be better.
I would say there is no better or worse approach here. Just a
different point of view.


Consider a case when an application wants to
send a big amount of constant data, e.g. 900KB. It can be a system
with e.g. 1MB of flash and 64KB of RAM, an app sitting in ~100KB
of flash, the rest containing constant data to send. Following an
"split oversized packet" approach wouldn't help - an app wouldn't be
able to create an oversized packet of 900K - there's simply not
enough
RAM for it. So, it would need to handle such a case differently
anyway.
Of course your application is constrained by the available memory and
other limits of your hw.
I don't think this comment does justice to the use case presented. Of
course any app is constrained by such limits, but an algorithm based on the POSIX
"short write" approach covers a wider range of use cases in a simpler
manner.

[]

Please note that the BSD socket API is fully optional and not always
available. You cannot rely on it being present, especially if you want to
minimize memory consumption. We need a more general solution instead of
something that is only available for BSD sockets.
As clarified in the response to Anas, in no way do I propose the BSD Sockets
API as an alternative to the native API. However, I do propose that some
design/implementation choices from the BSD Sockets API be adopted for
the native API.

[]

There has not been any public talk on the mailing list about
userspace/kernel separation and how it affects the IP stack etc., so it is
a bit difficult to say anything about this.
That's true, but we can/should think about how it may be affected, or we'll
be caught out by such changes, and may come up with random
on-the-spot designs to address such future requirements, instead of
something well thought out.

[]

--
Best Regards,
Paul

Linaro.org | Open source software for ARM SoCs
Follow Linaro: http://www.facebook.com/pages/Linaro
http://twitter.com/#!/linaroorg - http://www.linaro.org/linaro-blog


Re: BSD Sockets in mainline, and how that affects design decisions for the rest of IP stack (e.g. send MTU handling)

Paul Sokolovsky
 

Hello,

On Wed, 11 Oct 2017 14:56:02 +0000
"Nashif, Anas" <anas.nashif@intel.com> wrote:

Paul,

You gave very detailed background information and listed issues we
had in the past but it was not clear what you are proposing,
Yes, looking at Jukka's response, I must have failed miserably to
convey what I propose. I propose:

1. To reject the approach to send MTU handling in
https://github.com/zephyrproject-rtos/zephyr/pull/1330

2. To adopt the approach from
https://github.com/zephyrproject-rtos/zephyr/pull/119 , which, however,
may need further work to address the concerns raised against it.


we do
have sockets already; are you suggesting we should move everything to
use sockets?
No, I don't suggest that (here).

Is the socket interface ready for this? Then there are
the usual comments made whenever we discuss the IP stack
related to memory usage and footprint (here made by Jukka); can we
please quantify this and provide more data and context? For example, I
would be interested in numbers showing how much more memory/flash
we consume when sockets are used vs the same implementation using low-
level APIs.
To have such numbers, socket-based implementations of various
application-level protocols would first need to exist. They currently don't,
and I personally don't think it's a worthwhile investment of effort, at least
in the current state of affairs, when there are still known issues in
the underlying stack.

So, I'm left with just speculating that it's better to cross-adopt
approaches between native API and sockets API, instead of making them
diverge. That was the point of my post.

What is the penalty, and is it justifiable, given that
using sockets would give us a more portable solution and would allow
a random user/developer to implement protocols more easily?

So my request is for a more detailed proposal that goes into the
history of this, how we can move forward from here, and what such a
proposal would mean for existing code and protocols not using
sockets...
That is exactly what I tried to do: go through the history of the question, with the
relevant links. Hopefully the summary above clarifies the essence of
the proposal.

Thanks,
Paul


Anas


-----Original Message-----
From: Jukka Rissanen [mailto:jukka.rissanen@linux.intel.com]
Sent: Wednesday, October 11, 2017 6:06 AM
To: Paul Sokolovsky <paul.sokolovsky@linaro.org>;
devel@lists.zephyrproject.org; Tomasz Bursztyka
<tomasz.bursztyka@linux.intel.com>; David Brown
<david.brown@linaro.org>; Kumar Gala <kumar.gala@linaro.org>; Nashif,
Anas <anas.nashif@intel.com> Subject: Re: BSD Sockets in mainline,
and how that affects design decisions for the rest of IP stack (e.g.
send MTU handling)

Hi,

On Tue, 2017-10-10 at 21:50 +0300, Paul Sokolovsky wrote:
Hello,


A solution originally proposed was that the mentioned API functions
should take an MTU into account, and not allow a user to add more
data than MTU allows (accounting also for protocol headers). This
solution is rooted in the well-known POSIX semantics of "short
writes" - an application can request an arbitrary amount of data to
be written, but a system is free to process less data, based on
system resource availability. Amount of processed data is returned,
and an application is expected to retry the operation for the
remaining data. It was posted as
https://github.com/zephyrproject-rtos/zephyr/pull/119 . Again,
at that time, there was no consensus about way to solve it, so it
was implemented only for BSD Sockets API.
We can certainly implement something like this for the net_context
APIs. There is at least one issue with this as it is currently not
easy to pass information to application how much data we are able to
send, so currently it would be either that we could send all the data
or none of it.


Much later,
https://github.com/zephyrproject-rtos/zephyr/pull/1330 was posted.
It works in following way: it allows an application to create an
oversized packet, but a stack does a separate pass over it and
splits this packet into several packets with a valid length. A
comment immediately received (not by me) was that this patch just
duplicates in an adhoc way IP fragmentation support as required by
TCP/IP protocol.
Note that currently we do not have IPv4 fragmentation support
implemented, and IPv6 fragmentation is also disabled by default.
Reason for this is that the fragmentation requires lot of extra
memory to be used which might not be necessary in usual cases. Having
TCP segments split needs much less memory.


I would like to raise an additional argument while POSIX-inspired
approach may be better.
I would say there is no better or worse approach here. Just a
different point of view.


Consider a case when an application wants to send a big amount of
constant data, e.g. 900KB. It can be a system with e.g. 1MB of
flash and 64KB of RAM, an app sitting in ~100KB of flash, the rest
containing constant data to send. Following an "split oversized
packet" approach wouldn't help - an app wouldn't be able to create
an oversized packet of 900K - there's simply not enough RAM for it.
So, it would need to handle such a case differently anyway.
Of course your application is constrained by available memory and
other limits by your hw.

But POSIX-based approach, would allow to handle it right away - any
application need to be prepared to retry operation until completion
anyway, the amount of data is not important.


That's the essence of the question this RFC poses: given that the
POSIX-based approach is already in the mainline, does it make sense
to go for a Zephyr-special, adhoc solutions for a problem (and as
mentioned at the beginning, there can be more issues with a similar
choice).
Please note that BSD socket API is fully optional and not always
available. You cannot rely it to be present especially if you want to
minimize memory consumption. We need more general solution instead of
something that is only available for BSD sockets.



Answering "yes" may have interesting implications. For example, the
code in https://github.com/zephyrproject-rtos/zephyr/pull/1330 is
not needed for applications using BSD Sockets. There's at least
another issue solved on BSD Sockets level, but not on the native
API. There's an ongoing effort to separate kernel and userspace,
and BSD Sockets offer an automagic solution for that, while native
API allows a user app to access straight to the kernel networking
buffer, so there's a lot to solve there yet. Going like that, it
may turn out that native adhoc API, which initially was intended to
small and efficient, will grow bigger and more complex (== harder
to stabilize, containing more bugs) than something based on well
tried and tested approach like POSIX.
There has not been any public talk in mailing list about
userspace/kernel separation and how it affects IP stack etc. so it is
a bit difficult to say anything about this.



So, it would be nice if the networking stack, and overall Zephyr
architecture stakeholders consider both a particular issue and
overall implications on the design/implementation. There're many
more details than presented above, and the devil is definitely in
details, there's no absolutely "right" solution, it's a compromise.
I hope that Jukka and Tomasz, who are proponents of the second
(GH-1330) approach can correct me on the benefits of it.
You are unnecessarily creating this scenario about being pro or against a
solution. I have an example application in
https://github.com/zephyrproject-rtos/zephyr/pull/980 that needs to
send a large (several kB) file to the outside world using HTTP, and I am
trying to solve it efficiently. The application will not use BSD
sockets.



Thanks,
Paul

Jukka


--
Best Regards,
Paul

Linaro.org | Open source software for ARM SoCs
Follow Linaro: http://www.facebook.com/pages/Linaro
http://twitter.com/#!/linaroorg - http://www.linaro.org/linaro-blog


Re: BSD Sockets in mainline, and how that affects design decisions for the rest of IP stack (e.g. send MTU handling)

Luiz Augusto von Dentz
 

Hi Anas,

On Wed, Oct 11, 2017 at 5:56 PM, Nashif, Anas <anas.nashif@intel.com> wrote:
Paul,

You gave very detailed background information and listed issues we had in the past, but it was not clear what you are proposing. We do have sockets already; are you suggesting we should move everything to use sockets? Is the socket interface ready for this?
Then there are the usual comments made whenever we discuss the IP stack related to memory usage and footprint (here made by Jukka); can we please quantify this and provide more data and context? For example, I would be interested in numbers showing how much more memory/flash we consume when sockets are used vs the same implementation using low-level APIs. What is the penalty, and is it justifiable, given that using sockets would give us a more portable solution and would allow a random user/developer to implement protocols more easily?
Afaik a lot of RAM is spent on buffers, and if we can't do zero-copy
that means at the very least one extra buffer has to exist to move data
around. Fine-tuning the buffer size is also tricky: small chunks are
preferred but will take several more calls and copies into the stack;
on the other hand, bigger buffers may bump the memory footprint but
provide better latency. Btw, this sort of trade-off will only multiply
with the addition of kernel and userspace separation: regardless of
which layer it sits at, at some point the kernel will have to copy data
from userspace, in which case we may have not just one copy per socket
but two (socket->stack->driver), or perhaps three if the driver uses a
HAL not compatible with net_buf.

So my request is for a more detailed proposal that goes into the history of this, how we can move forward from here, and what such a proposal would mean for existing code and protocols not using sockets...

Anas


-----Original Message-----
From: Jukka Rissanen [mailto:jukka.rissanen@linux.intel.com]
Sent: Wednesday, October 11, 2017 6:06 AM
To: Paul Sokolovsky <paul.sokolovsky@linaro.org>; devel@lists.zephyrproject.org; Tomasz Bursztyka <tomasz.bursztyka@linux.intel.com>; David Brown <david.brown@linaro.org>; Kumar Gala <kumar.gala@linaro.org>; Nashif, Anas <anas.nashif@intel.com>
Subject: Re: BSD Sockets in mainline, and how that affects design decisions for the rest of IP stack (e.g. send MTU handling)

Hi,

On Tue, 2017-10-10 at 21:50 +0300, Paul Sokolovsky wrote:
Hello,


A solution originally proposed was that the mentioned API functions
should take an MTU into account, and not allow a user to add more data
than MTU allows (accounting also for protocol headers). This solution
is rooted in the well-known POSIX semantics of "short writes" - an
application can request an arbitrary amount of data to be written, but
a system is free to process less data, based on system resource
availability. Amount of processed data is returned, and an application
is expected to retry the operation for the remaining data. It was
posted as https://github.com/zephyrproject-rtos/zephyr/pull/119 .
Again,
at that time, there was no consensus about way to solve it, so it was
implemented only for BSD Sockets API.
We can certainly implement something like this for the net_context APIs. There is at least one issue with this as it is currently not easy to pass information to application how much data we are able to send, so currently it would be either that we could send all the data or none of it.


Much later,
https://github.com/zephyrproject-rtos/zephyr/pull/1330 was posted. It
works in following way: it allows an application to create an
oversized packet, but a stack does a separate pass over it and splits
this packet into several packets with a valid length. A comment
immediately received (not by me) was that this patch just duplicates
in an adhoc way IP fragmentation support as required by TCP/IP
protocol.
Note that currently we do not have IPv4 fragmentation support implemented, and IPv6 fragmentation is also disabled by default. Reason for this is that the fragmentation requires lot of extra memory to be used which might not be necessary in usual cases. Having TCP segments split needs much less memory.


I would like to raise an additional argument while POSIX-inspired
approach may be better.
I would say there is no better or worse approach here. Just a different point of view.


Consider a case when an application wants to send a big amount of
constant data, e.g. 900KB. It can be a system with e.g. 1MB of flash
and 64KB of RAM, an app sitting in ~100KB of flash, the rest
containing constant data to send. Following an "split oversized
packet" approach wouldn't help - an app wouldn't be able to create an
oversized packet of 900K - there's simply not enough RAM for it. So,
it would need to handle such a case differently anyway.
Of course your application is constrained by available memory and other limits by your hw.

But POSIX-based approach, would allow to handle it right away - any
application need to be prepared to retry operation until completion
anyway, the amount of data is not important.


That's the essence of the question this RFC poses: given that the
POSIX-based approach is already in the mainline, does it make sense to
go for a Zephyr-special, adhoc solutions for a problem (and as
mentioned at the beginning, there can be more issues with a similar
choice).
Please note that BSD socket API is fully optional and not always available. You cannot rely it to be present especially if you want to minimize memory consumption. We need more general solution instead of something that is only available for BSD sockets.



Answering "yes" may have interesting implications. For example, the
code in https://github.com/zephyrproject-rtos/zephyr/pull/1330 is not
needed for applications using BSD Sockets. There's at least another
issue solved on BSD Sockets level, but not on the native API. There's
an ongoing effort to separate kernel and userspace, and BSD Sockets
offer an automagic solution for that, while native API allows a user
app to access straight to the kernel networking buffer, so there's a
lot to solve there yet. Going like that, it may turn out that native
adhoc API, which initially was intended to small and efficient, will
grow bigger and more complex (== harder to stabilize, containing more
bugs) than something based on well tried and tested approach like
POSIX.
There has not been any public talk in mailing list about userspace/kernel separation and how it affects IP stack etc. so it is a bit difficult to say anything about this.



So, it would be nice if the networking stack, and overall Zephyr
architecture stakeholders consider both a particular issue and overall
implications on the design/implementation. There're many more details
than presented above, and the devil is definitely in details, there's
no absolutely "right" solution, it's a compromise. I hope that Jukka
and Tomasz, who are proponents of the second (GH-1330) approach can
correct me on the benefits of it.
You are unnecessarily creating this scenario about being pro or against a solution. I have an example application in
https://github.com/zephyrproject-rtos/zephyr/pull/980 that needs to send a large (several kB) file to the outside world using HTTP, and I am trying to solve it efficiently. The application will not use BSD sockets.



Thanks,
Paul

Jukka



--
Luiz Augusto von Dentz


Re: BSD Sockets in mainline, and how that affects design decisions for the rest of IP stack (e.g. send MTU handling)

Nashif, Anas
 

Paul,

You gave very detailed background information and listed issues we had in the past, but it was not clear what you are proposing. We do have sockets already; are you suggesting we should move everything to use sockets? Is the socket interface ready for this?
Then there are the usual comments made whenever we discuss the IP stack related to memory usage and footprint (here made by Jukka); can we please quantify this and provide more data and context? For example, I would be interested in numbers showing how much more memory/flash we consume when sockets are used vs the same implementation using low-level APIs. What is the penalty, and is it justifiable, given that using sockets would give us a more portable solution and would allow a random user/developer to implement protocols more easily?

So my request is for a more detailed proposal that goes into the history of this, how we can move forward from here, and what such a proposal would mean for existing code and protocols not using sockets...

Anas

-----Original Message-----
From: Jukka Rissanen [mailto:jukka.rissanen@linux.intel.com]
Sent: Wednesday, October 11, 2017 6:06 AM
To: Paul Sokolovsky <paul.sokolovsky@linaro.org>; devel@lists.zephyrproject.org; Tomasz Bursztyka <tomasz.bursztyka@linux.intel.com>; David Brown <david.brown@linaro.org>; Kumar Gala <kumar.gala@linaro.org>; Nashif, Anas <anas.nashif@intel.com>
Subject: Re: BSD Sockets in mainline, and how that affects design decisions for the rest of IP stack (e.g. send MTU handling)

Hi,

On Tue, 2017-10-10 at 21:50 +0300, Paul Sokolovsky wrote:
Hello,


A solution originally proposed was that the mentioned API functions
should take an MTU into account, and not allow a user to add more data
than MTU allows (accounting also for protocol headers). This solution
is rooted in the well-known POSIX semantics of "short writes" - an
application can request an arbitrary amount of data to be written, but
a system is free to process less data, based on system resource
availability. Amount of processed data is returned, and an application
is expected to retry the operation for the remaining data. It was
posted as https://github.com/zephyrproject-rtos/zephyr/pull/119 .
Again,
at that time, there was no consensus about way to solve it, so it was
implemented only for BSD Sockets API.
We can certainly implement something like this for the net_context APIs. There is at least one issue with this as it is currently not easy to pass information to application how much data we are able to send, so currently it would be either that we could send all the data or none of it.


Much later,
https://github.com/zephyrproject-rtos/zephyr/pull/1330 was posted. It
works in following way: it allows an application to create an
oversized packet, but a stack does a separate pass over it and splits
this packet into several packets with a valid length. A comment
immediately received (not by me) was that this patch just duplicates
in an adhoc way IP fragmentation support as required by TCP/IP
protocol.
Note that currently we do not have IPv4 fragmentation support implemented, and IPv6 fragmentation is also disabled by default. Reason for this is that the fragmentation requires lot of extra memory to be used which might not be necessary in usual cases. Having TCP segments split needs much less memory.


I would like to raise an additional argument while POSIX-inspired
approach may be better.
I would say there is no better or worse approach here. Just a different point of view.


Consider a case when an application wants to send a big amount of
constant data, e.g. 900KB. It can be a system with e.g. 1MB of flash
and 64KB of RAM, an app sitting in ~100KB of flash, the rest
containing constant data to send. Following an "split oversized
packet" approach wouldn't help - an app wouldn't be able to create an
oversized packet of 900K - there's simply not enough RAM for it. So,
it would need to handle such a case differently anyway.
Of course your application is constrained by available memory and other limits by your hw.

But POSIX-based approach, would allow to handle it right away - any
application need to be prepared to retry operation until completion
anyway, the amount of data is not important.


That's the essence of the question this RFC poses: given that the
POSIX-based approach is already in the mainline, does it make sense to
go for a Zephyr-special, adhoc solutions for a problem (and as
mentioned at the beginning, there can be more issues with a similar
choice).
Please note that BSD socket API is fully optional and not always available. You cannot rely it to be present especially if you want to minimize memory consumption. We need more general solution instead of something that is only available for BSD sockets.



Answering "yes" may have interesting implications. For example, the
code in https://github.com/zephyrproject-rtos/zephyr/pull/1330 is not
needed for applications using BSD Sockets. There's at least another
issue solved on BSD Sockets level, but not on the native API. There's
an ongoing effort to separate kernel and userspace, and BSD Sockets
offer an automagic solution for that, while native API allows a user
app to access straight to the kernel networking buffer, so there's a
lot to solve there yet. Going like that, it may turn out that native
adhoc API, which initially was intended to small and efficient, will
grow bigger and more complex (== harder to stabilize, containing more
bugs) than something based on well tried and tested approach like
POSIX.
There has not been any public talk in mailing list about userspace/kernel separation and how it affects IP stack etc. so it is a bit difficult to say anything about this.



So, it would be nice if the networking stack, and overall Zephyr
architecture stakeholders consider both a particular issue and overall
implications on the design/implementation. There're many more details
than presented above, and the devil is definitely in details, there's
no absolutely "right" solution, it's a compromise. I hope that Jukka
and Tomasz, who are proponents of the second (GH-1330) approach can
correct me on the benefits of it.
You are unnecessarily creating this scenario about being pro or against a solution. I have an example application in
https://github.com/zephyrproject-rtos/zephyr/pull/980 that needs to send a large (several kB) file to the outside world using HTTP, and I am trying to solve it efficiently. The application will not use BSD sockets.



Thanks,
Paul

Jukka


Re: Is there tutorials for Zephyr ticker/mayfly?

loquat3
 

Thanks for all.

I cannot understand ticker/mayfly yet.

My unclear points are:
:ticker
What is node?
What is user?
What is slot?
What is TRIGGER/WORKER/JOB?

:mayfly
What is CALLEE/CALLER?


2017-10-09 16:27 GMT+09:00 Chettimada, Vinayak Kariappa <vinayak.kariappa.chettimada@...>:

Hi biwa,

 

There is no tutorial or documentation for ticker or mayfly in the Zephyr repository.

 

The ticker and mayfly implementations are specific to BLE controller scheduling; they are barebones implementations contributed to the Zephyr Project.

We are constantly refactoring the implementation to use Zephyr OS features.

 

Some of the continued issues needing contributions are:

https://github.com/zephyrproject-rtos/zephyr/issues/2244

https://github.com/zephyrproject-rtos/zephyr/issues/2247

https://github.com/zephyrproject-rtos/zephyr/issues/2248

 

In short, mayflies schedule functions to be run deferred in another execution context.

Currently the BLE controller uses them to call functions in interrupt contexts.

Zephyr threads or work queues that satisfy the controller's needs will eventually replace mayfly.

 

If you can be more specific about what interests you in ticker/mayfly, I can provide more details.

 

Regards,

Vinayak

 

 

 

From: zephyr-devel-bounces@lists.zephyrproject.org [mailto:zephyr-devel-bounces@lists.zephyrproject.org] On Behalf Of Cufi, Carles
Sent: Saturday, October 07, 2017 3:48 PM
To: biwa <sjbiwa@...>; zephyr-devel@lists.zephyrproject.org
Subject: Re: [Zephyr-devel] Is there tutorials for Zephyr ticker/mayfly?

 

Hi there,

 

No, unfortunately there are no tutorials or even documentation about the ticker or the mayfly. That said, their author is Vinayak from Nordic, and you can reach him on IRC, he’s usually there. Try the channel #zephyr-bt on freenode.net.

 

Regards,

 

Carles

 

From: <zephyr-devel-bounces@lists.zephyrproject.org> on behalf of biwa <sjbiwa@...>
Date: Saturday, 7 October 2017 at 04:00
To: "zephyr-devel@lists.zephyrproject.org" <zephyr-devel@lists.zephyrproject.org>
Subject: [Zephyr-devel] Is there tutorials for Zephyr ticker/mayfly?

 

I am studying Zephyr OS.

Are there detailed tutorials for studying Zephyr OS's ticker/mayfly?



Re: BSD Sockets in mainline, and how that affects design decisions for the rest of IP stack (e.g. send MTU handling)

Jukka Rissanen
 

Hi,

On Tue, 2017-10-10 at 21:50 +0300, Paul Sokolovsky wrote:
Hello,


A solution originally proposed was that the mentioned API functions
should take an MTU into account, and not allow a user to add more
data
than MTU allows (accounting also for protocol headers). This solution
is rooted in the well-known POSIX semantics of "short writes" - an
application can request an arbitrary amount of data to be written,
but
a system is free to process less data, based on system resource
availability. Amount of processed data is returned, and an
application
is expected to retry the operation for the remaining data. It was
posted as https://github.com/zephyrproject-rtos/zephyr/pull/119 .
Again,
at that time, there was no consensus about way to solve it, so it was
implemented only for BSD Sockets API.
We can certainly implement something like this for the net_context
APIs. There is at least one issue with this: it is currently not
easy to pass information to the application about how much data we were
able to send, so currently we could either send all the data
or none of it.


Much later,
https://github.com/zephyrproject-rtos/zephyr/pull/1330 was posted. It
works in following way: it allows an application to create an
oversized packet, but a stack does a separate pass over it and splits
this packet into several packets with a valid length. A comment
immediately received (not by me) was that this patch just duplicates
in an adhoc way IP fragmentation support as required by TCP/IP
protocol.
Note that currently we do not have IPv4 fragmentation support
implemented, and IPv6 fragmentation is also disabled by default. The reason
for this is that fragmentation requires a lot of extra memory, which
might not be necessary in usual cases. Splitting TCP segments
needs much less memory.


I would like to raise an additional argument why the POSIX-inspired
approach may be better.
I would say there is no better or worse approach here. Just a different
point of view.


Consider a case when an application wants to
send a big amount of constant data, e.g. 900KB. It can be a system
with e.g. 1MB of flash and 64KB of RAM, an app sitting in ~100KB
of flash, the rest containing constant data to send. Following an
"split oversized packet" approach wouldn't help - an app wouldn't be
able to create an oversized packet of 900K - there's simply not
enough
RAM for it. So, it would need to handle such a case differently
anyway.
Of course your application is constrained by the available memory and other
limits of your hw.

But the POSIX-based approach would allow handling it right away: any
application needs to be prepared to retry the operation until
completion anyway, so the amount of data is not important.


That's the essence of the question this RFC poses: given that the
POSIX-based approach is already in the mainline, does it make sense
to
go for a Zephyr-special, adhoc solutions for a problem (and as
mentioned at the beginning, there can be more issues with a similar
choice).
Please note that the BSD socket API is fully optional and not always
available. You cannot rely on it being present, especially if you want to
minimize memory consumption. We need a more general solution instead of
something that is only available for BSD sockets.



Answering "yes" may have interesting implications. For example, the
code in https://github.com/zephyrproject-rtos/zephyr/pull/1330 is
not needed for applications using BSD Sockets. There's at least
another
issue solved on BSD Sockets level, but not on the native API. There's
an ongoing effort to separate kernel and userspace, and BSD Sockets
offer an automagic solution for that, while native API allows a user
app to access straight to the kernel networking buffer, so there's a
lot to solve there yet. Going like that, it may turn out that native
adhoc API, which initially was intended to small and efficient, will
grow bigger and more complex (== harder to stabilize, containing more
bugs) than something based on well tried and tested approach like
POSIX.
There has not been any public talk on the mailing list about
userspace/kernel separation and how it affects the IP stack etc., so it is a
bit difficult to say anything about this.



So, it would be nice if the networking stack, and overall Zephyr
architecture stakeholders consider both a particular issue and
overall
implications on the design/implementation. There're many more
details than presented above, and the devil is definitely in details,
there's no absolutely "right" solution, it's a compromise. I hope
that
Jukka and Tomasz, who are proponents of the second (GH-1330) approach
can correct me on the benefits of it.
You are unnecessarily creating this scenario about being pro or against a
solution. I have an example application in
https://github.com/zephyrproject-rtos/zephyr/pull/980 that needs to send a
large (several kB) file to the outside world using HTTP, and I am trying to
solve it efficiently. The application will not use BSD sockets.



Thanks,
Paul

Jukka


Linaro Connect 2017 presentations

Maciek Borzecki <maciek.borzecki@...>
 

Some Zephyr related presentations for those who could not attend
Linaro Connect in SF:

- Deploy STM32 family on Zephyr – SFO17-102
http://connect.linaro.org/resource/sfo17/sfo17-102/
- An update on MCUBoot (The IoT Bootloader) – SFO17-118
http://connect.linaro.org/resource/sfo17/sfo17-118/
- Using SoC Vendor HALs in the Zephyr Project – SFO17-112
http://connect.linaro.org/resource/sfo17/sfo17-112/
- New Zephyr features: LWM2M / FOTA Framework – SFO17-113
http://connect.linaro.org/resource/sfo17/sfo17-113/
- BSD Sockets API in Zephyr RTOS – SFO17-108
http://connect.linaro.org/resource/sfo17/sfo17-108/

Mynewt, but interesting nonetheless:
- Modular middleware components in Apache Mynewt OS – SFO17-507
http://connect.linaro.org/resource/sfo17/sfo17-507/

--
Maciek Borzecki


Re: [Zephyr-devel] Tinytile Zephyr implementation

Graham Stott <gbcstott1@...>
 

I meant to also say to look at projects for the Arduino/Genuino 101, as tinyTILE is a scaled-down version of that board.

 

From: zephyr-devel-bounces@... [mailto:zephyr-devel-bounces@...] On Behalf Of Graham Stott
Sent: Tuesday, October 10, 2017 2:01 PM
To: 'Jie Zhou' <zhoujie@...>; zephyr-devel@...
Subject: Re: [Zephyr-devel] Tinytile Zephyr implementation

 

As tinyTILE is basically just a version of the Intel® Curie™ module, you can look for projects under Curie. Yes, it has additional pins, but the rest is the same.

 

Graham

 

From: zephyr-devel-bounces@... [mailto:zephyr-devel-bounces@...] On Behalf Of Jie Zhou
Sent: Tuesday, October 10, 2017 1:21 AM
To: zephyr-devel@...
Subject: [Zephyr-devel] Tinytile Zephyr implementation

 

Hi All,

 

Has anyone done anything with tinyTILE? Since the board is quite new I was wondering if someone has done a project with Zephyr OS on tinyTILE. The specs of the chip looks promising for IoT implementations. Any info or the setup you are using will help. I'm trying to evaluate the capability of tinyTILE with Zephyr.

 

Thanks,

Jie


Re: Tinytile Zephyr implementation

Graham Stott <gbcstott1@...>
 

As tinyTILE is basically just a version of the Intel® Curie™ module, you can look for projects under Curie. Yes, it has additional pins, but the rest is the same.

 

Graham

 

From: zephyr-devel-bounces@... [mailto:zephyr-devel-bounces@...] On Behalf Of Jie Zhou
Sent: Tuesday, October 10, 2017 1:21 AM
To: zephyr-devel@...
Subject: [Zephyr-devel] Tinytile Zephyr implementation

 

Hi All,

 

Has anyone done anything with tinyTILE? Since the board is quite new I was wondering if someone has done a project with Zephyr OS on tinyTILE. The specs of the chip looks promising for IoT implementations. Any info or the setup you are using will help. I'm trying to evaluate the capability of tinyTILE with Zephyr.

 

Thanks,

Jie


BSD Sockets in mainline, and how that affects design decisions for the rest of IP stack (e.g. send MTU handling)

Paul Sokolovsky
 

Hello,

There was an RFC on this list to implement a BSD Sockets API
compatibility layer for Zephyr some 6 months ago. The majority of that
functionality went into the 1.9 release (with some additional pieces
still going in).

Before and while working on sockets, a number of issues with
the native stack were discovered/documented, and solutions for some were
proposed. At that time they were rather tentative and experimental, and
there was no consensus on how to resolve them, so as a proof of concept,
they were implemented just in the sockets layer.

An example is handling of the send MTU, originally
https://jira.zephyrproject.org/browse/ZEP-1998 , now
https://github.com/zephyrproject-rtos/zephyr/issues/3439 . The essence
of the issue is that native networking API functions to create an
outgoing packet don't control packet size in any way. It's easy to
create an oversized packet which will fail during an actual send
operation.

A solution originally proposed was that the mentioned API functions
should take the MTU into account, and not allow a user to add more data
than the MTU allows (accounting also for protocol headers). This solution
is rooted in the well-known POSIX semantics of "short writes" - an
application can request an arbitrary amount of data to be written, but
a system is free to process less data, based on system resource
availability. The amount of processed data is returned, and an application
is expected to retry the operation for the remaining data. It was
posted as https://github.com/zephyrproject-rtos/zephyr/pull/119 . Again,
at that time, there was no consensus about the way to solve it, so it was
implemented only for the BSD Sockets API.
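For illustration, this is the classic retry loop that "short write" semantics imply on the application side - a plain BSD-sockets sketch, nothing Zephyr-specific:

#include <sys/types.h>
#include <sys/socket.h>

/* Send all of buf, retrying on short writes. */
ssize_t send_all(int sock, const void *buf, size_t len)
{
        const char *p = buf;
        size_t remaining = len;

        while (remaining > 0) {
                ssize_t sent = send(sock, p, remaining, 0);

                if (sent < 0) {
                        return -1; /* real error: caller checks errno */
                }
                /* Short write: the stack accepted only part of the data
                 * (e.g. one MTU's worth of payload); retry with the rest. */
                p += sent;
                remaining -= sent;
        }
        return (ssize_t)len;
}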

Much later,
https://github.com/zephyrproject-rtos/zephyr/pull/1330 was posted. It
works in the following way: it allows an application to create an
oversized packet, but the stack does a separate pass over it and splits
this packet into several packets with a valid length. A comment
immediately received (not by me) was that this patch just duplicates,
in an ad hoc way, the IP fragmentation support required by the TCP/IP
protocol.

I would like to raise an additional argument why the POSIX-inspired
approach may be better. Consider a case when an application wants to
send a big amount of constant data, e.g. 900KB. It can be a system
with e.g. 1MB of flash and 64KB of RAM, with an app sitting in ~100KB
of flash and the rest containing constant data to send. Following a
"split oversized packet" approach wouldn't help - the app wouldn't be
able to create an oversized packet of 900K - there's simply not enough
RAM for it. So, it would need to handle such a case differently anyway.
But the POSIX-based approach would allow handling it right away: any
application needs to be prepared to retry the operation until
completion anyway, so the amount of data is not important.


That's the essence of the question this RFC poses: given that the
POSIX-based approach is already in the mainline, does it make sense to
go for Zephyr-special, ad hoc solutions to a problem (and, as
mentioned at the beginning, there can be more issues with a similar
choice)?

Answering "yes" may have interesting implications. For example, the
code in https://github.com/zephyrproject-rtos/zephyr/pull/1330 is
not needed for applications using BSD Sockets. There's at least another
issue solved on BSD Sockets level, but not on the native API. There's
an ongoing effort to separate kernel and userspace, and BSD Sockets
offer an automagic solution for that, while native API allows a user
app to access straight to the kernel networking buffer, so there's a
lot to solve there yet. Going like that, it may turn out that native
adhoc API, which initially was intended to small and efficient, will
grow bigger and more complex (== harder to stabilize, containing more
bugs) than something based on well tried and tested approach like POSIX.

So, it would be nice if the networking stack and overall Zephyr
architecture stakeholders would consider both the particular issue and the
overall implications for the design/implementation. There are many more
details than presented above, and the devil is definitely in the details;
there's no absolutely "right" solution, it's a compromise. I hope that
Jukka and Tomasz, who are proponents of the second (GH-1330) approach,
can correct me on its benefits.


Thanks,
Paul

Linaro.org | Open source software for ARM SoCs
Follow Linaro: http://www.facebook.com/pages/Linaro
http://twitter.com/#!/linaroorg - http://www.linaro.org/linaro-blog


Tinytile Zephyr implementation

Jie Zhou <zhoujie@...>
 

Hi All,

Has anyone done anything with tinyTILE? Since the board is quite new, I was wondering if someone has done a project with Zephyr OS on tinyTILE. The specs of the chip look promising for IoT implementations. Any info, or the setup you are using, would help. I'm trying to evaluate the capability of tinyTILE with Zephyr.

Thanks,
Jie


Re: Any plan to support ST BlueNRG chips?

Erwan Gouriou
 

Hi Aaron,

BlueNRG chips are actually already supported (even if deeper testing would be welcome).
To activate the support, you could have a look at the Disco L475 IoT board configuration.

Cheers
Erwan



Any plan to support ST BlueNRG chips?

Aaron Xu
 

Hi,

I saw in a PPT file that Zephyr would support ST BlueNRG in 1.8,
but it looks like Zephyr still doesn't support it.

Do you have any information about the plans for supporting BlueNRG?


Regards,
Aaron


Re: Bluetooth mesh_demo crashes

Johan Hedberg
 

On Mon, Oct 09, 2017, Steve Brown wrote:
Yes, it looks indeed like there's a missing initialization of this
queue. I think this is a regression that slipped in at some point in
the recent past, and got uncovered now when the local network
interface is always enabled (instead of being behind a Kconfig
option).

Instead of calling k_fifo_init however, I think it's more efficient
to use a static initializer for this like I've done in the attached
patch. Can you confirm that this also fixes the issue for you? I'll
then create a pull request out of it.
It did fix the problem.
Thanks for testing! The PR is here:

https://github.com/zephyrproject-rtos/zephyr/pull/4238

Johan


Re: Bluetooth mesh_demo crashes

Steve Brown
 

Johan,

On Mon, 2017-10-09 at 19:22 +0300, Johan Hedberg wrote:
Hi Steve,

On Mon, Oct 09, 2017, Steve Brown wrote:
The sample mesh_demo crashes on my nrf52840_pca10056 during
configuration with a data access violation.

It looks like it's caused by bt_mesh.local_queue not being
initialized.

I added a k_fifo_init to mesh/net.c:bt_mesh_net_init and it seemed
to correct the problem.

Can somebody familiar with the code confirm this?
Yes, it looks indeed like there's a missing initialization of this
queue. I think this is a regression that slipped in at some point in
the recent past, and got uncovered now when the local network
interface is always enabled (instead of being behind a Kconfig option).

Instead of calling k_fifo_init however, I think it's more efficient
to use a static initializer for this, like I've done in the attached
patch. Can you confirm that this also fixes the issue for you? I'll
then create a pull request out of it.

Johan
It did fix the problem.

Thanks,

Steve


Re: Bluetooth mesh_demo crashes

Johan Hedberg
 

Hi Steve,

On Mon, Oct 09, 2017, Steve Brown wrote:
The sample mesh_demo crashes on my nrf52840_pca10056 during
configuration with a data access violation.

It looks like it's caused by bt_mesh.local_queue not being initialized.

I added a k_fifo_init to mesh/net.c:bt_mesh_net_init and it seemed to
correct the problem.

Can somebody familiar with the code confirm this?
Yes, it looks indeed like there's a missing initialization of this
queue. I think this is a regression that slipped in at some point in the
recent past, and got uncovered now when the local network interface is
always enabled (instead of being behind a Kconfig option).

Instead of calling k_fifo_init however, I think it's more efficient to
use a static initializer for this like I've done in the attached patch.
Can you confirm that this also fixes the issue for you? I'll then create
a pull request out of it.
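For reference, the two initialization styles in question look roughly like this (a generic sketch; the attached patch is authoritative):

#include <zephyr.h>

/* Style 1: runtime initialization - must run before first use. */
static struct k_fifo local_queue;

static void init(void)
{
        k_fifo_init(&local_queue);
}

/* Style 2: static initializer - the FIFO is valid from boot, with no
 * init call and no window in which it can be used uninitialized. */
K_FIFO_DEFINE(local_queue2);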

Johan


Bluetooth mesh_demo crashes

Steve Brown
 

The sample mesh_demo crashes on my nrf52840_pca10056 during
configuration with a data access violation.

It looks like it's caused by bt_mesh.local_queue not being initialized.

I added a k_fifo_init to mesh/net.c:bt_mesh_net_init and it seemed to
correct the problem.

Can somebody familiar with the code confirm this?

Thanks,

Steve


Re: Is there tutorials for Zephyr ticker/mayfly?

Chettimada, Vinayak Kariappa
 

Hi biwa,

 

There is no tutorial or documentation for ticker or mayfly in the Zephyr repository.

 

The ticker and mayfly implementations are specific to BLE controller scheduling; they are barebones implementations contributed to the Zephyr Project.

We are constantly refactoring the implementation to use Zephyr OS features.

 

Some of the continued issues needing contributions are:

https://github.com/zephyrproject-rtos/zephyr/issues/2244

https://github.com/zephyrproject-rtos/zephyr/issues/2247

https://github.com/zephyrproject-rtos/zephyr/issues/2248

 

In short, mayflies schedule functions to be run deferred in another execution context.

Currently the BLE controller uses them to call functions in interrupt contexts.

Zephyr threads or work queues that satisfy the controller's needs will eventually replace mayfly.

 

If you can be more specific about what interests you in ticker/mayfly, I can provide more details.

 

Regards,

Vinayak

 

 

 

From: zephyr-devel-bounces@... [mailto:zephyr-devel-bounces@...] On Behalf Of Cufi, Carles
Sent: Saturday, October 07, 2017 3:48 PM
To: biwa <sjbiwa@...>; zephyr-devel@...
Subject: Re: [Zephyr-devel] Is there tutorials for Zephyr ticker/mayfly?

 

Hi there,

 

No, unfortunately there are no tutorials or even documentation about the ticker or the mayfly. That said, their author is Vinayak from Nordic, and you can reach him on IRC, he’s usually there. Try the channel #zephyr-bt on freenode.net.

 

Regards,

 

Carles

 

From: <zephyr-devel-bounces@...> on behalf of biwa <sjbiwa@...>
Date: Saturday, 7 October 2017 at 04:00
To: "zephyr-devel@..." <zephyr-devel@...>
Subject: [Zephyr-devel] Is there tutorials for Zephyr ticker/mayfly?

 

I am studying Zephyr OS.

Are there detailed tutorials for studying Zephyr OS's ticker/mayfly?
