
Re: BSD Sockets in mainline, and how that affects design decisions for the rest of IP stack (e.g. send MTU handling)

Luiz Augusto von Dentz
 

Hi Anas,

On Wed, Oct 11, 2017 at 5:56 PM, Nashif, Anas <anas.nashif@intel.com> wrote:
Paul,

You gave very detailed background information and listed issues we had in the past, but it was not clear what you are proposing. We do have sockets already; are you suggesting we should move everything to use sockets? Is the socket interface ready for this?
Then there are the usual comments made whenever we discuss the IP stack, related to memory usage and footprint (here made by Jukka). Can we please quantify this and provide more data and context? For example, I would be interested in numbers showing how much more memory/flash we consume when sockets are used vs. the same implementation using low-level APIs. What is the penalty, and is it justifiable, given that using sockets would give us a more portable solution and would allow the random user/developer to implement protocols more easily?
Afaik a lot of RAM is spent on buffers, and if we can't do zero-copy
that means at the very least one extra buffer has to exist to move
data around. Fine-tuning the buffer size is also tricky: small chunks
are preferred but take several more calls and copies into the stack;
on the other hand, bigger buffers may bump the memory footprint but
provide better latency. Btw, this sort of trade-off will only grow
with the addition of kernel and userspace separation: regardless of
which layer that sits in, at some point the kernel will have to copy
data from userspace, in which case we may have not just one copy per
socket but two (socket->stack->driver), or perhaps three if the driver
is using a HAL not compatible with net_buf.

So my request is to have a more detailed proposal going into the history of this, how we can move forward from here, and what such a proposal would mean for existing code and protocols not using sockets...

Anas


-----Original Message-----
From: Jukka Rissanen [mailto:jukka.rissanen@linux.intel.com]
Sent: Wednesday, October 11, 2017 6:06 AM
To: Paul Sokolovsky <paul.sokolovsky@linaro.org>; devel@lists.zephyrproject.org; Tomasz Bursztyka <tomasz.bursztyka@linux.intel.com>; David Brown <david.brown@linaro.org>; Kumar Gala <kumar.gala@linaro.org>; Nashif, Anas <anas.nashif@intel.com>
Subject: Re: BSD Sockets in mainline, and how that affects design decisions for the rest of IP stack (e.g. send MTU handling)

Hi,

On Tue, 2017-10-10 at 21:50 +0300, Paul Sokolovsky wrote:
Hello,


A solution originally proposed was that the mentioned API functions
should take an MTU into account, and not allow a user to add more data
than MTU allows (accounting also for protocol headers). This solution
is rooted in the well-known POSIX semantics of "short writes" - an
application can request an arbitrary amount of data to be written, but
a system is free to process less data, based on system resource
availability. Amount of processed data is returned, and an application
is expected to retry the operation for the remaining data. It was
posted as https://github.com/zephyrproject-rtos/zephyr/pull/119 .
Again, at that time, there was no consensus about the way to solve it,
so it was implemented only for the BSD Sockets API.
We can certainly implement something like this for the net_context APIs. There is at least one issue with this: it is currently not easy to pass information to the application about how much data we are able to send, so currently it would be either that we send all the data or none of it.


Much later,
https://github.com/zephyrproject-rtos/zephyr/pull/1330 was posted. It
works in the following way: it allows an application to create an
oversized packet, but the stack does a separate pass over it and
splits it into several packets with a valid length. A comment
immediately received (not by me) was that this patch just duplicates,
in an ad hoc way, the IP fragmentation support required by the TCP/IP
protocol.
Note that currently we do not have IPv4 fragmentation support implemented, and IPv6 fragmentation is also disabled by default. The reason is that fragmentation requires a lot of extra memory, which might not be necessary in typical cases. Splitting TCP segments needs much less memory.


I would like to raise an additional argument why the POSIX-inspired
approach may be better.
I would say there is no better or worse approach here. Just a different point of view.


Consider a case when an application wants to send a big amount of
constant data, e.g. 900KB. It can be a system with e.g. 1MB of flash
and 64KB of RAM, an app sitting in ~100KB of flash, the rest
containing constant data to send. Following a "split oversized
packet" approach wouldn't help - an app wouldn't be able to create an
oversized packet of 900K - there's simply not enough RAM for it. So,
it would need to handle such a case differently anyway.
Of course your application is constrained by available memory and other limits of your hw.

But the POSIX-based approach would allow handling it right away - any
application needs to be prepared to retry the operation until
completion anyway; the amount of data is not important.


That's the essence of the question this RFC poses: given that the
POSIX-based approach is already in the mainline, does it make sense to
go for Zephyr-special, ad hoc solutions to a problem (and, as
mentioned at the beginning, there can be more issues with a similar
choice)?
Please note that the BSD socket API is fully optional and not always available. You cannot rely on it being present, especially if you want to minimize memory consumption. We need a more general solution instead of something that is only available for BSD sockets.



Answering "yes" may have interesting implications. For example, the
code in https://github.com/zephyrproject-rtos/zephyr/pull/1330 is not
needed for applications using BSD Sockets. There's at least one more
issue solved at the BSD Sockets level, but not in the native API.
There's an ongoing effort to separate kernel and userspace, and BSD
Sockets offer an automagic solution for that, while the native API
lets a user app access the kernel networking buffers directly, so
there's a lot to solve there yet. Going like that, it may turn out
that the native ad hoc API, which initially was intended to be small
and efficient, will grow bigger and more complex (== harder to
stabilize, containing more bugs) than something based on a well tried
and tested approach like POSIX.
There has not been any public discussion on the mailing list about userspace/kernel separation and how it affects the IP stack etc., so it is a bit difficult to say anything about this.



So, it would be nice if the networking stack and overall Zephyr
architecture stakeholders consider both the particular issue and the
overall implications on the design/implementation. There are many more
details than presented above, and the devil is definitely in the
details; there's no absolutely "right" solution, it's a compromise. I
hope that Jukka and Tomasz, who are proponents of the second (GH-1330)
approach, can correct me on the benefits of it.
You are unnecessarily framing this as being for or against a solution. I have an example application in https://github.com/zephyrproject-rtos/zephyr/pull/980 that needs to send a large (several kB) file to the outside world using HTTP, and I am trying to solve it efficiently. The application will not use BSD sockets.



Thanks,
Paul

Jukka

_______________________________________________
Zephyr-devel mailing list
Zephyr-devel@lists.zephyrproject.org
https://lists.zephyrproject.org/mailman/listinfo/zephyr-devel


--
Luiz Augusto von Dentz


Re: BSD Sockets in mainline, and how that affects design decisions for the rest of IP stack (e.g. send MTU handling)

Nashif, Anas
 

Paul,

You gave very detailed background information and listed issues we had in the past, but it was not clear what you are proposing. We do have sockets already; are you suggesting we should move everything to use sockets? Is the socket interface ready for this?
Then there are the usual comments made whenever we discuss the IP stack, related to memory usage and footprint (here made by Jukka). Can we please quantify this and provide more data and context? For example, I would be interested in numbers showing how much more memory/flash we consume when sockets are used vs. the same implementation using low-level APIs. What is the penalty, and is it justifiable, given that using sockets would give us a more portable solution and would allow the random user/developer to implement protocols more easily?

So my request is to have a more detailed proposal going into the history of this, how we can move forward from here, and what such a proposal would mean for existing code and protocols not using sockets...

Anas



Re: Is there tutorials for Zephyr ticker/mayfly?

loquat3
 

Thanks for all the replies.

I cannot understand ticker/mayfly yet.

My 'UNCLEAR POINT' is,
:ticker
What is node?
What is user?
What is slot?
What is TRIGGER/WORKER/JOB?

:mayfly
What is CALLEE/CALLER?


2017-10-09 16:27 GMT+09:00 Chettimada, Vinayak Kariappa <vinayak.kariappa.chettimada@...>:

Hi biwa,

 

There is no tutorial or documentation for ticker or mayfly in the Zephyr repository.

 

The Ticker and Mayfly implementations are specific to BLE controller scheduling; they are barebones implementations contributed to the Zephyr Project.

We are constantly refactoring the implementation to use Zephyr OS features.

 

Some of the continued issues needing contributions are:

https://github.com/zephyrproject-rtos/zephyr/issues/2244

https://github.com/zephyrproject-rtos/zephyr/issues/2247

https://github.com/zephyrproject-rtos/zephyr/issues/2248

 

In short, Mayfly schedules functions to be run deferred in another execution context.

Currently the BLE controller uses it to call functions in interrupt contexts.

Zephyr threads or work queues that satisfy the controller's needs will replace Mayfly.

 

If you can be more specific about what interests you in Ticker/Mayfly, I can provide more details.

 

Regards,

Vinayak

 

 

 

From: zephyr-devel-bounces@lists.zephyrproject.org [mailto:zephyr-devel-bounces@lists.zephyrproject.org] On Behalf Of Cufi, Carles
Sent: Saturday, October 07, 2017 3:48 PM
To: biwa <sjbiwa@...>; zephyr-devel@lists.zephyrproject.org
Subject: Re: [Zephyr-devel] Is there tutorials for Zephyr ticker/mayfly?

 

Hi there,

 

No, unfortunately there are no tutorials or even documentation about the ticker or the mayfly. That said, their author is Vinayak from Nordic, and you can reach him on IRC, he’s usually there. Try the channel #zephyr-bt on freenode.net.

 

Regards,

 

Carles

 

From: <zephyr-devel-bounces@lists.zephyrproject.org> on behalf of biwa <sjbiwa@...>
Date: Saturday, 7 October 2017 at 04:00
To: "zephyr-devel@lists.zephyrproject.org" <zephyr-devel@lists.zephyrproject.org>
Subject: [Zephyr-devel] Is there tutorials for Zephyr ticker/mayfly?

 

I am studying ZephyrOS.

Are there detailed tutorials for studying zephyrOS's ticker/mayfly?



Re: BSD Sockets in mainline, and how that affects design decisions for the rest of IP stack (e.g. send MTU handling)

Jukka Rissanen
 

Hi,

On Tue, 2017-10-10 at 21:50 +0300, Paul Sokolovsky wrote:
Hello,


A solution originally proposed was that the mentioned API functions
should take an MTU into account, and not allow a user to add more data
than MTU allows (accounting also for protocol headers). This solution
is rooted in the well-known POSIX semantics of "short writes" - an
application can request an arbitrary amount of data to be written, but
a system is free to process less data, based on system resource
availability. The amount of processed data is returned, and an
application is expected to retry the operation for the remaining data.
It was posted as https://github.com/zephyrproject-rtos/zephyr/pull/119 .
Again, at that time, there was no consensus about the way to solve it,
so it was implemented only for the BSD Sockets API.
We can certainly implement something like this for the net_context
APIs. There is at least one issue with this: it is currently not easy
to pass information to the application about how much data we are able
to send, so currently it would be either that we send all the data or
none of it.


Much later, https://github.com/zephyrproject-rtos/zephyr/pull/1330
was posted. It works in the following way: it allows an application to
create an oversized packet, but the stack does a separate pass over it
and splits it into several packets with a valid length. A comment
immediately received (not by me) was that this patch just duplicates,
in an ad hoc way, the IP fragmentation support required by the TCP/IP
protocol.
Note that currently we do not have IPv4 fragmentation support
implemented, and IPv6 fragmentation is also disabled by default. The
reason is that fragmentation requires a lot of extra memory, which
might not be necessary in typical cases. Splitting TCP segments needs
much less memory.


I would like to raise an additional argument why the POSIX-inspired
approach may be better.
I would say there is no better or worse approach here, just a
different point of view.


Consider a case when an application wants to send a big amount of
constant data, e.g. 900KB. It can be a system with e.g. 1MB of flash
and 64KB of RAM, an app sitting in ~100KB of flash, the rest
containing constant data to send. Following a "split oversized packet"
approach wouldn't help - an app wouldn't be able to create an
oversized packet of 900K - there's simply not enough RAM for it. So,
it would need to handle such a case differently anyway.
Of course your application is constrained by available memory and
other limits of your hw.

But the POSIX-based approach would allow handling it right away - any
application needs to be prepared to retry the operation until
completion anyway; the amount of data is not important.


That's the essence of the question this RFC poses: given that the
POSIX-based approach is already in the mainline, does it make sense to
go for Zephyr-special, ad hoc solutions to a problem (and, as
mentioned at the beginning, there can be more issues with a similar
choice)?
Please note that the BSD socket API is fully optional and not always
available. You cannot rely on it being present, especially if you want
to minimize memory consumption. We need a more general solution
instead of something that is only available for BSD sockets.



Answering "yes" may have interesting implications. For example, the
code in https://github.com/zephyrproject-rtos/zephyr/pull/1330 is not
needed for applications using BSD Sockets. There's at least one more
issue solved at the BSD Sockets level, but not in the native API.
There's an ongoing effort to separate kernel and userspace, and BSD
Sockets offer an automagic solution for that, while the native API
lets a user app access the kernel networking buffers directly, so
there's a lot to solve there yet. Going like that, it may turn out
that the native ad hoc API, which initially was intended to be small
and efficient, will grow bigger and more complex (== harder to
stabilize, containing more bugs) than something based on a well tried
and tested approach like POSIX.
There has not been any public discussion on the mailing list about
userspace/kernel separation and how it affects the IP stack etc., so
it is a bit difficult to say anything about this.



So, it would be nice if the networking stack and overall Zephyr
architecture stakeholders consider both the particular issue and the
overall implications on the design/implementation. There are many more
details than presented above, and the devil is definitely in the
details; there's no absolutely "right" solution, it's a compromise. I
hope that Jukka and Tomasz, who are proponents of the second (GH-1330)
approach, can correct me on the benefits of it.
You are unnecessarily framing this as being for or against a solution.
I have an example application in
https://github.com/zephyrproject-rtos/zephyr/pull/980 that needs to
send a large (several kB) file to the outside world using HTTP, and I
am trying to solve it efficiently. The application will not use BSD
sockets.



Thanks,
Paul

Jukka


Linaro Connect 2017 presentations

Maciek Borzecki <maciek.borzecki@...>
 

Some Zephyr related presentations for those who could not attend
Linaro Connect in SF:

- Deploy STM32 family on Zephyr – SFO17-102
http://connect.linaro.org/resource/sfo17/sfo17-102/
- An update on MCUBoot (The IoT Bootloader) – SFO17-118
http://connect.linaro.org/resource/sfo17/sfo17-118/
- Using SoC Vendor HALs in the Zephyr Project – SFO17-112
http://connect.linaro.org/resource/sfo17/sfo17-112/
- New Zephyr features: LWM2M / FOTA Framework – SFO17-113
http://connect.linaro.org/resource/sfo17/sfo17-113/
- BSD Sockets API in Zephyr RTOS – SFO17-108
http://connect.linaro.org/resource/sfo17/sfo17-108/

Mynewt, but interesting nonetheless:
- Modular middleware components in Apache Mynewt OS – SFO17-507
http://connect.linaro.org/resource/sfo17/sfo17-507/

--
Maciek Borzecki


Re: [Zephyr-devel] Tinytile Zephyr implementation

Graham Stott <gbcstott1@...>
 

I meant to also say to look at projects for the Arduino/Genuino 101, as tinyTILE is a scaled-down version of this board.

 

From: zephyr-devel-bounces@... [mailto:zephyr-devel-bounces@...] On Behalf Of Graham Stott
Sent: Tuesday, October 10, 2017 2:01 PM
To: 'Jie Zhou' <zhoujie@...>; zephyr-devel@...
Subject: Re: [Zephyr-devel] Tinytile Zephyr implementation

 

As tinyTILE is basically just a version of the Intel® Curie™ module, you can look for projects under Curie. Yes, it has additional pins, but the rest is the same.

 

Graham

 

From: zephyr-devel-bounces@... [mailto:zephyr-devel-bounces@...] On Behalf Of Jie Zhou
Sent: Tuesday, October 10, 2017 1:21 AM
To: zephyr-devel@...
Subject: [Zephyr-devel] Tinytile Zephyr implementation

 

Hi All,

 

Has anyone done anything with tinyTILE? Since the board is quite new, I was wondering if someone has done a project with Zephyr OS on tinyTILE. The specs of the chip look promising for IoT implementations. Any info on the setup you are using will help. I'm trying to evaluate the capability of tinyTILE with Zephyr.

 

Thanks,

Jie


Re: Tinytile Zephyr implementation

Graham Stott <gbcstott1@...>
 

As tinyTILE is basically just a version of the Intel® Curie™ module, you can look for projects under Curie. Yes, it has additional pins, but the rest is the same.

 

Graham

 

From: zephyr-devel-bounces@... [mailto:zephyr-devel-bounces@...] On Behalf Of Jie Zhou
Sent: Tuesday, October 10, 2017 1:21 AM
To: zephyr-devel@...
Subject: [Zephyr-devel] Tinytile Zephyr implementation

 

Hi All,

 

Has anyone done anything with tinyTILE? Since the board is quite new, I was wondering if someone has done a project with Zephyr OS on tinyTILE. The specs of the chip look promising for IoT implementations. Any info on the setup you are using will help. I'm trying to evaluate the capability of tinyTILE with Zephyr.

 

Thanks,

Jie


BSD Sockets in mainline, and how that affects design decisions for the rest of IP stack (e.g. send MTU handling)

Paul Sokolovsky
 

Hello,

There was an RFC on this list to implement a BSD Sockets API
compatibility layer for Zephyr some 6 months ago. The majority of that
functionality went into the 1.9 release (with some additional pieces
still going in).

Before and while working on sockets, there were a number of issues with
the native stack discovered/documented, and solutions for some were
proposed. At that time they were rather tentative and experimental, and
there was no consensus how to resolve them, so as a proof of concept,
they were implemented just in the Sockets layer.

An example is handling of the send MTU, originally
https://jira.zephyrproject.org/browse/ZEP-1998 , now
https://github.com/zephyrproject-rtos/zephyr/issues/3439 . The essence
of the issue is that the native networking API functions for creating
an outgoing packet don't control packet size in any way. It's easy to
create an oversized packet which will fail during the actual send
operation.

A solution originally proposed was that the mentioned API functions
should take an MTU into account, and not allow a user to add more data
than MTU allows (accounting also for protocol headers). This solution
is rooted in the well-known POSIX semantics of "short writes" - an
application can request an arbitrary amount of data to be written, but
a system is free to process less data, based on system resource
availability. The amount of processed data is returned, and an
application is expected to retry the operation for the remaining data.
It was posted as https://github.com/zephyrproject-rtos/zephyr/pull/119 .
Again, at that time, there was no consensus about the way to solve it,
so it was implemented only for the BSD Sockets API.

Much later, https://github.com/zephyrproject-rtos/zephyr/pull/1330
was posted. It works in the following way: it allows an application to
create an oversized packet, but the stack does a separate pass over it
and splits it into several packets with a valid length. A comment
immediately received (not by me) was that this patch just duplicates,
in an ad hoc way, the IP fragmentation support required by the TCP/IP
protocol.
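For contrast, the splitting pass can be approximated as follows. The real PR operates on net_pkt fragment chains inside the stack; this sketch (all names invented) reduces the idea to walking a flat, oversized payload in MTU-sized steps:

```c
#include <assert.h>
#include <stddef.h>

#define MTU_PAYLOAD 128 /* assumed per-packet payload limit */

/* Walk an oversized payload and report the bounds of each MTU-sized
 * segment via the emit() callback; returns the number of segments the
 * stack would have to turn into separate packets. */
static size_t split_into_segments(size_t total_len,
                                  void (*emit)(size_t off, size_t len))
{
    size_t count = 0;

    for (size_t off = 0; off < total_len; off += MTU_PAYLOAD) {
        size_t len = total_len - off;

        if (len > MTU_PAYLOAD) {
            len = MTU_PAYLOAD;
        }
        emit(off, len);
        count++;
    }
    return count;
}
```

Note what this cannot do: if the oversized packet never fits in RAM in the first place, there is nothing to split - the limitation the 900KB example in this thread points at.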

I would like to raise an additional argument why the POSIX-inspired
approach may be better. Consider a case when an application wants to
send a big amount of constant data, e.g. 900KB. It can be a system
with e.g. 1MB of flash and 64KB of RAM, an app sitting in ~100KB
of flash, the rest containing constant data to send. Following a
"split oversized packet" approach wouldn't help - an app wouldn't be
able to create an oversized packet of 900K - there's simply not enough
RAM for it. So, it would need to handle such a case differently anyway.
But the POSIX-based approach would allow handling it right away - any
application needs to be prepared to retry the operation until
completion anyway; the amount of data is not important.


That's the essence of the question this RFC poses: given that the
POSIX-based approach is already in the mainline, does it make sense to
go for Zephyr-special, ad hoc solutions to a problem (and, as
mentioned at the beginning, there can be more issues with a similar
choice)?

Answering "yes" may have interesting implications. For example, the
code in https://github.com/zephyrproject-rtos/zephyr/pull/1330 is not
needed for applications using BSD Sockets. There's at least one more
issue solved at the BSD Sockets level, but not in the native API.
There's an ongoing effort to separate kernel and userspace, and BSD
Sockets offer an automagic solution for that, while the native API
lets a user app access the kernel networking buffers directly, so
there's a lot to solve there yet. Going like that, it may turn out
that the native ad hoc API, which initially was intended to be small
and efficient, will grow bigger and more complex (== harder to
stabilize, containing more bugs) than something based on a well tried
and tested approach like POSIX.

So, it would be nice if the networking stack and overall Zephyr
architecture stakeholders consider both the particular issue and the
overall implications on the design/implementation. There are many more
details than presented above, and the devil is definitely in the
details; there's no absolutely "right" solution, it's a compromise. I
hope that Jukka and Tomasz, who are proponents of the second (GH-1330)
approach, can correct me on the benefits of it.


Thanks,
Paul

Linaro.org | Open source software for ARM SoCs
Follow Linaro: http://www.facebook.com/pages/Linaro
http://twitter.com/#!/linaroorg - http://www.linaro.org/linaro-blog


Tinytile Zephyr implementation

Jie Zhou <zhoujie@...>
 

Hi All,

Has anyone done anything with tinyTILE? Since the board is quite new, I was wondering if someone has done a project with Zephyr OS on tinyTILE. The specs of the chip look promising for IoT implementations. Any info on the setup you are using will help. I'm trying to evaluate the capability of tinyTILE with Zephyr.

Thanks,
Jie


Re: Any plan to support ST BlueNRG chips?

Erwan Gouriou
 

Hi Aaron,

BlueNRG chips are actually already supported (even if deeper testing would be welcome).
To activate it, you could have a look at the Disco L475 IoT board configuration.

Cheers
Erwan



Any plan to support ST BlueNRG chips?

Aaron Xu
 

Hi,

I saw in a PPT file that Zephyr would support ST BlueNRG in 1.8, but it looks like Zephyr still doesn't support it.

Do you have any information about plans for supporting BlueNRG?


Regards,
Aaron


Re: Bluetooth mesh_demo crashes

Johan Hedberg
 

On Mon, Oct 09, 2017, Steve Brown wrote:
Yes, it looks indeed like there's a missing initialization of this
queue. I think this is a regression that slipped in at some point in
the recent past, and got uncovered now when the local network
interface is always enabled (instead of being behind a Kconfig
option).

Instead of calling k_fifo_init however, I think it's more efficient
to use a static initializer for this like I've done in the attached
patch. Can you confirm that this also fixes the issue for you? I'll
then create a pull request out of it.
It did fix the problem.
Thanks for testing! The PR is here:

https://github.com/zephyrproject-rtos/zephyr/pull/4238

Johan


Re: Bluetooth mesh_demo crashes

Steve Brown
 

Johan,

On Mon, 2017-10-09 at 19:22 +0300, Johan Hedberg wrote:
Hi Steve,

On Mon, Oct 09, 2017, Steve Brown wrote:
The sample mesh_demo crashes on my nrf52840_pca10056 during
configuration with a data access violation.

It looks like it's caused by bt_mesh.local_queue not being
initialized.

I added a k_fifo_init to mesh/net.c:bt_mesh_net_init and it seemed to
correct the problem.

Can somebody familiar with the code confirm this?
Yes, it looks indeed like there's a missing initialization of this
queue. I think this is a regression that slipped in at some point in
the recent past, and got uncovered now when the local network
interface is always enabled (instead of being behind a Kconfig
option).

Instead of calling k_fifo_init however, I think it's more efficient to
use a static initializer for this like I've done in the attached
patch. Can you confirm that this also fixes the issue for you? I'll
then create a pull request out of it.

Johan
It did fix the problem.

Thanks,

Steve


Re: Bluetooth mesh_demo crashes

Johan Hedberg
 

Hi Steve,

On Mon, Oct 09, 2017, Steve Brown wrote:
> The sample mesh_demo crashes on my nrf52840_pca10056 during
> configuration with a data access violation.
>
> It looks like it's caused by bt_mesh.local_queue not being initialized.
>
> I added a k_fifo_init to mesh/net.c:bt_mesh_net_init and it seemed to
> correct the problem.
>
> Can somebody familiar with the code confirm this?
Yes, it looks indeed like there's a missing initialization of this
queue. I think this is a regression that slipped in at some point in the
recent past, and got uncovered now when the local network interface is
always enabled (instead of being behind a Kconfig option).

Instead of calling k_fifo_init however, I think it's more efficient to
use a static initializer for this like I've done in the attached patch.
Can you confirm that this also fixes the issue for you? I'll then create
a pull request out of it.

Johan


Bluetooth mesh_demo crashes

Steve Brown
 

The sample mesh_demo crashes on my nrf52840_pca10056 during
configuration with a data access violation.

It looks like it's caused by bt_mesh.local_queue not being initialized.

I added a k_fifo_init to mesh/net.c:bt_mesh_net_init and it seemed to
correct the problem.

Can somebody familiar with the code confirm this?

Thanks,

Steve


Re: Is there tutorials for Zephyr ticker/mayfly?

Chettimada, Vinayak Kariappa
 

Hi biwa,

 

There are no tutorials or documentation for ticker or mayfly in the Zephyr repository.

 

The ticker and mayfly implementations are specific to BLE controller scheduling; they are barebones implementations contributed to the Zephyr Project.

We are constantly refactoring them to use Zephyr OS features.

 

Some of the open issues needing contributions are:

https://github.com/zephyrproject-rtos/zephyr/issues/2244

https://github.com/zephyrproject-rtos/zephyr/issues/2247

https://github.com/zephyrproject-rtos/zephyr/issues/2248

 

In short, mayfly schedules functions to run deferred in another execution context.

Currently the BLE controller uses it to call functions in interrupt contexts.

Zephyr threads or work queues that satisfy the controller's needs will eventually replace mayfly.

 

If you can be more specific about what interests you in ticker/mayfly, I can provide more details.

 

Regards,

Vinayak

 

 

 

From: zephyr-devel-bounces@... [mailto:zephyr-devel-bounces@...] On Behalf Of Cufi, Carles
Sent: Saturday, October 07, 2017 3:48 PM
To: biwa <sjbiwa@...>; zephyr-devel@...
Subject: Re: [Zephyr-devel] Is there tutorials for Zephyr ticker/mayfly?

 

Hi there,

 

No, unfortunately there are no tutorials or even documentation about the ticker or the mayfly. That said, their author is Vinayak from Nordic, and you can reach him on IRC; he’s usually there. Try the channel #zephyr-bt on freenode.net.

 

Regards,

 

Carles

 

From: <zephyr-devel-bounces@...> on behalf of biwa <sjbiwa@...>
Date: Saturday, 7 October 2017 at 04:00
To: "zephyr-devel@..." <zephyr-devel@...>
Subject: [Zephyr-devel] Is there tutorials for Zephyr ticker/mayfly?

 

I am studying ZephyrOS.

Are there detailed tutorials for studying zephyrOS's ticker/mayfly?


Re: Is there tutorials for Zephyr ticker/mayfly?

Carles Cufi
 

Hi there,

 

No, unfortunately there are no tutorials or even documentation about the ticker or the mayfly. That said, their author is Vinayak from Nordic, and you can reach him on IRC; he’s usually there. Try the channel #zephyr-bt on freenode.net.

 

Regards,

 

Carles

 

From: <zephyr-devel-bounces@...> on behalf of biwa <sjbiwa@...>
Date: Saturday, 7 October 2017 at 04:00
To: "zephyr-devel@..." <zephyr-devel@...>
Subject: [Zephyr-devel] Is there tutorials for Zephyr ticker/mayfly?

 

I am studying ZephyrOS.

Are there detailed tutorials for studying zephyrOS's ticker/mayfly?


Is there tutorials for Zephyr ticker/mayfly?

loquat3
 

I am studying ZephyrOS.

Are there detailed tutorials for studying zephyrOS's ticker/mayfly?


Re: 802.15.4 stack question

David Leach
 

Tomasz,

 

I understand that implementations vary from the IEEE standard. I tend to treat the standard as the reference and try to understand the implementation within that framework. I guess I need to dig into net_app…

 

> On data frames, at least for version 2011 (didn't check later ones), nothing forces AR to be 1, see 5.2.2.2.
> Moreover if your protocol handles re-transmission on its own you don't want 15.4 to add-up.
> Also, until 2015 version, which is not supported atm, this "re-transmission" mechanism from 15.4 is just plain
> crap: ACK frames are unidentified, non-secured, etc...

The observation from my 802.11 days is that it was generally better to have retries at the RF level, due to the tighter timings, than to have the higher-layer protocols handle the retries (which they will do anyway if the 802.11 retry mechanism exhausts the retry limit). I don’t have as much experience with the network characteristics of an 802.15.4 setup, but I would think it would also benefit from the protocol’s tighter retry timing… but this is a knob to tune when setting up a network, I guess.

 

David

 

 

 

From: Tomasz Bursztyka [mailto:tomasz.bursztyka@...]
Sent: Friday, October 06, 2017 2:59 AM
To: David Leach <david.leach@...>; zephyr-devel@...
Subject: Re: [Zephyr-devel] 802.15.4 stack question

 

Hi David,

> To clarify my general question, what I’m trying to understand is how do we generally expect to initialize/configure the 802.15.4 stack within our system or application? The IEEE802.15.4 specification defines several SAPs for MCPS and for MLME which is what I would expect to see on an interface (like 802.11 which I’m more familiar with).


You won't see the exact same interfaces as described in the specs. PHYs don't even follow it either btw.
Actually a good amount of the specs will probably never end up being coded at all (FFD part for instance).


> I can see some of the logical mappings between IEEE802.15.4 MLME operations to the NET_REQUEST_IEEE802154_CMD types we have in Zephyr but not all of them yet (still grepping the code/logic). And I see we have these net_mgmt() functions to set these values. Is it through these net_mgmt() functions that we are expected to setup the values?


Yes, and you probably just want to use net_app lib for it.


> With respect to the AR bit, I don’t understand your response about the AR bit in the frame control. If a frame is being transmitted from an IEEE Std 802 protocol then the AR field should follow the needs of the frame where a broadcast or group address frame has the AR field set to 0 and directed data frames have the AR field set to 1. The other 802.15.4 management frames all each have their own rules on setting or clearing this bit from what I can tell. I don’t get why there is even an API to enable/disable this as the frame type should be driving the needs of this bit.


Looks like forcing AR bit setting to 0 on broadcast type got forgotten, as it used to be always 0.
For MAC command frame it's properly handled.

On data frames, at least for version 2011 (didn't check later ones), nothing forces AR to be 1, see 5.2.2.2.
Moreover if your protocol handles re-transmission on its own you don't want 15.4 to add-up.
Also, until 2015 version, which is not supported atm, this "re-transmission" mechanism from 15.4 is just plain
crap: ACK frames are unidentified, non-secured, etc...

Br,

Tomasz


Re: 802.15.4 stack question

Tomasz Bursztyka
 

Hi David,

> To clarify my general question, what I’m trying to understand is how do we generally expect to initialize/configure the 802.15.4 stack within our system or application? The IEEE802.15.4 specification defines several SAPs for MCPS and for MLME which is what I would expect to see on an interface (like 802.11 which I’m more familiar with).


You won't see the exact same interfaces as described in the specs. PHYs don't even follow it either btw.
Actually a good amount of the specs will probably never end up being coded at all (FFD part for instance).

> I can see some of the logical mappings between IEEE802.15.4 MLME operations to the NET_REQUEST_IEEE802154_CMD types we have in Zephyr but not all of them yet (still grepping the code/logic). And I see we have these net_mgmt() functions to set these values. Is it through these net_mgmt() functions that we are expected to setup the values?


Yes, and you probably just want to use net_app lib for it.

> With respect to the AR bit, I don’t understand your response about the AR bit in the frame control. If a frame is being transmitted from an IEEE Std 802 protocol then the AR field should follow the needs of the frame where a broadcast or group address frame has the AR field set to 0 and directed data frames have the AR field set to 1. The other 802.15.4 management frames all each have their own rules on setting or clearing this bit from what I can tell. I don’t get why there is even an API to enable/disable this as the frame type should be driving the needs of this bit.


Looks like forcing AR bit setting to 0 on broadcast type got forgotten, as it used to be always 0.
For MAC command frame it's properly handled.

On data frames, at least for version 2011 (didn't check later ones), nothing forces AR to be 1, see 5.2.2.2.
Moreover if your protocol handles re-transmission on its own you don't want 15.4 to add-up.
Also, until 2015 version, which is not supported atm, this "re-transmission" mechanism from 15.4 is just plain
crap: ACK frames are unidentified, non-secured, etc...

Br,

Tomasz
