
BLE attribute handle

Anila
 

Hello, 

I wanted to know what is meant by the attribute handle in bt_gatt_write_without_response.

Thanks and Regards, 
Anila


Re: Minimal Zephyr build

Andy Gross
 

On 29 March 2017 at 16:17, David Brown <david.brown@linaro.org> wrote:
At the past mini summit (Austin), a few of us had discussions about
making builds using Zephyr that are more minimal. My specific use
case is about the boot loader, which has very few requirements:

- It needs a flash driver.
- It may need a UART.
- It may need access to crypto hardware (not currently).

Currently, there is quite a bit of code brought in by this that isn't
really needed (for example, there is only a single thread). (A Mynewt
build of mcuboot ends up about 10K smaller than a Zephyr build.)

Vincenzo Frascino did a little work to conditionalize some of this,
but I was wondering what people think might be the best approach.
Do we have a rough analysis of the cost of different features with
regard to space? That would make targeting things to remove a
little easier. Low-hanging fruit and whatnot.

The approach taken by Mynewt (where mcuboot comes from), is to
separate the kernel from the HAL. They are able to build mcuboot with
just the HAL.

Are there other uses for a more minimal version of Zephyr? I realize
we got rid of the nanokernel, but perhaps being able to work without
the scheduler, timers, etc. might be more generally useful.

Andy


Re: Minimal Zephyr build

Nashif, Anas
 

David,
If I am not mistaken, we have made some changes to address this and were able to get a similar footprint on one of the platforms. Can you share some more details on how you got the 10K difference, and which configurations/boards you are using? A quick test I did with a minimal configuration and a single thread shows the kernel taking 18%; this would need to be replaced by some logic if you decide to do a split, so I am not sure it is a big gain.

Anas

-----Original Message-----
From: zephyr-devel-bounces@lists.zephyrproject.org [mailto:zephyr-devel-bounces@lists.zephyrproject.org] On Behalf Of David Brown
Sent: Wednesday, March 29, 2017 5:17 PM
To: zephyr-devel@lists.zephyrproject.org
Subject: [Zephyr-devel] Minimal Zephyr build

At the past mini summit (Austin), a few of us had discussions about making builds using Zephyr that are more minimal. My specific use case is about the boot loader, which has very few requirements:

- It needs a flash driver.
- It may need a UART.
- It may need access to crypto hardware (not currently).

Currently, there is quite a bit of code brought in by this that isn't really needed (for example, there is only a single thread). (A Mynewt build of mcuboot ends up about 10K smaller than a Zephyr build.)

Vincenzo Frascino did a little work to conditionalize some of this, but I was wondering what people think might be the best approach.

The approach taken by Mynewt (where mcuboot comes from), is to separate the kernel from the HAL. They are able to build mcuboot with just the HAL.

Are there other uses for a more minimal version of Zephyr? I realize we got rid of the nanokernel, but perhaps being able to work without the scheduler, timers, etc. might be more generally useful.

Thanks,
David
_______________________________________________
Zephyr-devel mailing list
Zephyr-devel@lists.zephyrproject.org
https://lists.zephyrproject.org/mailman/listinfo/zephyr-devel


Minimal Zephyr build

David Brown
 

At the past mini summit (Austin), a few of us had discussions about
making builds using Zephyr that are more minimal. My specific use
case is about the boot loader, which has very few requirements:

- It needs a flash driver.
- It may need a UART.
- It may need access to crypto hardware (not currently).

Currently, there is quite a bit of code brought in by this that isn't
really needed (for example, there is only a single thread). (A Mynewt
build of mcuboot ends up about 10K smaller than a Zephyr build.)

Vincenzo Frascino did a little work to conditionalize some of this,
but I was wondering what people think might be the best approach.

The approach taken by Mynewt (where mcuboot comes from), is to
separate the kernel from the HAL. They are able to build mcuboot with
just the HAL.

Are there other uses for a more minimal version of Zephyr? I realize
we got rid of the nanokernel, but perhaps being able to work without
the scheduler, timers, etc. might be more generally useful.

Thanks,
David


Re: undefined reference to `_legacy_sleep'

Nashif, Anas
 

You should not include legacy.h directly; just include zephyr.h.

 

legacy.h will be dropped in 1.8, and you should be using the new APIs, not a legacy API like task_sleep.

 

 

Anas

 

From: zephyr-devel-bounces@... [mailto:zephyr-devel-bounces@...] On Behalf Of kk
Sent: Wednesday, March 29, 2017 12:34 PM
To: zephyr-devel@...
Subject: [Zephyr-devel] undefined reference to `_legacy_sleep'

 

Hi all

When I use the "task_sleep" function, I include the file legacy.h, but the compiler reports:
    undefined reference to `_legacy_sleep'

I searched the Zephyr source code and found its definition in ./kernel/legacy_timer.c. How can I solve this problem?

Thanks


Re: undefined reference to `_legacy_sleep'

Patrice Buriez
 

Did you set the speed (baud rate) at 115200 for minicom?

Did “make flash” actually work?

- You need a Flyswatter2 for the JTAG method.

- You need to export “ZEPHYR_FLASH_OVER_DFU=y” for the DFU (USB) method, only available for now on Zephyr git master (i.e., not in Zephyr v1.7).

- Any error messages reported?

If using the DFU method, did you reset the board again after flashing?

 

From: zephyr-devel-bounces@... [mailto:zephyr-devel-bounces@...] On Behalf Of kk
Sent: Wednesday, March 29, 2017 6:54 PM
To: Briano, Ivan <ivan.briano@...>
Cc: zephyr-devel@...
Subject: Re: [Zephyr-devel] undefined reference to `_legacy_sleep'

 

Thanks very much!

Actually, I want to write a string to the serial port (I want to see "Hello World! x86" in minicom), based on the "hello world" sample. My board is an Arduino 101; it did not work there, but it works on the qemu_x86 board.

~~~~~~~~~~~~~~~~~~~~~~
I connected my Arduino 101 to minicom and set the serial port to:
    ttyUSB0 8N1

I use the Adafruit 4-pin cable (PL2303):

    black (Ground) to GND on the Arduino 101

    green (Receive) to TX->1 on the Arduino 101

    white (Transmit) to RX->1 on the Arduino 101

I run the hello_world program with:
    make BOARD=arduino_101 flash
~~~~~~~~~~~~~~~~~~~~~~

How can I write a string to minicom?

 

On Thu, Mar 30, 2017 at 12:40 AM, Briano, Ivan <ivan.briano@...> wrote:

On Thu, 2017-03-30 at 00:34 +0800, kk wrote:

Hi all

When I use the "task_sleep" function, I include the file legacy.h, but the compiler reports:
    undefined reference to `_legacy_sleep'

I searched the Zephyr source code and found its definition in ./kernel/legacy_timer.c. How can I solve this problem?

 

Add CONFIG_LEGACY_KERNEL=y to your prj.conf

 

 

Thanks


 



Re: undefined reference to `_legacy_sleep'

kk <pinganddu90@...>
 

Thanks very much!
Actually, I want to write a string to the serial port (I want to see "Hello World! x86" in minicom), based on the "hello world" sample. My board is an Arduino 101; it did not work there, but it works on the qemu_x86 board.

~~~~~~~~~~~~~~~~~~~~~~
I connected my Arduino 101 to minicom and set the serial port to:
    ttyUSB0 8N1
I use the Adafruit 4-pin cable (PL2303):
    black (Ground) to GND on the Arduino 101
    green (Receive) to TX->1 on the Arduino 101
    white (Transmit) to RX->1 on the Arduino 101
I run the hello_world program with:
    make BOARD=arduino_101 flash
~~~~~~~~~~~~~~~~~~~~~~

How can I write a string to minicom?


On Thu, Mar 30, 2017 at 12:40 AM, Briano, Ivan <ivan.briano@...> wrote:
On Thu, 2017-03-30 at 00:34 +0800, kk wrote:
Hi all

When I use the "task_sleep" function, I include the file legacy.h, but the compiler reports:
    undefined reference to `_legacy_sleep'
I searched the Zephyr source code and found its definition in ./kernel/legacy_timer.c. How can I solve this problem?

Add CONFIG_LEGACY_KERNEL=y to your prj.conf


Thanks


Re: undefined reference to `_legacy_sleep'

Briano, Ivan <ivan.briano@...>
 

On Thu, 2017-03-30 at 00:34 +0800, kk wrote:
Hi all

When I use the "task_sleep" function, I include the file legacy.h, but the compiler reports:
    undefined reference to `_legacy_sleep'
I searched the Zephyr source code and found its definition in ./kernel/legacy_timer.c. How can I solve this problem?

Add CONFIG_LEGACY_KERNEL=y to your prj.conf
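For reference, that is a one-line Kconfig addition (option name taken from the reply above); note that, per the thread, the legacy API is scheduled for removal in 1.8, so this is a stopgap rather than a long-term fix:

```
# prj.conf -- pull in the legacy kernel API (pre-1.8 only)
CONFIG_LEGACY_KERNEL=y
```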


Thanks


undefined reference to `_legacy_sleep'

kk <pinganddu90@...>
 

Hi all

When I use the "task_sleep" function, I include the file legacy.h, but the compiler reports:
    undefined reference to `_legacy_sleep'
I searched the Zephyr source code and found its definition in ./kernel/legacy_timer.c. How can I solve this problem?

Thanks


i2c_burst_write API

Erwan Gouriou
 

Hi all,


I'm having trouble using i2c_burst_write to configure several registers
in one shot in a sensor (using the i2c_stm32lx.c driver).

According to API description, it seems to be the adequate use:
" This routine writes multiple bytes to an internal address of an
I2C device synchronously."

Following the implementation, it generates two messages, as follows:
Slave ADdress|Write * SUB-address (register of the sensor where I'd like to start writing)
Slave ADdress|Write * DATA
It doesn't work for me, since the sensor expects one single message:
SAD|W * SUB * DATA

I've found several examples of sensors supporting the latter but not the former.
Though, in the Zephyr code, I've found some uses of the i2c_burst_write API,
hence I tend to think it should work somehow in some cases.

Hence my questions to you:

Are there actually several ways to address an I2C slave for a burst write?
> If yes, I'll just avoid using this API. But it might be worth specifying that
it may not fit all slave implementations.

Is that due to underlying I2C drivers doing some message-concatenation
magic before sending to slave devices?
> If yes, maybe this is a sign the API should be adapted to be more transparent.

Any other possibility that I don't have in mind right now?


Thanks
Erwan


Re: RFC: BSD Socket (like) API

Luiz Augusto von Dentz
 

Hi Paul,

On Wed, Mar 29, 2017 at 1:31 AM, Paul Sokolovsky
<paul.sokolovsky@linaro.org> wrote:
Hello Luiz Augusto,

On Tue, 28 Mar 2017 23:34:25 +0300
Luiz Augusto von Dentz <luiz.dentz@gmail.com> wrote:

[]

net_context). (We might need a queue per context to properly
emulate the read() API, though, so the reference might not be
exactly to a net_context pool, but the same idea applies.)
Indeed, we would have to put a queue per net_context, which quickly adds
up to the memory footprint,
As I pointed out in the reply to Leandro, these additions would be #ifdef'ed
on CONFIG_NET_BSD_SOCKETS or something, so application developers will
have a choice whether to save memory or to have a more familiar API. Putting
the queue, etc. into yet another structure (and wasting some memory to
link this new structure to net_context) is yet another choice.

luckily we can use k_poll and avoid having
extra threads for each context.
Note that the work I'm going to do is based on previous experience
with implementing a BSD-Sockets-like pull-style API on top of the push-style
native ("raw") lwIP API. There, we were able to achieve that without
any threads (indeed, even with a cooperative RTOS underneath). So, no
plans to have extra threads in the initial stage of work
(push-to-pull "impedance conversion"). And the presence of k_poll makes me
positive that we'll be able to implement even poll() or epoll() without
them (but that's the 2nd stage of work).

But I guess the fundamental problem is that
this is probably intended to interface with real BSD socket/POSIX
applications/components, which I guess will not be enough for most, so
in the end, will it be worth adding this code? What are the things we
want to port on top? Can someone present at the mini-summit make a
list?
As my initial RFC message pointed out, the initial target is MicroPython
with its "usocket" (micro-socket) module. That's a single app, but it
wraps a large chunk of the BSD Sockets API for the Python language. And of
course, this RFC is to see what other projects people are interested in
porting.
I suppose you need much more than sockets to make a Python program
work on top of Zephyr. Now, I do agree having Python (or JS) would be very
useful for prototyping, but for products it might be a different
story.

Casting a pointer to an integer as suggested might work, but will
most likely be an issue with the common pattern of checking for
error by just comparing the sign of the return value, instead of
the (technically correct) comparison with -1.
We can check if there is any padding in net_context; if there is, it
might not be a bad idea to add an id to them, or really make them
compatible with k_poll.

In general, I'm much more concerned about stack fragmentation, though.
Adding compatibility layers and offloading work against the stack
itself,
Well, I don't see how proper, configurable (on/off) layering works
against the stack. I'd say, vice versa, it allows leveraging it for
more use cases. Offloading, let me skip that part ;-).
I was referring to APIs like net_buf or application-layer protocols
like CoAP. While I agree we can make it all configurable, that doesn't
dismiss the fact that these interfaces might be reimplemented on top of the
socket layer. In fact, in the case of net_buf there is probably no
solution, since the BSD socket interface will most likely be using
plain pointers or iovec, so we lose zero copy and possibly add yet
another buffer pool on top.

as they either eat the available memory for enabling features
in the stack
Let users select whether they want to save memory or development
effort by using a conventional API ;-).

or reimplement part of the stack in other layers,
Well, here, as someone present at the mini-summit, I can quote what Anas said
on this: he literally said, don't do that. If a "higher-level wrapper"
needs to reimplement or otherwise turn upside-down what the native layer
does, let's just rework the native layer to be (more) POSIX/etc.
compatible. And well, that's pretty much how we already do it, with
several features for 1.7 done to make it closer to the POSIX spirit, and
there are more such things in the queue (here's the latest merged:
https://gerrit.zephyrproject.org/r/12455). So, no worries; or rather,
we are all aware of the problem, and ready to do our homework on it.

not to mention this takes a lot of time that could be spent in other
areas, like security and proper accelerators for heavy tasks, for
example crypto.
As a maintainer in other projects, I can easily relate to that thought
(that I, a maintainer, know better what contributors should rather be working
on ;-) ). But here's how it works at Linaro: our aim is to reduce/avoid
fragmentation among ARM vendors (and not at the expense of more
fragmentation with other architectures). So if a member comes to us and
says "we'll use Zephyr if ...", we'd better listen, because there's no
lack of alternatives to Zephyr, and if interested parties select
something else instead, there won't be anything good in that for
Zephyr, or for the industry at all, which will continue in a chaos of
several dozen ad hoc (vs. full-featured) RTOSes. And yeah, BSD
Sockets is a frequent request we have heard for a while. Other things you
mention are in the plans (ours included) too.
That is a fair point, but ultimately the memory constraint comes from
the very same vendors. So, are there new boards coming with more
memory? 128K or more RAM and a fair bit more flash? Because I can
tell you that on most boards memory is already quite tight, and that is
without debug enabled; in fact, the echo server doesn't even work with QEMU
when all debug options are enabled:

/opt/zephyr-sdk/sysroots/x86_64-pokysdk-linux/usr/libexec/i586-zephyr-elf/gcc/i586-zephyr-elf/6.2.0/real-ld:
zephyr_prebuilt.elf section `noinit' will not fit in region `RAM'
/opt/zephyr-sdk/sysroots/x86_64-pokysdk-linux/usr/libexec/i586-zephyr-elf/gcc/i586-zephyr-elf/6.2.0/real-ld:
region `RAM' overflowed by 5520 bytes


--
Luiz Augusto von Dentz


Re: RFC: BSD Socket (like) API

Tomasz Bursztyka
 

Hi Gil,

2) Enable TCP/IP offload from the socket layer.

The TI CC3220 completely offloads the TCP/IP stack onto a
co-processor, by marshalling BSD socket API calls over SPI to the
network coprocessor.

The current NET_OFFLOAD support in the Zephyr IP stack provides a hook
to call an offload engine. For the TI CC3220, this mapping of
net_context APIs to BSD socket APIs adds some overhead and code
complexity.

However, now that we're talking about adding a BSD socket layer to
Zephyr, offloading directly from the socket layer would be more
natural (something similar to the MyNewt solution, as pointed out by
Sterling).

Otherwise, we have to map BSD sockets -> net_context -> BSD sockets,
with all the required extra overhead of data structures, sync
primitives, and server thread(s) to handle mapping between the two
different networking API usage models.

By doing so, I understand we could potentially bypass the (TBD)
routing table. But that is a use case, AFAIK, not really needed for
the CC3220 typical client IoT node devices.
While I understand your point, I still have to point out that this is a very specific product use case
that has to fit within a generic solution, not the other way round.

That said, there is still a solution: a Kconfig option that would allow the BSD socket API
to bypass net_context/nbuf if - and ONLY if - your offload device is the unique network
device wired up.

But I don't want to push that right now: let's first get an offload net device working
within net_context, and let's get Paul's first patches for BSD sockets on top of
net_context. And then, we'll see. My point being: if we put too many features/expectations
on the first round, it will take too much time to get even the basic features working.

Then again, the POSIX socket standard specifies that sockets shall
support routing, so maybe we can just make that work somehow?
It will, on top of net_context.

Br,

Tomasz


Re: RFC: BSD Socket (like) API

Marcus Shawcroft <marcus.shawcroft@...>
 

On 27 March 2017 at 12:16, Paul Sokolovsky <paul.sokolovsky@linaro.org> wrote:

2- FDs just waste memory, add locking and make things harder to
debug, use socket structures.
Agree. That was one of the first questions I got at the mini-summit, and my
answer was: "well, we could waste some memory by adding an FD table to
map small integers to underlying structures, but why?". Indeed, by just
casting pointers to integers, we can go a long, long way toward using this
API and porting existing apps.
Casting pointers to and from integers is legal, but implementation-defined
(C99 6.3.2.3 p5, p6). We should avoid implementation-defined
behaviour in the language where possible, especially in public APIs.

/Marcus


Re: RFC: BSD Socket (like) API

Daniel Thompson <daniel.thompson@...>
 

On 28/03/17 22:26, Paul Sokolovsky wrote:
So, now the concern is that we are left with two "BSD-like" APIs doing
essentially the same thing, one purporting to use less data space,
though shifting more complexity and code-space up to the application,
and allowing applications to be written one of two (or both?) ways.
Let me start with the latter: I can imagine a particular application
wanting to use either the "BSD Socket like API" or the "native Zephyr API",
but not both at the same time. If this idea is sound, it may help us
to better structure the additions for the new API (make them less (least)
intrusive).
I'm afraid I *don't* think this idea is sound!

The very reason we want BSD(-like) sockets is because we are importing a big pile of third party library code into our application.

Importing third-party library code must not impose additional requirements on application code (or *other* third party library code) or integration will be a nightmare.


Daniel.


Reminder: advanced compiler novelties are upon us

Paul Sokolovsky
 

Hello,

This is old news, and there was a related drama in the Linux kernel
community long ago. So, with uber-advanced compilers like GCC 5/6
(Zephyr SDK 0.9 uses 6.2.0), at a certain optimization level (the one
Zephyr uses), if you ever write (unconditionally):

foo->bar

(some macro can write it for you, too)

then the compiler assumes that you've signed a contract that foo is non-NULL
(because why would you dereference it unconditionally otherwise), so any
later NULL checks will be optimized out. Yet it could be the case that
you are just prototyping code, so not all checks are there yet (worse, you
can forget to check for NULL somewhere you don't immediately
expect it). Then you can get interesting output (and behavior) like:

net_nuf: 0, net_buf == NULL: 0, !net_buf: 0

This is of special interest for kernel/bare-metal programming. For
example, in Zephyr (depending on the platform), dereferencing a NULL pointer
is all nice and gets you values back, with no crashes, and then beyond that
some parts of your code may be removed.

P.S. That's of course the reason why the GCC 4.x branch will live much,
much longer than even the proverbial GCC 2.95 of egcs-fork fame.

--
Best Regards,
Paul

Linaro.org | Open source software for ARM SoCs
Follow Linaro: http://www.facebook.com/pages/Linaro
http://twitter.com/#!/linaroorg - http://www.linaro.org/linaro-blog


Re: RFC: BSD Socket (like) API

Paul Sokolovsky
 

Hello Luiz Augusto,

On Tue, 28 Mar 2017 23:34:25 +0300
Luiz Augusto von Dentz <luiz.dentz@gmail.com> wrote:

[]

net_context). (We might need a queue per context to properly
emulate the read() API, though, so the reference might not be
exactly to a net_context pool, but the same idea applies.)
Indeed, we would have to put a queue per net_context, which quickly adds
up to the memory footprint,
As I pointed out in the reply to Leandro, these additions would be #ifdef'ed
on CONFIG_NET_BSD_SOCKETS or something, so application developers will
have a choice whether to save memory or to have a more familiar API. Putting
the queue, etc. into yet another structure (and wasting some memory to
link this new structure to net_context) is yet another choice.

luckily we can use k_poll and avoid having
extra threads for each context.
Note that the work I'm going to do is based on previous experience
with implementing a BSD-Sockets-like pull-style API on top of the push-style
native ("raw") lwIP API. There, we were able to achieve that without
any threads (indeed, even with a cooperative RTOS underneath). So, no
plans to have extra threads in the initial stage of work
(push-to-pull "impedance conversion"). And the presence of k_poll makes me
positive that we'll be able to implement even poll() or epoll() without
them (but that's the 2nd stage of work).

But I guess the fundamental problem is that
this is probably intended to interface with real BSD socket/POSIX
applications/components, which I guess will not be enough for most, so
in the end, will it be worth adding this code? What are the things we
want to port on top? Can someone present at the mini-summit make a
list?
As my initial RFC message pointed out, the initial target is MicroPython
with its "usocket" (micro-socket) module. That's a single app, but it
wraps a large chunk of the BSD Sockets API for the Python language. And of
course, this RFC is to see what other projects people are interested in
porting.

Casting a pointer to an integer as suggested might work, but will
most likely be an issue with the common pattern of checking for
error by just comparing the sign of the return value, instead of
the (technically correct) comparison with -1.
We can check if there is any padding in net_context; if there is, it
might not be a bad idea to add an id to them, or really make them
compatible with k_poll.

In general, I'm much more concerned about stack fragmentation, though.
Adding compatibility layers and offloading work against the stack
itself,
Well, I don't see how proper, configurable (on/off) layering works
against the stack. I'd say, vice versa, it allows leveraging it for
more use cases. Offloading, let me skip that part ;-).

as they either eat the available memory for enabling features
in the stack
Let users select whether they want to save memory or development
effort by using a conventional API ;-).

or reimplement part of the stack in other layers,
Well, here, as someone present at the mini-summit, I can quote what Anas said
on this: he literally said, don't do that. If a "higher-level wrapper"
needs to reimplement or otherwise turn upside-down what the native layer
does, let's just rework the native layer to be (more) POSIX/etc.
compatible. And well, that's pretty much how we already do it, with
several features for 1.7 done to make it closer to the POSIX spirit, and
there are more such things in the queue (here's the latest merged:
https://gerrit.zephyrproject.org/r/12455). So, no worries; or rather,
we are all aware of the problem, and ready to do our homework on it.

not to mention this takes a lot of time that could be spent in other
areas, like security and proper accelerators for heavy tasks, for
example crypto.
As a maintainer in other projects, I can easily relate to that thought
(that I, a maintainer, know better what contributors should rather be working
on ;-) ). But here's how it works at Linaro: our aim is to reduce/avoid
fragmentation among ARM vendors (and not at the expense of more
fragmentation with other architectures). So if a member comes to us and
says "we'll use Zephyr if ...", we'd better listen, because there's no
lack of alternatives to Zephyr, and if interested parties select
something else instead, there won't be anything good in that for
Zephyr, or for the industry at all, which will continue in a chaos of
several dozen ad hoc (vs. full-featured) RTOSes. And yeah, BSD
Sockets is a frequent request we have heard for a while. Other things you
mention are in the plans (ours included) too.


--
Best Regards,
Paul

Linaro.org | Open source software for ARM SoCs
Follow Linaro: http://www.facebook.com/pages/Linaro
http://twitter.com/#!/linaroorg - http://www.linaro.org/linaro-blog


Re: RFC: BSD Socket (like) API

Paul Sokolovsky
 

Hello Leandro,

On Tue, 28 Mar 2017 09:08:51 -0700
Leandro Pereira <leandro.pereira@intel.com> wrote:

Paul,

On 03/27/2017 04:16 AM, Paul Sokolovsky wrote:

2- FDs just waste memory, add locking and make things harder to
debug, use socket structures.
Agree. That was one of the first questions I got at the mini-summit, and my
answer was: "well, we could waste some memory by adding an FD table to
map small integers to underlying structures, but why?". Indeed, by just
casting pointers to integers, we can go a long, long way toward using this
API and porting existing apps.
Right now we have memory pools, so file descriptors could be really
an index to the net_context pool. Since FDs should be treated as
opaque identifiers, this should continue to be fine even if things
change in the future (such as having multiple mempools for struct
net_context).
Sounds like an interesting idea; thanks for sharing. I'll keep it in
mind and look into it at a later stage.

(We might need a queue per context to properly emulate
the read() API, though, so the reference might not be exactly to a
net_context pool, but the same idea applies.)
Right, that's exactly what I'm working on right now. Well, I keep this
per-context queue external so far, but later we'll need to see whether
we want to introduce a separate "socket" object type to host it (and
any other needed fields), or whether we can (configurably!) embed them in
net_context. If we agree with the idea that a particular app will use
either the native API or the BSD Sockets API, I think the latter choice is
very viable.

Casting a pointer to an integer as suggested might work, but will
most likely be an issue with the common pattern of checking for error
by just comparing the sign of the return value, instead of the
(technically correct) comparison with -1.
Yep, we'd need to look at the corpus of apps to be ported to Zephyr to
see how grave that issue is. That may take some time (to collect the needed
set), though if you have any related data/ideas already, please share
them.


Leandro
[]

--
Best Regards,
Paul

Linaro.org | Open source software for ARM SoCs
Follow Linaro: http://www.facebook.com/pages/Linaro
http://twitter.com/#!/linaroorg - http://www.linaro.org/linaro-blog


Re: RFC: BSD Socket (like) API

Paul Sokolovsky
 

Hello Gil,

Thanks for the reply, please let me both agree and not agree to some
points.

On Mon, 27 Mar 2017 16:40:30 -0700
Gil Pitney <gil.pitney@linaro.org> wrote:

I think application developers would prefer to see ONE simple,
standard API for networking on which to build all higher level
networking protocols and applications.
I'm on their side, but such application developers will quickly see
that they can't use "ONE simple, standard API" on the smallest of
devices. I guess small footprint and resource usage is a distinctive
trait of Zephyr, and we should not compromise it. So, let there be
layering, and let different application developers choose what they
like/need.

However, I understand the Zephyr IP stack needs to target highly
memory constrained systems, and the decision has been made that the
standard POSIX APIs are just not suitable.
Ack, we are in agreement here, per the above.

So, now the concern is that we are left with two "BSD-like" APIs doing
essentially the same thing, one purporting to use less data space,
though shifting more complexity and code-space up to the application,
and allowing applications to be written one of two (or both?) ways.
Let me start with the latter: I can imagine a particular application
wanting to use either the "BSD Socket like API" or the "native Zephyr API",
but not both at the same time. If this idea is sound, it may help us
to better structure the additions for the new API (make them less (least)
intrusive).

Now, about terminology: I avoid the "BSD-like" term, and specifically call
the new API to be developed a "BSD Socket like API"; that's long, but as
unambiguous as I could come up with. The "like" part is to set the
expectations right, so nobody would get the idea that one will be able
to build Chromium against Zephyr 1.8, or something like that ;-). I
wouldn't call the current native API a "BSD Sockets like API" at all, because
the definitive trait of BSD Sockets is its pull-style API, which the native
API lacks.

That all still may be too subtle and too much word-play, especially
for someone who's not in all of this context, so ideas for better naming
are welcome; but the basic idea is that the current API is "native", while
the one to be developed is "BSD Sockets like".


So, then, a couple recommendations:

1) Let's start with the standard POSIX APIs:

http://pubs.opengroup.org/onlinepubs/9699919799/functions/V2_chap02.html#tag_15_10

and devolve from there with a list of exceptions with rationale for
why Zephyr cannot or should not comply with the standard.

For example, APIs dealing with file descriptors are reasonable
exceptions, because Zephyr does not support a POSIX file system.
So, you propose a top-down approach, whereas I explicitly proposed a
bottom-up approach: let's start by finding the feature most divergent
from the current native API, try to implement it in terms
of the existing API, and see what it takes to do that. Then select the next
feature, rinse and repeat. IMHO, that's much better for the initial
prototyping phase and matches the current development model of Zephyr,
where we extensively grow the featureset. We will certainly need to pause
at some point and match what we have against the spec some time later.

I'd avoid terms like "BSD-like", as the current Zephyr APIs are
arguably "BSD-like" to some degree - just not standard.
Per above, and per my outlook, it's not, and can just as well be called
"native". I'll be happy to adopt any other naming, as long as it makes
matters less confusing.


2) Enable TCP/IP offload from the socket layer.

The TI CC3220 completely offloads the TCP/IP stack onto a
co-processor, by marshalling BSD socket API calls over SPI to the
network coprocessor.

The current NET_OFFLOAD support in the Zephyr IP stack provides a hook
to call an offload engine. For the TI CC3220, this mapping of
net_context APIs to BSD socket APIs adds some overhead and code
complexity.

However, now that we're talking about adding a BSD socket layer to
Zephyr, offloading directly from the socket layer would be more
natural (something similar to the MyNewt solution, as pointed out by
Sterling).

Otherwise, we have to map BSD sockets -> net_context -> BSD sockets,
with all the required extra overhead of data structures, sync
primitives, and server thread(s) to handle mapping between the two
different networking API usage models.
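The direct-dispatch idea can be sketched roughly as follows. All names
here are hypothetical illustrations, not actual Zephyr or CC3220
symbols: each socket carries an ops table, so a call entering the
socket layer goes straight to the offload driver, with no net_context
round trip.

```c
#include <assert.h>
#include <stddef.h>

struct sock;

/* Per-socket operations table: filled in either by the native IP
 * stack or by an offload driver such as one for the TI CC3220. */
struct sock_ops {
    int (*connect)(struct sock *s, const void *addr, size_t addrlen);
    int (*send)(struct sock *s, const void *buf, size_t len);
    int (*recv)(struct sock *s, void *buf, size_t len);
};

struct sock {
    const struct sock_ops *ops;  /* native stack or offload driver */
    int ctx_id;                  /* slot in whatever backs this socket */
};

/* The socket layer only forwards through the ops table: one level of
 * indirection instead of mapping BSD sockets -> net_context -> BSD
 * sockets, with the extra structures and threads that implies. */
static inline int sock_send(struct sock *s, const void *buf, size_t len)
{
    return s->ops->send(s, buf, len);
}

/* Mock "offload" backend, standing in for e.g. a driver that marshals
 * the call over SPI to a network coprocessor. */
static size_t mock_sent_len;
static int mock_send(struct sock *s, const void *buf, size_t len)
{
    (void)s; (void)buf;
    mock_sent_len = len;
    return (int)len;
}

static const struct sock_ops mock_offload_ops = { .send = mock_send };
```

The native stack would provide its own ops table, so applications are
unaware of which backend serves a given socket.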

By doing so, I understand we could potentially bypass the (TBD)
routing table. But that is a use case which, AFAIK, is not really
needed for the typical CC3220 client IoT node devices.

Then again, the POSIX socket standard specifies that sockets shall
support routing, so maybe we can just make that work somehow?


On 27 March 2017 at 06:27, Paul Sokolovsky
<paul.sokolovsky@linaro.org> wrote:
Hello Jukka,

On Mon, 27 Mar 2017 12:37:40 +0300
Jukka Rissanen <jukka.rissanen@linux.intel.com> wrote:

[]
The current approach is that we value the lightweight nature of
Zephyr, and are looking towards finding a minimal set of changes
(additions) to provide a BSD Sockets *like* API to Zephyr.
The definition of what a BSD Socket *like* system is seems to differ
from person to person.
That's true, and the reason why I informally proposed the
"process-wise" definition above: "minimal set of changes
(additions) to provide BSD Sockets *like* API to Zephyr."

For me the current net_context API in Zephyr
is quite BSD socket like, meaning that the API provides similar
functions that are found in BSD socket API like open, close, bind,
connect, accept etc. So it is quite easy to port the application in
this respect.
That's also true, and right from the start I got into the habit of
calling net_context "a socket". The API calls you mention indeed all
work (likely almost) the same. The big difference comes with the
recv() call - whereas the BSD Sockets API has it conventionally
pull-style, in Zephyr it's push-style, where data gets delivered to an
app via a callback. That's the one single feature which makes porting
3rd-party applications complicated and cumbersome.


The bigger difference between BSD socket API and Zephyr net_context
API is:
* net_context API uses net_buf to pass data. The net_buf does not
provide linear memory but data needs to be partitioned when sending
and read in chunks when receiving. We have helpers defined in
nbuf.h for handling reading/writing data in this case. The issue
with linear memory case is that it uses much more memory as we
need to be prepared to receive at least 1280 byte size chunks of
data (IPv6 min data packet size).
Right. And that part is covered by BSD Sockets' own API - the data
is passed via app-owned buffers, not system-owned buffers. That
means that by definition, BSD Sockets don't support zero-copy
operation. While an obvious drawback, it has its positive sides too,
like offering the possibility of better system vs app separation for
security.

* The net_context is asynchronous and caller needs to have
callbacks defined. The BSD socket API is synchronous. The
net_context can be used in synchronous way so this is a smaller
issue imho.
As you may have already noticed, I prefer to call this distinction
"push-style vs pull-style", because it pinpoints the problem
better. The net_context used "in a synchronous way" doesn't provide
BSD Sockets behavior for receives. For that to work, incoming (but
unprocessed) data needs to be queued *per socket*, until an app
requests it. And indeed, that's the one big initial change I would
need to make.
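A minimal sketch of that per-socket queue idea (the names and the
fixed-size ring are illustrative only, not actual Zephyr code; a real
version would use a k_fifo and block when empty): the stack's
push-style receive callback enqueues data, and a pull-style recv()
drains the queue later.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

#define RXQ_DEPTH 4

struct rx_chunk { const char *data; size_t len; };

/* Per-socket receive queue, holding data the app hasn't asked for yet. */
struct sockq {
    struct rx_chunk q[RXQ_DEPTH];
    int head, tail, count;
};

/* Called from the stack's (push-style) receive callback. */
static int sockq_push(struct sockq *s, const char *data, size_t len)
{
    if (s->count == RXQ_DEPTH)
        return -1;               /* queue full: would apply flow control */
    s->q[s->tail] = (struct rx_chunk){ data, len };
    s->tail = (s->tail + 1) % RXQ_DEPTH;
    s->count++;
    return 0;
}

/* Pull-style recv(): drain queued chunks into the caller's buffer.
 * A real version would block (e.g. on a semaphore) while empty. */
static size_t sockq_recv(struct sockq *s, char *out, size_t max_len)
{
    size_t copied = 0;

    while (s->count > 0 && copied < max_len) {
        struct rx_chunk *c = &s->q[s->head];
        size_t n = c->len > max_len - copied ? max_len - copied : c->len;

        memcpy(out + copied, c->data, n);
        copied += n;
        if (n < c->len) {        /* partial read: keep the remainder */
            c->data += n;
            c->len -= n;
            break;
        }
        s->head = (s->head + 1) % RXQ_DEPTH;
        s->count--;
    }
    return copied;
}
```

This is exactly where the extra memory cost mentioned below comes
from: every socket pays for its queue whether or not the app is slow
to read.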

Having a BSD socket API on top of net_context will use more memory
so if one is concerned about memory consumption, then using native
API should be preferred.
+100, a BSD Sockets like API is not a replacement for the native API,
only a helper to port existing applications (mostly libraries in the
real-world cases, I'd imagine, but only practice will tell how
people will use it).

[]

--
Best Regards,
Paul

Linaro.org | Open source software for ARM SoCs
Follow Linaro: http://www.facebook.com/pages/Linaro
http://twitter.com/#!/linaroorg - http://www.linaro.org/linaro-blog
_______________________________________________
Zephyr-devel mailing list
Zephyr-devel@lists.zephyrproject.org
https://lists.zephyrproject.org/mailman/listinfo/zephyr-devel


--
Best Regards,
Paul

Linaro.org | Open source software for ARM SoCs
Follow Linaro: http://www.facebook.com/pages/Linaro
http://twitter.com/#!/linaroorg - http://www.linaro.org/linaro-blog


Re: RFC: BSD Socket (like) API

Luiz Augusto von Dentz
 

Hi,

On Tue, Mar 28, 2017 at 7:08 PM, Leandro Pereira
<leandro.pereira@intel.com> wrote:
Paul,

On 03/27/2017 04:16 AM, Paul Sokolovsky wrote:

2- FDs just waste memory, add locking and make things harder to
debug, use socket structures.

Agree. That was one of the first questions I got at the mini-summit,
and my answer was: "well, we could waste some memory by adding the FD
table to map small integers to underlying structures, but why?".
Indeed, by just casting pointers to integers, we can go a long, long
way toward using this API and porting existing apps.
Right now we have memory pools, so file descriptors could be really an index
to the net_context pool. Since FDs should be treated as opaque identifiers,
this should continue to be fine even if things change in the future (such as
having multiple mempools for struct net_context).
(We might need a queue per context to properly emulate the read() API,
though, so the reference might not be exactly to a net_context pool, but the
same idea applies.)
Indeed, we would have to put a queue per net_context, which quickly
adds to the memory footprint; luckily we can use k_poll and avoid
having extra threads for each context. But I guess the fundamental
problem is that this is probably intended to interface with real BSD
socket/POSIX applications/components, which I guess will not be enough
for most, so in the end, will it be worth adding this code? What are
the things we want to port on top? Can someone who was present at the
mini-summit make a list?

Casting a pointer to an integer as suggested might work, but will most
likely be an issue with the common pattern of checking for error by just
comparing the sign of the return value, instead of the (technically correct)
comparison with -1.
We can check if there is any padding in net_context; if there is, it
might not be a bad idea to add an id to them, or really make them
compatible with k_poll.

In general I'm much more concerned about the stack fragmentation,
though. Adding compatibility layers and offloading works against the
stack itself, as they either eat the memory available for enabling
features in the stack or reimplement parts of the stack in other
layers, not to mention this takes a lot of time that could be spent in
other areas like security and proper accelerators for heavy tasks such
as crypto.

--
Luiz Augusto von Dentz


Re: RFC: BSD Socket (like) API

Leandro Pereira
 

Paul,

On 03/27/2017 04:16 AM, Paul Sokolovsky wrote:

2- FDs just waste memory, add locking and make things harder to
debug, use socket structures.
Agree. That was one of the first questions I got at the mini-summit,
and my answer was: "well, we could waste some memory by adding the FD
table to map small integers to underlying structures, but why?".
Indeed, by just casting pointers to integers, we can go a long, long
way toward using this API and porting existing apps.
Right now we have memory pools, so file descriptors could be really an index to the net_context pool. Since FDs should be treated as opaque identifiers, this should continue to be fine even if things change in the future (such as having multiple mempools for struct net_context).
(We might need a queue per context to properly emulate the read() API, though, so the reference might not be exactly to a net_context pool, but the same idea applies.)
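A tiny sketch of that fd-as-pool-index scheme (all names here are
hypothetical, not actual Zephyr code): the descriptor is simply the
slot number in a fixed pool, so valid fds are small non-negative
integers and -1 remains free to signal errors.

```c
#include <assert.h>
#include <stddef.h>

#define POOL_SIZE 8

/* Stand-in for whatever backs a socket (e.g. struct net_context,
 * or a context plus its receive queue). */
struct ctx { int in_use; };

static struct ctx ctx_pool[POOL_SIZE];

/* Allocate a slot; the returned fd is simply the pool index. */
static int fd_alloc(void)
{
    for (int i = 0; i < POOL_SIZE; i++) {
        if (!ctx_pool[i].in_use) {
            ctx_pool[i].in_use = 1;
            return i;
        }
    }
    return -1;                   /* pool exhausted, akin to ENFILE */
}

/* Translate an fd back to its context; NULL for a bad or stale fd. */
static struct ctx *fd_to_ctx(int fd)
{
    if (fd < 0 || fd >= POOL_SIZE || !ctx_pool[fd].in_use)
        return NULL;
    return &ctx_pool[fd];
}

static void fd_free(int fd)
{
    struct ctx *c = fd_to_ctx(fd);

    if (c)
        c->in_use = 0;
}
```

Because the fd stays an opaque small integer, the mapping survives
later changes such as multiple mempools behind the lookup.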

Casting a pointer to an integer as suggested might work, but will most likely be an issue with the common pattern of checking for error by just comparing the sign of the return value, instead of the (technically correct) comparison with -1.
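The sign-check pitfall can be illustrated like this (hypothetical
names; on a 32-bit MCU, a pointer into high memory has its top bit
set, so its bit pattern is negative when cast to a signed integer):

```c
#include <assert.h>
#include <stdint.h>

struct psock { int dummy; };

/* The pointer-smuggled-through-an-integer descriptor scheme. */
static intptr_t psock_to_fd(struct psock *s)
{
    return (intptr_t)s;
}

/* The widespread but sign-based error check... */
static int sign_check_says_error(intptr_t fd)
{
    return fd < 0;
}

/* ...versus the technically correct comparison with -1, which only
 * stays safe if no valid pointer ever casts to exactly -1. */
static int strict_check_says_error(intptr_t fd)
{
    return fd == -1;
}
```

A descriptor whose bit pattern happens to be negative (simulated below
as -2, i.e. a near-top-of-memory address) is wrongly flagged as an
error by the sign check, while the strict -1 comparison accepts it.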

Leandro
