
Re: Problems managing NBUF DATA pool in the networking stack

Luiz Augusto von Dentz
 

Hi Geoff,


While it is probably a good idea to look at existing research, we
actually need to assess what does, and what doesn't, make sense for
Zephyr. Pretty much any layer that requires a lot more memory, and
that includes threads requiring dedicated stacks, buffer pools, and
complexity in general, is IMO a big no-no for Zephyr. That said,
net_buf, which is what nbuf uses, is based on the skb concept from
Linux; the pools work a bit differently, though, since we don't use
dynamic memory allocation. So it is not that we haven't looked at any
prior art; it is just that we have no plans for the queuing
disciplines/network schedulers that you perhaps have in mind.
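
For reference, a net_buf-style pool keeps all of its buffers in statically allocated storage and never touches a heap; a minimal sketch of that idea (illustrative names only, not the actual net_buf API):

```c
/* Simplified model of a fixed-size buffer pool in the spirit of
 * Zephyr's net_buf: all storage is static, allocation pops a free
 * list, and freeing pushes the buffer back.  Names here (pool_get,
 * pool_put) are illustrative, not Zephyr APIs.
 */
#include <stddef.h>

#define POOL_BUFS 4
#define BUF_SIZE  128

struct buf {
    struct buf *next;              /* free-list link */
    unsigned char data[BUF_SIZE];
};

static struct buf pool_storage[POOL_BUFS];
static struct buf *free_list;

void pool_init(void)
{
    free_list = NULL;
    for (int i = 0; i < POOL_BUFS; i++) {
        pool_storage[i].next = free_list;
        free_list = &pool_storage[i];
    }
}

struct buf *pool_get(void)
{
    struct buf *b = free_list;
    if (b) {
        free_list = b->next;
    }
    return b;                      /* NULL when the pool is exhausted */
}

void pool_put(struct buf *b)
{
    b->next = free_list;
    free_list = b;
}
```

The real net_buf additionally carries reference counts, headroom/length bookkeeping, and optional per-pool callbacks; the point here is only that exhaustion returns NULL rather than calling into an allocator.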

On Tue, Feb 14, 2017 at 5:24 PM, Geoff Thorpe <geoff.thorpe@nxp.com> wrote:
While I don't personally have answers to these buffer-management questions, I am certain they are well-studied, because they are intermingled with lots of other well-studied questions and use-cases that influence buffer handling, like flow-control, QoS, order-restoration, order-preservation, bridging, forwarding, tunneling, VLANs, and so on. If I recall, the "obvious solutions" usually aren't - i.e. they're either not obvious or not (general) solutions. The buffer-handling change to remediate one problematic use-case usually causes some other equally valid use-case to degenerate.

I guess I'm just saying that we should find prior art and best practice, rather than trying to derive it from first principles and experimentation.

Do we already have in our midst anyone who has familiarity with NPUs, OpenDataPlane, etc? If not, I can put out some feelers.

Cheers
Geoff


-----Original Message-----
From: zephyr-devel-bounces@lists.zephyrproject.org [mailto:zephyr-devel-bounces@lists.zephyrproject.org] On Behalf Of Jukka Rissanen
Sent: February-14-17 8:46 AM
To: Piotr Mieńkowski <piotr.mienkowski@gmail.com>; zephyr-devel@lists.zephyrproject.org
Subject: Re: [Zephyr-devel] Problems managing NBUF DATA pool in the networking stack

Hi Piotr,

On Tue, 2017-02-14 at 02:26 +0100, Piotr Mieńkowski wrote:
Hi,
While I agree we should prevent the remote to consume all the buffer and possible starve the TX, this is probably due to echo_server design that deep copies the buffers from RX to TX, in a normal application
Indeed the echo server could perhaps be optimized not to deep copy thus removing the issue. The wider question here is whether or not we want a design rule that effectively states that all applications should consume and unref their rx buffers before attempting to allocate tx buffers. This may be convenient for some applications, but I'm not convinced that is always the case. Such a design rule effectively states that an application that needs to retain or process information from request to response must now have somewhere to store all of that information between buffers and rules out any form of incremental processing of an rx buffer interleaved with the construction of the tx message.
If you read the entire email it would be clearer that I did not suggest it was fine to rule out incremental processing, in fact I suggested to add pools per net_context that way the stack itself will not have to drop its own packets and stop working because some context is taking all its buffers just to create clones.
So, what should be the final solution to the NBUF DATA issue? Do we
want to redesign echo_server sample application to use shallow copy,
should we introduce NBUF DATA pool per context, a separate NBUF DATA
pool for TX and RX? Something else?

In my opinion enforcing too much granularity on the allocation of data buffers, i.e. having a separate nbuf data pool per context, and maybe another one for the networking stack, will not be optimal. Firstly, Kconfig would become even more complex and users would have a hard time figuring out a safe set of options. What if we know one context will not use many data buffers and another one a lot? Should we still assign the same amount of data buffers per context? Secondly, every separate data pool will add some spare buffers as a 'margin of error'. Thirdly, the Ethernet driver, which reserves data buffers for the RX path, has no notion of context: it doesn't know which packets are meant for the networking stack and which for the application, so it would not know from which data pool to take the buffers. It can only distinguish between the RX and TX paths.

In principle, having shared resources is not a bad design approach. However, we probably should have a way to guarantee a minimum amount of buffers for the TX path. As a software engineer, if I need to design a TX path in my networking application and I know that I have some fixed amount of data buffers available, I should be able to manage it. The same task becomes much more difficult if my fixed amount of data buffers can at any given moment drop to zero for reasons beyond my control. This is the case currently.
I agree that having too fine-grained a setup for the buffers is bad and should be avoided. The current setup of RX, TX and shared DATA buffers has worked quite well for UDP. For TCP the situation gets much more difficult, as TCP might hold an nbuf for a while until an ack is received for the pending packets. The TCP code should not affect the other parts of the IP stack by starving them of buffers.

One option is to have a separate pool for TCP data nbufs that could be shared by all the TCP contexts. The TCP code could allocate the buffers that need to wait for an ack from this pool instead of the global data pool. This would avoid allocating a separate pool for each context, which is sub-optimal for memory consumption.
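
Jukka's proposal can be modelled with two independent pools, where segments waiting for an ACK only ever draw from the TCP pool. A toy sketch (counters stand in for real net_buf pools; all names and sizes are invented):

```c
/* Packets waiting for a TCP ACK are held in their own pool, so they
 * can never exhaust the global DATA pool shared by the rest of the
 * stack.  Counters stand in for real net_buf pools; names are
 * illustrative, not Zephyr APIs.
 */
struct pool {
    int free;                       /* buffers currently available */
};

static struct pool data_pool = { .free = 8 };  /* shared DATA pool */
static struct pool tcp_pool  = { .free = 4 };  /* TCP pending-ACK pool */

int pool_alloc(struct pool *p)
{
    if (p->free == 0) {
        return 0;                   /* allocation failed */
    }
    p->free--;
    return 1;
}

void pool_free(struct pool *p)
{
    p->free++;
}

/* Segments pending ACK draw from tcp_pool only: even if TCP holds
 * every one of its buffers, data_pool stays untouched. */
int tcp_hold_for_ack(void)
{
    return pool_alloc(&tcp_pool);
}
```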

Cheers,
Jukka

_______________________________________________
Zephyr-devel mailing list
Zephyr-devel@lists.zephyrproject.org
https://lists.zephyrproject.org/mailman/listinfo/zephyr-devel
--
Luiz Augusto von Dentz


Build Flag for prerequisites

Richard Peters <mail@...>
 

Hi Community,

I use these flags to link jerryscript against zephyr in my project makefile:

ALL_LIBS += jerry-core
export ALL_LIBS
LDFLAGS_zephyr += "-L $(JERRYOUT)/lib"
export LDFLAGS_zephyr

1. This works, but is this the way it should be done?
2. Is there a Zephyr build variable that makes additional targets
build automatically before Zephyr, which I can use in my Makefile?
I am thinking of something like:
DEPS+=jerryscript
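
One generic way to get this ordering, independent of any Zephyr-specific variable, is plain Make prerequisites; a sketch (`JERRY_SRC_DIR` and the `jerryscript` rule are placeholders, not Zephyr variables):

```make
# Make sure jerry-core is built before the zephyr target by listing
# it as a prerequisite.  'jerryscript' is a placeholder rule name;
# point it at however jerry-core is actually built.
zephyr: jerryscript

jerryscript:
	$(MAKE) -C $(JERRY_SRC_DIR)

.PHONY: jerryscript
```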


Re: did the zephyr kernel support the nested interrupt on all the supported arch?

Boie, Andrew P
 

I did not see the other archs' code, but I just want to know: is this
a feature of the Zephyr kernel (for strong real-time behavior), or is
it architecture-dependent?
Should be supported on architectures where the hardware has support for it.
It is a feature of the kernel.

ARM, ARC, x86 do support it.

Not entirely sure about Nios2 and RiscV.
On Nios II we currently just support the very simple IIC (Internal Interrupt Controller) and nested interrupts aren't possible.

Andrew


Re: did the zephyr kernel support the nested interrupt on all the supported arch?

Chuck Jordan <Chuck.Jordan@...>
 

Should be supported on architectures where the hardware has support for it. It is a feature of the kernel.

ARM, ARC, x86 do support it.

[ChuckJ] To clarify: on ARC, an FIRQ can be nested on top of an RIRQ, but at this time an RIRQ cannot be nested on top of another RIRQ.
Some assembly language in Zephyr has to be reorganized to allow for this.
But yes, as a goal, nested interrupts are desired.


Re: Problems managing NBUF DATA pool in the networking stack

Geoff Thorpe <geoff.thorpe@...>
 

While I don't personally have answers to these buffer-management questions, I am certain they are well-studied, because they are intermingled with lots of other well-studied questions and use-cases that influence buffer handling, like flow-control, QoS, order-restoration, order-preservation, bridging, forwarding, tunneling, VLANs, and so on. If I recall, the "obvious solutions" usually aren't - i.e. they're either not obvious or not (general) solutions. The buffer-handling change to remediate one problematic use-case usually causes some other equally valid use-case to degenerate.

I guess I'm just saying that we should find prior art and best practice, rather than trying to derive it from first principles and experimentation.

Do we already have in our midst anyone who has familiarity with NPUs, OpenDataPlane, etc? If not, I can put out some feelers.

Cheers
Geoff



Daily Gerrit Digest

donotreply@...
 

NEW within last 24 hours:
- https://gerrit.zephyrproject.org/r/11230 : RFC drivers/timer: Refined version of nRF RTC driver.
- https://gerrit.zephyrproject.org/r/11233 : Bluetooth: SMP: Fix passkey entry for legacy pairing
- https://gerrit.zephyrproject.org/r/11232 : Bluetooth: shell: Fix typo
- https://gerrit.zephyrproject.org/r/11231 : Bluetooth: shell: Fix accessing invalid memory
- https://gerrit.zephyrproject.org/r/11217 : frdm: fixed path and dependencies for extract_dts_includes.py
- https://gerrit.zephyrproject.org/r/11198 : arm: Support for new STM32F4 socs (STM32F407 and STM32F429)
- https://gerrit.zephyrproject.org/r/11191 : tests: gen_isr_table: actually run the IRQ
- https://gerrit.zephyrproject.org/r/11220 : Bluetooth: SDP: Server: Set security level to NONE
- https://gerrit.zephyrproject.org/r/11219 : Bluetooth: SDP: Server: Set correct TX MTU
- https://gerrit.zephyrproject.org/r/11209 : tests: kernel: added testapp profiling_api
- https://gerrit.zephyrproject.org/r/11216 : tests: kernel: added clock_test
- https://gerrit.zephyrproject.org/r/11215 : tests: add PM tests framework for driver PM test case
- https://gerrit.zephyrproject.org/r/11214 : tests: add zephyr pinmux driver api test case
- https://gerrit.zephyrproject.org/r/11213 : samples: net: Add README.rst to echo apps
- https://gerrit.zephyrproject.org/r/11208 : doc: update link to 0.9 SDK
- https://gerrit.zephyrproject.org/r/11207 : tests: add zephyr flash driver api test case
- https://gerrit.zephyrproject.org/r/11206 : tests: add zephyr uart driver api test case
- https://gerrit.zephyrproject.org/r/11205 : tests: kernel: added test cases k_pipe_block_put
- https://gerrit.zephyrproject.org/r/11199 : arm: Support for new ARM boards (discovery STM32F4 and STM32F429)
- https://gerrit.zephyrproject.org/r/11196 : arch: Atmel SAM E70: remove now redundant IRQ id defines
- https://gerrit.zephyrproject.org/r/11193 : arc: enable gen_isr_tables mechanism
- https://gerrit.zephyrproject.org/r/11192 : gen_isr_tables: apply offset to irq parameter
- https://gerrit.zephyrproject.org/r/11189 : riscv32: enable gen_isr_tables mechanism
- https://gerrit.zephyrproject.org/r/11194 : gen_isr_tables: make vector offset a hidden option
- https://gerrit.zephyrproject.org/r/11187 : tests: kernel: add test point k_cpu_atomic_idle

UPDATED within last 24 hours:
- https://gerrit.zephyrproject.org/r/11024 : qemu_cortex_m3: fixed network connectivity
- https://gerrit.zephyrproject.org/r/11112 : Merge remote-tracking branch 'origin/core'
- https://gerrit.zephyrproject.org/r/4489 : Bluetooth: SDP: Server: Support ServiceAttributeRequest
- https://gerrit.zephyrproject.org/r/6716 : Bluetooth: SDP: Server: Refactor data element structure header
- https://gerrit.zephyrproject.org/r/4488 : Bluetooth: SDP: Server: Support ServiceSearchRequest
- https://gerrit.zephyrproject.org/r/9447 : Bluetooth: SDP: Server: Support ServiceSearchAttributeRequest
- https://gerrit.zephyrproject.org/r/11088 : doc: boards: Move nRF5x DK board doc from the wiki to git
- https://gerrit.zephyrproject.org/r/11172 : board: defconfig: Enable WDT for ATMEL SAM MCUs
- https://gerrit.zephyrproject.org/r/11029 : watchdog: Add WDT driver for Atmel SAM SoCs
- https://gerrit.zephyrproject.org/r/10369 : ataes132a: Adds a driver to support ATAES132A device
- https://gerrit.zephyrproject.org/r/10140 : tests/gpio: enable gpio cases to run on more platforms
- https://gerrit.zephyrproject.org/r/11160 : hosttools-tarball.bb: Integrate YAML library into SDK
- https://gerrit.zephyrproject.org/r/3311 : include/crypto: Crypto abstraction header
- https://gerrit.zephyrproject.org/r/11177 : tests: kernel: added test case k_fifo_is_empty
- https://gerrit.zephyrproject.org/r/10814 : Added sensor driver for SI1153. Added proximity sensor_channel entries in sensor.h
- https://gerrit.zephyrproject.org/r/11184 : sensor: add sensor_channel_count function

MERGED within last 24 hours:
- https://gerrit.zephyrproject.org/r/11218 : net: Fix a const specifier issue
- https://gerrit.zephyrproject.org/r/11212 : samples: net: Remove the README file
- https://gerrit.zephyrproject.org/r/11211 : samples: net: Remove obsolete prj_slip.conf from echo-*
- https://gerrit.zephyrproject.org/r/11186 : Merge net branch into master
- https://gerrit.zephyrproject.org/r/11203 : libc/include: Adding time.h
- https://gerrit.zephyrproject.org/r/11197 : arc: move openocd_dbg section
- https://gerrit.zephyrproject.org/r/11195 : scripts: Fix hardwired python path in extract_dts_include.py
- https://gerrit.zephyrproject.org/r/11202 : spi_test: fix variable type mismatches
- https://gerrit.zephyrproject.org/r/11200 : samples: webusb: fix variable type mismatches
- https://gerrit.zephyrproject.org/r/11201 : test_mpool_api: fix variable type mismatches
- https://gerrit.zephyrproject.org/r/11190 : arc: linker.ld: fix BSS section declaration
- https://gerrit.zephyrproject.org/r/11182 : toolchain.gccarmemb: set DTC for building targets that use devicetrees
- https://gerrit.zephyrproject.org/r/11173 : tests: kernel: remove unsupported tests
- https://gerrit.zephyrproject.org/r/11167 : riscv32: move riscv privileged architecture specifics within a common header file
- https://gerrit.zephyrproject.org/r/11101 : samples/net/http: Add HTTP over TLS sample application
- https://gerrit.zephyrproject.org/r/11161 : boards: tinyTILE: enable USB console by default
- https://gerrit.zephyrproject.org/r/9762 : CI: rearchitect with a framework that orchestrates running
- https://gerrit.zephyrproject.org/r/10991 : eth/mcux: Add temporary workaround to unbreak IPv6 ND features.
- https://gerrit.zephyrproject.org/r/11166 : ext/lib/mbedtls: Add the TLS configuration file
- https://gerrit.zephyrproject.org/r/10807 : net/mqtt: Add BT support to MQTT publisher sample


Re: did the zephyr kernel support the nested interrupt on all the supported arch?

Benjamin Walsh <benjamin.walsh@...>
 

Hi,

I have reviewed the Cortex-M arch interrupt flow and found that
Cortex-M supports a maximum of 255 interrupt entries and allows an
interrupt with higher priority to preempt one with lower priority.

I did not see the other archs' code, but I just want to know: is this
a feature of the Zephyr kernel (for strong real-time behavior), or is
it architecture-dependent?
Should be supported on architectures where the hardware has support for
it. It is a feature of the kernel.

ARM, ARC, x86 do support it.

Not entirely sure about Nios2 and RiscV.

Regards,
Ben

--
Benjamin Walsh, SMTS
WR VxWorks Virtualization Profile
www.windriver.com
Zephyr kernel maintainer
www.zephyrproject.org


Re: Problems managing NBUF DATA pool in the networking stack

Marcus Shawcroft <marcus.shawcroft@...>
 

On 14 February 2017 at 13:46, Jukka Rissanen
<jukka.rissanen@linux.intel.com> wrote:

I agree that having too fine grained setup for the buffers is bad and
should be avoided. The current setup of RX, TX and shared DATA buffers
has worked for UDP quite well.
We do, however, still need to figure out, for UDP, how to prevent:
- the RX path starving the TX path into deadlock
- multiple TX paths deadlocking (by attempting to acquire buffers incrementally)
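
The second point is the classic incremental-acquisition problem: two senders each hold part of the pool and wait for the rest. One standard remedy is all-or-nothing acquisition, sketched below (a plain C model with invented names, not Zephyr API):

```c
/* All-or-nothing buffer acquisition: a sender that needs n buffers
 * either gets all n, or releases what it grabbed and backs off,
 * instead of holding a partial set while waiting for more.
 */
static int pool_avail = 6;         /* buffers currently free */

static int try_alloc(void)
{
    if (pool_avail == 0) {
        return 0;
    }
    pool_avail--;
    return 1;
}

static void release(int n)
{
    pool_avail += n;
}

/* Grab up to n buffers; on shortfall, give everything back so
 * another sender can make progress.  Caller retries later. */
int alloc_n_or_none(int n)
{
    int got = 0;
    while (got < n) {
        if (!try_alloc()) {
            release(got);          /* no partial hold survives */
            return 0;
        }
        got++;
    }
    return 1;
}
```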

Cheers
/Marcus


Re: Problems managing NBUF DATA pool in the networking stack

Jukka Rissanen
 

Hi Piotr,

On Tue, 2017-02-14 at 02:26 +0100, Piotr Mieńkowski wrote:
Hi,
While I agree we should prevent the remote to consume all the buffer and possible starve the TX, this is probably due to echo_server design that deep copies the buffers from RX to TX, in a normal application
Indeed the echo server could perhaps be optimized not to deep copy thus removing the issue. The wider question here is whether or not we want a design rule that effectively states that all applications should consume and unref their rx buffers before attempting to allocate tx buffers. This may be convenient for some applications, but I'm not convinced that is always the case. Such a design rule effectively states that an application that needs to retain or process information from request to response must now have somewhere to store all of that information between buffers and rules out any form of incremental processing of an rx buffer interleaved with the construction of the tx message.
If you read the entire email it would be clearer that I did not suggest it was fine to rule out incremental processing, in fact I suggested to add pools per net_context that way the stack itself will not have to drop its own packets and stop working because some context is taking all its buffers just to create clones.
 So, what should be the final solution to the NBUF DATA issue? Do we
want to redesign echo_server sample application to use shallow copy,
should we introduce NBUF DATA pool per context, a separate NBUF DATA
pool for TX and RX? Something else?

In my opinion enforcing too much granularity on the allocation of data buffers, i.e. having a separate nbuf data pool per context, and maybe another one for the networking stack, will not be optimal. Firstly, Kconfig would become even more complex and users would have a hard time figuring out a safe set of options. What if we know one context will not use many data buffers and another one a lot? Should we still assign the same amount of data buffers per context? Secondly, every separate data pool will add some spare buffers as a 'margin of error'. Thirdly, the Ethernet driver, which reserves data buffers for the RX path, has no notion of context: it doesn't know which packets are meant for the networking stack and which for the application, so it would not know from which data pool to take the buffers. It can only distinguish between the RX and TX paths.

In principle, having shared resources is not a bad design approach. However, we probably should have a way to guarantee a minimum amount of buffers for the TX path. As a software engineer, if I need to design a TX path in my networking application and I know that I have some fixed amount of data buffers available, I should be able to manage it. The same task becomes much more difficult if my fixed amount of data buffers can at any given moment drop to zero for reasons beyond my control. This is the case currently.
I agree that having too fine-grained a setup for the buffers is bad and should be avoided. The current setup of RX, TX and shared DATA buffers has worked quite well for UDP. For TCP the situation gets much more difficult, as TCP might hold an nbuf for a while until an ack is received for the pending packets. The TCP code should not affect the other parts of the IP stack by starving them of buffers.

One option is to have a separate pool for TCP data nbufs that could be shared by all the TCP contexts. The TCP code could allocate the buffers that need to wait for an ack from this pool instead of the global data pool. This would avoid allocating a separate pool for each context, which is sub-optimal for memory consumption.

Cheers,
Jukka


Re: Trouble configuring arduino101 via zephyr

Anjali Asar <anjaliasar@...>
 

Some pins have been pulled up and some pulled down, and I am not able to change them; they seem to be perpetually stuck in that mode. Also, pin 8 (Zephyr pin 16) is not functioning. Any suggestions?

On 01-Feb-2017 5:19 PM, "Anjali Asar" <anjaliasar@...> wrote:
Hi, not sure if this is the right forum for this, but I'm trying my luck; if not, please direct me to the right one.

I'm having trouble flashing the arduino101. I've followed all the instructions available; still, though it says the download is complete, the code doesn't seem to have been downloaded.

The code I am trying is the provided sample, "blinky". The LED doesn't light up. I have tried various pins apart from the on-board LED too.

Is there any extra configuration that I have to do to make it work?
Note: I am using a Windows 8 PC.


did the zephyr kernel support the nested interrupt on all the supported arch?

曹子龙
 

Hi all,

I have reviewed the Cortex-M arch interrupt flow and found that Cortex-M supports a maximum of 255 interrupt entries and allows an interrupt with higher priority to preempt one with lower priority.

I did not see the other archs' code, but I just want to know: is this a feature of the Zephyr kernel (for strong real-time behavior), or is it architecture-dependent?

Thanks for your kind support.



 


Re: Problems managing NBUF DATA pool in the networking stack

Piotr Mienkowski
 

Hi,

While I agree we should prevent the remote to consume all the buffer
and possible starve the TX, this is probably due to echo_server design
that deep copies the buffers from RX to TX, in a normal application
Indeed the echo server could perhaps be optimized not to deep copy
thus removing the issue.  The wider question here is whether or not we
want a design rule that effectively states that all applications
should consume and unref their rx buffers before attempting to
allocate tx buffers.   This may be convenient for some applications,
but I'm not convinced that is always the case.  Such a design rule
effectively states that an application that needs to retain or process
information from request to response must now have somewhere to store
all of that information between buffers and rules out any form of
incremental processing of an rx buffer interleaved with the
construction of the tx message.
If you read the entire email it would be clearer that I did not
suggest it was fine to rule out incremental processing, in fact I
suggested to add pools per net_context that way the stack itself will
not have to drop its own packets and stop working because some context
is taking all its buffers just to create clones.
So, what should be the final solution to the NBUF DATA issue? Do we want to redesign echo_server sample application to use shallow copy, should we introduce NBUF DATA pool per context, a separate NBUF DATA pool for TX and RX? Something else?

In my opinion enforcing too much granularity on the allocation of data buffers, i.e. having a separate nbuf data pool per context, and maybe another one for the networking stack, will not be optimal. Firstly, Kconfig would become even more complex and users would have a hard time figuring out a safe set of options. What if we know one context will not use many data buffers and another one a lot? Should we still assign the same amount of data buffers per context? Secondly, every separate data pool will add some spare buffers as a 'margin of error'. Thirdly, the Ethernet driver, which reserves data buffers for the RX path, has no notion of context: it doesn't know which packets are meant for the networking stack and which for the application, so it would not know from which data pool to take the buffers. It can only distinguish between the RX and TX paths.

In principle, having shared resources is not a bad design approach. However, we probably should have a way to guarantee a minimum amount of buffers for the TX path. As a software engineer, if I need to design a TX path in my networking application and I know that I have some fixed amount of data buffers available, I should be able to manage it. The same task becomes much more difficult if my fixed amount of data buffers can at any given moment drop to zero for reasons beyond my control. This is the case currently.
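
The guarantee asked for here can be expressed as a reserved share inside one shared pool: RX allocations are refused once only the TX reserve remains, while TX may use everything. A toy model (names and sizes are invented, not Zephyr API):

```c
/* A shared pool where the TX path is guaranteed a minimum number of
 * buffers: RX allocation fails once only the reserved share is left,
 * while TX may dip into the reserve.
 */
#define POOL_TOTAL   8
#define TX_RESERVED  2             /* buffers RX may never consume */

static int pool_avail = POOL_TOTAL;

int alloc_rx(void)
{
    if (pool_avail <= TX_RESERVED) {
        return 0;                  /* would eat into the TX reserve */
    }
    pool_avail--;
    return 1;
}

int alloc_tx(void)
{
    if (pool_avail == 0) {
        return 0;
    }
    pool_avail--;
    return 1;
}

void free_buf(void)
{
    pool_avail++;
}
```

With this scheme the TX path always has a small, predictable worst-case budget, whatever the remote peer does to the RX side.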

Regards,
Piotr


Re: Adding support for CC2650 SoC

Gil Pitney
 

Hi Geoffrey,

Good question.

I understand "SimpleLink" to describe a family of connectivity SoC's.

But an SoC can include a heterogeneous mix of CPU core types: ARM
cores (of different types), DSP cores, and custom processors.

We may be able to designate only one of the SoC cores as the
"master" and distinguish the Zephyr SoC family based on that master
core; but the master core can depend on the application.

I would argue, since these SoCs all still fall within the SimpleLink
"family", that the cc26xx/ subdirectory can go under the
arch/arm/soc/ti_simplelink/ directory. It seems the cc26xx Kconfig
and its new Device Tree files will still be separate from the
cc32xx's, so the fact that the SoCs include different CPU cores
should not cause a conflict.

BR,
- Gil





On 13 February 2017 at 03:12, Geoffrey LE GOURRIEREC
<geoffrey.legourrierec@smile.fr> wrote:
Hello Gil,

I am starting to work on supporting Zephyr on Texas Instruments' "SensorTag" device.

To this end, I need to add support for the CC2650 MCU. I looked at the support you added for the CC32xx family of MCUs, and am having trouble deciding whether or not to integrate my work into the arch/arm/soc/ti_simplelink subdirectory.

This is my first time contributing to Zephyr, and I gather SoCs are primarily differentiated by the CPU type they use. However, "SimpleLink" is more of a commercial name, and CC26xx / CC32xx devices, in particular, differ in this respect (Cortex-M3 / M4 respectively). Other families of MCUs already supported (e.g. Atmel's SAME70) at least share the CPU type.

Should I use your existing work as common ground?
Or should we reckon the "SimpleLink family" is not really usable as a SoC "family"?

Thanks for your advice,

Best regards,

--
Geoffrey Le Gourriérec


Re: FRDM eth driver and TCP apps

Marcus Shawcroft <marcus.shawcroft@...>
 

On 13 February 2017 at 17:13, Santes, Flavio <flavio.santes@intel.com> wrote:
Hello,

This patch https://gerrit.zephyrproject.org/r/#/c/11176/ is breaking some network applications (i.e. samples/net/mqtt_publisher). A local revert solves the issue for me.
Can you raise a JIRA issue and add some details of the failure you are seeing?

Thanks
/Marcus


FRDM eth driver and TCP apps

Santes, Flavio <flavio.santes@...>
 

Hello,

This patch https://gerrit.zephyrproject.org/r/#/c/11176/ is breaking some network applications (i.e. samples/net/mqtt_publisher). A local revert solves the issue for me.

Regards,
Flavio


Daily Gerrit Digest

donotreply@...
 

NEW within last 24 hours:
- https://gerrit.zephyrproject.org/r/11183 : sensor: fix typo in sensor.h
- https://gerrit.zephyrproject.org/r/11166 : ext/lib/mbedtls: Add the TLS configuration file
- https://gerrit.zephyrproject.org/r/11182 : toolchain.gccarmemb: set DTC for building targets that use devicetrees
- https://gerrit.zephyrproject.org/r/11185 : tests: kernel: add test point k_delayed_work_remaining_get
- https://gerrit.zephyrproject.org/r/11184 : sensor: add sensor_channel_count function
- https://gerrit.zephyrproject.org/r/11177 : tests: kernel: added test case k_fifo_is_empty
- https://gerrit.zephyrproject.org/r/11174 : tests: kernel: added test case k_is_preempt_thread
- https://gerrit.zephyrproject.org/r/11172 : defconfig: Enable Watchdog for ATMEL SAM SoCs
- https://gerrit.zephyrproject.org/r/11173 : tests: kernel: remove unsupported tests
- https://gerrit.zephyrproject.org/r/11167 : riscv32: move riscv privileged architecture specifics within a common header file

UPDATED within last 24 hours:
- https://gerrit.zephyrproject.org/r/11160 : hosttools-tarball.bb: Integrate YAML library into SDK
- https://gerrit.zephyrproject.org/r/10807 : net/mqtt: Add BT support to MQTT publisher sample
- https://gerrit.zephyrproject.org/r/10804 : net/mqtt: Add support for IBM BlueMix Watson topic format
- https://gerrit.zephyrproject.org/r/5504 : dma: Introduce STM32F4x DMA driver
- https://gerrit.zephyrproject.org/r/10814 : Added sensor driver for SI1153. Added proximity sensor_channel entries in sensor.h
- https://gerrit.zephyrproject.org/r/11029 : watchdog: Add wdt driver for Atmel SAM SoCs
- https://gerrit.zephyrproject.org/r/10991 : eth/mcux: Add temporary workaround to unbreak IPv6 ND features.
- https://gerrit.zephyrproject.org/r/10812 : Added sensor driver for ADXL362
- https://gerrit.zephyrproject.org/r/10902 : eth/mcux: Add basic PHY support.
- https://gerrit.zephyrproject.org/r/11088 : doc: boards: Move nRF5x DK board doc from the wiki to git
- https://gerrit.zephyrproject.org/r/6384 : stm32lx: spi add SPI driver for STM32Lx family
- https://gerrit.zephyrproject.org/r/10369 : ataes132a: Adds a driver to support ATAES132A device
- https://gerrit.zephyrproject.org/r/10645 : Bluetooth: HFP HF: Handling AG Network error
- https://gerrit.zephyrproject.org/r/11101 : samples/net/http: Add HTTP over TLS sample application

MERGED within last 24 hours:
- https://gerrit.zephyrproject.org/r/11176 : eth/mcux: Add basic PHY support.
- https://gerrit.zephyrproject.org/r/11164 : net/http: Add QEMU support to the HTTP server sample app
- https://gerrit.zephyrproject.org/r/11165 : net/http: Improve network configuration routines
- https://gerrit.zephyrproject.org/r/11175 : net/mqtt: Fix inline doc for MQTT
- https://gerrit.zephyrproject.org/r/11170 : net/dns: Update QEMU prj file
- https://gerrit.zephyrproject.org/r/11171 : net: remove obsolete CONFIG_NET_YAIP
- https://gerrit.zephyrproject.org/r/11051 : net/dhcpv4: Fix event/state mismatch
- https://gerrit.zephyrproject.org/r/11052 : net/dhcpv4: Remove unused dhcpv4 offer state
- https://gerrit.zephyrproject.org/r/11093 : net/dhcpv4: Ensure udp header checksum is computed correctly
- https://gerrit.zephyrproject.org/r/11103 : samples: net: Add .conf file for qemu_cortex_m3 in echo_*
- https://gerrit.zephyrproject.org/r/11079 : samples/zoap-server: Update docs with information about libcoap
- https://gerrit.zephyrproject.org/r/11078 : iot/zoap: Improve zoap.h documentation
- https://gerrit.zephyrproject.org/r/11080 : iot/zoap: Fix handling of 16-bytes block-wise transfers
- https://gerrit.zephyrproject.org/r/11081 : iot/zoap: Fix header indentation
- https://gerrit.zephyrproject.org/r/11082 : iot/zoap: Add missing const modifier to header file


Re: should the scheduler report the misuse of k_sched_lock before k_sleep?

Benjamin Walsh <benjamin.walsh@...>
 

Hi,

I reviewed the Zephyr code for the riscv arch, with the aim of
porting it to the mips arch, since riscv is the architecture most
like mips,


when I read the function "__irq_wrapper", I wanted to know whether
it should cope with a situation like this:
...............
k_sched_lock();


k_sleep(1);


k_sched_unlock();
....................

and it is obviously wrong because of the paradox, but I found the
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

I'm not sure what this means... This code is valid. The scheduler lock
is per-thread. If the thread goes to sleep, it relinquishes the CPU, but
when it comes back from sleeping, the scheduler will be locked again. Of
course, anything could have happened during the time that it slept, so
the code has to account for that.

The same holds for irq_lock() too. This code is also valid:

int key = irq_lock();
k_sleep(1);
irq_unlock(key);

The kernel records the value of 'key' when it pends the thread and
restores the value of 'key' for the incoming thread.

The k_thread.base.sched_lock count is only used when exiting an
interrupt, to see if a reschedule operation has to be attempted or not.
The k_sleep() operation does not take it into account: the thread chose
to sleep, it will sleep, even if its sched_lock count is non-zero. It
removes the thread from the ready queue, puts it on the timeout_q, and
then calls _Swap(), to choose another thread to run. _Swap() does not
look at the sched_lock count, nor the irq_lock key for that matter: it
does not decide _if_ a reschedule must happen, but only swaps the
current thread with the thread in the ready q cache.

Now, I am not familiar with the riscv port: if its implementation of
_Swap() does not follow this rule, that's a bug.

Regards,
Ben

"__irq_wrapper" would really do the actual switch in this case,
because the flow goes directly to the schedule label when it finds
that the scheduler was invoked via "ecall" in _Swap, without checking
the preempted object, so I want to know whether it should be this way?
--
Benjamin Walsh, SMTS
WR VxWorks Virtualization Profile
www.windriver.com
Zephyr kernel maintainer
www.zephyrproject.org


Adding support for CC2650 SoC

Geoffrey LE GOURRIEREC <geoffrey.legourrierec@...>
 

Hello Gil,

I am starting to work on supporting Zephyr on Texas Instruments' "SensorTag" device.

To this end, I need to add support for the CC2650 MCU. I looked at the support you
added for the CC32xx family of MCUs, and am having trouble deciding whether or not
to integrate my work into the arch/arm/soc/ti_simplelink subdirectory.

This is my first time contributing to Zephyr, and I gather SoCs are primarily
differentiated by the CPU type they use. However, "SimpleLink" is more of a
commercial name, and CC26xx / CC32xx devices in particular differ in this
respect (Cortex-M3 / Cortex-M4, respectively). Other families of MCUs already
supported (e.g. Atmel's SAME70) at least share the CPU type.

Should I use your existing work as common ground?
Or should we reckon "SimpleLink family" is not really usable as a SoC "family"?

Thanks for your advice,

Best regards,

--
Geoffrey Le Gourriérec

 


should the scheduler report the misuse of k_sched_lock before k_sleep?

曹子龙
 

Hi all,

  I reviewed the Zephyr code for the riscv arch, with the aim of porting it to the mips arch, since riscv is the architecture most like mips.

When I read the function "__irq_wrapper", I wanted to know whether it should cope with a situation like this:
     ...............
   k_sched_lock();

   k_sleep(1);

   k_sched_unlock();
   ....................


It is obviously wrong because of the paradox, but I found that "__irq_wrapper" would really do the actual switch in this case, because the flow goes directly to the schedule label when it finds that the scheduler was invoked via "ecall" in _Swap, without checking the preempted object. So I want to know: should it behave this way?

 Thanks in advance; your help is much appreciated.



Daily Gerrit Digest

donotreply@...
 

NEW within last 24 hours:
- https://gerrit.zephyrproject.org/r/11163 : Revert tests: disable qemu_riscv32 on test_ecc_dh test [REVERT ME]
- https://gerrit.zephyrproject.org/r/11162 : tinycrypt: allow use of tinycrypt ecc for riscv32
- https://gerrit.zephyrproject.org/r/11161 : boards: tinyTILE: enable USB console by default
- https://gerrit.zephyrproject.org/r/11160 : hosttools-tarball.bb: Integrate YAML library into SDK

UPDATED within last 24 hours:
- https://gerrit.zephyrproject.org/r/11112 : Merge remote-tracking branch 'origin/core' into master
- https://gerrit.zephyrproject.org/r/10140 : tests/gpio: enable gpio cases to run on more platforms
- https://gerrit.zephyrproject.org/r/10862 : ARC: fix I2C SPI and GPIO default name issue for ARC
- https://gerrit.zephyrproject.org/r/10788 : xtensa: apply overlay to newlib

MERGED within last 24 hours:
- https://gerrit.zephyrproject.org/r/9947 : tests/pwm: enable PWM case to work on D2000 board
- https://gerrit.zephyrproject.org/r/10793 : tests/gpio: fix test GPIO_INT_EDGE bug
- https://gerrit.zephyrproject.org/r/9820 : tests: add zephyr adc driver api test case
- https://gerrit.zephyrproject.org/r/10784 : pinmux: fix default pinmux driver for quark_se_ss
- https://gerrit.zephyrproject.org/r/11025 : tests: add zephyr counter and timer api test
- https://gerrit.zephyrproject.org/r/10826 : xtensa: fix assembling bb[cs]i.l on big-endian targets
- https://gerrit.zephyrproject.org/r/10785 : xtensa: remove unneeded patches
- https://gerrit.zephyrproject.org/r/10787 : xtensa: add recipes-devtools-xtensa
- https://gerrit.zephyrproject.org/r/10786 : xtensa: fix endianness
