Re: Problems managing NBUF DATA pool in the networking stack


Marcus Shawcroft <marcus.shawcroft@...>

On 8 February 2017 at 13:18, Luiz Augusto von Dentz
<luiz.dentz@gmail.com> wrote:

> While I agree we should prevent the remote from consuming all the buffers
> and possibly starving the TX, this is probably due to the echo_server design
> that deep copies the buffers from RX to TX; in a normal application
Indeed, the echo server could perhaps be optimized not to deep copy,
thus removing the issue. The wider question here is whether or not we
want a design rule that effectively states that all applications
should consume and unref their rx buffers before attempting to
allocate tx buffers. This may be convenient for some applications,
but I'm not convinced that is always the case. Such a design rule
means that an application which needs to retain or process
information from request to response must now have somewhere else to
store all of that information between buffers, and it rules out any
form of incremental processing of an rx buffer interleaved with the
construction of the tx message.
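
To make the two orderings concrete, here is a minimal sketch (mine, not
from the thread) using Zephyr's generic net_buf API; the pool name,
sizes and handler functions are hypothetical, tailroom checks are
omitted for brevity, and the header path may differ between Zephyr
versions:

    #include <stdint.h>
    #include <string.h>
    #include <net/buf.h>   /* <zephyr/net/buf.h> in newer trees */

    NET_BUF_POOL_DEFINE(tx_pool, 8, 128, 0, NULL); /* hypothetical TX data pool */

    /* Pattern A: the proposed design rule. Copy out whatever the response
     * needs, unref rx so its data returns to the pool, then allocate tx. */
    static struct net_buf *handle_request_copy_first(struct net_buf *rx)
    {
    	uint8_t saved[64];
    	size_t saved_len = rx->len < sizeof(saved) ? rx->len : sizeof(saved);
    	struct net_buf *tx;

    	memcpy(saved, rx->data, saved_len);   /* application-side storage */
    	net_buf_unref(rx);                    /* rx buffer goes back to its pool */

    	tx = net_buf_alloc(&tx_pool, K_FOREVER);
    	net_buf_add_mem(tx, saved, saved_len);
    	return tx;
    }

    /* Pattern B: incremental processing. The rx buffer stays referenced
     * while the tx message is built from it piecewise; this is exactly
     * what the "unref rx before allocating tx" rule would forbid. */
    static struct net_buf *handle_request_interleaved(struct net_buf *rx)
    {
    	struct net_buf *tx = net_buf_alloc(&tx_pool, K_FOREVER);

    	while (rx->len > 0) {
    		size_t chunk = rx->len < 16 ? rx->len : 16;

    		net_buf_add_mem(tx, rx->data, chunk);
    		net_buf_pull(rx, chunk);   /* advance past the consumed bytes */
    	}
    	net_buf_unref(rx);
    	return tx;
    }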

> the RX would be processed and unrefed, causing the data buffers to
> return to the pool immediately. Even if we split the RX into a separate
> pool, any context can just ref the buffer, causing the RX to starve.
Maybe I missed the point here, but dropped packets due to rx buffer
starvation are no different from network packet loss; while undesirable,
the higher levels in the stack are already designed to cope with that
appropriately.
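
As a rough sketch of that "drop it like packet loss" behaviour (again
mine, not from the thread): allocate rx buffers with K_NO_WAIT from a
dedicated pool and simply drop the frame when the pool is empty. The
rx_pool, the driver entry point and the drop counter below are all
hypothetical:

    #include <stdint.h>
    #include <stddef.h>
    #include <net/buf.h>   /* <zephyr/net/buf.h> in newer trees */

    NET_BUF_POOL_DEFINE(rx_pool, 16, 128, 0, NULL); /* hypothetical RX pool */

    static unsigned int rx_dropped;

    /* Hypothetical driver path: if the RX pool is exhausted, drop the
     * frame instead of blocking; higher layers (e.g. TCP retransmission)
     * treat it like any other packet loss. */
    static struct net_buf *rx_frame_to_buf(const uint8_t *frame, size_t len)
    {
    	struct net_buf *buf = net_buf_alloc(&rx_pool, K_NO_WAIT);

    	if (!buf || net_buf_tailroom(buf) < len) {
    		if (buf) {
    			net_buf_unref(buf);
    		}
    		rx_dropped++;   /* account for the drop, don't stall the driver */
    		return NULL;
    	}

    	net_buf_add_mem(buf, frame, len);
    	return buf;
    }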

Cheers
/Marcus
