Hi,
While I agree we should prevent the remote end from consuming all the
buffers and possibly starving the TX path, this is probably due to the
echo_server design, which deep copies the buffers from RX to TX; in a
normal application this should not be the case.
Indeed, the echo server could perhaps be optimized not to deep copy,
thus removing the issue. The wider question here is whether or not we
want a design rule that effectively states that all applications
should consume and unref their RX buffers before attempting to
allocate TX buffers. This may be convenient for some applications,
but I'm not convinced that it always is. Such a design rule
effectively means that an application that needs to retain or process
information from request to response must now have somewhere to store
all of that information between buffers, and it rules out any form of
incremental processing of an RX buffer interleaved with the
construction of the TX message.
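To make the trade-off concrete, the two patterns look roughly like
this when sketched against the net_buf helpers (single-fragment
buffers assumed for brevity; tx_pool, CHUNK_LEN, struct app_state and
the parse/build/append/send helpers are made-up placeholders, not
anything from the tree):

  #include <net/buf.h>

  /* Pattern A: consume and unref the RX buffer completely before
   * allocating TX. Everything needed for the response has to be
   * parked in some application state in the meantime.
   */
  static void handle_request_consume_first(struct net_buf *rx)
  {
      struct app_state state;

      parse_request(rx, &state);  /* copy what we need out of rx */
      net_buf_unref(rx);          /* give the data buffers back  */

      struct net_buf *tx = net_buf_alloc(&tx_pool, K_FOREVER);

      build_response(tx, &state); /* rebuild response from state */
      send_response(tx);
  }

  /* Pattern B: incremental processing. RX and TX buffers are alive
   * at the same time, which is exactly the situation where TX
   * allocation can starve if RX already holds the shared data pool.
   */
  static void handle_request_incremental(struct net_buf *rx)
  {
      struct net_buf *tx = net_buf_alloc(&tx_pool, K_FOREVER);

      while (rx->len) {
          size_t chunk = rx->len < CHUNK_LEN ? rx->len : CHUNK_LEN;

          append_response_chunk(tx, rx->data, chunk);
          net_buf_pull(rx, chunk);
      }

      net_buf_unref(rx);
      send_response(tx);
  }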
If you read the entire email, it would be clearer that I did not
suggest it was fine to rule out incremental processing; in fact, I
suggested adding pools per net_context, so that the stack itself will
not have to drop its own packets and stop working because some context
is taking all of its buffers just to create clones.
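For the record, what I have in mind is roughly along these lines;
nothing like the setter below exists today, it is purely illustrative,
and only NET_BUF_POOL_DEFINE() is a real macro (the count and size are
arbitrary):

  #include <net/buf.h>
  #include <net/net_context.h>

  /* A context gets its own data pool, so an application cloning RX
   * data into TX can only ever exhaust *its* pool, never the pool the
   * stack itself allocates from.
   */
  NET_BUF_POOL_DEFINE(echo_data_pool, 16, 128, 0, NULL);

  static void setup_echo_context(struct net_context *ctx)
  {
      /* hypothetical per-context pool binding */
      net_context_set_data_pool(ctx, &echo_data_pool);
  }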
So, what should be the final solution to the NBUF DATA issue? Do we
want to redesign the echo_server sample application to use shallow
copies? Should we introduce an NBUF DATA pool per context, or separate
NBUF DATA pools for TX and RX? Something else?
In my opinion, enforcing too much granularity in the allocation of
data buffers, i.e. having a separate nbuf data pool per context and
maybe another one for the networking stack, would not be optimal.
Firstly, Kconfig would become even more complex and users would have a
hard time figuring out a safe set of options. What if we know that one
context will not use many data buffers while another one will use a
lot? Should we still assign the same number of data buffers per
context? Secondly, every separate data pool will add some spare
buffers as a 'margin of error'. Thirdly, the Ethernet driver, which
reserves data buffers for the RX path, has no notion of a context; it
doesn't know which packets are meant for the networking stack and
which ones for the application, so it would not know from which data
pool to take the buffers. It can only distinguish between the RX and
TX paths.
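To illustrate that third point, this is about all an Ethernet driver's
RX path can realistically do; eth_read_frame() stands in for the real
MAC access code and rx_data_pool is just a name for the shared RX data
pool (count and size arbitrary):

  #include <device.h>
  #include <net/buf.h>

  NET_BUF_POOL_DEFINE(rx_data_pool, 32, 128, 0, NULL);

  static void eth_rx(struct device *dev)
  {
      /* The driver sees a frame, not a context, so the only decision
       * it can make is "this is the RX direction".
       */
      struct net_buf *frag = net_buf_alloc(&rx_data_pool, K_NO_WAIT);

      if (!frag) {
          /* out of RX data buffers: drop the frame */
          return;
      }

      size_t len = eth_read_frame(dev, frag->data,
                                  net_buf_tailroom(frag));

      net_buf_add(frag, len);

      /* ... hand the buffer up to the stack, which only later finds
       * out which net_context (if any) the packet belongs to ...
       */
  }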
In principle, having shared resources is not a bad design approach.
However, we probably should have a way to guarantee a minimum number
of buffers for the TX path. As a software engineer, if I need to
design the TX path of my networking application and I know that I have
some fixed number of data buffers available, I can manage within it.
The same task becomes much more difficult if my fixed number of data
buffers can at any given moment drop to zero for reasons that are
beyond my control, which is the case currently.
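One way to get that guarantee, at the cost of some of the flexibility
of a shared pool, is simply to split the data buffers by direction;
the counts and sizes below are arbitrary examples:

  #include <errno.h>
  #include <net/buf.h>

  /* RX can run completely dry without ever taking a buffer away from
   * TX, so the TX budget below is a real, plannable number.
   */
  NET_BUF_POOL_DEFINE(rx_data_pool, 32, 128, 0, NULL);
  NET_BUF_POOL_DEFINE(tx_data_pool, 16, 128, 0, NULL);

  static int build_tx_fragment(void)
  {
      /* Worst case is known: at most 16 TX allocations can be in
       * flight, no matter what the remote end does on RX.
       */
      struct net_buf *frag = net_buf_alloc(&tx_data_pool, K_NO_WAIT);

      if (!frag) {
          return -ENOBUFS;  /* our own TX budget is exhausted */
      }

      /* ... fill the fragment and queue it for transmission ... */

      net_buf_unref(frag);
      return 0;
  }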
Regards,
Piotr