Re: More elaborate k_fifo API?


Paul Sokolovsky

On Sat, 1 Apr 2017 20:01:18 +0300
Johan Hedberg <johan.hedberg@intel.com> wrote:

> Hi Paul,
>
> On Sat, Apr 01, 2017, Paul Sokolovsky wrote:
> > > Note that with net_buf you must always use net_buf_get/put and
> > > never the k_fifo_get/put APIs directly to prevent corruption of
> > > the fragment lists.
> > Yeah, so what happens is that a net_pkt (hopefully still the new
> > name for net_nbuf) gets extracted from the global rx queue, gets a
> > fragment chain attached, and is passed to the socket receive
> > callback, just to be queued again onto a per-socket queue. Yes, that
> > would smash the main fragment pointer, so I already had to look for
> > a random place in net_pkt to save it. Fortunately, the place found
> > wasn't all that random - the user token:
> > https://github.com/micropython/micropython/blob/master/zephyr/modusocket.c#L106
> What net_buf_put() and net_buf_get() do is ensure the fragment
> chain always stays intact, i.e. you never have only a subset of it
> inside or outside of a k_fifo but always the full chain. So what you
> describe shouldn't be an issue. Feel free to look at the
> implementation of these functions in subsys/net/buf.c to understand
> better what I mean.
Johan, thanks for keeping up the discussion. I reviewed those functions
while preparing for the mini-summit, and did so again now to make sure I
don't mix things up. So, my impression: net_buf_put() and net_buf_get()
are internal, private functions of the network stack. That's based on 2
facts:

1. "External" parts don't really use them. For example, a network
driver calls net_recv_data(), and it's internal implementation details
that this function uses net_buf_put() to store fragment chain onto
internal RX fifo. Like net_buf_get() - it's called only by internal TX
thread.

2. What these functions do is pretty "weird" and hardly useful for
generic applications (see the sketch just below).
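
For reference, here's roughly what that pair does - paraphrased from my
reading of subsys/net/buf.c, from memory, so details may not match the
tree exactly:

/* Paraphrase, not verbatim Zephyr code. The whole chain goes into the
 * fifo as consecutive flat elements; a flag on each non-last fragment
 * lets the chain be re-linked on the way out.
 */
void net_buf_put(struct k_fifo *fifo, struct net_buf *buf)
{
    struct net_buf *tail;

    for (tail = buf; tail->frags; tail = tail->frags) {
        tail->flags |= NET_BUF_FRAGS;
    }

    k_fifo_put_list(fifo, buf, tail);
}

struct net_buf *net_buf_get(struct k_fifo *fifo, int32_t timeout)
{
    struct net_buf *buf, *frag;

    buf = k_fifo_get(fifo, timeout);
    if (!buf) {
        return NULL;
    }

    /* Pull the marked fragments back out and re-link the chain */
    for (frag = buf; frag->flags & NET_BUF_FRAGS; frag = frag->frags) {
        frag->frags = k_fifo_get(fifo, K_NO_WAIT);
        frag->flags &= ~NET_BUF_FRAGS;
    }

    return buf;
}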

To elaborate on the latter point, they (ab)use internal details of the
net_buf structure by putting different kinds of information, in a flat,
duck-typed fashion, onto a fifo. In the diagram below, "PNB" means
"packed in a net_buf":

pkt_header_PNB -> frag_PNB -> frag_PNB -> pkt_header_PNB -> frag_PNB

Such a flat, undifferentiated scheme is what makes it feel so weird.
That's not what an application programmer wants. I want packets to be
stored in the FIFO, and fragments to hang nicely off them, not
everything mixed up in a barely distinguishable manner inside the FIFO:

pkt_header -> pkt_header
    |             |
    v             v
  frag -> frag   frag
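
In code terms, the 2-level structure I keep reaching for would be
something like this (a hypothetical sketch - my_pkt and sock_queue are
made-up names, not an existing Zephyr API):

/* Hypothetical: only packets live in the fifo; fragments hang off
 * each packet and never enter the fifo themselves.
 */
struct my_pkt {
    void *fifo_reserved;    /* word reserved for k_fifo linkage */
    struct net_buf *frags;  /* fragment chain stays attached */
    /* ... per-packet protocol metadata ... */
};

/* Producer: the whole packet is a single fifo element */
k_fifo_put(&sock_queue, pkt);

/* Consumer: the fragment chain comes out intact by construction */
struct my_pkt *pkt = k_fifo_get(&sock_queue, K_FOREVER);
for (struct net_buf *frag = pkt->frags; frag; frag = frag->frags) {
    /* process frag->data / frag->len */
}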


So, I'm describing all that, trying to pinpoint the exact point of
contention with the current buffer handling in Zephyr. It may be that
it's just my inertia of mind, and it's just a few more steps to conclude
that both structures above approach the same data representation
somewhat orthogonally, and that almost the same algorithms are required
to process both. But so far, I still see the 1st structure as "weird",
and algorithms using it will be just as weird (and that's given that my
algorithms for the 2nd structure are already weird, because I have to go
out of my way to get a proper 2-level, hierarchical structure where
Zephyr enforces a flat one).


> > But with the work which prompted this email, it gets much worse. The
> > packet keeps sitting in the fifo until its data is fully processed.
> > So, I need to maintain the duality of a packet both being in the fifo
> > and having a fragment chain.
> I don't think that would be an issue considering what I describe
> above.
Based on the above, and in my mind, it wouldn't be. What I have is a
2-level nested loop, coroutine style: I iterate over the next packet in
the queue, and then over each data fragment of that packet. I process
only one fragment at a time (fully or not), and return. When called
again, I need access to *both* the last packet and the last or next
fragment. As packet headers and fragments are intermixed in the fifo,
the only 2 ways to do the processing would be:

1. Cache the packet header somewhere and drop it from the fifo - but
that's exactly what I'm trying to avoid: why spend more memory on it, if
it's already in the fifo?

2. Heavily mix and slice the FIFO, e.g. make sure I keep the packet
header at the head of the queue while dropping fragments from the next
slot. This essentially means bypassing and duplicating the existing
buffer management APIs.
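
For contrast, here's the kind of loop I'd like to write if the packet
could just stay at the head of the queue. It's a hypothetical sketch: it
assumes the 2-level structure from above (treat the net_pkt details as
approximate), plus a k_fifo_peek_head() that returns the head element
without dequeuing it - which is exactly the "more elaborate k_fifo API"
of the subject line:

/* Hypothetical: pkt stays queued while its fragments are consumed
 * one at a time, coroutine style; k_fifo_peek_head() is assumed.
 */
int sock_recv_chunk(struct k_fifo *q, uint8_t *out, size_t max)
{
    struct net_pkt *pkt = k_fifo_peek_head(q);  /* don't dequeue yet */
    struct net_buf *frag;
    size_t len;

    if (!pkt) {
        return 0;
    }

    frag = pkt->frags;
    len = frag->len < max ? frag->len : max;

    memcpy(out, frag->data, len);
    net_buf_pull(frag, len);

    if (!frag->len) {
        /* fragment fully consumed: unlink and free it */
        pkt->frags = frag->frags;
        net_buf_unref(frag);
    }

    if (!pkt->frags) {
        /* whole packet consumed: now actually dequeue and free it */
        k_fifo_get(q, K_NO_WAIT);
        net_pkt_unref(pkt);
    }

    return len;
}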

There's also another "d'oh" option - maybe I don't actually need all
those packet headers (maybe the only thing I need to do with them is
skip them). That would be an "elegant" solution where I'd still bypass
all the existing net_buf manipulation functions and employ their
*current* raw structure (and I can't even claim I understand it 100%).
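
Roughly, that header-skipping would look like the following - again a
sketch based on my (possibly wrong) understanding of the raw layout and
the internal NET_BUF_FRAGS convention shown earlier, so treat it as
pseudocode rather than a recommendation:

/* Pull elements off the flat fifo directly; a packet-header net_buf
 * carries no payload, so drop it and return the data fragment that
 * follows it. Relies entirely on internal details.
 */
struct net_buf *next_data_frag(struct k_fifo *fifo)
{
    struct net_buf *buf = k_fifo_get(fifo, K_NO_WAIT);

    if (buf && !buf->len) {
        struct net_buf *hdr = buf;

        buf = k_fifo_get(fifo, K_NO_WAIT);
        net_buf_unref(hdr);
    }

    return buf;
}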


> > Frankly speaking, even after the extensive discussion at the
> > mini-summit, I don't understand how it came to be that the net_pkt
> > object gets, with a lot of effort, disguised as a net_buf object,
> > whereas they're clearly wildly different types of objects, forming a
> > clear hierarchical relationship. The only explanation is the desire
> > to have fewer pools of different element sizes, but I'd say it's
> > already clear that the current design isn't saving memory, while it
> > makes everything *so* convoluted...
> I can't speak too much about nbuf or net_pkt since I don't really have
> to deal with that construct on the Bluetooth side. However, I thought
> the idea was to have net_pkt as a simple reference-counted context
> accessed through the user data of each net_buf belonging to a chain,
> i.e. it wouldn't be something disguising itself as a net_buf.
>
> Johan
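
(For my own understanding, here's how I read that design - a
hypothetical sketch of the idea, not actual Zephyr code; it assumes each
pool reserves room for one pointer of per-buffer user data:)

/* The packet context is a plain refcounted struct; every fragment's
 * user data points at it, so no net_buf has to masquerade as a
 * packet header.
 */
struct pkt_ctx {
    atomic_t ref;
    /* ... packet-wide metadata: addresses, ports, etc. ... */
};

static inline struct pkt_ctx *frag_pkt_ctx(struct net_buf *frag)
{
    return *(struct pkt_ctx **)net_buf_user_data(frag);
}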


--
Best Regards,
Paul

Linaro.org | Open source software for ARM SoCs
Follow Linaro: http://www.facebook.com/pages/Linaro
http://twitter.com/#!/linaroorg - http://www.linaro.org/linaro-blog
