Re: More elaborate k_fifo API?


Paul Sokolovsky
 

Hello Vinayak,

On Sun, 2 Apr 2017 00:11:11 +0200
Vinayak Kariappa <vinayak.kariappa@gmail.com> wrote:

Hi Paul,

I don't have the complete context, but your last email got my attention.

The Bluetooth controller today is bare-bones, and the immediate wish is to
transition to using net_buf.
Yes, it was discussed at the BUD17 mini-summit that the net_buf structure
comes from the Bluetooth subsystem and was just reused in the IP stack
implementation. Per my proposal in the previous mail, the net_buf structure
stays intact, so its users, current or future, are unaffected (well, I
actually propose to remove one field from it which was added ad hoc for
IP stack use only).

The entirety of my proposal is centered on revamping the IP-stack-only
packet metadata structures and clearly separating them from net_buf.

Hopefully, that addresses your concern.


In the bare-bones design, memq.c implements a FIFO using "link" elements
which hold "mem". That seems similar to your mention of a pool of
net_pkt_meta and net_buf.

In the controller, a "link", being fixed size, gets allocated from a shared
pool, whereas "mem" of different sizes comes from different pools, or can
even be an offset into another, larger "mem". Which pool to release to is,
of course, determined from the "mem" contents.
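
Roughly, the link/mem arrangement could be sketched as below (illustrative
names only, not the actual memq.c API):

/* Illustrative sketch of a sentinel-tail queue of fixed-size links: the
 * links come from one shared pool, while the "mem" they carry can come
 * from different pools, or even be an offset into another, larger "mem";
 * which pool to release to is decided later from the "mem" contents.
 */
struct memq_link {
        struct memq_link *next;
        void *mem;
};

/* Enqueue: the current (empty) tail takes the payload, and a spare link
 * becomes the new empty tail sentinel.
 */
static struct memq_link *memq_enqueue(struct memq_link *tail,
                                      struct memq_link *spare, void *mem)
{
        tail->mem = mem;
        tail->next = spare;

        return spare; /* new tail */
}

/* Dequeue (caller has checked head != tail, i.e. queue is non-empty):
 * hand out the head's payload; the caller recycles the old head link
 * back into the shared link pool.
 */
static struct memq_link *memq_dequeue(struct memq_link *head, void **mem)
{
        *mem = head->mem;

        return head->next; /* new head */
}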


On 1 Apr 2017, at 23:38, Paul Sokolovsky
<paul.sokolovsky@linaro.org> wrote:

On Sat, 1 Apr 2017 22:35:12 +0300
Paul Sokolovsky <Paul.Sokolovsky@linaro.org> wrote:

[]

To elaborate on the latter point, they (ab)use internal details of the
net_buf structure by putting different kinds of information in a flat,
duck-typed fashion onto a FIFO. In the diagram below, "PNB" means
"packed in net_buf":

pkt_header_PNB -> frag_PNB -> frag_PNB -> pkt_header_PNB -> frag_PNB

Such a flat, differentiation-less scheme is what makes it feel so
weird. That's not what an application programmer wants. I want
packets to be stored in the FIFO, with fragments nicely hanging
off them, not everything mixed up in a barely distinguishable
manner in the FIFO:

pkt_header  ->  pkt_header
    |               |
    v               v
frag -> frag      frag


So, I'm describing all that, trying to pinpoint the exact
contention point with the current buffer handling in Zephyr. It
may be that it's just my inertia of mind, and it's a few more steps
to conclude that both structures above approach the same data
representation somewhat orthogonally, and that almost the same
algorithms are required to process both. But so far, I still see
the 1st structure as "weird", and algorithms using it will be just as
weird (and that's given that my algorithms for the 2nd structure are
already weird, because I have to go out of my way to get a proper
2-level, hierarchical structure where Zephyr enforces a flat one).
OK, I did more homework, and I guess I would be ready to work around
the way Zephyr stores net_bufs in a FIFO on my side. For that, I'd
need to be able to determine which pool a net_buf belongs to, either
by a heuristic (size == 0 means the rx/tx packet header pools) or by
trying to break the static encapsulation of the pool structure.
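
For concreteness, the heuristic would be something along these lines
(is_pkt_header() is just an illustrative name, not an existing API):

#include <stdbool.h>
#include <net/buf.h>

/* Sketch only: the rx/tx "packet header" net_bufs are allocated from pools
 * with a data size of 0, while real data fragments carry payload space, so
 * buf->size can be used to tell the two kinds apart without breaking into
 * the (static) pool structures.
 */
static inline bool is_pkt_header(struct net_buf *buf)
{
        return buf->size == 0;
}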

So, that should be possible. But even with this extra homework, I
still fail to see what the benefit of designing it this way was
(pushing flat structures onto a FIFO, etc.). Just look at even the
minor matter of how net_buf_put()/net_buf_get() work: they need to
traverse this flat structure and set/reset a bit flag as a
workaround, instead of preserving the actual conceptual structure.
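
The kind of workaround I mean looks roughly like this (an illustration of
the pattern, not a copy of net/buf.c; IIRC the real code uses the
NET_BUF_FRAGS flag bit and an atomic put of the whole list, which this
sketch glosses over):

#include <stdint.h>
#include <kernel.h>
#include <net/buf.h>

/* Put: every fragment goes onto the fifo as its own flat element, with a
 * flag bit marking "another fragment follows me".
 */
static void put_flat(struct k_fifo *fifo, struct net_buf *buf)
{
        struct net_buf *frag, *next;

        for (frag = buf; frag; frag = next) {
                next = frag->frags;
                if (next) {
                        frag->flags |= NET_BUF_FRAGS;
                }
                k_fifo_put(fifo, frag);
        }
}

/* Get: pop the head, then keep popping while the flag says the next fifo
 * element is really a fragment of this packet, clearing the flag and
 * re-linking ->frags on the way.
 */
static struct net_buf *get_flat(struct k_fifo *fifo, int32_t timeout)
{
        struct net_buf *buf, *frag;

        buf = k_fifo_get(fifo, timeout);
        if (!buf) {
                return NULL;
        }

        for (frag = buf; frag->flags & NET_BUF_FRAGS; frag = frag->frags) {
                frag->frags = k_fifo_get(fifo, K_NO_WAIT);
                frag->flags &= ~NET_BUF_FRAGS;
        }
        frag->frags = NULL;

        return buf;
}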

So, Jukka, Tomasz, following and extending the discussion at the
mini-summit, why don't we take the chance to make the following change:

What is now struct net_nbuf gets spiritually superseded by the
following structure:

struct net_pkt_meta {
        /** FIFO uses first 4 bytes itself, reserve space */
        int _unused;

        struct net_buf *frags;

        /* [rest of fields needed, filtered from net_nbuf] */
};

struct net_pkt_meta is the only structure pushed onto the rx/tx
FIFOs; fragments are referenced as a 2nd dimension via its ->frags
pointer. The fields of struct net_buf can then be reviewed and some of
them moved into struct net_pkt_meta. For example, /** List pointer used
for TCP retransmit buffering */ sys_snode_t sent_list; - do we retransmit
at the level of fragments? No, we retransmit a complete TCP packet.
Likewise, struct net_pkt_meta doesn't need len and size, which are
always 0 for it.
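
To make the intended usage concrete, a tx path would then look roughly
like this (K_FIFO_DEFINE, k_fifo_put/get and net_buf_unref are existing
APIs; send_frags() and net_pkt_meta_unref() are placeholders, and
struct net_pkt_meta is as defined above):

#include <kernel.h>
#include <net/buf.h>

/* Placeholders for this sketch only: */
void send_frags(struct net_buf *frags);
void net_pkt_meta_unref(struct net_pkt_meta *pkt);

K_FIFO_DEFINE(tx_queue);

/* The packet metadata header is the single element that goes onto the
 * fifo (its first word is reserved for the fifo); the fragments stay
 * hanging off ->frags as the 2nd dimension.
 */
void tx_enqueue(struct net_pkt_meta *pkt, struct net_buf *frag_chain)
{
        pkt->frags = frag_chain;
        k_fifo_put(&tx_queue, pkt);
}

void tx_thread(void)
{
        while (1) {
                /* One get returns a whole packet; no flag-based
                 * reassembly of fragments is needed.
                 */
                struct net_pkt_meta *pkt = k_fifo_get(&tx_queue, K_FOREVER);

                send_frags(pkt->frags);
                net_buf_unref(pkt->frags);
                net_pkt_meta_unref(pkt);
        }
}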

Perhaps these changes will require the pool for struct net_pkt_meta to
be defined with a macro other than NET_BUF_POOL_DEFINE, but that's
good, because struct net_pkt_meta and struct net_buf are completely
different structures, and C is a statically typed language, so any
attempts at duck typing look strange and confusing.


What would be the drawbacks of such a design?


Thanks,
Paul

[]



--
Best Regards,
Paul

Linaro.org | Open source software for ARM SoCs
Follow Linaro: http://www.facebook.com/pages/Linaro
http://twitter.com/#!/linaroorg - http://www.linaro.org/linaro-blog
