Situation with net APIs testing

Paul Sokolovsky


As I'm approaching the final steps of preparing the BSD Sockets patchset
for submission, I'm looking into ways to add tests for it. Testing
networking functionality is inherently hard, because in general it
requires networking hardware, even if "virtual", e.g. tunslip6 and other
tools from the net-tools repo running on the host to support QEMU
networking. During prototyping, I learned there are loopback
capabilities when binding and connecting to the same netif, but that
still requires net-tools running just to get QEMU to start up with
networking support.

Well, I took a look at tests/net and saw a whole bunch of tests,
whoa! I gave tests/net/tcp a try: some cases passed, some failed,
hmm. But then I killed the net-tools/ script, and the tests
ran in the same manner. Whoa, so we have a means to run networking tests
without any requirements on the host side, which means we can run them
as part of the sanitycheck testsuite! But 8 tests in tests/net/ have
build_only=true; any wonder they're broken?

Anyway, I looked at what's involved in net-tools-free running, and
figured it's CONFIG_NET_L2_DUMMY. I added it to my sockets test, and got
only a segfault in return. After debugging, it turned out to be the same
issue already faced by me and other folks: if there are no netifs
defined, the networking code crashes (instead of printing a clear
error to the user):
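For reference, the net-tools-free configuration boils down to a prj.conf
fragment along these lines (a minimal sketch; the exact option set will
vary per test, and anything beyond CONFIG_NET_L2_DUMMY here is my
assumption about a typical setup):

```
# Enable the networking stack at all
CONFIG_NETWORKING=y
# Use the dummy L2, so no host-side net-tools are needed
CONFIG_NET_L2_DUMMY=y
```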

But how do the tests/net/ tests run then without crashing? Here's the answer:

zephyr/tests/net$ grep NET_DEVICE -r * | wc
22 42 1532

So, almost each and every test defines its own test interface.

One would think that if we have 22 repetitive implementations of test
interfaces, whose main purpose is to be just loopback interfaces, then
we'd have a loopback interface in the main codebase. But nope, as
confirmed by Jukka on IRC, we don't.
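To illustrate what such a shared loopback interface amounts to, here's a
standalone toy sketch (illustrative names only, not Zephyr's actual
net_if/L2 API): the send path simply hands the packet straight back to
the same interface's receive side.

```c
#include <string.h>

/* Toy loopback "interface": illustrative, not Zephyr's API. */
#define LOOP_BUF_SIZE 128

struct loop_iface {
    char rx_buf[LOOP_BUF_SIZE];
    size_t rx_len;
};

/* "Transmit": deliver the packet straight back to the same iface. */
int loop_send(struct loop_iface *iface, const void *pkt, size_t len)
{
    if (len > LOOP_BUF_SIZE) {
        return -1;
    }
    memcpy(iface->rx_buf, pkt, len);
    iface->rx_len = len;
    return 0;
}

/* "Receive": hand back whatever was last looped, then clear it. */
size_t loop_recv(struct loop_iface *iface, void *pkt, size_t max)
{
    size_t n = iface->rx_len < max ? iface->rx_len : max;

    memcpy(pkt, iface->rx_buf, n);
    iface->rx_len = 0;
    return n;
}
```

That's all a test interface needs to do, which is exactly why having 22
private copies of one, instead of a single in-tree loopback, is odd.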


1. Writing networking tests is hard, but in Zephyr it takes
extraordinary, agonizing effort. The most annoying part is that all the
needed pieces are there, but instead of presenting a nice picture, they
form a mess which greets you with crashes if you try to change anything.

2. There are existing big (~20K each) tests which fail. Apparently
because they aren't run, so they bitrot. Why do we need these huge,
detailed tests if we don't run them? (An alternative explanation is that
there's something wrong with my system, and yep, I'd be glad to know
what I still don't do right with Zephyr after working on it for a year.)
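On the crash in point 1: the fix is presumably just a guard where the
default netif is looked up. A standalone sketch of the pattern (toy
names of my own, not the real Zephyr API):

```c
#include <stddef.h>
#include <stdio.h>

/* Hypothetical stand-in for a netif; illustrative only. */
struct fake_netif {
    const char *name;
};

/* Simulated interface table: empty, as in a build with no netifs. */
static struct fake_netif *iface_table[4];
static int iface_count;

/* Return NULL when no interface is registered, instead of handing the
 * caller a garbage pointer it will blindly dereference. */
struct fake_netif *fake_netif_get_default(void)
{
    if (iface_count == 0) {
        return NULL;
    }
    return iface_table[0];
}

/* Caller-side pattern: check for NULL and fail loudly with a clear
 * message rather than crashing deep inside the stack. */
int fake_bind(void)
{
    struct fake_netif *iface = fake_netif_get_default();

    if (iface == NULL) {
        fprintf(stderr, "No network interface configured\n");
        return -1;
    }
    return 0;
}
```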

I'd be glad if more experienced developers could confirm whether it's
really like the above, or whether I'm missing something. And I'll be
happy to work on the above issues, but in the meantime I'll need to
submit BSD Sockets with rather bare and hard-to-run (not automated)
tests due to this situation.


Best Regards,
Paul | Open source software for ARM SoCs