Re: About the testing framework

Rohit Grover

Yes, calling the test case repeatedly isn't important, and may even complicate things. Given that threads can yield/block, this should be easy to abstract into the framework.

I would also like to propose importing test macros from Unity. It would then be one less thing to maintain.
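To give an idea of the style being proposed, here is a minimal, self-contained approximation of Unity-style assertion macros (the real ThrowTheSwitch/Unity project provides these and many more, which is exactly why importing them beats maintaining our own; this sketch is illustrative only):

```c
#include <stdio.h>

/* Minimal approximation of Unity-style assertion macros. The real Unity
 * project supplies a far richer set (typed comparisons, messages, etc.);
 * this sketch only shows the general shape. */
static int failures;

#define TEST_ASSERT_EQUAL_INT(expected, actual)                        \
    do {                                                               \
        long e_ = (long)(expected), a_ = (long)(actual);               \
        if (e_ != a_) {                                                \
            printf("FAIL %s:%d expected %ld was %ld\n",                \
                   __FILE__, __LINE__, e_, a_);                        \
            failures++;                                                \
        }                                                              \
    } while (0)

#define TEST_ASSERT_TRUE(cond) TEST_ASSERT_EQUAL_INT(1, !!(cond))

static void test_arithmetic(void)
{
    TEST_ASSERT_EQUAL_INT(4, 2 + 2);
    TEST_ASSERT_TRUE(1 < 2);
}
```

Pulling the macros in from Unity would give us consistent failure reporting without owning this code.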


From: Hannikainen, Jaakko [mailto:jaakko.hannikainen(a)]
Sent: 17 August 2016 09:25
To: Rohit Grover; devel(a)
Subject: RE: About the testing framework


Thanks for the feedback and suggestions!

I believe the framework can be extended to support asynchronous testing. An assertion function that either blocks until a condition or timeout, or calls a callback function, should be relatively easy to implement. I think the waiting part should be abstracted into the framework itself, so that developers can focus on the test itself rather than the framework around it. In particular, calling the same function repeatedly with a call count seems like a bad code smell to me; we can certainly do better.
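As a rough sketch of such a blocking assertion (all names here are hypothetical, not an existing API; a Zephyr version would yield or sleep between checks rather than busy-poll as this host-side sketch does):

```c
#include <assert.h>
#include <stdbool.h>
#include <time.h>

/* Hypothetical blocking assertion: poll a predicate until it holds or a
 * timeout expires. On Zephyr this would yield/block instead of spinning. */
static bool wait_for(bool (*predicate)(void), unsigned timeout_ms)
{
    clock_t deadline = clock() + (clock_t)timeout_ms * CLOCKS_PER_SEC / 1000;
    while (clock() < deadline) {
        if (predicate())
            return true;
    }
    return predicate(); /* one final check at the deadline */
}

#define ASSERT_EVENTUALLY(pred, timeout_ms) \
    assert(wait_for((pred), (timeout_ms)))

/* Example: a completion flag that a callback or ISR would set. */
static volatile bool operation_done;
static bool operation_is_done(void) { return operation_done; }
```

A test would then kick off the asynchronous operation and end with a single ASSERT_EVENTUALLY(operation_is_done, 200), with no repeat-invocation bookkeeping.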

From: Rohit Grover [Rohit.Grover(a)]
Sent: Wednesday, August 17, 2016 10:40
To: Hannikainen, Jaakko; devel(a)
Subject: [devel] Re: About the testing framework

Hello Community,

Thanks for introducing a test framework.

It seems that this proposed test framework only deals with synchronous test execution. It is fine for a start, but this may prove to be a limitation.

Consider the pattern offered by 'utest' from mbed, where a test case handler may execute asynchronously as many times as needed until the case succeeds. A test case for a peripheral may return partway (after setting up an asynchronous operation) with a CaseTimeout(uint32_t ms) + CaseRepeatAll (for instance) to indicate that the harness should wait for validation from asynchronous activity before proceeding (in this case, before repeating the test).

The following shows a sample test for flash erase that would be valid for flash peripherals that perform erase either synchronously or asynchronously.

control_t test_flashErase(size_t call_count)
{
    switch (call_count) { /* call_count is incremented automatically by the harness for every repeat-invocation */
    case 0:
        /* erase the sector at 'addr' */
        LOG_INFO("erasing sector at addr %lu", (uint32_t)addr);
        rc = drv->Erase(addr, erase_unit); /* assume the system is set up to call eraseCompleteCallback() upon completion of Erase */
        TEST_ASSERT(rc >= 0);
        if (rc == 0) { /* the operation is still active in the background */
            TEST_ASSERT_EQUAL(true, capabilities.asynchronous_ops);
            return CaseTimeout(200) + CaseRepeatAll; /* wait up to 200 ms for the operation to complete before proceeding */
        } else { /* synchronous completion of erase */
            TEST_ASSERT_EQUAL(erase_unit, rc);
            verifyBytePattern(addr, erase_unit, (uint8_t)0xFF);
        }
        /* intentional fall-through */

    case 1:
        /* proceed with testing of erased flash */
        ...
    }
}

void eraseCompleteCallback(int32_t status, STORAGE_OPERATION operation)
{
    verifyBytePattern(addr, erase_unit, (uint8_t)0xFF);

    Harness::validate_callback(); /* tell the test framework to proceed with the test */
}

Being able to write asynchronous tests will enhance the quality and reach of hardware testing.

From: Hannikainen, Jaakko [mailto:jaakko.hannikainen(a)]
Sent: 17 August 2016 07:56
To: devel(a)
Subject: [devel] Re: About the testing framework


I have now implemented an initial version of the framework, and I submitted it at

While developing, it became apparent that the current testing stack also benefits from a more generic way of testing. I made the stack so that it works transparently both on Zephyr and without Zephyr. I implemented a small testing program for native (as in your own computer) testing and converted an existing test to the new framework as a sample.

The framework in general is a work in progress, but in its current state it is already usable for general testing. When I ran some dummy native tests, I could run about 30 tests within 5 seconds on my laptop. Compare this to current QEMU testing, where a single test took 9 seconds on my computer. Clearly this would improve development speed for test-driven development: initial tests take much less time, and tests can be written simultaneously for QEMU and native runs, apart from a couple of lines of boilerplate for the native code.

From: Hannikainen, Jaakko [jaakko.hannikainen(a)]
Sent: Wednesday, August 03, 2016 15:29
To: devel(a)
Subject: [devel] About the testing framework

Currently, the testing stack consists of an unsorted bunch of integration and end-to-end tests. There is nothing wrong with having integration tests, but a lot of the current tests could be done at least partially as plain unit tests. This way we could skip a lot of unneeded compile and run time, as much of the testable code is written in portable C. The unit tests would then not have to do anything except compile the single .c file containing the code and link it with the testing framework. The tests would also run a lot faster than the current stack, since they can run natively on the host without dragging in QEMU and the rest of Zephyr OS.

Including a unit testing library (either existing or a new one) would have multiple benefits for the entire system:

1) Unit testing enables testing static functions, rather than having to stick with the public API exposed by the module. This way all functions can be tested, for example the edge cases of a static validation function.

2) Writing unit tests is also a lot simpler than writing complex integration tests. This would eventually result in increased test coverage, since writing tests would have a lower barrier to entry. This would especially be the case if the project started to enforce a minimum test-coverage percentage for all incoming patches (which would have to exclude e.g. arch-specific code).

3) Writing native tests enables running Valgrind on the tests, exposing bugs like invalid pointer dereferences and uninitialized variables. In certain cases, statically initialized code could be turned into dynamically allocated code by mocking the respective functions, which would also allow using Valgrind to hunt down memory leaks.
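To illustrate point 1), here is a minimal sketch of testing a module-internal validator (the validate_port() function and its rules are made up; in a real unit test the function would live in the module's .c file, and the test would reach it with something like #include "module_under_test.c"):

```c
#include <assert.h>

/* Made-up example of a static, module-internal validation function.
 * In practice the test file would #include the module's .c source to
 * gain access to its static functions. */
static int validate_port(int port)
{
    return port > 0 && port <= 65535;
}

static void test_validate_port_edges(void)
{
    assert(!validate_port(0));      /* just below the valid range */
    assert(validate_port(1));       /* lower edge */
    assert(validate_port(65535));   /* upper edge */
    assert(!validate_port(65536));  /* just above the valid range */
}
```

None of these edge cases are reachable through an integration test unless some public code path happens to feed the function exactly these values.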

Although C lags a bit behind languages like Java for mocking, GCC's -Wl,--wrap works well for isolating single functions for testing. I tested out a library called CMocka (licensed under Apache 2.0), which seemed to work pretty well. Writing our own testing library isn't really that hard either; an initial usable version could be done in a day or two.
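For the --wrap approach, a sketch of the shape (the flash_read() API below is made up for illustration; the mock only intercepts calls when the test binary is linked with -Wl,--wrap=flash_read, which makes the linker redirect references to flash_read into __wrap_flash_read and makes __real_flash_read name the original):

```c
#include <stddef.h>

/* "Real" implementation, normally living in the module under test. */
int flash_read(size_t addr)
{
    (void)addr;
    return -1; /* pretend the hardware is absent on the host */
}

/* Mock supplied by the test. Only active when the binary is linked with
 *     gcc -Wl,--wrap=flash_read test.c module.c
 * at which point every call to flash_read() lands here instead, and
 * __real_flash_read() (not defined in this sketch) reaches the original. */
int __wrap_flash_read(size_t addr)
{
    (void)addr;
    return 42; /* canned value, no hardware needed */
}
```

This lets a native test replace exactly one hardware-facing function while the rest of the module is compiled unmodified.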

I propose including a unit testing framework in the Zephyr project (1.6.0 as target), and would like some discussion about how it should be implemented. Should we write our own test framework or use an existing one? What kind of adaptations to the existing test frameworks would be needed, if any? Should code coverage metrics be enforced so that we could have actual CI, rather than Jenkins just running some tests?

IMPORTANT NOTICE: The contents of this email and any attachments are confidential and may also be privileged. If you are not the intended recipient, please notify the sender immediately and do not disclose the contents to any other person, use it for any purpose, or store or copy the information in any medium. Thank you.
