Re: initial mail for zephyr testing plan

Aga, Svein



I’ve proposed some changes/additions below in blue + some comments and questions.  


A more general comment: the Zephyr project can define a top-level master test plan that is common for everyone contributing. Each subsystem, driver, and the like can then have its own test plan specifying the additions to and deviations from the Zephyr master test plan. That would make traceability possible from the top level down to subsystem testing. Each subsystem can define its own scope within a V-model, with proper (automated) test reports. The point is not necessarily to impose a strict testing regime, only good practices, while also giving users the insight they need to properly assess the risk of using a subsystem. This is what big companies typically ask for in quality audits.
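To make the traceability idea concrete, here is a minimal sketch of how master-plan requirements could be linked down to subsystem test cases. All names (Requirement, SubsystemTestCase, the MTP-* and KERNEL-TC-* identifiers) are hypothetical illustrations, not existing Zephyr artifacts:

```python
from dataclasses import dataclass

@dataclass
class Requirement:
    """A requirement stated in the top-level Zephyr master test plan."""
    req_id: str
    text: str

@dataclass
class SubsystemTestCase:
    """A subsystem-level test case tracing back to master-plan requirements."""
    case_id: str
    covers: list  # requirement ids this case claims to cover

def coverage_report(requirements, cases):
    """Return the master-plan requirement ids not covered by any case."""
    covered = {req_id for case in cases for req_id in case.covers}
    return sorted(r.req_id for r in requirements if r.req_id not in covered)

# Example: one covered and one uncovered master-plan requirement.
reqs = [Requirement("MTP-1", "kernel scheduling is tested"),
        Requirement("MTP-2", "driver init errors are reported")]
cases = [SubsystemTestCase("KERNEL-TC-7", covers=["MTP-1"])]
print(coverage_report(reqs, cases))  # → ['MTP-2']
```

A report like this, generated automatically per subsystem, is one way to give auditors the top-to-bottom traceability mentioned above.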




Svein Aga


From: testing-wg@... [mailto:testing-wg@...] On Behalf Of Hake Huang
Sent: 17 September 2018 18:06
To: testing-wg@...
Subject: [testing-wg] initial mail for zephyr testing plan


Hi All,


I have drafted a test plan based on the IEEE 829 template.


I propose we start by defining the following:



1.     Testing goals:

a)      Make it clear what we want to achieve with the Zephyr master test plan.

2.     Testing Items

a)      Hardware platforms

b)      Software components

e.g. the Zephyr project can be divided into:

1. kernel

2. board drivers

       Each company shall have its own test policy for its drivers

3. subsystems

4. samples / acceptance tests

5. unit tests

6. system tests


3.     How the existing samples/acceptance tests, system tests, and unit tests meet our test requirements.

4.     Long-term / short-term quality strategy for test items, and standards.

a)      I think for the short term, we should ensure that all features have test coverage, then function coverage, then line/branch coverage (based on requirements)

Svein: Some testing literature refers to maturity levels. One option is to define a few levels and set milestones for each. This would show our current view of where we are and where we are going. The milestones can, of course, be redefined as we progress.
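The staged strategy above (feature coverage first, then function, then line/branch) could be expressed as quality gates with a milestone per level; a hedged sketch, where the gate names and thresholds are illustrative and not agreed project targets:

```python
# Staged coverage gates matching the proposed short-term strategy.
# Thresholds are placeholders for discussion, not agreed numbers.
GATES = [
    ("feature", 1.00),      # every feature has at least one test
    ("function", 0.80),
    ("line_branch", 0.70),
]

def first_failing_gate(measured):
    """Return the name of the first gate whose measured coverage falls
    below its threshold, or None if all gates pass. `measured` maps a
    gate name to a coverage ratio in [0, 1]."""
    for name, threshold in GATES:
        if measured.get(name, 0.0) < threshold:
            return name
    return None
```

Evaluating the gates in order mirrors the maturity-level idea: a subsystem's current level is simply the last gate it passes.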

b)      We should have use-case definitions for subsystems, and this needs help from the TSC.

Svein: Could this be the same as how we derive test requirements for a subsystem's acceptance tests?

5.     V-model and test-level requirements

a)      Define a Zephyr V-model (maybe as simple as unit, system, and acceptance)

b)      The Zephyr master test plan defines the general test requirements within the V-model

c)      Each subsystem defines its own test requirements within the V-model

6.     Issue reporting and tracking

a)      Is the current process enough: report on GitHub -> reproduce -> fix -> close?

b)      Currently, issue severity is defined by the code owner; would this be acceptable?
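The flow in 6a can be pictured as a small state machine, which makes the allowed transitions explicit; a hypothetical sketch for discussion, not a description of the actual GitHub process:

```python
# Allowed transitions in the proposed issue flow:
# reported -> reproduced -> fixed -> closed.
TRANSITIONS = {
    "reported": {"reproduced"},
    "reproduced": {"fixed"},
    "fixed": {"closed"},
    "closed": set(),
}

def advance(state, next_state):
    """Move an issue to `next_state`, rejecting invalid jumps
    (e.g. closing an issue that was never reproduced)."""
    if next_state not in TRANSITIONS[state]:
        raise ValueError(f"cannot go from {state} to {next_state}")
    return next_state
```

Writing the flow down this way would also make it easy to add states later (e.g. a triage step where severity is assigned by someone other than the code owner).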

7.     Test environment definition and tools

a)      QEMU simulation environment (already defined)

b)      Real-board testing (each company shall provide this)

c)      Simulation such as Renode (the working group shall define this in the future)

d)      Which tools we recommend using

                 i.          TestRail report scripts

                ii.          Renode: do we have free support from the Renode project?

1.      I checked the source code; I understand they use the CMSIS-SVD file to get the memory map of the SoC, but how would Renode know the SoC driver model? Other simulators need a fast model to define the driver behavior.
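For context on the CMSIS-SVD point: SVD is an XML format describing peripherals and registers, so a simulator can read the memory map generically from it. Below is a minimal sketch that extracts peripheral base addresses from a hand-written SVD-style fragment using only the Python standard library (the fragment and its addresses are illustrative, not a real SoC). Note this is exactly the limitation raised above: the SVD gives register layout only, not driver behavior, so a behavioral model is still needed on top.

```python
import xml.etree.ElementTree as ET

# A tiny hand-written fragment in CMSIS-SVD style (illustrative only).
SVD_FRAGMENT = """
<device>
  <peripherals>
    <peripheral>
      <name>UART0</name>
      <baseAddress>0x40006A00</baseAddress>
    </peripheral>
    <peripheral>
      <name>GPIOA</name>
      <baseAddress>0x400FF000</baseAddress>
    </peripheral>
  </peripherals>
</device>
"""

def peripheral_map(svd_xml):
    """Extract {peripheral name: base address} from an SVD document."""
    root = ET.fromstring(svd_xml)
    return {p.findtext("name"): int(p.findtext("baseAddress"), 16)
            for p in root.iter("peripheral")}

mapping = peripheral_map(SVD_FRAGMENT)  # names to integer base addresses
```

This is the structural half of what Renode gets from an SVD; the behavioral half (what the registers do) has to come from a separate peripheral model.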


Thanks for your comments!



