Re: initial mail for zephyr testing plan
Thanks a lot. Some feedback inline, on lines beginning with [Hake].
From: testing-wg@... [mailto:testing-wg@...] On Behalf Of Aga, Svein
Sent: Friday, September 28, 2018 4:46 AM
Subject: Re: [testing-wg] initial mail for zephyr testing plan
I’ve proposed some changes/additions below in blue + some comments and questions.
A more general comment is that the Zephyr project can define a top-level master test plan that is common for everyone contributing. Each subsystem, driver, and the like can then have its own test plan, specifying the additions and deviations from the Zephyr master test plan. It would then be possible to have traceability from the top level down to subsystem testing. Each subsystem can define its own scope within a V-model with proper (automated) test reports. The point is not necessarily to impose a strict testing regime, only good practices, but also to provide insight to users so they can properly assess the risk of using a subsystem. This is what big companies typically ask for in quality audits.
I have initialized a test plan based on the IEEE 829 template: https://github.com/zephyrproject-rtos/qm/pull/2.
I propose we start by defining the following:
1. Testing goals:
a) Make it known what we want to achieve with the Zephyr master test plan.
[Hake] Usually the goals come from requirements, so I would prefer to have a requirements document in the community to derive the goals from.
2. Testing Items
a) Hardware platforms
b) Software components
e.g. the Zephyr project can be divided into:
i. board drivers
Each company shall have its own test policy for its drivers.
ii. samples / acceptance tests
[Hake] Acceptance testing is a test type, not a test item. A test item is something we deliver.
iii. unit tests
iv. system tests
[Hake] The system tests in the Zephyr deliverables are the samples. Do we need a system tests folder, and should we start adding system tests?
3. How the existing samples/acceptance tests, system tests and unit tests meet our test requirements.
[Hake] The Zephyr deliverables do not include acceptance tests or system tests, only samples. Do we need to categorize them?
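One way to categorize without new folders: each sample in the tree already carries test metadata in a sample.yaml consumed by sanitycheck, and its tags could act as the category (e.g. a "system" or "acceptance" tag). A sketch, loosely modeled on the hello_world sample; exact field names and values are from memory and may not match the current schema:

```yaml
# sample.yaml: metadata that lets a sample double as an automated test.
sample:
  name: hello world
tests:
  sample.basic.helloworld:
    tags: samples            # a "system" or "acceptance" tag could go here
    harness: console         # pass/fail decided from console output
    harness_config:
      type: one_line
      regex:
        - "Hello World! (.*)"
```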
4. Long-term / short-term quality strategy for test items, and standards.
a) I think for the short term we shall first ensure that all features have test coverage, then function coverage, then line/branch coverage (based on requirements).
Svein: There is some testing literature that refers to maturity levels. One option is to define a few such levels and set milestones for each. This would show our current view on where we are and where we want to go. Of course, the milestones can be redefined as we progress.
[Hake] Can you share that literature? We could apply the maturity levels once they are accepted by all.
b) We shall have use case definitions for the subsystems, and this needs help from the TSC.
Svein: Could this be the same as deriving the test requirements for a subsystem's acceptance tests?
5. V-model and test level requirements
[Hake] The V-model is a software process; I think we need the Zephyr TSC to adopt this model first, as they currently use a Scrum process.
a) Define a Zephyr V-model (maybe as simple as unit, system and acceptance)
b) The Zephyr master test plan defines the general test requirements within the V-model
c) Each subsystem defines its own test requirements within the V-model
6. Issue reporting and tracking
a) Is the current process sufficient: report on GitHub -> reproduce -> fix -> close?
b) Currently the issue severity is defined by the code owner; is this acceptable?
7. Test environment definition and tools
a) QEMU simulation environment (already defined)
b) Real board testing (each company shall provide this)
c) Simulators such as Renode (the working group shall define this in the future)
d) What tools we recommend using
i. TestRail report scripts
ii. Renode; do we have free support from Renode?
1. I checked the source code; I understand they use the CMSIS-SVD file to get the memory map of the SoC, but how does Renode know the SoC's driver model? Other simulators need a fast model to define the driver behavior.
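My understanding from Renode's documentation (hedged; worth confirming with the Renode team): peripheral behavior does not come from the SVD at all. The platform is described in a .repl file that binds bus addresses to built-in peripheral model classes implemented in C# inside Renode, while the SVD is mainly used to annotate register accesses in logs. A fragment might look like this; the peripheral class and addresses below are illustrative, not taken from a real board file:

```
// Hypothetical .repl platform description fragment.
// UART.PL011 refers to a peripheral model class shipped with Renode;
// the behavior lives in that C# class, not in the SVD.
uart0: UART.PL011 @ sysbus 0x4000C000
    -> nvic@5

flash: Memory.MappedMemory @ sysbus 0x00000000
    size: 0x40000
```

So for an SoC whose peripherals Renode does not already model, someone would have to write the corresponding model classes, which is similar in effort to the fast models used by other simulators.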
Thanks for your comments!