Re: initial mail for zephyr testing plan
Comments inline [Svein].
From: testing-wg@... [mailto:testing-wg@...] On Behalf Of Hake Huang
Sent: 15 October 2018 18:23
Subject: Re: [testing-wg] initial mail for zephyr testing plan
Thanks a lot. Some feedback inline, in lines starting with [Hake].
I’ve proposed some changes/additions below in blue + some comments and questions.
A more general comment: the Zephyr project can define a top-level master test plan that is common for everyone contributing. Each subsystem, driver, and the like can have its own test plan, specifying the additions and deviations from the Zephyr master test plan. It would then be possible to have traceability from the top level down to subsystem testing. Each subsystem can define its own scope within a v-model with proper (automated) test reports. The point is not necessarily to impose a strict testing regime, only good practices, but also to give users insight so they can properly assess the risk of using a subsystem. This is what is typically asked for in quality audits by big companies.
I have initialized a test plan based on the IEEE 829 template: https://github.com/zephyrproject-rtos/qm/pull/2.
I propose we start by defining the following:
1. Testing goals:
a) Let it be known what we want to achieve with Zephyr master test plan.
[Hake] Usually the goals come from requirements, so I would prefer to have a requirements document in the community that sets the goals.
2. Testing Items
a) Hardware platforms
b) Software components
e.g. the Zephyr project can be divided into:
2. boards drivers
Each company shall have its own test policy for its drivers.
4. samples / acceptance tests
[Hake] Acceptance tests are a test type, not a test item. A test item is something we deliver.
[Svein]: Yes, but you can think of the samples as acceptance tests. If the samples do not work, the test item (or the test) is broken. If the samples can provide a PASS/FAIL status, they could be treated as acceptance tests and be part of the CI.
5. unit tests
6. system tests
[Hake] The system tests in the Zephyr delivery are samples. Do we need to have a system tests folder and add system tests?
[Svein]: The samples are not really system tests; a sample just shows how to use the subsystem, although it is important to have (see above for acceptance tests). System tests do more than acceptance tests, i.e. they are more thorough: functional valid/invalid, non-functional (performance/throughput, robustness, stress), power measurements, backwards compatibility, conformance, …
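The idea above, treating samples with a PASS/FAIL verdict as CI acceptance tests, could be sketched like this. The marker strings and the helper function are purely illustrative assumptions, not an existing Zephyr tool:

```python
# Sketch (assumption): gate CI on a sample by scanning its captured console
# log for a success marker. Marker strings here are illustrative only.

SUCCESS_MARKER = "PROJECT EXECUTION SUCCESSFUL"
FAILURE_MARKERS = ("PROJECT EXECUTION FAILED", "FATAL ERROR", "ASSERTION FAIL")

def classify_sample_output(log: str) -> str:
    """Return a PASS/FAIL verdict for a captured sample console log."""
    if any(marker in log for marker in FAILURE_MARKERS):
        return "FAIL"
    if SUCCESS_MARKER in log:
        return "PASS"
    # No explicit verdict printed: treat the sample as broken.
    return "FAIL"

if __name__ == "__main__":
    print(classify_sample_output("booting...\nPROJECT EXECUTION SUCCESSFUL\n"))  # PASS
```

A CI job would run the sample (on QEMU or a real board), capture the console output, and fail the pipeline on anything other than PASS.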
3. How the existing samples/acceptance tests, system tests and unit test meet our test requirements.
[Hake] The Zephyr delivery does not have acceptance tests or system tests, only samples. Do we need to categorize them?
[Svein]: See the above comments on system and acceptance tests. It is useful to categorize them for the long term: you quickly know what kind of test has failed and possibly the impact. After some time collecting statistics you can extract reports to analyse which kinds of failures are more frequent than others and quantify them. You can then also consider means to reduce them. See the next comment on test process improvements.
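As a sketch of the categorization and statistics idea, assuming hypothetical test records tagged with a category (the record format and test names are made up for illustration):

```python
from collections import Counter

# Hypothetical test records: (test name, category, result).
results = [
    ("kernel.sched.preempt", "unit",       "PASS"),
    ("drivers.spi.loopback", "system",     "FAIL"),
    ("samples.hello_world",  "acceptance", "PASS"),
    ("drivers.i2c.burst",    "system",     "FAIL"),
]

def failures_by_category(records):
    """Count FAILs per test category so failure trends can be quantified."""
    return Counter(cat for _name, cat, res in records if res == "FAIL")

print(failures_by_category(results))  # Counter({'system': 2})
```

Aggregated over weeks of CI runs, such counts are exactly the input needed for the report-driven improvement loop described above.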
4. Long term / short term quality strategy for test items, and standards.
a) I think for the short term we shall ensure that all features have test coverage, then function coverage, then line/branch coverage (based on requirements).
[Svein]: There is some testing literature that refers to maturity levels. One option is to define a few levels with milestones for each. This will show our current view on where we are and where to go. Of course, the milestones can be redefined as we progress.
[Hake] Can you share this literature? We may apply the maturity levels once accepted by all.
[Svein]: A few references:
- Test Process Improvements (TPI) next - https://improvement.polteq.com/en/tpi-next/
- https://www.tmmi.org/tmmi-model/ Also see the figure. At which level do you think Zephyr is? IMO at level 1 or 2.
b) We shall have use case definitions for subsystems, and this needs help from the TSC.
It might not be that Zephyr follows one model to its full extent, but some parts could still be relevant to follow.
[Svein]: Could this be the same as how to get test requirements for acceptance tests for a subsystem?
5. V-model and test level requirements
[Hake] The V-model is a SW process; I think we need the Zephyr TSC to adopt this model first. Currently they use a scrum process.
[Svein]: It’s good to involve the TSC for the development process, but regardless you can apply the v-model purely for testing. Typically the left branch of the v-model will still happen (higher-level requirements, system requirements/design, unit/component requirements/design), just with different names. Defining a v-model helps you know what kinds of tests are required for the different development activities.
a) Define a Zephyr v-model (maybe as simple as unit, system and acceptance)
b) Zephyr Master test plan defines the general test requirements within the v-model
c) Subsystem defines the subsystem test requirements within the v-model
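One lightweight way to express (b) and (c), a master plan plus per-subsystem deviations, is sketched below. The plan structure, the level names, and the Bluetooth example are assumptions for illustration, not an agreed format:

```python
# Sketch (assumption): the master test plan sets per-level requirements;
# each subsystem declares only its deviations and additions.
master_plan = {
    "unit":       {"coverage": "function", "required": True},
    "system":     {"coverage": "feature",  "required": True},
    "acceptance": {"coverage": "samples",  "required": True},
}

# Hypothetical subsystem: overrides one flag, adds a conformance dimension.
bluetooth_deviations = {
    "unit":   {"required": False},                 # deviation from master plan
    "system": {"extra": ["conformance"]},          # addition to master plan
}

def effective_plan(master, deviations):
    """Overlay subsystem deviations on the master plan, level by level."""
    plan = {level: dict(reqs) for level, reqs in master.items()}
    for level, overrides in deviations.items():
        plan.setdefault(level, {}).update(overrides)
    return plan

plan = effective_plan(master_plan, bluetooth_deviations)
```

Because only the deltas are stored per subsystem, traceability back to the master plan falls out for free: anything not overridden is inherited unchanged.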
6. Issue reporting and tracking
a) Would the current process be enough? Reporting on GitHub -> reproduce -> fix -> close.
b) Currently the issue severity is defined by the code owner; would this be acceptable?
7. Test environment definition and tools
a) QEMU simulation environment (already defined)
b) Real board testing (each company shall provide this)
c) Simulation such as Renode (the working group shall define it in the future)
d) Which tools do we recommend to use?
i. TestRail report scripts
ii. Renode; do we have free support from Renode?
1. I checked the source code; I understand they use the CMSIS-SVD file to get the memory map of the SoC, but how would Renode know the SoC driver model? Other simulators need a fast model to define the driver behavior.
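To illustrate that point: an SVD file only yields addresses, not behavior. A minimal sketch with a made-up SVD fragment follows (this is plain XML parsing for illustration, not Renode's actual parsing code):

```python
import xml.etree.ElementTree as ET

# Minimal, made-up CMSIS-SVD fragment: it describes *where* a peripheral's
# registers live, not *how* the peripheral behaves. Behavioral models must
# come from somewhere else.
SVD = """<device>
  <peripherals>
    <peripheral>
      <name>UART0</name>
      <baseAddress>0x40002000</baseAddress>
    </peripheral>
  </peripherals>
</device>"""

def memory_map(svd_text):
    """Extract {peripheral name: base address} from an SVD document."""
    root = ET.fromstring(svd_text)
    return {p.findtext("name"): int(p.findtext("baseAddress"), 16)
            for p in root.iter("peripheral")}

for name, addr in memory_map(SVD).items():
    print(f"{name} @ {hex(addr)}")  # UART0 @ 0x40002000
```

The dictionary this produces is enough to trap reads/writes at the right addresses, but the register semantics (what a write to UART0 actually does) still have to be modeled separately.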
Thanks for your comments!