Re: initial mail for zephyr testing plan
Thanks for the reply. Basically I agree with you, but there are a few major items I want to raise here:
1. Will the testing working group own all the samples and tests? If not, how should we categorize them? Currently we just take the available cases as they are.
2. If we define a new test suite, I have concerns about whether we have enough resources to develop it. From my experience, a dedicated and efficient team is required; can we afford that?
3. I am not quite familiar with the Zephyr development process. @Nashif, Anas, can you help explain the current process? As Svein proposed, there should be equivalents of higher-level requirements, system requirements/design, and unit/component requirements/design in Zephyr development, but I just can't find them. I know some of them are listed in the issues on GitHub, but that is a typical Scrum process. IMHO, the difference between the Scrum process and the V-model is timing: in the V-model, everything is defined well before execution, but in Scrum those artifacts are likely to keep changing. So the question is whether we have enough resources to closely follow the developers.
Some more feedback inline, leading with [Hake-2].
From: testing-wg@... [mailto:testing-wg@...] On Behalf Of Aga, Svein
Sent: Tuesday, November 6, 2018 6:55 PM
Subject: Re: [testing-wg] initial mail for zephyr testing plan
Comments inline [Svein].
Thanks a lot. Some feedback inline, leading with [Hake].
I’ve proposed some changes/additions below in blue + some comments and questions.
A more general comment is that the Zephyr project can define a top-level master test plan that is common for everyone contributing. Each subsystem and driver (and the like) can have its own test plan, specifying the additions and deviations from the Zephyr master test plan. It would then be possible to have traceability from the top level down to subsystem testing. Each subsystem can define its own scope within a V-model with proper (automated) test reports. The point is not necessarily to impose a strict regime on testing, only good practices, but also to provide insight to users so they can properly assess the risk of using a subsystem. This is what is typically asked for in quality audits by big companies.
I initialized a test plan based on the IEEE 829 template: https://github.com/zephyrproject-rtos/qm/pull/2.
I propose we start by defining the following:
1. Testing goals:
a) Make it known what we want to achieve with the Zephyr master test plan.
[Hake] Usually the goals come from requirements, so I would prefer to have a requirements document in the community to serve as the goals.
2. Testing Items
a) Hardware platforms
b) Software components
e.g. the Zephyr project can be divided into:
2. board drivers
Each company shall have its own test policy for its drivers
4. samples / acceptance tests
[Hake] Acceptance tests are a test type, not a test item. A test item is something we deliver.
[Svein]: Yes, but you can think of the samples as acceptance tests. If the samples do not work, the test item (or the test) is broken. If the samples can provide a PASS/FAIL status, they could be treated as acceptance tests and be part of CI.
[Hake-2] Agree. So samples will be categorized as acceptance tests and shall pass in every release. If not, do we block the release for the given board or not?
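To make the PASS/FAIL idea concrete, here is a minimal sketch (a hypothetical CI helper, not an existing Zephyr tool) of how a gate could classify a sample run from its console output. The verdict strings follow the "PROJECT EXECUTION SUCCESSFUL/FAILED" convention seen in Zephyr test runs, but treat that as an assumption:

```python
import re

def sample_status(console_log: str) -> str:
    """Classify a sample run as PASS/FAIL from its console output.

    Assumed convention: the sample prints 'PROJECT EXECUTION SUCCESSFUL'
    on success or 'PROJECT EXECUTION FAILED' on failure; anything else
    (e.g. a hang or crash before any verdict line) is inconclusive.
    """
    if re.search(r"PROJECT EXECUTION SUCCESSFUL", console_log):
        return "PASS"
    if re.search(r"PROJECT EXECUTION FAILED", console_log):
        return "FAIL"
    return "INCONCLUSIVE"

log = "Booting Zephyr OS ...\nHello World!\nPROJECT EXECUTION SUCCESSFUL\n"
print(sample_status(log))  # PASS
```

A CI job could then block the release for a given board whenever any sample on that board returns FAIL or INCONCLUSIVE.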
5. unit tests
6. system tests
[Hake] The system tests in the Zephyr delivery are the samples. Do we need a system-tests folder, and do we need to add system tests?
[Svein]: The samples are not really system tests. A sample just shows how to use the subsystem, although it is important to have (see above on acceptance tests). System tests do more than acceptance tests, i.e. they are more thorough: functional valid/invalid, non-functional (performance/throughput, robustness, stress), power measurements, backwards compatibility, conformance, …
[Hake-2] Agree. So who will create those system tests? The testing working group or the community?
3. How do the existing samples/acceptance tests, system tests, and unit tests meet our test requirements?
[Hake] The Zephyr delivery does not have acceptance tests or system tests, only samples. Do we need to categorize them?
[Svein]: See the above comments on system and acceptance tests. It is useful to categorize them for the long term: you quickly know what kind of test has failed and, possibly, the impact. After some time, with statistics, you can extract reports to analyse which kinds of failures are more frequent than others and quantify them. You can then also consider ways to reduce them. See the next comment on test process improvements.
[Hake-2] In this case, will the testing working group take care of the samples? Do we have enough resources?
4. Long-term / short-term quality strategy for test items, and standards.
a) I think for the short term we shall ensure that all features have test coverage, then function coverage, then line/branch coverage (based on requirements).
Svein: There is some testing literature that refers to maturity levels. One option is to define a few levels and milestones for each. This would show our current view of where we are and where to go. Of course, the milestones can be redefined as we progress.
[Hake] Can you share this literature? We may apply the maturity levels once they are accepted by all.
[Svein]: A few references:
- Test Process Improvements (TPI) next - https://improvement.polteq.com/en/tpi-next/
- https://www.tmmi.org/tmmi-model/ Also see the figure. At which level do you think Zephyr is? IMO at level 1 or 2.
[Hake-2] Thanks, I agree with your judgement. From my understanding, if we target this model, huge resources will be required. And a low level does not mean low quality.
b) We shall have use-case definitions for subsystem cases, and this needs help from the TSC.
It might not be that Zephyr follows one model to its full extent, but some parts could still be relevant to follow.
Svein: Could this be the same as getting test requirements for acceptance tests for a subsystem?
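On the line/branch-coverage step in a): the numbers are easy to extract mechanically from coverage tooling. As an illustration, a small sketch that computes overall line coverage from lcov's .info tracefile format, where each `DA:<line>,<count>` record marks an instrumented line and its hit count (the tracefile content below is invented):

```python
def line_coverage(info_text: str) -> float:
    """Compute overall line coverage (%) from an lcov .info tracefile.

    Coverage is the number of hit instrumented lines over the total
    number of instrumented lines, across all source files in the file.
    """
    hit = total = 0
    for line in info_text.splitlines():
        if line.startswith("DA:"):
            # DA:<line>,<count>[,<checksum>] - only the count matters here
            _, count = line[3:].split(",")[:2]
            total += 1
            if int(count) > 0:
                hit += 1
    return 100.0 * hit / total if total else 0.0

# A tiny invented tracefile: 3 instrumented lines, 2 of them hit.
info = "SF:src/main.c\nDA:10,5\nDA:11,0\nDA:12,1\nend_of_record\n"
print(f"{line_coverage(info):.1f}%")  # 66.7%
```

Tracking this number per subsystem over releases would give the milestone data the maturity-level discussion asks for.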
5. V-model and test level requirements
[Hake] The V-model is a SW process; I think we need the Zephyr TSC to adopt this model first, as they are currently using a Scrum process.
[Svein]: It's good to involve the TSC for the development process, but regardless, you can apply the V-model purely for testing. Most typically, the left branch of the V-model still happens (higher-level requirements, system requirements/design, unit/component requirements/design), just under different names. Defining a V-model helps you know what kinds of tests are required for the different development activities.
[Hake-2] As I replied at the beginning of the mail, it is a timing issue: will we have enough resources and commitment to follow the developers?
a) Define a Zephyr V-model (maybe as simple as unit, system, and acceptance)
b) The Zephyr master test plan defines the general test requirements within the V-model
c) Each subsystem defines its subsystem test requirements within the V-model
6. Issue reporting and tracking
a) Is the current process enough? Reporting on GitHub -> reproduce -> fix -> close.
b) Currently the issue severity is defined by the code owner; is this acceptable?
7. Test environment definition and tools
a) QEMU simulation environment (already defined)
b) Real-board testing (each company shall provide this)
c) Simulation such as Renode (the working group shall define this in the future)
d) What tools do we recommend using?
i. TestRail report scripts
ii. Renode: do we have free support from Renode?
1. I checked the source code. I understand they use the CMSIS-SVD file to get the memory map of the SoC, but how would Renode know the SoC's driver model? Other simulators need a fast model to define the driver behavior.
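For what it's worth, the memory-map half of that is straightforward, since CMSIS-SVD is plain XML with a base address per peripheral; the behavioral (driver) model is exactly what the SVD does not carry and must come from elsewhere. A sketch parsing an invented minimal SVD fragment with only the standard library:

```python
import xml.etree.ElementTree as ET

# Minimal invented CMSIS-SVD fragment: SVD describes peripheral names
# and base addresses (the memory map), not their runtime behavior.
svd = """<device>
  <name>DEMO_SOC</name>
  <peripherals>
    <peripheral>
      <name>UART0</name>
      <baseAddress>0x40001000</baseAddress>
    </peripheral>
    <peripheral>
      <name>TIMER0</name>
      <baseAddress>0x40002000</baseAddress>
    </peripheral>
  </peripherals>
</device>"""

root = ET.fromstring(svd)
for periph in root.iter("peripheral"):
    name = periph.findtext("name")
    base = int(periph.findtext("baseAddress"), 0)  # handles 0x... strings
    print(f"{name} @ 0x{base:08X}")
```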
Thanks for your comments!