Re: [EXT] [testing-wg] Nomenclature used in sanitycheck

Hake Huang

From what I have learned from the sanitycheck scripts, the steps are:



1.      Check the testcase.yaml/samples.yaml, which lists:

a)       the dependencies and platforms

b)       some test requirements

2.      If step 1 passes, sanitycheck builds the test; if the build passes, it goes on to execution, otherwise it is skipped.

3.      Run the case and check the result.
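For reference, a minimal yaml file of the kind step 1 inspects might look like the sketch below. This is only illustrative: the test identifier and filter value are made up, and the exact key names (e.g. platform_whitelist vs. platform_allow) have varied between sanitycheck versions.

```yaml
tests:
  sample.basic.helloworld:
    tags: introduction
    # restrict which platforms this configuration is built/run on (illustrative)
    platform_whitelist: qemu_x86
    # a test requirement: only select the test where this Kconfig symbol is set
    # (hypothetical filter expression)
    filter: CONFIG_PRINTK
```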


As the yaml files are not that well maintained across platforms, there may be some inconsistencies.





From: testing-wg@... <testing-wg@...> On Behalf Of Perkowski, Maciej via
Sent: 2020-11-19 17:41
To: testing-wg@...
Subject: [EXT] [testing-wg] Nomenclature used in sanitycheck


Caution: EXT Email

Dear All,
I would like to discuss the nomenclature used in sanitycheck reporting during our next meeting. It will relate to the issue:
I already made comments in a few places quite some time ago with my concerns about the confusion (and it seems not only for me) this could cause, so I will just copy it again here [I guess creating a PR next time would be better than the comments section/Slack]. I will verify to what extent the below is still present. I hope we can agree on using one name for each of these items.

/home/maciej/zephyrproject2/zephyr/sc-venv/bin/python3.7 /home/maciej/zephyrproject2/zephyr/scripts/sanitycheck --build-only -T samples/hello_world/ --all --subset 2/120
Renaming output directory to /home/maciej/zephyrproject2/zephyr/sanity-out.1
INFO    - Running only a subset: 2/120
INFO    - JOBS: 8
INFO    - Selecting all possible platforms per test case
INFO    - Building initial testcase list...
INFO    - 3 test configurations selected, 8 configurations discarded due to filters.
INFO    - Adding tasks to the queue...
INFO    - 2 of 2 tests passed (100.00%), 0 failed, 1 skipped with 0 warnings in 3.15 seconds
INFO    - In total 2 test cases were executed on 269 out of total 272 platforms (98.90%)
INFO    - 0 tests executed on platforms, 2 tests were only built.
INFO    - Total complete:    2/   2  100%  skipped:    0, failed:    0

Process finished with exit code 0

I think this logging is still a bit confusing and could be cleaned up.
I find the line "In total 2 test cases were executed on 269 out of total 272 platforms (98.90%)" confusing. It is not possible that only 2 test cases are run on 269 platforms. The issue is that 269 is the total number of preselected platforms (len(self.selected_platforms)), but then we only chose 3 platforms out of them (and 1 is skipped later on).
I think we should be more descriptive and add information about the further limiting of the platforms. This number also gives the wrong impression: it looks like we tested 98.9% of the platforms, but in fact the vast majority was just skipped.
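To make the point concrete, here is a small sketch (my own reconstruction from the numbers in the log above, not the actual sanitycheck code) of how the reported percentage appears to be derived, versus the share of platforms actually exercised:

```python
# Hypothetical reconstruction of the misleading percentage: the summary
# appears to divide the number of *preselected* platforms by the total,
# rather than the platforms the selected configurations actually target.
total_platforms = 272      # all platforms known to sanitycheck
selected_platforms = 269   # len(self.selected_platforms), per the log above
exercised_platforms = 3    # platforms the 3 selected configurations target
skipped = 1                # one of those is skipped later on

reported = selected_platforms / total_platforms
actual = (exercised_platforms - skipped) / total_platforms

print(f"reported: {reported:.2%}")  # matches the 98.90% in the log
print(f"actual:   {actual:.2%}")    # share actually built/executed
```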

Another issue is how we count tests that were only built. In the line "INFO - In total 2 test cases were executed on 269 out of total 272 platforms (98.90%)", these 2 test cases were only built. It is also incoherent with the next line, "0 tests executed on platforms, 2 tests were only built.", where build-only tests are subtracted.

The last issue is that we use different names for the same thing. "Test configurations" are in fact the same as "tests", as both of them refer to len(suite.instances) in the code. IMO "test suites" from testing language corresponds best to what we are counting there. I guess "test suite" is not used since we have a TestSuite class in the code, which is even something different (it is a suite of all test suites).

So we have "test configurations" and "tests", which correspond to instances in the code and to test suites in testing language. "Test cases" are at least coherent everywhere (but I am not sure the difference between a test and a test case is obvious) ;) Or did I miss something? I think sorting out this nomenclature would be beneficial for anyone trying to work with sanitycheck.
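To illustrate what is being conflated, here is a minimal model (hypothetical classes, not the real sanitycheck code) of the two quantities the summary mixes up:

```python
# Illustrative only: in this model, an "instance" pairs one test
# configuration with one platform, and each configuration can contain
# several individual test cases.
class Instance:
    def __init__(self, configuration, platform, cases):
        self.configuration = configuration
        self.platform = platform
        self.cases = cases  # names of the individual test cases

instances = [
    Instance("sample.basic.helloworld", "qemu_x86", ["test_hello"]),
    Instance("sample.basic.helloworld", "qemu_cortex_m3",
             ["test_hello", "test_banner"]),
]

# "tests" and "test configurations" in the summary both come from
# len(suite.instances):
num_tests = len(instances)
# ...while "test cases" counts the cases inside each instance:
num_cases = sum(len(i.cases) for i in instances)
print(num_tests, num_cases)  # two names, two different counts
```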



Maciej Perkowski | Software Test Developer
M +48 728 395 111 | Kraków, Poland |


