+1 for Maksim's suggestions. I am also still wondering whether WARN is supposed to be a verdict. According to the proposal, if there was a WARN, does it mean that the test was neither PASS, nor FAIL, nor ERR, but something else entirely?
From: testing-wg@... <testing-wg@...>
On Behalf Of Masalski, Maksim via lists.zephyrproject.org
Hello, Hake and team. I have inspected your abbreviations for the test types, and I have some comments.
1. I can’t understand why we need that “T” prefix; it makes the text difficult to read. I vote for clear and understandable definitions without the “T” letter.
2. TERRR, as I understand it, should be just ERR.
3. Maybe “TNEXE” would be better as NOT_EXEC? EXE feels like the Windows executable-file extension.
My variants are below. To summarize all my comments, I propose the following type definitions.
The basic rules are:
1. 5 characters for each type
2. a clear definition
3. a test-specific token
Today we should agree on how the test types will be defined. To my mind, each test type should have exactly one robust definition. It is better to avoid words like "or" and "some" in a definition, because they make it unclear during testing why that verdict happened. I will mark my comments inline below.
a) PASS - test was successful. [Agree]
b) FAIL - test assertion(s) failed. [Agree]
c) ERROR – usually reported when the test setup fails before the test even attempts its assertions, or(!) some other error occurred during the execution. [I think it is necessary to split that: make ERR1 and ERR_DARK.]
d) NOT_EXECUTED (reason in msg) - the test was skipped due to some condition at the specification stage (e.g. it was on a filtered list). This would indicate that the behavior (not executing) was expected. [Agree, but only if the reason is in the msg. Why not make it shorter: NOT_EXEC?]
e) IGNORED – the test was skipped because a user marked it manually. E.g. faulty tests could get such a flag before they are repaired, and be skipped during the execution. [Why not make it shorter: IGNORE?]
f) MISSING – the test was marked in the specification as to be executed, but was not found in the report. This will work only if we take the approach where, after test execution, the program runs through the list of tests in the specification (extracted in advance) and looks for their results in the results report. [Why not make it shorter: MISS?]
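For illustration only, the verdict set discussed above could be sketched as a small enum. The token names follow the shorter variants proposed in this thread; the code itself (including the `resolve` helper) is a hypothetical sketch, not anything from the actual test framework:

```python
from enum import Enum

class Verdict(Enum):
    """Sketch of the verdict tokens proposed in this thread (assumed names)."""
    PASS = "test was successful"
    FAIL = "test assertion(s) failed"
    ERR = "setup failed or another error occurred during execution"
    NOT_EXEC = "skipped by a condition at the specification stage (reason in msg)"
    IGNORE = "skipped because a user marked it manually"
    MISS = "listed in the specification but absent from the report"

def resolve(spec_tests, report):
    """Illustrates the MISSING case: every spec test with no result in the
    report gets the MISS verdict (hypothetical helper)."""
    return {t: report.get(t, Verdict.MISS) for t in spec_tests}

results = resolve(["test_a", "test_b"], {"test_a": Verdict.PASS})
print(results["test_b"].name)  # test_b is in the spec but not in the report
```

This also shows why one verdict per outcome matters: the reporting code can branch on a single enum value instead of parsing message text.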