Dear Fellow Testers,
As a follow-up to the last meeting, I am sending you a proposal for verdicts used by sanitycheck to mark results. It is based on Hake’s proposal with some clarifications discussed during the meeting and our internal experience:
· PASS - test was successful
· FAIL - test assertion(s) failed
· ERROR - usually reported when the test setup fails before any assertions can be checked, or when some other error occurs during execution
· NOT_EXECUTED (reason in msg) - the test was skipped due to conditions set at the specification stage (e.g. it was on a filtered list). This indicates that not executing the test was the expected behavior
· IGNORED - the test was skipped because it was marked manually by a user. E.g. faulty tests could be given this flag until they are repaired, and be skipped during execution
· MISSING - the test was marked in the specification as to be executed, but was not found in the report. This verdict works only if we take the approach where, after test execution, the program iterates over the list of tests in the specification (extracted in advance) and looks up their results in the report.
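To make the proposal concrete, here is a minimal sketch of the verdict set and of the specification-vs-report lookup that would produce MISSING. The names `Verdict` and `verdict_for` are hypothetical illustrations, not existing sanitycheck code:

```python
from enum import Enum

class Verdict(Enum):
    # Hypothetical enum following the proposed verdict set above.
    PASS = "pass"                   # test was successful
    FAIL = "fail"                   # test assertion(s) failed
    ERROR = "error"                 # setup or other execution error
    NOT_EXECUTED = "not_executed"   # skipped at the specification stage
    IGNORED = "ignored"             # manually excluded by a user
    MISSING = "missing"             # in the specification, absent from the report

def verdict_for(spec_tests, report):
    """Resolve one verdict per specified test: take the reported verdict
    when present, otherwise mark the test as MISSING."""
    return {name: report.get(name, Verdict.MISSING) for name in spec_tests}

verdicts = verdict_for(["test_a", "test_b"], {"test_a": Verdict.PASS})
# test_b was specified but never reported, so it resolves to MISSING
```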
The idea of marking a test as UNSTABLE is more about an extra flag than a verdict. A test can be marked unstable if it required reruns to pass or tends to fail occasionally. This serves as a notification that failures in that test are nothing unexpected and the user should try rerunning it.
I also think that WARNING should be an additional flag, not a verdict per se. A test can be marked as PASS but carry an extra WARNING notification that some warnings appeared (e.g. produced when test cleanup failed to restore the system).
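The flag-vs-verdict distinction could be modelled by keeping UNSTABLE and WARNING as attributes alongside the verdict rather than as verdict values. The `TestResult` shape below is a hypothetical sketch, assuming such a split:

```python
from dataclasses import dataclass, field

@dataclass
class TestResult:
    # Hypothetical result record: one verdict plus independent flags,
    # per the note that UNSTABLE and WARNING are not verdicts.
    name: str
    verdict: str                  # e.g. "PASS", "FAIL", "ERROR", ...
    unstable: bool = False        # needed reruns to pass / fails occasionally
    warnings: list = field(default_factory=list)  # e.g. cleanup problems

r = TestResult("kernel.timer", "PASS", unstable=True,
               warnings=["cleanup failed to restore the system"])
# The verdict stays PASS even though a warning was recorded.
```

A design like this keeps reporting simple: tools that only care about pass/fail read `verdict`, while the flags carry the extra notifications.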