
Pass, Fail and Skip

(I’m using Firefox this time so that the WordPress editor has a chance of working.)

It seems like every test harness has a different set of possible results. I’ve seen Passed, Failed, Skipped, Abort, Error, Expected Failure, Unexpected Pass and many others. Seems exciting and technical at first. But, what do you *do* with this data? I’ve found that there are really only two possible actions for each test result:

1. Ignore the result

2. Do something about the result

There are many other associated actions, like looking for trends, measuring progress, getting warm fuzzy feelings, etc. I usually find that these are management functions. To actually make the product under test better, you have to follow up the issues and fix them.

So, I favor a very simple set of possible results:

Passed: You can ignore this test. It worked. Nothing to see here. You do run the risk of false positives, and that matters. But I’ve yet to work on a project where we could actually keep up with all the failures, so I’ve never done any significant research into methods for capturing these more subtle problems.

Failed: This is a BUG. Getting to root cause may be difficult, but there *is* a problem. The cause might be a problem in the script, a bug in the product, a conscious design change, or any of a thousand other things. In the end, though, it needs to be fixed, and therefore it is a bug.

Skipped: The test isn’t going to run, for a good reason. The test case itself knows this good reason. To make up an example, you might have a test suite for browser compatibility. There may be some tests specifically for Internet Explorer and some other tests specifically for Firefox. If you run the full suite on both, the tests designed for the other browser should skip, not fail.
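
Here’s a rough sketch of what that might look like in code, using Python’s unittest module. The current_browser() helper is made up for the example; it stands in for however your harness detects which browser the suite is pointed at.

```python
import unittest

# Hypothetical helper: stands in for however your harness
# figures out which browser is under test.
def current_browser():
    return "firefox"

class BrowserCompatibilityTests(unittest.TestCase):

    @unittest.skipUnless(current_browser() == "ie", "Internet Explorer only")
    def test_ie_specific_rendering(self):
        # Exercises IE-only behaviour; on Firefox this is reported
        # as skipped, not failed.
        self.assertTrue(True)

    @unittest.skipUnless(current_browser() == "firefox", "Firefox only")
    def test_firefox_specific_rendering(self):
        # Runs when the suite is pointed at Firefox; skipped on IE.
        self.assertTrue(True)

if __name__ == "__main__":
    unittest.main()
```

The point is that the skip decision lives in the test case itself, so a skip in the report never needs investigation the way a failure does.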

One other result type I have had difficulty avoiding is the “performance” test. Usually it gathers data, which can then be analyzed later. I don’t find that this is the best use of resources, though. Whatever the performance test is doing, it is a *test*. So, it should also pass, fail or skip. It needs to *also* keep the metrics it was designed for, but why waste a testing opportunity?
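
As a sketch of what I mean, here is a performance-flavoured test that still passes or fails. The load_homepage() operation, the record_metric() call, and the two-second budget are all assumptions for the example; substitute whatever your project actually measures and stores.

```python
import time
import unittest

# Hypothetical stand-ins for the real operation under test
# and for wherever your project stores its metrics.
def load_homepage():
    time.sleep(0.1)  # pretend work

def record_metric(name, value):
    print("%s=%.3f" % (name, value))

class HomepageLoadTest(unittest.TestCase):

    MAX_SECONDS = 2.0  # assumed budget; pick one that matters for your product

    def test_homepage_load_time(self):
        start = time.time()
        load_homepage()
        elapsed = time.time() - start

        # Keep the metric the test was designed to gather...
        record_metric("homepage_load_seconds", elapsed)

        # ...but also pass or fail, so the run isn't a wasted
        # testing opportunity.
        self.assertLessEqual(elapsed, self.MAX_SECONDS)

if __name__ == "__main__":
    unittest.main()
```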
