Archive for July, 2007

Testing is not part of the product

I keep running into the idea that test code (it could be infrastructure, tools, test functionality built into the product, and so on) is optional. Because it doesn’t ship with the product, it is treated as a second class citizen.

On one hand, I totally agree. No customer will ever see or even care about how the product was tested. They just want it to work and expect that it has been tested carefully.

On the other hand… this doesn’t seem to work. Code that is testable seems to be better. It takes a long time to figure this out, at least with the products that I have worked on. The product has to ship and customers have to be unhappy. It doesn’t usually come down to a single big (or well publicized) bug. It’s the odd behavior that wasn’t thought through well enough, or the odd, intermittent crash/hang/hiccup.

The big, awful bug is actually pretty easy to deal with from a development standpoint. Everyone sees it, you fix it, you’re done. Well, the lawyers may not be.

Anyway, the problem bugs are the ones that are subtle, not easily reproducible and cause a general sense of low quality or instability. In the end, they are all just bugs, with steps to reproduce. But, with systems as complex as they are today, this is not easy.

In fact, it requires a carefully built test infrastructure: good product design, good development practices, an eye for testability, hooks, logs and functions built into the code to make it more transparent, and a layered approach where you test each piece, test pieces together, then do integration testing, stress testing and finally stability testing.

Skipping some of these steps can give you quite a shock. Instead of a quick fix, tiger team or some other tactical approach, what you may find is that you simply have no idea where the problems are or how to fix them. You find that your test tools are buggy and give bogus results. You find areas that have been completely skipped.

This isn’t necessarily a business disaster. There are plenty of examples of very successful, buggy and unstable products. But, is that the kind of product you want to work on?

My suggestion, if it isn’t obvious already, is that the core test tools and a usable test environment are part of the product. They should be developed, tested and maintained at the same level as the product itself. And, the expectation should be that these tools are of the same quality level, in terms of design and implementation, as the rest of the product.

It’s hard to do. And, it’s easy to waste a large amount of money if you try, but don’t do it correctly (i.e. spend the money, but end up with a bad design or implementation). But, once your product ships, it will be far more expensive (in terms of time and money) to go back and put quality in.


Test Automation Tool

Elements of a Test Automation Tool

Someone shows me or tells me about a new automation tool a couple of times a year. Sometimes it is something the person built themselves or a product they read about somewhere. Other times, it is a partner showing off their automation and claiming that it covers everything.

The tool usually comes down to a single part of what I’m about to describe. The ability to write a script, capture a screenshot, or push buttons (or, for the code-centric, a library, harness or development environment add-in) is a long way from having all the pieces required to do automation.

I’m sure I’ve forgotten some things. I’m also sure that some are unnecessary in certain environments. But, most of these are generic and are required in some form.

So, here is my list:

Provide a way to program (from least to most sophisticated)
–APIs for use with C
–vendor script (Like 4Test that is used with SilkTest)
–open source language with interface (Perl, Python, Ruby or PHP)
–action word (or even multi-level action word) system
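The action-word style at the top end of that list can be sketched in a few lines. This is a minimal, hypothetical example (the action names and table rows are mine, not from any particular tool): test tables map an action word to a handler, so non-programmers can compose tests from registered actions.

```python
# Minimal sketch of an action-word system. Each row of a test table is
# (action_word, *arguments); a registry maps words to handler functions.
# All action names here are hypothetical.

ACTIONS = {}

def action(name):
    """Register a handler function under an action word."""
    def register(fn):
        ACTIONS[name] = fn
        return fn
    return register

@action("enter_text")
def enter_text(field, value):
    # A real handler would drive the UI; here we just report the step.
    return f"typed {value!r} into {field}"

@action("press_button")
def press_button(name):
    return f"pressed {name}"

def run_row(row):
    """Execute one (action, *args) row from a test table."""
    word, *args = row
    return ACTIONS[word](*args)

table = [("enter_text", "username", "alice"), ("press_button", "OK")]
results = [run_row(row) for row in table]
```

A multi-level action-word system would let one action word expand into a sub-table of lower-level action words, using the same dispatch.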

UI Elements
–identify all UI elements native to the OS
–provide a way to identify custom elements, gadgets and other non-native elements
–associate each instance with a name (or multiple names, for robustness)
–give them names (provide a naming structure); sometimes called declarations
–provide a bitmap and coordinate system as a fallback
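Those declarations can be thought of as a table mapping each logical name (plus any aliases) to the properties used to locate the element, with coordinates as the last resort. A hypothetical sketch, with invented element names and fields:

```python
# Sketch of UI "declarations": logical names mapped to locator
# properties, with aliases for robustness and a coordinate fallback.
# The element, fields, and values are all illustrative.

DECLARATIONS = {
    "LoginButton": {
        "class": "Button",            # native OS element class
        "label": "Log In",            # visible text used to find it
        "aliases": ["Login", "Sign In"],
        "fallback_coords": (320, 240) # last-resort click point
    },
}

def locate(name):
    """Resolve a logical name or alias to its locator properties."""
    for logical, props in DECLARATIONS.items():
        if name == logical or name in props.get("aliases", []):
            return props
    raise KeyError(f"no declaration for {name!r}")
```

Scripts then refer to "LoginButton" everywhere; when the UI changes, only the declaration table is updated.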

Framework hooks
–start a script
–start a testcase
–end a testcase
–end a script

Test case management system
–way to identify a test case
–way to identify a test suite
–way to feed data into a test case (making it a “data driven” test case)
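A minimal record for such a system might carry just an identifier, a suite, and the data rows that make the case data driven. This is a sketch with invented field names, not any particular product’s schema:

```python
# Sketch of a minimal test-case record: an id, the suite it belongs
# to, and data rows that turn one scripted test into many iterations.

from dataclasses import dataclass, field

@dataclass
class TestCase:
    case_id: str
    suite: str
    data: list = field(default_factory=list)

    def iterations(self):
        """One run per data row; a case with no data runs once."""
        return self.data or [None]

tc = TestCase("login-001", "login",
              data=[{"user": "alice"}, {"user": "bob"}])
```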

Test Execution
–Ability to execute a single test case (from the command line)
–Ability to execute a test suite
–Ability to execute a group of test suites
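The three execution levels can be sketched as three entry points over a suite registry. Everything here (the registry, suite and case names) is hypothetical; a real runner would dispatch these from command-line arguments:

```python
# Sketch of the three execution entry points: one case, one suite, a
# group of suites. A real case would run a script; these stubs just
# return a status so the dispatch shape is visible.

SUITES = {
    "login": {"login-001": lambda: "Pass", "login-002": lambda: "Fail"},
    "search": {"search-001": lambda: "Pass"},
}

def run_case(suite, case_id):
    """Run a single case, e.g. `runner.py login login-001`."""
    return {case_id: SUITES[suite][case_id]()}

def run_suite(suite):
    """Run every case in one suite."""
    return {cid: fn() for cid, fn in SUITES[suite].items()}

def run_group(suites):
    """Run a group of suites."""
    return {name: run_suite(name) for name in suites}
```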

Logging system
–Pass, Fail, Skip
–Capture error messages
–Ability to log warnings and other debugging text
–Ability to save log to a file or to a remote system/database
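A logger covering those requirements needs little more than a result method restricted to the three statuses, a warning method, and a way to persist entries. A sketch, with invented field names and JSON as one possible file format:

```python
# Sketch of a test logger: Pass/Fail/Skip results, warnings and
# debug text, and saving to a file (a remote database write would
# replace save()). Entry fields are illustrative.

import json
import os
import tempfile

class TestLog:
    def __init__(self):
        self.entries = []

    def result(self, case_id, status, message=""):
        assert status in ("Pass", "Fail", "Skip")
        self.entries.append(
            {"case": case_id, "status": status, "msg": message})

    def warn(self, text):
        self.entries.append({"status": "Warn", "msg": text})

    def save(self, path):
        with open(path, "w") as f:
            json.dump(self.entries, f)

log = TestLog()
log.result("smoke-001", "Pass")
log.warn("slow response from server")
path = os.path.join(tempfile.mkdtemp(), "results.json")
log.save(path)
with open(path) as f:
    saved = json.load(f)
```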

Validation function (did the test pass?)
–Bitmap compare
–String compare
–Window exists
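The three validation primitives reduce to three small predicates. In this sketch the bitmap compare is a byte-for-byte pixel comparison; real tools often add masking and tolerance on top of it:

```python
# Sketch of the three validation primitives a test calls to decide
# pass/fail: bitmap compare, string compare, and window-exists.

def bitmaps_equal(expected: bytes, actual: bytes) -> bool:
    """Exact pixel-data comparison (no masking or tolerance)."""
    return expected == actual

def strings_equal(expected: str, actual: str) -> bool:
    return expected == actual

def window_exists(open_windows, title: str) -> bool:
    """True if a window with this title is currently open."""
    return title in open_windows
```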

Agent (or other method to accomplish the same thing)
–Run the least possible code on the system under test
–Have a way to communicate with the system under test (via TCP/IP, serial, USB, etc.)
–Have another way to communicate with the system under test (you may want to test the communication channel you usually use for control, so there needs to be a way to switch to a secondary channel).
–Have a way to automatically start and stop the agent code (at boot or at application launch)
–Have a way to install and uninstall the agent without user interaction (including it in the build is not good enough, unless you are willing and able to ship it as part of the product).
–Have source code to the agent. This is essential if you are working on a non-product OS (i.e. you are developing an OS or parts of an OS and not just an application).
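The channel-switching requirement in particular is easy to picture in code. This is an assumption-laden sketch (the Channel stand-in replaces a real TCP/serial/USB transport): the agent keeps a primary and secondary channel and moves control traffic off whichever one is under test.

```python
# Sketch of agent-side channel switching: keep two transports, and
# move control traffic off the one you are currently testing.

class Channel:
    """Stand-in for a real transport (TCP/IP, serial, USB, ...)."""
    def __init__(self, name):
        self.name = name
        self.inbox = []

    def send(self, msg):
        self.inbox.append(msg)

    def recv(self):
        # Loopback for the sketch: echo back whatever was sent.
        return self.inbox.pop(0)

class Agent:
    def __init__(self, primary, secondary):
        self.channels = {"primary": primary, "secondary": secondary}
        self.active = "primary"

    def switch(self, which):
        """Route control traffic to the other channel."""
        self.active = which

    def command(self, msg):
        ch = self.channels[self.active]
        ch.send(msg)
        return f"{ch.name}:{ch.recv()}"

agent = Agent(Channel("tcp"), Channel("serial"))
```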

Test Automation – Step 1 – Test Design

The word “Automation” is used for so many things, it has lost all meaning. So, I’m going to describe exactly what I mean by Test Automation as a process and specifically what I’m calling a Test Automation System.

It’s going to take me multiple entries, so I’m going to break it into steps.

Test Design
–Write Test Cases
–Put them in a database
–Add data
–Pick the ones you want to automate

First comes the QA part. You need tests. To come up with tests, you need to consider the Software Under Test using all of your knowledge (specifications, conversations, implicit specifications and so on). Then, you consider all the various types of testing, like Positive tests, Error tests, Stress tests, Stability tests, Performance tests. It is very difficult to separate Test Design from Test Planning; at this stage, we’re only looking at coming up with a good set of tests (Test Design), not who will do them, when or how (Test Planning or Management).

It turns out that this is actually an infinitely large task, so in reality you will have to make some decisions about which areas and types of tests you would like to focus on and also how deep you would like the testing to be in each area. In general it’s easy: old parts of the software get spot checking and the new parts get the bulk of the attention.

Since this involves a significant amount of data, you will need a place to keep it all. I call this a Test Case Management System. You need a way to create a taxonomy of tests, which is usually broken up by application or area of an application and then sub-areas. You also need to store information about the steps in the test case, the type of test and what the expected results are.
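That taxonomy can be as simple as nested maps, application to area to sub-area, with the steps, type and expected results stored on each case. A sketch with an invented catalog (the application, areas and case are all hypothetical):

```python
# Sketch of a test-case taxonomy: application -> area -> sub-area,
# with steps, type and expected results stored per case.

catalog = {
    "mail_app": {
        "compose": {
            "attachments": [
                {"id": "att-001",
                 "type": "positive",
                 "steps": ["open compose", "attach a file", "send"],
                 "expected": "message sent with attachment intact"},
            ],
        },
    },
}

def find_case(catalog, case_id):
    """Walk the taxonomy and return the case record, or None."""
    for application in catalog.values():
        for area in application.values():
            for cases in area.values():
                for case in cases:
                    if case["id"] == case_id:
                        return case
    return None
```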

The next step is to add data to go with your test cases. For example, you might have a test to fill out information on a web page form. Even for a few fields, the number of iterations is large. You might explicitly specify all of the data to use for some special tests, especially error tests or other key areas. But for most, a list is best. In automation, these are usually referred to as Data Driven tests. For manual testing, I’ve heard them called matrix, spreadsheet, or combinations and permutations.
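The web-form example above can be sketched as one scripted test fed many rows. Everything here is hypothetical, and `fill_form` is a stand-in for code that would actually drive the page:

```python
# Hypothetical data-driven form test: one script, many data rows.
# fill_form stands in for code that drives a real web page form.

def fill_form(name, email):
    """Pretend to submit the form; an empty name is rejected."""
    if not name:
        return "error"
    return "ok"

rows = [
    {"name": "Alice", "email": "a@example.com"},  # positive case
    {"name": "", "email": "b@example.com"},       # error case
]

results = [fill_form(**row) for row in rows]
```

Adding a row to the list adds an iteration without touching the script, which is the whole point of the data-driven style.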

Here we come to the first Grand Question of automation: which tests should we automate? I’m not even going to give any advice here. There are articles and maybe even whole books on this subject. Have meetings, decide on your criteria, then move to the next step.