Test design is the single biggest factor in successful software testing. Good test design not only results in good coverage but contributes significantly to efficiency. The principle of test design should be “lean and mean”: the tests should be of manageable size and at the same time complete and aggressive enough to find bugs before a system or system update is released.
Test design is also a major factor for success in test automation—an idea that is not so intuitive. Many in the industry still assume that successful automation is a matter of good programming, or simply of “buying the right tool.” Coming to terms with the idea that test design is the primary force for automation success will make a significant difference in test automation effectiveness.
In test design there are three main goals to achieve. These can be characterized as the “Three Holy Grails of Test Design”—a metaphor based on the stories of King Arthur and the Round Table. The three goals are difficult to reach and mimic the struggle of King Arthur’s knights in search of the Holy Grail.
What follows is based on Action Based Testing, LogiGear’s modern keyword-based method for testing and test automation that organizes tests into test modules. It helps to think of a module as a chapter of a book. Each test module has a well-defined flow of test cases. Within the test modules the tests are described by sequences of test lines, each starting with an action, defined by a keyword. The automation does not focus on automating test cases but on automating individual actions, which can be reused as often as necessary.
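A keyword-driven test line can be pictured as a row whose first cell names an action and whose remaining cells are its arguments. The following minimal sketch illustrates that idea; the action names, table layout, and dispatcher are illustrative assumptions, not Action Based Testing’s actual syntax or implementation.

```python
# Each test line is a list: a keyword followed by its arguments.
test_module = [
    ["login", "jdoe", "secret"],
    ["check_title", "Dashboard"],
]

# Actions are plain functions registered under their keywords.
# For the sketch they only record what they were asked to do.
log = []

def login(user, password):
    log.append(f"login as {user}")

def check_title(expected):
    log.append(f"check title == {expected}")

ACTIONS = {"login": login, "check_title": check_title}

def run(module):
    """Dispatch each test line to the function behind its keyword."""
    for keyword, *args in module:
        ACTIONS[keyword](*args)

run(test_module)
print(log)  # the recorded trace of executed actions
```

Because the automation implements actions rather than whole test cases, the same `login` action can appear in any number of test lines across any number of modules.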
Using this method, the first goal is to break down the test plan into manageable pieces, which become the test modules. At this point test cases aren’t being described but are simply being organized in their appropriate “chapters.” The organization is good when each resulting test module has a clearly defined and well-focused scope that is differentiated from other modules. The scope of a test module subsequently determines what its test cases should look like.
The second goal is to define the modules, at which point an individual test module becomes a mini-project. The scope of a test module will determine what approach to take to develop the test cases, taking into consideration the choice of testing techniques used to build the test cases (e.g., boundary analysis, decision tables), and who should get involved in creating and assessing the tests. For example, a test module aimed at testing the premium calculation of insurance policies might need the involvement of an actuarial department.
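One of the techniques mentioned above, boundary analysis, can be sketched in a few lines. The age range used here is a made-up example, not taken from the text.

```python
def boundary_values(low, high):
    """Return the classic boundary test inputs for an inclusive range:
    just below, on, and just above each boundary."""
    return [low - 1, low, low + 1, high - 1, high, high + 1]

# e.g. a hypothetical insurance product valid for ages 18..65
cases = boundary_values(18, 65)
print(cases)  # [17, 18, 19, 64, 65, 66]
```

A test module whose scope is premium calculation might feed such values into its test lines, while the actuarial reviewers focus on whether the expected premiums are correct.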
The third goal is to identify where you can win or lose most of the maintainability of automated tests. When creating a test case, it is best to specify only the high-level details that are relevant for the test. For example, from the end-user perspective “login” or “change customer phone number” is one action; it is not necessary to specify any low-level details such as clicks and inputs. The low-level details should be placed in separate, reusable automation functions common to all tests.
This is significant from a maintainability standpoint. Since low-level details aren’t included in the test cases, the test cases will not have to be changed one-by-one in every single test if the underlying system undergoes changes. Only the low-level details need to be revised—and just once—and can then be reused many times in all tests.
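The maintainability payoff can be sketched as follows. The UI driver calls (`ui_type`, `ui_click`) and element names are hypothetical stand-ins; the point is only that the low-level details live in one reusable function.

```python
# Stand-ins for a real UI driver; they just record each interaction.
interactions = []

def ui_type(element, text):
    interactions.append(f"{element}={text}")

def ui_click(element):
    interactions.append(element)

def login(user, password):
    """The single reusable implementation of the high-level 'login'
    action. If the login dialog changes, only this function is
    revised—once—rather than every test that logs in."""
    ui_type("user_field", user)
    ui_type("password_field", password)
    ui_click("login_button")

# Every test simply invokes 'login'; none repeats the clicks and inputs.
login("jdoe", "secret")
print(interactions)
```

A test case that says only “login as jdoe” remains valid even if the dialog gains an extra field, because the extra keystrokes are added in one place.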
Regardless of the method you choose, spend some time thinking about good test design before writing the first test case. The time spent will have a very high payback—both in the quality and the efficiency of the tests.