
Testing in the Pipeline

With DevOps becoming the norm, we're entering a world of pipelines. In particular, when frequent or continuous deployments are the goal, streamlining and automating the process of building, configuring, testing, and releasing software components becomes a high priority. Of these steps, testing is a particularly interesting one, posing its own unique set of challenges.

Unlike other parts of the pipeline, testing is largely a creative process: testers design tests that hopefully uncover issues and thus help ensure quality. However, to fit into the pipeline, the tests also need to be fully automated, and that automation needs to be "100 percent stable," meaning no test case should fail for reasons other than issues in the system(s) under test. For unit tests this is relatively easy to achieve, but for functional testing it can be much more of an uphill battle.

Test automation is not just a technical challenge; it is also in large part a matter of good test design. If tests are not well structured, or if they are more detailed than necessary, they will be sensitive to changes in the systems they're testing. A modularized method like Action Based Testing (ABT) can help achieve automation-friendly test designs. In ABT, tests are organized into modules and written as sequences of keyword-based "actions." The level of detail of the actions depends on the scope of the test; for example, a business-level test should not contain navigation actions.
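To make the idea of keyword-based actions more concrete, here is a minimal Python sketch of what an ABT-style test module might look like. The action names, the registry, and the `sut` (system under test) helper object are hypothetical illustrations, not part of any particular ABT tool.

```python
# Minimal sketch of a keyword-driven (ABT-style) test module.
# The action names, the registry, and the `sut` helper are hypothetical.

actions = {}

def action(name):
    """Register a function as a named, reusable test action."""
    def register(fn):
        actions[name] = fn
        return fn
    return register

@action("search product")
def search_product(sut, product):
    # Interaction-level detail (typing, clicking) lives in the action,
    # not in the business-level test module.
    sut.type("search-box", product)
    sut.click("search-button")

@action("check result count")
def check_result_count(sut, expected):
    assert sut.count("result-row") == int(expected)

def run_module(sut, rows):
    """Execute a test module: a sequence of (action name, arguments) rows."""
    for name, *args in rows:
        actions[name](sut, *args)

# A business-level test module stays free of navigation details:
business_test = [
    ("search product", "red shoes"),
    ("check result count", "3"),
]
```

Because the business test only references high-level actions, a change in navigation details affects the action implementations, not the test modules that use them.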

Having tests well organized also helps in planning which tests to run at which point in the pipeline, since it is usually not practical to run all tests, in all possible configurations, every time a developer checks in a single file. You might execute lower-level interaction tests early and often while running the bigger, comprehensive business scenarios less frequently. You could, for example, let the automation continuously pick tests and configurations and execute them. Cem Kaner introduced the term "Random Regression" for this idea.
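As a rough illustration of that idea, the sketch below randomly pairs tests with configurations and runs them within a fixed time budget. The test names, configurations, and `run_test` helper are placeholders invented for the example.

```python
# Minimal sketch of "random regression": continuously pick a random
# test/configuration pair and execute it until the time budget runs out.
import random
import time

tests = ["login smoke", "checkout flow", "search filters", "profile update"]
configs = ["chrome-windows", "firefox-linux", "safari-macos"]

def run_test(test, config):
    """Placeholder: dispatch the test to the given environment."""
    print(f"running '{test}' on {config}")
    return True  # in a real pipeline, report pass/fail to the orchestrator

def random_regression(duration_seconds=60):
    deadline = time.time() + duration_seconds
    while time.time() < deadline:
        run_test(random.choice(tests), random.choice(configs))

if __name__ == "__main__":
    random_regression(duration_seconds=5)
```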

I approach system development, test design, and test automation as three distinct product life cycles, each involving its own skills and deliverables. Test design produces reusable test cases, and the automation effort delivers action automation. From that perspective, it can make sense to give tests and automation their own pipelines. If tests or actions are updated, they can be executed against an already-proven version of the system under test to verify them. The "deployment" endpoint of the testing and automation pipelines is their execution against the system under test.
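One way to picture those separate pipelines is as a simple routing rule: a change to the tests or actions is first verified against a known-good build, while a change to the system runs the already-proven tests against the new build. The build identifiers and helpers below are hypothetical; this is only a sketch of that routing logic.

```python
# Minimal sketch: route changed artifacts to the right verification target.
# Build identifiers and the execute() helper are hypothetical placeholders.

KNOWN_GOOD_BUILD = "app-1.8.3"    # last build that passed the full suite
LATEST_BUILD = "app-1.9.0-rc1"

def execute(suite, build):
    print(f"executing {suite} against {build}")

def on_change(changed_artifact):
    if changed_artifact in ("tests", "actions"):
        # A failure here points at the tests/automation, not the system.
        execute("updated tests", KNOWN_GOOD_BUILD)
    else:
        # The system itself changed: run the proven tests on the new build.
        execute("full regression", LATEST_BUILD)
```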

A good approach we have found is to make 100 percent stability a goal in itself, to be achieved incrementally. The first step is to establish good "plumbing" for test deployments and executions, orchestrated by tools like Jenkins or TFS; this should cover provisioning machines, configurations, executions, results, and so on. Start with a small set of representative tests, and when those run well, keep adding tests that are larger and more complex. Do this in increments, making sure each one works well before moving on to the next, so that 100 percent stability is the constant and test coverage is the variable that increases as more increments are added.
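That increments-with-constant-stability idea can be expressed as a simple gating loop: a new batch of tests is admitted into the pipeline only once the suite that includes it runs 100 percent stable. The increments and the `run_suite` helper below are invented placeholders, not part of Jenkins or TFS.

```python
# Minimal sketch of growing the automated suite in increments while
# holding "100 percent stability" constant.

increments = [
    ["smoke: login", "smoke: search"],          # small, representative tests
    ["interaction: cart", "interaction: pay"],  # larger interaction tests
    ["business: end-to-end order"],             # big business scenarios
]

def run_suite(tests):
    """Placeholder: run the tests; return True only if every test is stable."""
    print(f"running {len(tests)} tests")
    return True

def grow_suite():
    active = []
    for increment in increments:
        candidate = active + increment
        if not run_suite(candidate):
            # Fix flakiness before adding more coverage.
            return active
        active = candidate  # coverage grows, stability stays at 100 percent
    return active
```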
