Part of the Pipeline: Why Continuous Testing Is Essential
Continuous testing is about fast, continuous feedback. Specifically, it is the practice of running tests as part of the build pipeline so that every check-in and deployment is validated. This includes all types of testing, across all non-production environments. It does not mean that every test runs all the time, but each is executed at some point, providing the gates needed to know that the deployment package(s) can move into production with high quality.
If you are doing test automation today, you most likely have some type of keyword or hybrid framework that makes automating regression tests easier. Perhaps you have moved to one of the open source tools, but your tests are still dependent on some amount of working code in the testing environment. That approach to automation is no longer effective in today’s world of continuous delivery; you must transition to continuous testing.
With continuous testing, tests have to be atomic, meaning they are small, independent units. They cannot depend on other tests; otherwise, small changes will force large amounts of refactoring, and large tests take longer to debug. Smaller tests are easier to categorize, which makes it simple to determine when and where to run them and to run them in parallel.
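To make "atomic" concrete, here is a minimal sketch using Python's standard `unittest` module. The `Cart` class is a hypothetical system under test invented for illustration; the point is that each test builds its own state, so the tests share nothing and can run alone, in any order, or in parallel.

```python
import unittest


class Cart:
    """Hypothetical system under test: a tiny shopping cart."""

    def __init__(self):
        self.items = []

    def add(self, sku, qty=1):
        self.items.append((sku, qty))

    def count(self):
        return sum(qty for _, qty in self.items)


class TestCartAtomic(unittest.TestCase):
    # Each test creates its own Cart, so neither depends on the
    # other having run first -- they are atomic and order-independent.
    def test_add_single_item(self):
        cart = Cart()
        cart.add("SKU-1")
        self.assertEqual(cart.count(), 1)

    def test_add_multiple_quantities(self):
        cart = Cart()
        cart.add("SKU-2", qty=3)
        self.assertEqual(cart.count(), 3)
```

Because neither test reads state left behind by the other, a parallel runner can schedule them on separate workers without any refactoring.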
All testing has to be part of the pipeline, which means that automation, performance, and security engineers have to be familiar with tools such as Maven, Nexus, and Jenkins. They have to ensure that their tests can be kicked off with these tools, and that when a test fails, the results are fed back into the pipeline, triggering a failed build.
Pipeline integration requires that test creation becomes more of a design effort. For example, running the entire regression suite on every build delays feedback to the team; tagging a subset of tests as a smoke suite lets them run within seconds and report the build's health immediately.
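One way to implement that tagging with nothing but the standard library is an attribute-setting decorator plus a small suite filter. The `@smoke` decorator, `TestCheckout` case, and `load_smoke_suite` helper below are all hypothetical names for illustration:

```python
import unittest


def smoke(test_func):
    """Tag a test as part of the fast smoke suite."""
    test_func.smoke = True
    return test_func


class TestCheckout(unittest.TestCase):
    @smoke
    def test_service_is_up(self):
        self.assertTrue(True)  # stand-in for a seconds-fast health check

    def test_full_checkout_flow(self):
        self.assertTrue(True)  # stand-in for a slower end-to-end check


def load_smoke_suite(case):
    """Collect only the tests tagged @smoke from a TestCase class."""
    names = [n for n in unittest.defaultTestLoader.getTestCaseNames(case)
             if getattr(getattr(case, n), "smoke", False)]
    return unittest.TestSuite(case(n) for n in names)
```

The pipeline's first stage runs only `load_smoke_suite(...)` for near-instant feedback; the full suite runs in a later stage.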
The other design aspect that has to be accounted for is the focus of the tests. Are you following the testing pyramid (from high to low: Unit > Service Layer > UI), or are you more like an ice cream cone (from high to low: UI > Service Layer > Unit)? If it's the ice cream cone, you most likely have flaky tests that fail frequently, or a lot of rework whenever the UI changes. With continuous testing, automation has to keep in sync with development. If the smallest UI change causes rework, the people doing the automation will not be able to keep up, and you will quickly be back to automating only regression tests.
Finally, test data and environment constraints have to be eliminated. For tests to run constantly, data needs have to be handled automatically as part of the pipeline, before and/or after test execution. Therefore, you must have a way to reset, create, or condition the data as part of the pipeline before the tests run.
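A common shape for that conditioning step is per-test setup and teardown. In this sketch an in-memory dict stands in for a real test database, and the seeding function and account key are invented for illustration:

```python
import unittest

# An in-memory dict stands in for a real test database here.
FAKE_DB = {}


def seed_data():
    """Reset the store to a known state, as the pipeline would
    before each test run."""
    FAKE_DB.clear()
    FAKE_DB["account:42"] = {"balance": 100}


class TestWithManagedData(unittest.TestCase):
    def setUp(self):
        seed_data()      # condition the data before every test

    def tearDown(self):
        FAKE_DB.clear()  # leave nothing behind for later tests

    def test_debit(self):
        FAKE_DB["account:42"]["balance"] -= 30
        self.assertEqual(FAKE_DB["account:42"]["balance"], 70)

    def test_credit(self):
        FAKE_DB["account:42"]["balance"] += 30
        self.assertEqual(FAKE_DB["account:42"]["balance"], 130)
```

Because every test starts from the same seeded balance, the two tests cannot corrupt each other no matter how often or in what order the pipeline runs them.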
From an environment perspective, you need to eliminate dependencies on all downstream systems, especially those that are not highly available. This is where service virtualization becomes critical. Yes, you do need to run integration tests at some point, but because the focus is on components, they are not required all the time. Service virtualization removes these constraints and can also cover some test data needs. For example, if a downstream system has transaction limitations, you could use service virtualization to take it off the critical path.
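At its simplest, a virtual service is just a stub that returns canned responses in place of the real downstream system. Commercial and open source virtualization tools do far more, but the core idea can be sketched with Python's stdlib HTTP server (the "payment service" endpoint and its response payload are hypothetical):

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer


class VirtualPaymentService(BaseHTTPRequestHandler):
    """Stands in for a downstream payment system that is slow,
    transaction-limited, or unavailable in test environments."""

    def do_GET(self):
        # Canned response: every request is approved, and the payload
        # flags itself as virtual so misuse is easy to spot.
        body = json.dumps({"status": "APPROVED", "virtual": True}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep test output quiet


def start_virtual_service(port=0):
    """Start the stub on a background thread; port 0 lets the OS pick
    a free port. Returns (server, chosen_port)."""
    server = HTTPServer(("127.0.0.1", port), VirtualPaymentService)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server, server.server_address[1]
```

Component tests point their downstream URL at the stub's port; the real integration tests against the live system still run, just later and less often.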
With the DevOps movement and push for continuous delivery, the way we have done automation in the past has to evolve. Our teams have to learn more about the key enablers to ensure continuous testing can be achieved.
Adam Auerbach will be presenting a session on Putting Quality First through Continuous Testing at STARWEST 2015, from September 27–October 2 in Anaheim, California.