Optimizing Testing: Moving Faster without Compromising Quality
At some point as a software tester, you’ve probably been urged by management to reduce the time required for testing without compromising product quality. The request can seem contradictory, and testers are often left wondering exactly what is being asked of them. How can we spend less time searching for defects and evaluating release readiness without expecting product quality to suffer?
The problem with the task is that it focuses on the amount of time for testing, rather than how the time for testing is spent. As testers, we know that no matter how much time is allocated for testing activities, it will never be enough—we have to search an infinite test space in finite time. In other words, time will always be scarce, so it becomes even more critical to ensure that the tasks we perform:
- Add value to the product
- Are more important than the tasks we could not perform due to time constraints
Considering the added value and relative importance of each testing task can help testers optimize the testing strategy, and focusing on value provides a systematic way of identifying testing waste. Any task that consumes time and resources but does not add value to the product, either directly or indirectly, should be avoided or eliminated. Assessing the importance of testing tasks makes it possible to build a prioritized list that guides testing activities, and such prioritization helps us get the most out of our testing efforts.
So, what are some ways you can optimize your testing strategy?
Design for testability: Much of the time, testing is slow or difficult because the system was not designed with testing in mind. Testability must be engineered into the product at every level, from class design to subsystem design to system integration. The choice of technologies, design conventions, isolation frameworks, and logging all influence testability.
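One common way to engineer testability into class design is dependency injection: a component receives its collaborators rather than constructing them, so tests can substitute fast, isolated fakes. The sketch below illustrates the idea; all names (PaymentService, Gateway, FakeGateway) are hypothetical examples, not from the article.

```python
# Sketch: injecting a dependency so tests can replace it with a fake.
# All class names here are invented for illustration.
from typing import Protocol


class Gateway(Protocol):
    def charge(self, amount: int) -> bool: ...


class PaymentService:
    def __init__(self, gateway: Gateway):
        # The gateway is injected rather than created internally,
        # so tests never need a real network connection.
        self.gateway = gateway

    def pay(self, amount: int) -> str:
        return "ok" if self.gateway.charge(amount) else "declined"


class FakeGateway:
    """Test double that records calls instead of hitting a real API."""

    def __init__(self, accept: bool):
        self.accept = accept
        self.calls: list[int] = []

    def charge(self, amount: int) -> bool:
        self.calls.append(amount)
        return self.accept


service = PaymentService(FakeGateway(accept=True))
print(service.pay(100))  # the fake stands in for the real gateway
```

Because the seam exists in the design itself, tests of `PaymentService` stay fast and deterministic, which is exactly the kind of testability that is expensive to retrofit later.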
Lightweight test planning: Although testing requires much planning and collaboration, the methods and tools to support it do not need to be rigid or heavy on documentation. Self-documenting automated tests, domain-specific languages, recorders for exploratory testing sessions, and test inventories can be combined to keep test planning lean and lightweight.
Test impact analysis: Regression testing is an expensive activity. Dependency analysis tools and approaches can be leveraged to automatically retest only those parts of the system that have been affected by changes, thereby reducing the cost of regression testing.
Risk-based testing: Test the parts of the software that pose the highest threat to project success most heavily. Business-facing and technical risks can be evaluated and used to drive testing activities. Examples include focusing testing efforts on error-prone and critical features while performing little to no testing on low-impact areas.
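A common way to operationalize this is a simple risk score, failure likelihood multiplied by business impact, used to rank features for testing effort. The feature names and scores below are invented to illustrate the calculation:

```python
# Sketch: rank features by risk = failure likelihood x impact,
# then concentrate testing at the top of the list.
# All feature data here is hypothetical.
features = [
    {"name": "payment processing", "likelihood": 4, "impact": 5},
    {"name": "report export",      "likelihood": 2, "impact": 2},
    {"name": "login",              "likelihood": 3, "impact": 5},
    {"name": "theme settings",     "likelihood": 1, "impact": 1},
]

for feature in features:
    feature["risk"] = feature["likelihood"] * feature["impact"]

# Highest risk first: test these most heavily; the tail end
# may receive little to no dedicated testing.
ranked = sorted(features, key=lambda f: f["risk"], reverse=True)
for feature in ranked:
    print(f"{feature['name']}: risk {feature['risk']}")
```

The scales (here 1 to 5) and the weighting are choices for the team; the point is that testing effort follows the ranking rather than being spread evenly.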
By optimizing your test strategy, you can reduce the time spent on testing while maintaining test coverage and product quality.
Tariq King will be presenting a session on Lean Test Management: Reduce Waste in Planning, Automation, and Execution at STAR WEST 2015, from September 27–October 2 in Anaheim, California.