Why Do We Test Software?
Why do we test software? This seems like an exceptionally silly question—most people would say the answer is obvious: “So we know it works, dummy. Sheesh.”
But in reality, there are many other reasons we test our products, as well as an array of possible benefits to tests besides confirming that what a system does is, in fact, what we intended it to do. Building a product is about implementing a vision and creating something from nothing, and tests are a crucial part of that act of creation.
When a coworker is struggling with a testing problem, a question that I find often helps light the way is, “What are we testing here, exactly? What is the purpose of this test?”
A frequent source of problems is confusion of purpose across test layers. Too many things are being tested at once—and sometimes this reveals a problem in the underlying system: perhaps a single layer of code has taken on too many responsibilities.
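As an illustration (the function and scenario here are hypothetical, not from any particular product), a pricing rule that can only be exercised by driving a deployed service mixes two test purposes: verifying the algorithm and verifying the wiring. Extracting the rule lets each layer be tested for its own purpose:

```python
# Hypothetical example: a business rule extracted from its delivery layer
# so that a unit test can state its purpose plainly, instead of verifying
# the calculation and the HTTP plumbing in one confused test.

def apply_discount(total, is_member):
    """Pure business rule: members get 10% off orders of $100 or more."""
    if is_member and total >= 100:
        return round(total * 0.9, 2)
    return total

# A focused unit test for the rule alone; transport and deployment
# concerns belong in a separate, differently purposed test layer.
assert apply_discount(100, is_member=True) == 90.0
assert apply_discount(99.99, is_member=True) == 99.99
assert apply_discount(200, is_member=False) == 200
```

The point is not the discount logic itself but the separation: once the rule stands on its own, the question “What are we testing here, exactly?” has a crisp answer at each layer.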
Proficiency in technology and tools is only half the equation. An understanding of what you are trying to accomplish with a particular test or layer of tests is essential. You need both mastery of tools and vision.
There are tests whose primary purpose is to verify the implementation of an algorithm or to confirm that the API and responsibilities for a unit of code are easy to understand and consume. There are tests that verify a particular requirement is implemented correctly end to end for a deployed system. There are tests that verify performance is acceptable for critical paths of a system. There are even tests that come before implementation, communicating to developers the runnable requirements and what needs to be built.
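The last flavor mentioned above, tests written before the implementation, can be sketched briefly. This is a hypothetical example (the requirement and function name are invented for illustration), showing how a test communicates a runnable requirement to developers:

```python
import unittest

# Hypothetical requirement, expressed as a runnable test before the code
# exists: usernames are normalized by trimming surrounding whitespace
# and lowercasing.

def normalize_username(raw):
    """The implementation, written to satisfy the tests below."""
    return raw.strip().lower()

class NormalizeUsernameTest(unittest.TestCase):
    def test_strips_whitespace_and_lowercases(self):
        self.assertEqual(normalize_username("  Alice "), "alice")

    def test_already_normalized_input_is_unchanged(self):
        self.assertEqual(normalize_username("bob"), "bob")

if __name__ == "__main__":
    unittest.main()
```

Written first, a test like this serves the purpose of communication as much as verification: it tells the implementer exactly what “done” means for this unit.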
The possible combinations of test purpose and technology are nearly infinite. Having an explicit vision of what flavors of tests a particular product needs, and of how you will implement and maintain them over time, is a testing concern that is often overlooked.
A good place to start, before getting into the daily battles of a specific product, is widening your perspective on test strategies and purposes. Brian Marick’s quadrant of test dimensions—business-facing versus technology-facing, supporting the team versus critiquing the product—is well worth a read, as is the detailed treatment of those subjects by Lisa Crispin and Janet Gregory in Agile Testing: A Practical Guide for Testers and Agile Teams. Mike Cohn’s test automation pyramid is also a worthwhile pattern to consider when distributing a product’s tests across different purposes and tools.
Learning new test automation technologies and tools is an excellent endeavor, but our ability to create quality products also requires the vision to understand how best to apply these tools.
Jim Weaver is presenting the session The Software Testing Pyramid: A Concrete Example at STARWEST 2017, October 1–6 in Anaheim, California.