Before Jumping into Software Testing Tools, Get Your Code Straight
Imagine, for a moment, that I have a button that can make testing free and immediate. Test results don’t just appear; they are automatically filed in bug tracking and described perfectly, leaving humans only to triage them. Software testing should be cheap and easy, right?
Except it probably wouldn’t be.
For many of the teams I work with, making test execution free and instant would only deliver a few percentage points of improvement. That’s because most of the time is spent deciding what the priorities are, figuring out what the fixes should be, waiting for people to get around to doing a fix, waiting for a build … then finding a few more bugs and starting the cycle over again.
Even in a magical world where this test tool is free, it would still require special skills to use, and if the user interface changes frequently, the tool will carry a maintenance cost. Skilled programmers can design test tooling that is easy to adapt, yet most of the time, when a team tries to drive the interface as a black box, its programming skills are not strong.
When I’m advising a team about test tools, I typically start at the unit level, with classic test-driven development (TDD) in code. Invariably, the team “can’t” do it: the code is legacy, it is too tightly coupled, and they can’t isolate components.
That’s because they don’t know how. The code is in a mess, and no one knows how to unmess it. Writing automated GUI checks on top of this just adds another layer to the mess. In terms of delivery, the team would be better off learning simple design, refactoring, TDD, isolated components, and other elements of code as craft.
Sadly, the typical response is to have someone “just get started” driving the entire system as a black box. The results are often quick wins, then slowly diminishing value over time. In the best case, the tools are thrown away when there is a major UI change and we start over; in the worst, the team uses the sunk cost fallacy to justify continued investment in the tools. Costs go up while the value delivered might not.
I belong to the context-driven school of software testing, but I’m not opposed to automated tooling; not at all. What I am opposed to is tooling used to cover up failure.
Steve McConnell’s programming book Code Complete contains this relevant quote:
Trying to improve software quality by increasing the amount of testing is like trying to lose weight by weighing yourself more often. … If you want to lose weight, don’t buy a new scale; change your diet. If you want to improve your software, don’t test more; develop better.
Though his book was published in 1993, the idea still rings true today. Better yet, through code as craft, test case creation, and other methods, we actually know how to develop better.
So fix the failures as close to the code as possible. Once failures are low, GUI-driving tools can make a ton of sense.