Capture/Playback: The Vampire That Will Not Die
Years after industry articles by reputable experts proclaimed the death of capture/playback as a viable test automation approach, it has apparently arisen from the dead to suck the life out of another generation of hapless, would-be automators.
Imagine my astonishment when a consultant from one of the largest software companies in the world demonstrated their tool’s ability to capture manual steps and record them into a script as though it were some sort of radical breakthrough. Even worse, management was impressed!
Unfortunately, it was not a one-off. In the past several weeks, I have been asked more than once to comment on using capture/playback for test automation (aka point-and-click or record/replay). I don’t know about you, but I find it profoundly depressing that we cannot put a stake in the heart of this capture/playback vampire once and for all.
But, I am willing to try—again.
Capture/playback is based on the seductive idea that tests can be automated by simply capturing the manual process into script code and then replaying it. It is attractive because it implies that no special skills are required and no particular change in planning or process is needed. It creates the illusion that automation is only a few clicks away. What's not to like?
Everything. Recorded scripts are inherently unstable. The smallest change or error in the application or the data will cause the test to break, and playback timing is unreliable as systems run slower or faster from time to time. As a result, users spend more time than they save trying to supervise and debug script execution. My favorite quote from a disillusioned tester is “I used to test my application, now I test my scripts.”
Furthermore, recorded scripts embed the data as well as the steps. If a hundred transactions are created, the same steps will be recorded hundreds of times. Later, when changes are made to any part of the transaction, they will require hundreds of script changes. And, since the scripts are not inherently structured or documented, they are almost impossible to maintain.
The fact is that we test software because it changes, and if you can’t efficiently update the scripts, then the central value of automation, repeatability, is lost. While these issues can be resolved by externalizing data, substituting variables, adding logic, and handling errors, all of those fixes require writing code.
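The difference between a recorded script and an externalized, parameterized one can be sketched in a few lines of Python. This is an illustrative toy, not any real tool's output: the step tuples, field names, and sample data are all hypothetical, standing in for whatever a capture/playback tool would generate.

```python
# A "recorded" script: every step AND every data value is baked in.
# Record a hundred transactions and you get a hundred copies of this,
# each of which must be edited by hand when the transaction changes.
def recorded_transaction():
    return [
        ("click", "New Order"),
        ("type", "customer", "Acme Corp"),   # hard-coded test data
        ("type", "quantity", "100"),
        ("click", "Submit"),
    ]

# The maintainable alternative: one parameterized script, with the test
# data externalized. A change to the transaction is now a one-line fix.
def data_driven_transaction(customer, quantity):
    return [
        ("click", "New Order"),
        ("type", "customer", customer),
        ("type", "quantity", str(quantity)),
        ("click", "Submit"),
    ]

# External data source (in practice, a file or table rather than a literal).
orders = [("Acme Corp", 100), ("Globex", 25)]
scripts = [data_driven_transaction(c, q) for c, q in orders]
```

Note that getting from the first form to the second is exactly the "writing code" the sales pitch promised you could skip.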
And, there is the trap. An approach that is sold as an easy, low-tech alternative inevitably leads to the need for challenging, high-tech intervention to achieve any semblance of stability or maintainability. But by the time you finally realize this, management is frustrated that automation has failed to deliver as expected and is not easily persuaded to invest more time and resources to make it actually work.
In the end, it is a lose-lose proposition. Capture/playback ultimately leaves you worse off than when you started. Not only do you waste time, money, and effort without receiving the promised benefits of automation, you also sacrifice your credibility and lose the opportunity to truly achieve results. And that, in my humble opinion, is a real horror story.