Five Misconceptions about Test Automation

Over the years, I have heard many valuable opinions about test automation, and I want to give my own twist on some of them. Let me start by explaining my view of automated testing. I pioneered keyword-driven testing, work that culminated in a method called action-based testing, which is supported by (but not dependent on) a product called TestArchitect.

In action-based testing, the focus is on a modular approach: tests are organized into test modules, each with a clear and differentiated scope. The test cases in a module consist of sequences of "action lines," each starting with an action keyword (an "action" for short), optionally followed by one or more arguments. In action-based testing, automation efforts focus on automating the actions rather than the tests. What I've learned over the years is that using actions shifts the focus away from technology and places it on test design as the key driver of automation success. In other words, a clever automation engineer alone is not enough to fix a situation where tests are poorly designed.
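To make the structure concrete, here is a minimal sketch of action lines and their automated actions, written in Python purely for illustration. The action names, the test module, and the `log` of performed steps are all hypothetical; a real tool such as TestArchitect is far richer than this.

```python
log = []  # records what the automation did, so we can inspect it

# Each action is automated once and reused by every test module.
def enter(field, value):
    log.append(f"enter {field}={value}")     # would type into the UI

def click(control):
    log.append(f"click {control}")           # would click a control

def check(field, expected):
    log.append(f"check {field}={expected}")  # would read the UI and compare

ACTIONS = {"enter": enter, "click": click, "check": check}

# A test module: action lines, each an action keyword plus arguments.
login_module = [
    ("enter", "username", "jdoe"),
    ("enter", "password", "s3cret"),
    ("click", "login"),
    ("check", "title", "Welcome"),
]

def run(module):
    # The interpreter dispatches each action line to its implementation:
    # we automate the actions, not the individual tests.
    for action, *args in module:
        ACTIONS[action](*args)

run(login_module)
```

Note the division of labor this enables: test designers compose action lines, while automation engineers implement each action once.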

When I talk about automation, I usually mean functional testing through the UI. Non-UI automation tends to be easier and inherently more stable, and it should always be considered in addition to UI automation. Here are some examples of what I would call test automation "misconceptions," though take them with a grain of salt.

Good automated testing is automating good manual tests.
Manual tests typically aren't written with automation in mind. They can be verbose and lack a clear scope, resulting in numerous detailed steps and checks. For a manual tester, that is not a big problem, but an automation engineer can have a hard time turning them into clear and reusable scripts and functions. When the application under test changes, manual testers can usually cope, but automation may falter. It is best to follow an integrated approach in which tests and automation are designed together to achieve a good result.

Automated tests are always dumb.
A common impression is that automated tests are by definition "dumb," compared, for example, to exploratory testing. A common way to put this is that automated tests are checking, not testing. However, automated tests are only as dumb or as smart as the test designer makes them. Just as a poor craftsman shouldn't blame his tools, automation is not an excuse for not writing smart tests. Tests can be developed in an exploratory way, even if what you are exploring is the business under test rather than the UI interaction alone. Automation can also help with testing variations of unexpected values to see whether a system can handle them.
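As an illustration of that last point, here is a sketch (Python for illustration only; `parse_quantity` is a hypothetical stand-in for any input-handling routine in a system under test) of automation sweeping variations of unexpected values that would be tedious to try by hand:

```python
def parse_quantity(text):
    """Accept a positive integer quantity; reject everything else."""
    if not text.strip().isdigit():
        raise ValueError(f"not a quantity: {text!r}")
    value = int(text)
    if value == 0:
        raise ValueError("quantity must be positive")
    return value

# Unexpected variations: empty, whitespace, negative, zero, huge,
# scientific notation, words, decimals.
odd_inputs = ["", "   ", "-1", "0", "999999999999", "1e3", "ten", "3.5"]

results = {}
for text in odd_inputs:
    try:
        results[text] = parse_quantity(text)
    except ValueError:
        results[text] = "rejected"
```

A smart test here is not the loop itself but the choice of variations, which is exactly where the test designer's judgment comes in.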

Automation is a technical challenge.
Regarding automation as a technical task risks underestimating the role test design plays in achieving manageable and maintainable automation. If tests are written as long sequences of detailed steps without a clear scope or structure, even a skilled automation engineer will have a hard time fixing that. An automation effort will encounter technical challenges, but they tend to be at the level of individual keywords. Another element in ensuring automation success is the application's development: if developers provide enough hooks for tasks like identifying controls or timing operations, automation can be developed more quickly and will be more stable.
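To illustrate what a timing hook buys you, here is a sketch (all names hypothetical) in which the automation polls a readiness signal the application exposes, instead of relying on fixed sleeps. A real application might expose such a hook as a status flag, a DOM attribute, or a test-only endpoint.

```python
import time

def wait_until(condition, timeout=5.0, interval=0.05):
    """Poll a condition until it holds or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return False

# Simulated application that finishes loading after a short delay.
start = time.monotonic()

def app_is_ready():
    # Stand-in for a developer-provided readiness hook.
    return time.monotonic() - start > 0.2

ok = wait_until(app_is_ready, timeout=2.0)
```

Polling a hook like this makes scripts both faster (no padding for the worst case) and more stable (no failures when the application is briefly slow).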

Return on investment should decide what tests to automate.
This is one of the most common statements about test automation. However, action-based testing shifts the focus to a more important question: what is the return on investment of a test? Once you have decided that a test is worth developing, and you develop it with actions, you might as well automate it, since what you're automating are the actions. My rule is: automate every test you write down.

Keywords will solve all your test automation problems.
Keywords are not a magic wand; some of the worst automation projects I have seen were keyword-based. Keywords provide a convenient interface for non-technical users and encourage abstraction from unneeded details. But keywords alone don't help if you don't pay attention to how you organize your tests. In fact, keywords work as an amplifier: good practices pay off more, and bad practices have more dire consequences than they would without keywords.

We can differ on which of the above statements are true or false. The important thing is not to underestimate automated testing. It can be complex and needs our attention, but we should not be afraid of it, either. A well-thought-out approach and cooperation as a team can lead to great results.
