Scalability in Automated Testing

Automation for functional testing is usually introduced to make test coverage more efficient and to speed up the process. The idea is simple enough: The ability to execute automated tests regularly—for example, as a step in a build process—will save time and help control quality.

However, functional test automation is challenging, particularly when used at a large scale. An ad hoc approach may work for twenty test cases, but for two thousand or twenty thousand, you need a structured approach, especially if the application under test is complex and needs sophisticated testing. I will be teaching about this kind of large-scale testing at the upcoming STARWEST conference in Anaheim in October, and I want to share some thoughts here.

For automated functional testing to be scalable, there are certain items you need to look at:

  • The organization and design of your tests
  • How you organize the process and how the various players cooperate
  • Automation and software under test designed for testability and stability
  • Commitment from management

When I’m asked to discuss scalable automation with a customer, I first look at the test design. Obviously, test design plays a large role in how effective tests are at helping ensure quality. What may be less obvious is that test design also largely determines whether automated tests are maintainable.

My key observation is that test automation is not so much a technical challenge as it is a test-design challenge. In our action-based testing approach, we focus on two things: the use of test modules to group the tests and the use of keyword-based actions to describe individual tests. I wrote a longer article about this in a past issue of Better Software magazine.
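To make this concrete, here is a minimal sketch of how keyword-based actions might map to automation code. The action names (enter, check), the ActionRunner class, and the sample test lines are hypothetical illustrations of the general idea, not the actual action-based testing framework.

```python
# Minimal sketch of a keyword-driven test layer (hypothetical names, not a
# specific framework). Each line of a test module is an action keyword plus
# arguments; the runner dispatches it to an implementation function, so the
# test modules stay readable and independent of technical details.

class ActionRunner:
    def __init__(self):
        self.actions = {}

    def register(self, keyword):
        """Register a function as the implementation of a keyword."""
        def decorator(func):
            self.actions[keyword] = func
            return func
        return decorator

    def run(self, keyword, *args):
        """Execute one action line from a test module."""
        if keyword not in self.actions:
            raise KeyError(f"Unknown action: {keyword}")
        return self.actions[keyword](*args)


runner = ActionRunner()

@runner.register("enter")
def enter(field, value):
    print(f"Entering '{value}' into field '{field}'")  # would drive the UI here

@runner.register("check")
def check(field, expected):
    print(f"Checking that '{field}' shows '{expected}'")  # would read the UI here


# A "test module" expressed as action lines, grouping related tests:
test_lines = [
    ("enter", "first name", "John"),
    ("enter", "last name", "Doe"),
    ("check", "full name", "John Doe"),
]

for line in test_lines:
    runner.run(line[0], *line[1:])
```

The point of the separation is maintainability at scale: when the application changes, only the action implementations need updating, while the thousands of test lines that use them stay the same.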

Another major factor in success or failure is how well the QA effort is embedded in the organization's overall processes. Agile, applied well, brings developers and testers together and fosters cooperation with end users and domain experts, who can be crucial for good testing. Generally, if developers, testers, automation engineers, and other parties work well together, you can get close to the grand prize of agile functional testing: keeping up with the developers.

Though technology is often not the main driver for automation success, it is still critical to making automation stable. The main culprits when things don’t run well tend to be interfacing with the application under test and timing. For interfacing, the UI interaction needs the most attention; it’s best to invest in designing UI mappings, something developers can help with by exposing and identifying properties for UI elements. Timing often becomes unstable because of hard-coded wait times; it’s better to wait for observable conditions, as the sketch below illustrates.
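Here is a minimal sketch of a UI map combined with condition-based waiting, using Selenium WebDriver in Python as one possible toolkit. The logical names, locators, and URL are hypothetical; the idea is that tests refer to stable logical names and wait for observable conditions rather than fixed sleep times.

```python
# Sketch of a UI map plus condition-based waiting (hypothetical element names).

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# UI map: logical names the tests use, mapped to locators that developers can
# help keep stable (for example by adding IDs or accessibility attributes).
UI_MAP = {
    "login.username": (By.ID, "username"),
    "login.password": (By.ID, "password"),
    "login.submit":   (By.ID, "submit-button"),
    "home.welcome":   (By.CSS_SELECTOR, ".welcome-banner"),
}

def wait_for(driver, logical_name, timeout=10):
    """Wait for an observable condition (element visible) instead of sleeping."""
    locator = UI_MAP[logical_name]
    return WebDriverWait(driver, timeout).until(
        EC.visibility_of_element_located(locator)
    )

def login(driver, user, password):
    wait_for(driver, "login.username").send_keys(user)
    wait_for(driver, "login.password").send_keys(password)
    wait_for(driver, "login.submit").click()
    # Wait for a condition that proves the action completed, not a fixed time:
    wait_for(driver, "home.welcome")

# Usage (assumes a local browser driver and a reachable application):
# driver = webdriver.Chrome()
# driver.get("https://example.test/login")
# login(driver, "jdoe", "secret")
```

If a locator changes, only the UI map needs updating, and because every interaction waits on a visible condition, the tests tolerate variations in response time without hard-coded delays.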

Commitment from management is a key factor in the success or failure of many processes in organizations. Business and IT managers can assess testing and automation by looking specifically at business value: What is it worth to have tests that are effective and efficient? That value usually comes down to time to market and quality. Tests should help relieve the critical path, giving you flexibility, and be meaningful, finding important problems before they can cause damage.

To be successful, creating and executing large-scale automated tests requires a thoughtful approach to the elements of the process and a close eye on test design. But even small improvements can make significant differences.

I’m very interested in your experiences and insights. Leave a comment below—or maybe we can continue the discussion in person at STARWEST.

Hans Buwalda is presenting the tutorials The Challenges of BIG Testing: Automation, Virtualization, Outsourcing, and More and Introducing Keyword-Driven Test Automation at STARWEST, from October 12–17, 2014.
