
Using Real World Data in Testing

The world of software has gotten faster in the past few years. We went from needing CDs to update software to downloading updates over the Internet, and even that is steadily giving way to automatic updates pushed to users without any effort on their part. Major revamps are rarer these days; instead, companies are opting for small, continuous changes. To accommodate this trend, companies are turning to continuous development, integration, and deployment, as well as testing in production (TiP).

In November, The Guardian launched a new version of its mobile website and adopted continuous deployment. Within ten days of launch, more than 100 deployments had been made to the live environment, averaging about eleven per day, according to Andy Hume’s chronicle of the team’s work on The Guardian’s development blog. Hume wrote:

When you’re releasing code multiple times a day, you don’t have time for full regression tests. Running a full set of integration tests across all browsers can take many minutes, if not hours. When we merge code to the master branch, we run a full set of unit tests on the Scala and JavaScript codebase, as well as check the output of some key application endpoints in a headless browser. These take five to 10 minutes to run. If they pass, the code is automatically deployed to a continuous integration environment. Developers can sanity check their changes in this environment, and if they’re happy (and with the conscience of the team on their shoulders), can immediately deploy to production.
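Hume’s checks run in a headless browser against The Guardian’s own services, but the shape of such a deployment gate is easy to sketch. Below is a minimal illustration in Python, not the team’s actual tooling; the URLs, expected page markers, and timeout are hypothetical.

# Minimal sketch of a pre-deploy endpoint smoke check, in the spirit of
# the gate Hume describes. URLs and markers are hypothetical; the real
# checks run in a headless browser against The Guardian's endpoints.
import sys
import urllib.request

# Hypothetical key endpoints and a string each response must contain.
CHECKS = {
    "https://ci.example.com/": "<title>",
    "https://ci.example.com/uk/sport": "<title>",
}

def check(url: str, marker: str) -> bool:
    """Fetch the URL; pass only on HTTP 200 plus the expected marker."""
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            body = resp.read().decode("utf-8", errors="replace")
            return resp.status == 200 and marker in body
    except OSError as exc:  # URLError and HTTPError both derive from OSError
        print(f"FAIL {url}: {exc}")
        return False

if __name__ == "__main__":
    results = {url: check(url, marker) for url, marker in CHECKS.items()}
    for url, ok in results.items():
        print("PASS" if ok else "FAIL", url)
    # A non-zero exit code is what would block the automatic deploy in CI.
    sys.exit(0 if all(results.values()) else 1)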

Though it may seem counterintuitive to release a product with less up-front testing, this practice allows teams to move quickly. The key, says Seth Eliot, TiP expert and senior knowledge engineer of test excellence at Microsoft, is to understand that testing in production isn’t skipping QA.

“Testers hear ‘Testing in Production’ and remember their worst nightmare: dev bypassing test to throw cruddy code to users. But to be TiP it has to be a conscientious strategy aware of risks and utilizing mitigations,” Eliot said in a Testing the Limits interview.
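One mitigation in the spirit of what Eliot describes is exposure control: route only a small, stable slice of real users to the new code path and widen that slice as confidence grows. The sketch below is a generic illustration of the idea, not Microsoft’s implementation; the user ID, feature name, and rollout percentage are hypothetical.

# Illustrative exposure-control (canary) gate: deterministically route a
# small percentage of users to new code. The inputs are hypothetical.
import hashlib

def in_canary(user_id: str, feature: str, rollout_percent: int) -> bool:
    """Return True if this user falls inside the rollout bucket.

    Hashing (feature, user) yields a stable bucket in [0, 100), so each
    user sees a consistent experience and raising rollout_percent only
    ever adds users to the canary.
    """
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    return int(digest[:8], 16) % 100 < rollout_percent

# Example: expose a hypothetical new renderer to 5 percent of users.
if in_canary("user-42", "new-renderer", 5):
    print("serve the new code path")
else:
    print("serve the existing code path")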

Enough testing should still be done before going live to catch show-stopping bugs, but after release, monitoring live interactions helps teams flag issues they might not have seen before. In a post on the AppSignal blog, App Man writes:

Based on my experiences and customer interaction, I’d strongly argue that testing in development isn’t enough. At the very least, it’s certainly not an insurance policy for deploying an application in production. When a Formula 1 team designs a car in a wind tunnel and tests it on a simulator pre-season, they don’t assume that the performance they see in test will mirror the results they see in a race.

Monitoring new releases mimics in-the-wild testing and allows teams to cover a wider matrix of use cases. This is particularly helpful because covering the full array of hardware and software combinations in the lab is extremely time-consuming and costly. Real-time data is the key to ensuring your newest release is successful.
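One way to act on that real-time data is a release monitor that compares the live error rate against a pre-release baseline and signals when a rollback is worth considering. The following toy sketch assumes a baseline error rate, window size, and tolerance chosen purely for demonstration.

# Toy post-release monitor: flag a deploy when the live error rate climbs
# well above the pre-release baseline. Thresholds and window size are
# assumptions for illustration, not values from the article.
from collections import deque

class ReleaseMonitor:
    def __init__(self, baseline_error_rate: float, window: int = 1000,
                 tolerance: float = 2.0):
        self.baseline = baseline_error_rate
        self.tolerance = tolerance            # allowed multiple of baseline
        self.outcomes = deque(maxlen=window)  # sliding window of requests

    def record(self, ok: bool) -> None:
        """Record one live request outcome (True = success)."""
        self.outcomes.append(ok)

    def healthy(self) -> bool:
        if len(self.outcomes) < self.outcomes.maxlen:
            return True  # not enough live traffic yet to judge
        error_rate = self.outcomes.count(False) / len(self.outcomes)
        return error_rate <= self.baseline * self.tolerance

# Example: feed live outcomes in; act when the release turns unhealthy.
monitor = ReleaseMonitor(baseline_error_rate=0.01)
for ok in (True,) * 995 + (False,) * 5:
    monitor.record(ok)
print("healthy" if monitor.healthy() else "consider rolling back")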

Read more on the uTest Software Testing Blog.
