As long as there has been software, there has been software testing. As the industry matured, it segmented testing into various schools and methods: manual vs. automated, in-house vs. outsourced, emulators vs. remote access. In every case, though, these innovations took place inside the confines of the QA lab. Starting to see the problem?
When companies wanted to improve their testing, they did so within this somewhat sterile environment. And yet, even with millions spent on QA in all of its forms, organizations continue to miss deadlines, exceed budgets, and ship products that fail to work as designed in the hands of actual users.
The problem has little to do with in-the-lab testing practices, methodologies, or budget. Rather, there is a fundamental link missing in the QA chain: in-the-wild testing. After all, users consume applications:
- In adverse, unpredictable, and widely varied environments
- With outdated browsers, plug-ins, and third-party apps
- On varied hardware and devices
- With imperfect connectivity on both Wi-Fi and mobile networks (labs can only approximate this; see the sketch after this list)
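Labs do try to simulate some of these conditions. As a rough illustration, the Python sketch below uses Selenium to throttle bandwidth and latency so a test run feels like a weak mobile connection. The URL is a placeholder, and set_network_conditions is a Chromium-only capability, so treat this as a sketch of lab-side emulation rather than a definitive recipe.

```python
# Sketch: emulating imperfect connectivity in the lab with Selenium.
# set_network_conditions is specific to Chromium-based drivers, and the
# URL below is a placeholder, not a real endpoint.
from selenium import webdriver

driver = webdriver.Chrome()
try:
    # Roughly approximate a weak mobile connection.
    driver.set_network_conditions(
        offline=False,
        latency=300,                     # added round-trip latency, in ms
        download_throughput=500 * 1024,  # ~500 KB/s down
        upload_throughput=250 * 1024,    # ~250 KB/s up
    )
    driver.get("https://example.com/app")  # placeholder URL
    print(driver.title)
finally:
    driver.quit()
```

Note what this emulation cannot do: it dials in one connectivity profile on one clean machine, while a real user combines a flaky network with outdated plug-ins, background apps, and aging hardware all at once.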
The only way to launch apps that consistently work in the hands of users is to move a portion of testing out of the lab and into the wild. This means involving professional testers, on real devices, operating under real-world conditions.
What makes in-the-wild testing distinct from testing methods discussed earlier? Here are a few key differentiators:
- Mirror real-world conditions: While this attribute pertains to all testing types, it is perhaps most applicable to usability and localization testing. Suppose your target users are mothers, ages 35-45, who live in Latin America. By moving your testing into the wild, with handpicked testers who match your exact demographics, you get a much clearer picture of how your target users will respond to your application.
- Identify fringe use cases: When testing a web application, for instance, it’s fairly common to have your QA team verify the app’s functionality across all major browsers (a typical in-lab matrix check is sketched after this list). But what about the third-party software, such as antivirus tools and plug-ins, that exists on your users’ hardware but not on your QA team’s machines? With in-the-wild testing, you get insight into the unusual use cases that can lead to big problems post-launch.
- Test on-demand: In-the-wild testing is designed to be used where and when you need it most, requiring very little setup time. This benefits companies whose QA requirements change frequently (particularly those adhering to an agile framework).
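To make the contrast in the fringe-use-case point concrete, here is a minimal sketch of the kind of in-lab browser matrix check most QA teams already run. It assumes Selenium with Chrome and Firefox installed locally; the URL and the expected page title are placeholders, not real values.

```python
# Sketch: a typical in-lab cross-browser smoke test with Selenium.
# The URL and the expected title are placeholders for illustration.
from selenium import webdriver

BROWSERS = {
    "chrome": webdriver.Chrome,
    "firefox": webdriver.Firefox,
}

def smoke_test(make_driver) -> bool:
    """Load the app and check that the landing page renders."""
    driver = make_driver()
    try:
        driver.get("https://example.com/app")  # placeholder URL
        return "My App" in driver.title        # placeholder title check
    finally:
        driver.quit()

for name, factory in BROWSERS.items():
    status = "PASS" if smoke_test(factory) else "FAIL"
    print(f"{name}: {status}")
```

Every run of this script happens on clean, controlled machines. In-the-wild testing inverts that assumption: the same checks are performed on devices already loaded with the third-party software your lab will never enumerate.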
In today’s world of pay-as-you-go products, any software bug that makes it in front of your users will immediately depress usage, dragging revenue down with it. Beyond these financial incentives, in-the-wild testing offers other key benefits:
- Improved app quality: By testing in the wild, a development team can receive a list of bugs three to four weeks before those bugs would normally have surfaced, leaving more time to launch a finished product.
- Increased tester diversity: In-the-wild testing gives you the opportunity to offset the groupthink that often plagues internal QA teams. This is particularly helpful for usability testing, where you can involve testers who are totally unfamiliar with your product.
- Improved efficiency: Because it is an on-demand solution, crowdsourced in-the-wild testing alleviates the crunch of peak release periods by engaging a community of testers exactly when you need them.
While new to some, in-the-wild testing is an established practice inside many of the world’s most successful companies, including Microsoft, T-Mobile, and Google.
The trend is clear: Companies of all shapes and sizes are leveraging in-the-wild testing to ensure a higher level of quality. Those who ignore this real-world component do so at their own risk. Choose wisely!