When User Acceptance Testing Isn’t | TechWell


Examining user acceptance testing

User acceptance testing (UAT) has been coming up a lot lately in conversations with clients, and I’m amazed that what most organizations call user acceptance testing just plain isn’t. 

Here’s a sampling of what I’m hearing:

  • We have our UAT team run our test scripts.
  • We have our UAT team run their test scripts.
  • Our UAT is run in parallel with our system testing, so they find lots of bugs.
  • Our UAT team is staffed with testers, not users. They were users … long ago.
  • UAT is a useless formality. They just rerun our test cases.
  • UAT is happening, but big bugs still get reported immediately after rollout.

So if these activities aren’t really UAT, what is? Essentially, UAT means the software is tested in the "real world" by its intended audience.

It’s clear why the above descriptions I’ve heard from clients signal to me that they aren’t doing UAT correctly. UAT done by testers is not being done by the intended audience; UAT done by former users is not representative of the current audience; and UAT done in unrealistic test environments does not mimic the “real world.”

But beyond that, what else is troubling about those descriptions?

For one, user acceptance testers are running test scripts. What? Do end-users run scripts daily? Do they follow a set of instructions in order to do their tasks? If not, how is the UAT representative of intended usage?

If these testers are running another team's tests, what is the purpose? Are you low on staff and just borrowing resources to complete testing on time? If so, it's not UAT.

And if UAT isn't finding obvious issues that matter to the intended audience, what is the point?

Why does this matter? Well, if you claim to have a UAT team, and the stakeholders believe it is doing UAT but it isn’t, that seems risky to me. There’s an expectation of finding things that matter to the end customer, sign-offs representing fulfillment of acceptance criteria, and an associated comfort level that the team has succeeded in its mission. Why wouldn’t stakeholders be frustrated when phone calls start streaming in hours to days after deployment with customers complaining about critical issues that could have been caught earlier?

Here’s another way to think of acceptance testing in general. Imagine you’re developing custom software for a customer under contract. You have delivered the software, and the customer needs to determine whether they should pay you (i.e., “accept” the software). The customer would test the key things that determine whether their needs are met, the contract is fulfilled, and the software provides the required functions. If so, voila! You have a check in hand. Essentially, acceptance testing is a way to “gate” a hand-off. This is how many decision-making stakeholders view UAT.

How do I describe UAT? Simply put, it’s an end customer doing their job.

How do you describe UAT? What do you expect out of it? Are you getting it? If not, what are you going to do about it?
