Great Testing Comes from Great Questions
I like to see testing as a game we play with our software. The game is all about gathering information, and the most direct way to gather information is by asking questions. So, let's draw an analogy between testing software and conducting an interview with the software.
The more questions we ask (tests we perform), the more answers we receive (information we gain). But we don't just want to pose some questions; we want to pose useful questions. Useful questions lead to answers that contain useful information, and useful information helps us judge the value of our software, which in turn enables our stakeholders to make better decisions.
We all know that proper preparation prevents poor performance, so it's a good idea to come up with questions up front. You might also start with some simple warm-up questions you already know the answers to, in order to check some "facts" (outputs). After all, you know that there are no facts, only interpretations.
You'll start realizing that the next question you ask is influenced by the answer to the last one. So your interview turns into an exploratory activity, because you couldn't think of all the useful questions ahead of time. Eventually, you'll switch back and forth between exploring your software to discover new information and checking facts to confirm your existing beliefs. The interview ultimately becomes a good mix of creative exploration, experimentation, investigation, and mechanical checking, just like testing itself.
Posing useful questions, interpreting the answers you receive, drawing conclusions from these answers, forming new useful questions from the answers you received, learning from these answers, describing what you've learned, and preparing your learnings for your stakeholders are just some of the hard problems in testing.
Hard problems usually demand more human skills, such as empathy and critical thinking, than easy problems do, because they require much more thinking. Thinking makes us uncomfortable; it means fighting through confusion, which takes effort. That's why it's appealing to stick with what you already know, and not just in testing. Consequently, there is a good chance that people overfocus on the easy problems in testing.
Still, “easy” doesn't mean unimportant. Known "facts" often need to be checked again and again, because we can't trust our interviewee (the software). Our software and the domain it is situated in constantly evolve, often over a very short period. So, it's our aim to automate the process of checking to make it less error-prone and faster, and to free humans from this dreary and uninteresting task.
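The kind of fact-checking described above can be sketched as a tiny automated check: we re-ask a question whose answer we believe we already know, so any drift in the software's behavior surfaces immediately. The shopping-cart function and its expected totals here are hypothetical, invented purely to illustrate the idea.

```python
# A minimal sketch of automated checking: warm-up questions whose
# answers we think we already know, re-asked on every run.
# The cart_total function and its expected outputs are hypothetical.

def cart_total(prices, discount=0.0):
    """Sum item prices and apply a flat discount rate."""
    subtotal = sum(prices)
    return round(subtotal * (1.0 - discount), 2)

def check_known_facts():
    # Each assertion encodes one "fact" we want to keep confirming,
    # because the software and its domain keep evolving.
    assert cart_total([]) == 0.0
    assert cart_total([10.0, 5.5]) == 15.5
    assert cart_total([100.0], discount=0.25) == 75.0
    print("all known facts confirmed")

if __name__ == "__main__":
    check_known_facts()
```

Checks like these are cheap to run repeatedly, which is exactly why they are worth automating: the machine handles the dreary re-asking, and the humans keep the interesting questions.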
Easy problems can be described algorithmically far more readily than hard problems, and they are also simpler to understand. Hard problems are hard to automate; easy problems are easy to automate. This is probably why testing is often reduced to automated checking.
This is also why automation sells in software testing. If you can automate the easy problems, it gives testers more time to tackle the hard problems.
So, remember: Doing excellent testing is one thing; selling it to management is another.