Can We Ever Find All Bugs?
I am convinced we can't answer the question "Did we find all bugs?" This may not be what testers want to hear, but I have reasons for this belief.
Edsger Wybe Dijkstra once said that testing can prove the presence of bugs, but not their absence. This sounds reasonable, so I take this statement as my axiom.
When you accept this axiom as true, the problem of finding all bugs is undecidable by definition: there can be no algorithm that proves all bugs have been found, and you can't quantify the bugs that were never found. This, in turn, implies that testing itself is undecidable.
Now, let's challenge this conclusion. Stephen Brown did that; he once asked me, "Is retesting a fixed bug not proving its absence, if the retest passes?"
That's indeed a tough one. Let's go back to Dijkstra's statement. I think what Dijkstra was trying to say was that you can't prove the absence of bugs that are unknown, because if they are known, they wouldn't be absent. I would say this also holds true for known bugs. You can only check—not prove—the absence of known bugs to some degree after these bugs have been fixed.
Michael Bolton once explained it this way: Retesting after a bug is presumed to have been fixed starts with an attempt to see if the same conditions trigger the same observations. This already assumes that we believe we're aware of all the important or essential conditions that trigger the bug. When the problem that was caused by this bug appears to be gone, we have some evidence to support the hypothesis that the bug is fixed.
Then, if we want to test well, we perform other testing, varying some of those conditions, in an attempt to disprove the hypothesis that the bug has been fixed. If we can't disprove that hypothesis, we take that as evidence to support that the bug has been fixed. However, we haven't verified that the bug has been fixed; we have only verified that those tests didn't find the bug this time under certain circumstances. This means we can't verify our software—we can only falsify our software through testing.
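The two-step process above can be sketched in code. This is a minimal illustration, not a real testing framework: the function `average`, the bug it supposedly had (a crash on empty input), and the "fix" are all hypothetical, invented here to show retesting the original conditions and then varying them in an attempt to falsify the hypothesis that the bug is fixed.

```python
# Hypothetical scenario: a bug report said average([]) crashed with
# ZeroDivisionError, and a fix was shipped. Retesting starts from the
# original reported conditions, then varies them.

def average(values):
    """The 'fixed' function: returns the mean, or 0.0 for empty input."""
    if not values:  # the fix: guard the condition that originally crashed
        return 0.0
    return sum(values) / len(values)

def retest_original_condition():
    # Step 1: reproduce the exact conditions from the bug report.
    # A pass is evidence for the hypothesis "the bug is fixed".
    return average([]) == 0.0

def vary_conditions():
    # Step 2: vary the conditions, attempting to falsify that hypothesis.
    # Every passing check is still only evidence, never proof of absence.
    checks = [
        average([1, 2, 3]) == 2.0,  # ordinary input still works after the fix
        average([0, 0]) == 0.0,     # zeros in the data, not just an empty list
        average([-1, 1]) == 0.0,    # negative values
    ]
    return all(checks)

if __name__ == "__main__":
    print(retest_original_condition() and vary_conditions())  # prints: True
```

Even when both functions return True, all we know is that these particular checks didn't find the bug this time, under these circumstances, which is exactly the asymmetry described above.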
Bear in mind that there might be new bugs introduced by the fix. There might be weird conditions that the fix doesn't cover. And there might have been bugs all along that none of our testing found. So, Dijkstra still wins.
Basically, this means there ain't no such thing as quality assurance. In the same way a doctor can't assure health, we can't assure quality through testing. We can't even assure that all bugs have been found, and finding and fixing bugs is just a minuscule dot on a minuscule dot on a minuscule dot in the space of quality assurance.
So, quality assurance is not a goal; it's a never-ending process of continuous improvement. What we do in testing is more like quality assistance. We provide information that helps people make better decisions about the things that might threaten the value of our software products.
That's testing. It's an information service, not an automation service.