The Ethical Responsibility of Defect Severity Classification
A couple of months ago, I attended a meeting where a batch of Severity 1 (Showstopper) defects was mysteriously recategorized and downgraded so that go-live criteria could be met. It was a tired exercise; we’ve all seen it before. This week, I had a much more sobering experience.
Reviewing a different project, I found defect categories for go-live criteria identified as:
- Level 1: Defect results in a complete failure of the system
- Level 2: Defect does not result in complete failure of the system, but causes the system to produce incorrect, incomplete, or inconsistent results, or the defect impairs the system’s usability
- Level 3: The defect does not result in complete failure or impair usability, and the desired results are obtained by working around the defect
The criterion for go-live was zero Level 1 defects. Here’s my concern: These categorizations were crafted by pencil pushers and lawyers trying to describe what was acceptable for go-live from a system perspective, without concern for the business and human implications of the errors.
Without going into details, incorrect results in this particular system—that would only be classified as Level 2 defects—could literally kill people. Does that seem like a big deal to you? According to the contract I’m reading, it appears that as long as there are no Level 1 defects outstanding, the system would go live and the Level 2 defects would be prioritized in the maintenance backlog.
Please, let’s all remember that some software does very important work. If, as a professional, you want to play fast and loose with defect classification for your iPhone game, fine. It could cost your business its reputation if the game is too buggy, but I understand market pressure for timely shipment.
On the other hand, if your system does safety-critical work and people could sustain significant financial loss or be harmed or killed, make sure that the business and human impacts of errors are front and center when discussing whether a system is ready to be released. It’s not that executives are stupid; it’s that they are often too far removed from the implications of defects to have an informed opinion. Consequently, you must ensure they are informed.
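One way to keep human impact front and center is to make it an explicit, separate axis of the defect record rather than something folded into a single severity number. This is a minimal sketch of that idea, not any project's actual scheme; all names (`TechnicalSeverity`, `HumanImpact`, `blocks_go_live`, the defect ID) are hypothetical:

```python
from dataclasses import dataclass
from enum import IntEnum

class TechnicalSeverity(IntEnum):
    LEVEL_1 = 1  # complete failure of the system
    LEVEL_2 = 2  # incorrect/incomplete/inconsistent results, or impaired usability
    LEVEL_3 = 3  # desired results obtainable by working around the defect

class HumanImpact(IntEnum):
    NONE = 0       # no meaningful real-world consequence
    FINANCIAL = 1  # significant financial loss possible
    SAFETY = 2     # people could be harmed or killed

@dataclass
class Defect:
    defect_id: str
    severity: TechnicalSeverity
    impact: HumanImpact

def blocks_go_live(defect: Defect) -> bool:
    """Block release on technical Level 1 defects, AND on any defect
    whose human impact is safety-critical, regardless of its level."""
    return (defect.severity == TechnicalSeverity.LEVEL_1
            or defect.impact == HumanImpact.SAFETY)

# A Level 2 defect that could kill people still blocks go-live here,
# even though a severity-only rule would wave it through:
dosage_bug = Defect("DEF-1042", TechnicalSeverity.LEVEL_2, HumanImpact.SAFETY)
print(blocks_go_live(dosage_bug))  # True
```

The point of the second axis is that a contract written only against `TechnicalSeverity` cannot express the distinction this article is about; the go-live rule has to consult impact explicitly.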
That is our most important job as professionals. We are supposed to be the bridge between the bits and their meaning to the organizations that hire us. That is an ethical duty we need to take very seriously. It’s what we expect from the QA guys who review the flight-control software we rely on when we put our loved ones on an airplane, or who review the software that runs life-sustaining medical equipment at a hospital.
We have a responsibility to support informed decision-making about whether a system is ready for prime time. It is a risk-based decision, but we must ensure that the people making it know what the risks mean in human and business terms, not jargon. Don’t be the QA professional on a project that decides to go live because there are no Severity 1 defects, even though known Severity 2 defects can have disastrous consequences.