How Do You Measure Quality Within Your SDLC?

When I was a QA Director at a previous company, the Tech Support Director asked me during a Ship/No-Ship Decision meeting what value test case execution metrics, reported as “% of tests passing” vs. “% of tests failing,” actually provided for understanding overall product quality. As the “defender of all things QA,” I gave a long-winded answer about test case coverage against the requirements, the skill of my staff at finding bugs, and the fact that our Business Analysts (our company’s product owners) helped review our test cases. She wasn’t buying it.

After that meeting, I went back to my office and thought about it some more. What the Support Director really wanted to know was, “How do we know this product really does what it’s supposed to do?” and “How do we know we found all of the bugs?” After all, her team was the one that would be taking calls from angry customers or, even worse, issuing critical alerts when really nasty issues were found. Her team had no insight into what kinds of tests we were writing in QA or how well those tests covered the product functionality.

So I did two things. First, I started analyzing each and every issue that came in from Customer Support; I introduced these to my team as “test escapes.” I had the application-area test leads analyze each escape for its root cause and mandated that at least one test case be added to our regression suite for every escape, or ideally that a whole set of tests be developed to ensure the gap was covered. I had the managers review these tests with me to confirm they were valuable. Getting Customer Support to share their data was the first step in assuring them that they had input into the test design process.
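If you want to automate that bookkeeping, a minimal sketch might look like the following. The field names and record structure are my own illustration rather than any particular tool’s schema; the point is simply to flag any support escape that does not yet have a regression test linked to it.

```python
from dataclasses import dataclass, field

@dataclass
class TestEscape:
    """A defect reported by Customer Support that testing missed."""
    ticket_id: str
    root_cause: str
    # IDs of the regression tests added to cover this gap
    regression_test_ids: list[str] = field(default_factory=list)

def uncovered_escapes(escapes: list[TestEscape]) -> list[TestEscape]:
    """Return the escapes that still have no regression test linked to them."""
    return [e for e in escapes if not e.regression_test_ids]

escapes = [
    TestEscape("SUP-101", "missing boundary check on claim dates", ["REG-455"]),
    TestEscape("SUP-102", "billing export drops records over 10,000 rows"),
]

for escape in uncovered_escapes(escapes):
    print(f"{escape.ticket_id}: no regression test yet ({escape.root_cause})")
```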

The next thing we did (I say “we” because, unfortunately, I can’t take credit for this) was to assemble a group of product owners, testers, developers, and the best support representatives to develop what we called “meaningful use” tests. These were end-to-end tests that ran from the beginning of our business process to the end, verifying that our applications and application areas worked together to produce a significant outcome.
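To make the idea concrete, here is a sketch of what one meaningful use test could look like in pytest. The application names, steps, and stub classes below are hypothetical stand-ins, not the systems we actually tested; in practice each client class would drive a real application’s UI or API. What matters is the shape: one test walks the whole business process and asserts on the outcome the business cares about.

```python
import uuid
from dataclasses import dataclass, field

import pytest

# Stand-in stubs so the sketch is self-contained; in reality each class
# below would drive one of the real applications through its UI or API.
@dataclass
class Patient:
    name: str
    dob: str
    id: str = field(default_factory=lambda: str(uuid.uuid4()))

@dataclass
class Visit:
    patient: Patient
    reason: str

@dataclass
class Note:
    visit: Visit
    diagnosis_code: str

@dataclass
class Claim:
    patient_id: str
    status: str

class Registration:
    def register(self, name: str, dob: str) -> Patient:
        return Patient(name, dob)

class Scheduling:
    def book(self, patient: Patient, reason: str) -> Visit:
        return Visit(patient, reason)

class Charting:
    def document(self, visit: Visit, diagnosis_code: str) -> Note:
        return Note(visit, diagnosis_code)

class Billing:
    def generate_claim(self, note: Note) -> Claim:
        return Claim(note.visit.patient.id, "ready_to_submit")

@pytest.fixture
def patient() -> Patient:
    """Register one patient and reuse it for the whole end-to-end flow."""
    return Registration().register(name="Test Patient", dob="1980-01-01")

def test_visit_is_billed_end_to_end(patient):
    """One business process, beginning to end: schedule, chart, and bill a visit."""
    visit = Scheduling().book(patient, reason="annual physical")
    note = Charting().document(visit, diagnosis_code="Z00.00")
    claim = Billing().generate_claim(note)

    # The outcome the business actually cares about: a submittable, correct claim.
    assert claim.status == "ready_to_submit"
    assert claim.patient_id == patient.id
```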

The meaningful use tests were based on the most key and critical things the software did; they covered the essential functions that, if we got them wrong, would cost the company customers, money, and business, or, in the healthcare industry I worked in, could potentially cost human lives. Once we started thinking in those terms, we developed tests that were critical to overall product quality. Then, and only then, was I able to say that test execution %pass vs. %fail was truly meaningful.
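One way to keep that distinction visible in the numbers (a sketch under my own assumptions, not the reporting we actually used) is to report pass rate per suite, so that a healthy overall percentage cannot hide a failing meaningful use test:

```python
from collections import defaultdict

# Each result is (suite, test_id, passed); imagine this exported from a test run.
results = [
    ("meaningful_use", "MU-01", True),
    ("meaningful_use", "MU-02", False),
    ("regression", "REG-455", True),
    ("regression", "REG-456", True),
]

totals = defaultdict(lambda: [0, 0])  # suite -> [passed, total]
for suite, _test_id, passed in results:
    totals[suite][0] += int(passed)
    totals[suite][1] += 1

# Report %pass per suite rather than one blended number.
for suite, (passed, total) in sorted(totals.items()):
    print(f"{suite}: {passed}/{total} passing ({passed / total:.0%})")
```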

My questions to you, then: How do you know when your product is ready to ship? What metrics give you confidence in the quality of your software? I’d love to hear your thoughts.
