Three Types of Software Quality

There are three basic types of software quality.

  1. Functional Quality -- a measure of what the software does vs. what it's supposed to do
  2. Non-Functional Quality -- a measure of how well it does it vs. how well it's supposed to do it (these first two are contrasted in a short sketch after this list)
  3. Latent Quality -- a measure of how well it will continue to do it in the future
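
To make the first two types concrete, here is a minimal sketch in Python (the function name, the values, and the 0.1-second budget are hypothetical, purely for illustration): a functional check asserts what the code does, while a non-functional check asserts how well it does it. Latent quality has no direct assert of this kind, which is part of what makes it hard to measure, as discussed below.

    import time

    def report_total(prices):
        """Toy feature under test: sum a list of prices."""
        return sum(prices)

    def test_functional_quality():
        # Functional quality: does it do what it's supposed to do?
        assert report_total([1.50, 2.25]) == 3.75

    def test_non_functional_quality():
        # Non-functional quality: does it do it well enough -- here, fast enough?
        start = time.perf_counter()
        report_total(list(range(100_000)))
        assert time.perf_counter() - start < 0.1  # assumed response-time budget
        # Latent quality (ease of future change) has no assert like these,
        # which is exactly why it is so hard to measure.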

I say basic because other aspects of quality emerge from these three types -- Usability is one such derived quality. Think of it as filling the gap between the what and the how well: it's how the product is put together to meet the user's needs. Usability is a mix of Functional and Non-Functional quality (and perhaps even a little Latent quality).

More on usability at the end.

You need Latent quality in the mix because we know things will change in the future. The software itself will change due to changes in business needs, changes in technology, or changes to other software it needs to play nicely with. Something that's readily changeable is better than something that gives you a migraine to extend.

All three kinds of software quality are critical to the value you derive from software once it's released. It needs to do what I want, do it reasonably fast without compromising my privacy, and keep up with my changing usage patterns and new needs. Nothing controversial about that.

But there's something terribly wrong here. Let me explain.

The three types of software quality are all attributes or characteristics of the product itself -- the stuff that gets sold in a shrink-wrapped box or downloaded and installed. Product attributes are very hard to measure while the thing is being built, and they remain difficult to measure even once it's built. There are two main reasons for this difficulty:

  • product metrics are difficult to define -- how do you measure security? How do you measure the ability to change easily to meet future needs? (a rough sketch of one attempted proxy follows this list)
  • product metrics (even if we can define them precisely) are difficult to obtain while the product is being built.
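
To illustrate the difficulty, here is a rough, hypothetical sketch in Python of one attempted proxy for changeability: counting decision points per function as a crude cyclomatic-complexity estimate. The node types counted and the threshold of 10 are assumptions made for this sketch, not anything the argument here prescribes -- which is rather the point: even this simple proxy is arguable.

    import ast
    import sys

    # Node types treated as decision points -- an assumption of this sketch.
    DECISION_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                      ast.BoolOp, ast.IfExp)

    def complexity(func):
        """Crude estimate: 1 + number of decision points inside the function."""
        return 1 + sum(isinstance(node, DECISION_NODES) for node in ast.walk(func))

    def report(path, threshold=10):
        """Print functions whose estimated complexity exceeds the threshold."""
        with open(path) as f:
            tree = ast.parse(f.read(), filename=path)
        for node in ast.walk(tree):
            if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
                score = complexity(node)
                if score > threshold:
                    print(f"{path}:{node.lineno} {node.name} complexity={score}")

    if __name__ == "__main__":
        for source_file in sys.argv[1:]:  # usage: python changeability_check.py *.py
            report(source_file)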

Because of these rather severe difficulties, we do something that is understandable, but crazy. We rely almost entirely on measuring things like on-time and on-budget metrics. But these are process metrics -- they have nothing at all to do with the value the product is going to deliver once built.

It's like measuring temperature, when what you really need is a measure of weight.

Measuring product metrics only partially or not at all during the development cycle means that we are rolling the dice on software value post release. The danger is exacerbated because many of the practices we adopt to do better on the process metrics can have significant deleterious effects on the product metrics. Not only do we make bad trade-offs between process objectives (meeting the schedule) and product objectives (protecting sensitive data), we might make them in such a way that product value is forever sacrificed -- it's impossible to go back and redo it the way it was supposed to be done.

But we don't need to roll the dice -- there are ways to define and measure product quality right from the start of the development life cycle and throughout a software product's useful life.
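
As one illustration of what measuring product quality throughout the life cycle might look like, here is a minimal, hypothetical quality-gate sketch in Python: a build step that fails when a product-quality proxy (for example, the over-threshold function count from the earlier sketch) regresses against a stored baseline. The file name, metric, and pass/fail rule are assumptions for illustration, not a description of the approach referenced below.

    import json
    import sys

    def gate(current_count, baseline_path="quality_baseline.json"):
        """Return 0 (pass) if the metric has not regressed, 1 (fail) otherwise."""
        try:
            with open(baseline_path) as f:
                baseline = json.load(f)["over_threshold_functions"]
        except FileNotFoundError:
            baseline = current_count  # first run establishes the baseline
        if current_count > baseline:
            print(f"Quality gate failed: {current_count} > baseline {baseline}")
            return 1
        # Ratchet the baseline down as the product improves.
        with open(baseline_path, "w") as f:
            json.dump({"over_threshold_functions": current_count}, f)
        print(f"Quality gate passed: {current_count} <= baseline {baseline}")
        return 0

    if __name__ == "__main__":
        sys.exit(gate(int(sys.argv[1])))  # usage: python quality_gate.py <count>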

To see how, go here.

End Note on Usability

It's easier to say when something is not usable than when it is.

Key lapses in usability:

  1. don't know it's there
  2. know it's there, but don't know what it does
  3. know it's there, but don't know how to use it
  4. know it's there, know how to use it, but it's not what I need
  5. know it's there, know how to use it, it's what I need, but it's too difficult to use (usually a workaround is found and adoption of the feature plunges to zero)