Quality Is NOT Equal To Testing

What's the biggest cultural change that companies that use CAST undergo?

That is the question that Lev Lesokhin and I were asked last week. We were talking to Margo Visitacion and Mike Gualtieri of Forrester.

The answer: The realization that software quality is not equal to testing.

There's a light switch that flips when organizations realize there's much more to quality than functional testing. There's non-functional testing, and even beyond that, "dependability testing" (to borrow a phrase from our Chief Scientist, Bill Curtis).

Let's have a look.

Everyone realizes that functional testing is nowhere close to enough. If all that mattered were the *what*, then every car that lines up at the start of a race would win -- after all, they all satisfy functional specifications!

But winning the race is not only about what you come to the starting line with. It also depends on how well that thing works during the race! In fact, you can have the car that satisfies the functional specs best and still fail to finish the race -- just ask poor Sebastian Vettel and the Red Bull Formula 1 team!

There are two ways to tackle the "how well".

The first way is to make sure the car performs in race conditions. That's the equivalent of non-functional testing -- you simulate real-world conditions as best you can and fix the problems that appear.

To have any confidence in such testing, you must be confident that your simulation replicates race conditions (or at least their critical elements), that you know what to test, and that you know how to interpret test results and use them to improve (rather than just letting terabytes of test data sit in a data warehouse somewhere).

The dirty secret of non-functional testing is that it's too little, too late. The result: low confidence in how the thing will perform when the rubber hits the road. Production problems. Business disruption. A ruined business case.

No amount of non-functional testing can give you confidence in the car's dependability. Ensuring dependability is the second way to go beyond functional testing.

Dependability is about how the car will perform in those conditions you haven't yet tested. Can we overtake on turn 3 if the tank is a quarter full, the tires are worn, it's beginning to drizzle, and the wind is blowing from the southeast at 34 mph?

Dependability is about how the car will perform in conditions you couldn't possibly test for. How quickly can you make a gearbox adjustment to help your driver on a straightaway made slick by an oil spill? What if you had to use a non-factory replacement part to do it? Will the car still perform well enough to win?

So, you've spent tens of millions of dollars on the stuff. The business case depends on it performing up to snuff. You've tested it as best you can. Yet, the day you roll this out feels like a roll at the craps table.

To make it feel less like that, you need to have dependability -- the confidence that you have an effective plan for the unknown unknowns.

The realization that quality is not equal to testing fundamentally changes the way IT organizations develop, enhance and maintain business applications. It fundamentally changes the way they manage their software assets.

Filed in: Technical Debt