Don’t Let Poor Software Quality Be the Iceberg to Sink Your Ship

by Bill Dickenson

You’re sailing along toward release day with the latest version of an application that management believes will be a “titanic” success for your company. As release day nears, you post a lookout and test the app. A few small software quality issues loom in the distance and could cause trouble later, but the application passes, so you decide they aren’t enough to make you change course and delay the release. As you forge onward, though, something below the surface that testing never spotted rips a hole in the application and sinks it.

Like an iceberg, much of what ails enterprise software today is not the part visible to the naked eye. Invariably it’s the embedded code deep below the water line of an application, cobbled together over time, that brings it down. This is the problem many organizations run into when they rely on testing to find software quality issues, because testing has two fatal flaws:

  1. Too late: testing is an end-of-development process, which means no code is tested until it’s all written. At that point there’s extra pressure to find problems because you’re up against a deadline, yet it’s really too late to do much about them, so the audit function has no real chance of success.

     

  2. Pass/Fail: testing encourages some not-so-great assumptions, the worst being that as long as the application passes, it’s good to go. That means it’s possible that only 51% of the code is robust and secure, leaving the other 49% of the application vulnerable.

This isn’t exactly news to software engineers. We often see release schedules accommodate the shortcomings of software testing. It’s a common pattern – the first release is full of new features, the next is full of fixes, then features again, then fixes…and so on. Either that or, if you’re a mammoth-sized software company, you schedule the first Tuesday of each month to send out patches as though they’re some magnanimous marketing gesture, when in fact you’re covering your tracks for having released software of suspect quality.

Organizations need to stop thinking of application quality as a gate that simply swings open or shut, either letting applications through or blocking them. They need to treat it as another parameter to work within, one that ultimately gets software to better quality sooner because issues are dealt with on the fly.

To do this, companies should roll testing – or rather, automated application analysis powered by Software Intelligence – into the development process. This will engineer software quality directly into the product and act like a “Risk Redirector” that identifies the flaws testing wouldn’t find. It can also provide a lot of valuable, objective data that IT can use if it needs to push management for a delay in the release of an application.
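To make this concrete, one lightweight pattern is to run the analysis as an automated step in every build and fail the build when critical findings appear, so quality issues surface long before release day. The sketch below is a minimal illustration, assuming a hypothetical command-line analyzer called quality-scan that emits a JSON report; any Software Intelligence tool that can export its findings could stand in for it.

```python
# Minimal sketch of a build-time quality gate. The "quality-scan" CLI and its
# JSON output format are hypothetical placeholders for real analysis tooling.
import json
import subprocess
import sys

def run_quality_gate(max_critical: int = 0) -> int:
    """Run the (hypothetical) analyzer and fail the build on critical violations."""
    result = subprocess.run(
        ["quality-scan", "--format", "json", "src/"],
        capture_output=True, text=True, check=True,
    )
    report = json.loads(result.stdout)
    critical = [v for v in report["violations"] if v["severity"] == "critical"]
    print(f"Quality gate found {len(critical)} critical violation(s)")
    return 0 if len(critical) <= max_critical else 1

if __name__ == "__main__":
    sys.exit(run_quality_gate())
```

Run as part of the continuous integration job, a non-zero exit code stops the pipeline, which is exactly the “deal with issues on the fly” behavior described above.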

This process starts with embedding software quality standards into development, such as those developed by the Consortium for IT Software Quality (CISQ). CISQ has released a set of five software health factors (Security, Robustness, Changeability, Transferability, and Maintainability) and 86 standard rules that support them. It has also identified critical violations so egregious that organizations MUST remediate them before deployment. Teams that have applied CISQ’s rules during development have caught, on average, more than 80 percent of errors before they had the chance to become full-blown problems.
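As a rough illustration of how findings might be organized against a standard like this, the snippet below tags each violation with one of the health factors and flags the critical ones that must be fixed before deployment. The rule names and their mapping are invented for the example; they are not the actual CISQ rule catalog.

```python
# Rough illustration only: the rule IDs and health-factor mapping below are
# invented for this example, not taken from the CISQ rule catalog.
from collections import Counter
from dataclasses import dataclass

@dataclass
class Violation:
    rule_id: str
    health_factor: str  # one of the five health factors, e.g. "Security"
    critical: bool      # a "must remediate before deployment" violation

findings = [
    Violation("unsanitized-sql-input", "Security", critical=True),
    Violation("empty-exception-handler", "Robustness", critical=False),
    Violation("hard-coded-file-path", "Changeability", critical=False),
]

per_factor = Counter(v.health_factor for v in findings)
blockers = [v for v in findings if v.critical]

print("Violations by health factor:", dict(per_factor))
print(f"{len(blockers)} critical violation(s) must be remediated before deployment")
```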

Factoring these standards into the development process allows development teams to assess applications continuously rather than at a single point in time. The ongoing assessments identify issues as they happen, which makes them quicker and easier to fix than going back later to find them, fix them, and then determine whether the fix affected other code further along in the application, which may itself now need repair.

While this kind of raw data might be hard to interpret for someone not directly involved in the development cycle, it can express risk in terms of severity – low, medium, and high – and from that assessment put a dollar amount on it. That figure represents the Technical Debt of the product: the total expense an organization pays out due to inadequate architecture or software development practices within its current codebase.

Also known as code debt, the concept captures the cost of the work that remains before a job is actually complete. If the debt is not paid down, it continues to accumulate interest, making future changes progressively harder to implement. Expressed this way, Technical Debt puts a dollar figure on the risk of a premature release, enabling decision-makers to reach an objective judgement on the “go or no-go” decision.
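A back-of-the-envelope version of that calculation is shown below. The hours-to-fix per severity level and the blended hourly rate are illustrative assumptions, not CAST or CISQ figures; the point is simply that once violations are counted by severity, the remaining work converts naturally into dollars.

```python
# Illustrative Technical Debt estimate. The remediation hours and hourly rate
# are assumed values for the example, not standard CAST/CISQ figures.
REMEDIATION_HOURS = {"low": 1, "medium": 4, "high": 8}
HOURLY_RATE = 75  # assumed blended developer cost, in dollars per hour

def technical_debt(violation_counts: dict) -> int:
    """Translate open violations, counted by severity, into an estimated cost."""
    return sum(
        count * REMEDIATION_HOURS[severity] * HOURLY_RATE
        for severity, count in violation_counts.items()
    )

# Example: 120 low, 35 medium, and 6 high severity findings
debt = technical_debt({"low": 120, "medium": 35, "high": 6})
print(f"Estimated Technical Debt: ${debt:,}")  # Estimated Technical Debt: $23,100
```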

Similar to Technical Debt as an objective measure of readiness, CISQ has established Sigma-based Quality Levels that rate an application anywhere from “Very Good” to “Unacceptable” prior to release, indicating whether a company should hold back or move forward. In a recent review of 274 commercial applications, CISQ’s Sigma-based assessment found that companies should have held back more than three-fourths of these applications (76.9 percent) due to code defects, while only 23 percent were of good enough quality to move forward.
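For readers curious how a Sigma-style level can be derived, the sketch below uses the standard Six Sigma conversion from defects per million opportunities (with the conventional 1.5-sigma shift). The article does not spell out the exact thresholds CISQ applies between “Very Good” and “Unacceptable,” so treat this purely as an illustration of the mechanics.

```python
# Standard Six Sigma DPMO-to-sigma conversion, shown only to illustrate how a
# Sigma-based quality level can be computed; CISQ's exact thresholds may differ.
from statistics import NormalDist

def sigma_level(defects: int, opportunities: int) -> float:
    """Convert observed defects per opportunity into a sigma level."""
    dpmo = defects / opportunities * 1_000_000
    return NormalDist().inv_cdf(1 - dpmo / 1_000_000) + 1.5  # 1.5-sigma shift

# Example: 350 violations found across 200,000 measured code constructs
print(f"{sigma_level(350, 200_000):.2f} sigma")  # roughly 4.4 sigma
```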

It’s time for companies to start using something more objective and insightful than a calendar to decide when to release an application. By incorporating quality standards into the development process as a risk redirector, a company can assess and possibly even avoid the risk of christening an application too soon, giving itself a better chance of escaping the costly lawsuits and embarrassment that follow when software crashes and sinks.

Bill Dickenson, Director of Solution Delivery at CAST
Highly accomplished executive with extensive operations, delivery, strategy and sales expertise in applications and operations. 30+ years of expertise in a wide range of outsourcing deals, outsourcing contracts, contract negotiations, global delivery and effective delivery models for both custom application development and packaged applications (SAP, Oracle, Infor).