You’re sailing along toward release day with the latest version of an application that management believes will be a “titanic” success for your company. As release day nears, you post a “lookout” and test the app. Small software quality issues loom in the distance that could cause trouble later, but the application passes, so you decide they are not enough to change course and delay the release. As you forge onward, though, something below the surface that testing never saw rips a hole in the application and sinks it.
As with an iceberg, much of what ails enterprise software today cannot be seen by the naked eye. Invariably it’s the embedded code deep below the water line of an application, cobbled together over time, that brings it down. This is the problem many organizations encounter when they rely on testing to find software quality issues, because testing, like the lookout on deck, can only catch what rises to the surface.
This isn’t exactly news to software engineers. We often see release schedules accommodate the shortcomings of software testing. It’s a common pattern – the first release is full of new features, but the next is full of fixes, then features again, then fixes…and so on. Either that or, if you’re a mammoth-sized software company, you schedule the first Tuesday of each month to send out patches as though they’re some magnanimous marketing gesture, when actually you’re covering your own tracks for having released software with suspect application quality.
Organizations need to stop thinking of application quality as a binary gate that either lets an application through or blocks it. They need to treat it as a parameter to work within continuously, one that ultimately gets software to better quality sooner because issues are dealt with on the fly.
To do this, companies should roll testing – or rather, automated application analysis powered by Software Intelligence – into the development process. This will engineer software quality directly into the product and act like a “Risk Redirector” that identifies the flaws testing wouldn’t find. It can also provide a lot of valuable, objective data that IT can use if it needs to push management for a delay in the release of an application.
This process starts with embedding software quality standards, such as those developed by the Consortium for IT Software Quality (CISQ), into the development process. CISQ has released a set of five software health factors (Security, Robustness, Changeability, Transferability, and Maintainability) and 86 standard rules that support them. It has also identified critical violations so egregious that organizations must remediate them before deployment. Teams that have applied CISQ’s rules during development have caught, on average, more than 80 percent of errors before they had the chance to become full-blown problems.
Factoring these standards into the development process allows teams to assess applications continuously rather than at a single checkpoint. Ongoing assessments identify issues as they happen, when they are quickest and easiest to fix, instead of forcing developers to go back later, find each issue, fix it, and then determine whether the fix broke other code elsewhere in the application, which may itself then need repair.
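A continuous assessment like this can be wired into a build pipeline as a simple gate. The rule names, severity scheme, and gate logic below are illustrative assumptions, not CISQ’s actual rule set or any particular analyzer’s API; the sketch just shows how a build step could count findings as they happen and block deployment on critical violations:

```python
from dataclasses import dataclass

@dataclass
class Violation:
    rule: str       # hypothetical rule identifier
    severity: str   # "low", "medium", "high", or "critical"
    location: str   # file and line where the analyzer found it

def quality_gate(violations):
    """Return (passed, counts): passed is False if any critical
    violations remain, so a CI step can log counts and block the build."""
    counts = {s: 0 for s in ("low", "medium", "high", "critical")}
    for v in violations:
        counts[v.severity] += 1
    return counts["critical"] == 0, counts

# One critical violation is enough to block the release.
findings = [
    Violation("avoid-sql-injection", "critical", "orders.py:42"),
    Violation("empty-catch-block", "medium", "billing.py:17"),
]
passed, counts = quality_gate(findings)
```

Run on every commit, a gate like this surfaces each issue while the code that caused it is still fresh in the developer’s mind, rather than weeks later during a test phase.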
While this kind of raw data might be difficult to interpret for someone not directly involved in the development cycle, it can express risk in terms of severity (low, medium, and high) and, from that assessment, put a dollar amount on it. This figure represents the Technical Debt of the product: the total expense an organization pays out due to inadequate architecture or software development processes in its current codebase.
Also known as code debt, the concept captures the cost of the work that must be done before a job is truly complete. If the debt is not paid down, it continues to accumulate interest, making future changes progressively harder to implement. Expressed this way, Technical Debt puts a dollar figure on the risk of a premature release, enabling decision-makers to make an objective go or no-go judgment.
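The dollar figure can be approximated with a simple severity-weighted model. The remediation hours and hourly rate below are illustrative assumptions, not CISQ-published values; real estimates would calibrate them to the team and codebase:

```python
# Illustrative Technical Debt model: violation counts weighted by an
# assumed remediation effort per severity and a blended hourly rate.
# All figures here are hypothetical, for demonstration only.
REMEDIATION_HOURS = {"low": 0.5, "medium": 2.0, "high": 8.0}
HOURLY_RATE = 75.0  # assumed blended developer cost, dollars per hour

def technical_debt(violation_counts):
    """Estimated dollar cost to remediate all outstanding violations."""
    return sum(count * REMEDIATION_HOURS[severity] * HOURLY_RATE
               for severity, count in violation_counts.items())

# 120*0.5h + 40*2.0h + 10*8.0h = 220 hours of remediation work
debt = technical_debt({"low": 120, "medium": 40, "high": 10})
```

Even a rough model like this turns an abstract argument about quality into a number a decision-maker can weigh against the cost of a delayed release.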
As another objective measure of readiness, CISQ has established Sigma-based Quality Levels that rate an application anywhere from “Very Good” to “Unacceptable” prior to release, indicating whether a company should hold back or move forward. In a recent review of 274 commercial applications, CISQ’s Sigma-based assessment found that more than three-fourths (76.9 percent) should have been held back due to code defects (see the chart), while only 23 percent were of good enough quality to move forward.
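The article does not spell out CISQ’s exact Sigma mapping, but the conventional Six Sigma conversion from defect density, sketched below, gives the flavor of how a measured defect rate becomes a quality level. The 1.5-sigma shift is the standard industry convention, not something CISQ specifies here:

```python
from statistics import NormalDist

def sigma_level(defects, opportunities):
    """Conventional Six Sigma level for a measured defect rate,
    including the standard 1.5-sigma long-term shift."""
    dpmo = defects / opportunities * 1_000_000  # defects per million
    return NormalDist().inv_cdf(1 - dpmo / 1_000_000) + 1.5

# 50 defects across 10,000 measured opportunities is roughly 4.1 sigma.
level = sigma_level(50, 10_000)
```

Under this convention, the famous Six Sigma benchmark of 3.4 defects per million opportunities maps to a level of 6.0, which is why Sigma scales make a convenient common yardstick across very different applications.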
It’s time for companies to start using something more objective and insightful than a calendar to determine when an application is ready for release. By incorporating quality standards into the development process as a risk redirector, a company can assess, and often avoid, the risk of christening an application too soon, giving itself a better chance of escaping the costly lawsuits and embarrassment that follow when software crashes and sinks.
Erik Oltmans, an Associate Partner at EY Netherlands, spoke at the Software Intelligence Forum about how the consulting behemoth uses Software Intelligence in its Transaction Advisory services.
Erik describes the changing landscape of M&A. Besides the financial and commercial aspects, PE firms now place equal value on technical assessments, especially for targets with significant software assets. He goes on to detail how CAST Highlight makes these assessments possible: it works with limited access to the target’s systems, supports customized quality metrics, and surfaces the liability implications of open source components, all three of which are critical for M&A due diligence.