The True Cost of Bad Software


There is no question that cyber risk has become a top priority for CEOs and boards following several major breaches in 2017. From Equifax to WannaCry to Yahoo, business executives are starting to pay more attention to the impact of bad software.

As a result, CIOs are under increasing pressure to educate other C-level team members and boards about the company’s cyber risk in a way that’s clear and measurable. Cristina Alvarez, the former CIO of Telefonica, recently elaborated on this situation, suggesting that a new category of Software Intelligence is emerging to help technical and non-technical executives alike make smart technology decisions.

Consulting firm BCG has outlined three strategies for CIOs to engage with the board on cyber security, suggesting that CIOs take a more proactive role in shaping the board’s IT conversations. “By putting [cyber risks] in a business context, CIOs can help board members identify and evaluate the true risk related to IT, and also help boards understand why certain IT investments should be made or prioritized,” says BCG.

Yet even armed with Software Intelligence and counsel from advisors and analysts, CIOs still have to orchestrate their teams and align DevOps practices under one roof to fix the bad software that opens the door to cyber threats.

This is not an easy task. It requires a complete shift in behavior from reactive to proactive, moving from a model centered around speed and cost control toward the continuous improvement of application quality.

Automating software quality measurement  

To reverse the upward trend of software breaches in recent years, CIOs should organize their teams around regular reviews of software structure, leveraging static analysis at multiple points in the DevOps cycle.
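To make this concrete, here is a minimal sketch of what one such checkpoint could look like as an automated gate in a CI pipeline. The analyzer command, its JSON output shape, and the severity threshold are all illustrative assumptions, not references to any specific tool.

```python
#!/usr/bin/env python3
"""Minimal sketch of a CI quality gate driven by static analysis findings.

Assumptions (illustrative only): the "static-analyzer" command and its JSON
output format are placeholders for whatever tool your pipeline actually runs.
"""
import json
import subprocess
import sys

ANALYZER_CMD = ["static-analyzer", "--format", "json", "src/"]  # placeholder command
MAX_HIGH_SEVERITY = 0  # gate: no high-severity findings allowed into the build


def run_analysis():
    """Run the analyzer and return its findings as a list of dicts."""
    result = subprocess.run(ANALYZER_CMD, capture_output=True, text=True, check=False)
    return json.loads(result.stdout or "[]")


def main():
    findings = run_analysis()
    high = [f for f in findings if f.get("severity") == "high"]
    print(f"{len(findings)} findings total, {len(high)} high severity")
    if len(high) > MAX_HIGH_SEVERITY:
        print("Quality gate failed: remediate high-severity findings before merging.")
        return 1  # non-zero exit fails the pipeline stage
    return 0


if __name__ == "__main__":
    sys.exit(main())
```

The same script can run as a pre-commit hook on developer machines and again in the build pipeline, which is one practical way to cover multiple points in the DevOps cycle.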

Gartner recommends automating this process to avoid adversely affecting software development and security teams. They also call out the need for a detailed review and remediation process to avoid exposing risky software to bad actors:

“The lack of a single cohesive view of the risk posed by an application or system can result in it being deployed and operated in a state susceptible to attack. This increases the likelihood of a data breach or other security incident and exposes the organization to regulatory and audit failures.”

In addition, Gartner says, the absence of an automated defect removal process makes “it difficult, if not impossible, to deliver executives meaningful metrics regarding the risk posed by applications. In the absence of such data, the organization will assume more risk than is desirable, and application security programs — lacking a clear, risk-based justification — will continue to be underfunded and poorly prioritized.”
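As a rough illustration of the kind of metric Gartner describes, the sketch below rolls analyzer findings up into a single severity-weighted risk score per application that executives can track from release to release. The weights and the findings format are assumptions chosen for illustration, not an industry standard.

```python
# Sketch: severity-weighted risk score per application, for executive reporting.
# The weights and the shape of the findings data are illustrative assumptions.
SEVERITY_WEIGHTS = {"low": 1, "medium": 3, "high": 10, "critical": 25}


def risk_score(findings):
    """Aggregate analyzer findings into one number a board can track over time."""
    return sum(SEVERITY_WEIGHTS.get(f.get("severity", "low"), 1) for f in findings)


# Example: compare the risk posture of two hypothetical applications.
billing_app = [{"severity": "high"}, {"severity": "medium"}, {"severity": "low"}]
portal_app = [{"severity": "critical"}, {"severity": "critical"}]
print("billing app risk:", risk_score(billing_app))  # 14
print("portal app risk:", risk_score(portal_app))    # 50
```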

Putting ‘IT’ in dollars and cents

These cyber breaches are not cheap, in either dollars or reputation. Statistics released in late February from a Council of Economic Advisers report on the impact of cyber attacks on U.S. government and industry put the aggregate cost of malicious cyber activity at between $57 billion and $109 billion in 2016. At the per-business level, new research from Ponemon and Accenture found that the average loss for a U.S. business, $21.22 million, leads the globe.

These costs reinforce the need to shift software quality analysis left and find and fix software defects as early as possible. The cost of successful cyber attacks largely stems from security flaws in software, and you can’t have high software quality without ironclad security.

The cost of litigation over failures and disasters is also worth considering, although these scenarios are less common. In one case, for example, the shareholders of a major ERP company filed suit against it, asserting that the quality of the ERP package was so bad that it was lowering the stock price.

Cancelled projects due to schedule slippage or IT cost overruns are most frequently caused by bad software quality. In fact, the effort of finding and fixing software bugs is the #1 cost driver in enterprise IT. By introducing system-level analysis early in the software development lifecycle, teams can ensure that application development will not only meet deadlines and expectations in terms of quality but also run reliably in production.

The past three to five years have seen a significant focus on driving team productivity and delivering new functionality to the market faster with limited resources. Continuous integration and Agile development practices have come to the forefront to meet those objectives. Now, we are seeing more teams step back and consider long-term strategies based on software risk prevention, defect interception and technical debt management to ensure predictability in system stability and time-to-market.
Philippe Guerin is a Software Analytics & Risk Prevention specialist and a domain expert in ADM Sizing and Productivity Measurement. A well-rounded technologist, he has over 15 years of leadership experience across ADM productivity measurement, product development and management, program management, solution architecture, sales, and services, including more than five years leading teams.