Last week on the East Coast Main Line, which connects London to Edinburgh, a software malfunction left five trains stranded mid-track and significantly delayed others after a power supply issue knocked out the signaling system. According to reports, software that should have instructed the backup signaling system to kick in failed to function, causing all signals on the line to default to “Red,” halting trains where they stood. The failure left more than 3,000 rail passengers stranded or delayed for more than five hours on a Saturday afternoon.
Software failures like this one have become all too commonplace in recent years. We treat news of software failures as though they were inevitable and almost expected. But why? When exactly did we decide that software failure was an unavoidable part of business?
Shouldn’t we do a better job of assessing the structural quality of software before it is deployed rather than waiting for it to fail and then fixing the problem? After all, we know what causes poor software quality:
- Business Blindspot: Regardless of the industry, most developers are not experts in their particular domain when they begin working for a company. It takes time to learn about the business, but most of the learning, unfortunately, comes only by correcting mistakes after the software has malfunctioned.
- Inexperience with Technology: Mission-critical business applications are a complex array of multiple computer languages and software platforms. Rather than being built on a single platform or in a single language, they tend to be mash-ups of platforms, interfaces, business logic and data management that interact through middleware with enterprise resource systems and legacy applications. Additionally, in the case of some long-standing systems, developers often find themselves programming on top of archaic languages. It is rare to find a developer who knows “everything” about the languages in play, and those who don’t may make assumptions that result in software errors, leading to system outages, data corruption and security breaches.
- Speed Kills: The pace of business over the past decade has increased exponentially. Things move so fast that software is practically obsolete by the time it’s installed. The break-neck speeds at which developers are asked to ply their craft often means software quality becomes a sacrificial lamb.
- Old Code Complexities: A significant majority of software development builds upon existing code. Studies show that developers spend half their time or more trying to figure out what the “old code” did and how it can be modified for use in the current project. The more complex the code, the more time is spent trying to unravel it – or, in the interest of time (see “Speed Kills” above), skipping that step in favor of “work-arounds” that leave a high potential for mistakes.
- Buyer Beware: Mergers and acquisitions are a fact of life in today’s business climate, and many large applications at acquiring companies are built using code from the companies they acquire. Unfortunately, the acquiring organization can’t control the quality of the software it is receiving, and poor structural quality is not immediately visible to it.
Assessing the Answers
OK, so we know what the problems are, but how do we fix them? First of all, software issues need to be dealt with before they become a problem, not after 3,000 passengers are left stranded on the tracks or a stock exchange is forced to halt trading. To ensure sound structural quality out of the gate, application software should be assessed during the build process using a platform of automated analysis and measurement.
Not only can automated analysis and measurement resolve the code issues that often accompany acquired software, building on top of old code, rapid development and developer inexperience, it also grants significant visibility into the work being done by individual developers. While such scrutiny may seem invasive – like “Big Brother” watching over their shoulders – static analysis via automated analysis and measurement can actually be an effective tool for professional development. After all, if you don’t know where someone needs help, you can’t provide it.
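To make the idea concrete, here is a minimal sketch of the kind of check an automated analysis platform performs, reduced to a toy: counting decision points per function as a rough proxy for cyclomatic complexity. This is purely illustrative – the `BRANCH_NODES` list and scoring are simplifications of my own, not how any commercial analyzer (CAST or otherwise) actually works.

```python
import ast

# Node types that add a decision point -- a crude proxy for cyclomatic
# complexity; real static analyzers track far more than this.
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.Try,
                ast.BoolOp, ast.ExceptHandler)

def complexity(source: str) -> dict:
    """Return a per-function decision-point count for Python source."""
    tree = ast.parse(source)
    scores = {}
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            # Start at 1 (the single straight-line path through the
            # function), then add one per branching construct inside it.
            score = 1 + sum(isinstance(n, BRANCH_NODES)
                            for n in ast.walk(node))
            scores[node.name] = score
    return scores

sample = """
def simple(x):
    return x + 1

def tangled(x):
    if x > 0:
        for i in range(x):
            if i % 2:
                x += i
    return x
"""
print(complexity(sample))  # {'simple': 1, 'tangled': 4}
```

A report like this – generated continuously during the build rather than after an outage – is what gives managers visibility into where the tangled code (and the developer who needs help with it) actually lives.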
Whether it’s the software or the developer, though, automated analysis and measurement grants visibility into the issues and provides a basis that leads to improved software quality. And ultimately, optimal software quality – not just “good enough” software quality – should be every company’s goal.
Erik Oltmans, an Associate Partner from EY, Netherlands, spoke at the Software Intelligence Forum on how the consulting behemoth uses Software Intelligence in its Transaction Advisory services.
Erik describes the changing landscape of M&A. Besides the financial and commercial aspects, PE firms now equally value technical assessments, especially for targets with significant software assets. He goes on to detail how CAST Highlight makes these assessments possible with limited access to the target’s systems, customized quality metrics, and insight into the liability implications of open source components – all three of which are critical for M&A due diligence.