Waylaying the 'Elephant in the Room'


Each year, software errors cost U.S. corporations in excess of $60 billion in repair and maintenance costs. The problem is pervasive, affecting companies of all sizes, from those topping the Fortune list to pre-IPO start-ups.

And the cost of software failures is not only financial. The hit to a company’s reputation that results from software malfunctions can mean lost customers and lost new business, compounding the cost of fixing the problem itself. When it comes to software, quality counts!

Last week, Bruce Craig of Australia-based software modernization firm Micro Focus wrote that testing software to detect errors is no longer a practice reserved for large enterprises. He notes, “From independent software vendors through to one-man-band developers, testing is now an essential element of the IT function. The potential cost of IT failure is simply too high to be ignored.”

Craig believes that testing is the quintessential “elephant in the room” that cannot be ignored. In acknowledging that elephant, however, he overlooks the fact that a company can not only shrink it down to mouse-like size, but also be far more effective at eliminating software errors, by performing automated analysis and measurement on the software during the build phase, before deployment and before testing.
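To make the idea concrete, here is a minimal sketch of what such a build-phase gate might look like, assuming a Python codebase and the open-source flake8 analyzer; any static analysis tool could stand in for it. The script simply refuses to let the build proceed to testing or deployment while findings remain.

    # Hypothetical build-gate script: run a static analyzer over the code base
    # before tests or deployment, and fail the build if it reports any findings.
    # Assumes a Python project with flake8 installed; substitute any other
    # analyzer on the subprocess line.
    import subprocess
    import sys

    def static_analysis_gate(source_dir: str = "src") -> int:
        """Run flake8 on source_dir and return its exit code (0 = clean)."""
        result = subprocess.run(["flake8", source_dir])
        if result.returncode != 0:
            print("Static analysis found defects; failing the build before testing.")
        return result.returncode

    if __name__ == "__main__":
        sys.exit(static_analysis_gate())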

Shrinking the Elephant

Around the time Craig was preparing his article for CIO Australia, Capers Jones was hosting a webinar, the purpose of which was to show how to quantify application risks through static analysis. Long a proponent of static analysis when assessing software, Jones offered a series of statistics that illustrate how and why performing static analysis through automated analysis and measurement is a far more effective means of detecting software errors and giving companies the chance to fix them. Jones revealed that:

  • Testing by itself is time-consuming and not very efficient. Most forms of testing find only about 35% of the bugs that are present.
  • Static analysis prior to testing is very quick and about 85% efficient. As a result, when testing starts there are so few bugs present that testing schedules drop by perhaps 50%. Static analysis will also find some structural defects not usually found by testing.
  • Formal inspections of requirements and design are beneficial too. Formal inspections create better documents for test case creation, and as a result improve testing efficiency by at least 5% per test stage.
  • A synergistic combination of inspections, static analysis and formal testing can top 96% in defect removal efficiency on average, and 99% in a few cases (see the sketch after this list). Better still, the overall schedules will be shorter than with testing alone.
  • The average defect removal efficiency for a combination of six kinds of testing (unit test, function test, regression test, performance test, system test and acceptance test) without preliminary static analysis is only about 85%.
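Those percentages compound when activities are layered. As a rough illustration, and under the simplifying assumption that each stage removes its quoted share of whatever defects remain, independently of the others, the short Python sketch below shows how an 85%-efficient static analysis pass followed by an 85%-efficient battery of tests lands in the high-90s range Jones describes. The numbers are taken from the bullets above, not from any particular tool, and Jones's own figures come from measured projects rather than this formula.

    # Back-of-the-envelope illustration of why stacking removal activities pays off.
    # Simplifying assumption: each stage removes its quoted share of whatever bugs
    # remain, independently of the other stages.

    def remaining_after(stages):
        """Fraction of original defects still present after each stage runs."""
        remaining = 1.0
        for efficiency in stages:
            remaining *= (1.0 - efficiency)
        return remaining

    testing_only = 1 - remaining_after([0.85])        # six test stages combined, per Jones
    with_static = 1 - remaining_after([0.85, 0.85])   # static analysis first, then the same testing

    print(f"Testing alone:             {testing_only:.0%}")  # ~85%
    print(f"Static analysis + testing: {with_static:.0%}")   # ~98%, consistent with 'can top 96%'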

So, 35% of bugs caught through testing alone versus 85% through static analysis: with those kinds of efficiencies, it seems companies should be more proactive about software errors. They should address them before deployment through static analysis rather than allowing software testing to become the “elephant in the room.”

Filed in: Software Analysis
Jonathan Bloom, Writer, Blogger & PR Consultant
Jonathan is an experienced writer with over 20 years spent covering the technology industry. Jon has written more than 750 journal and magazine articles, blogs and other materials that have been published throughout the U.S. and Canada. He has expertise in a wide range of subjects within the IT industry, including software development, enterprise software, mobile, database, security, BI, SaaS/cloud, health care IT and sustainable technology. In his free time, Jon enjoys attending sporting events, cooking, studying American history and listening to Bruce Springsteen music.