Non-Risky Business: Using Static Analysis to Ensure Software Quality

by Jonathan Bloom

Earlier this week, our own Jitendra Subramanyam joined industry luminary Capers Jones, Chief Scientist Emeritus of Software Productivity Research (SPR), to co-host a CAST-sponsored webinar on curbing application software outages like the ones seen in the financial sector over the past couple of months. The webinar, titled “Stop High-Profile Outages by Quantifying Application Risks,” focused on the importance of static analysis of application software during the build and/or customization phases, so that potential issues can be identified and fixed before they cause an outage.

Effectiveness of Static Analysis

Jones has long been a proponent of static analysis over merely testing software. In his 2009 book, Applied Software Measurement, Jones wrote, “In terms of defect removal, testing alone has never been sufficient to ensure high quality levels. All of the best-in-class software producers such as AT&T, HP, Microsoft, IBM, Raytheon or Motorola utilize both pretest design reviews and formal code inspections. Design reviews and code inspections can both be used with client-server applications and should improve defect removal efficiency notably.”

It was this point that Jones and Subramanyam stressed throughout the webinar. They noted that defects in software design are the hardest to catch and eliminate, and they urged developers not to wait until testing to find them. Instead, defects should be identified and caught early with automated code reviews and static analysis. To illustrate the point, Jones shared the following figures:

  • Testing by itself is time-consuming and not very efficient. Most forms of testing find only about 35% of the bugs that are present.
  • Static analysis prior to testing is very quick and roughly 85% efficient. As a result, when testing starts there are so few bugs left that testing schedules can drop by perhaps 50%. Static analysis also finds some structural defects that testing does not usually catch.
  • Formal inspections of requirements and design are beneficial too. They produce better documents for test case creation and, as a result, improve testing efficiency by at least 5% per test stage.
  • A synergistic combination of inspections, static analysis, and formal testing can top 96% defect removal efficiency on average, and 99% in a few cases, with an overall schedule shorter than testing alone (see the quick arithmetic sketch after this list).
  • By contrast, the average defect removal efficiency for a combination of six kinds of testing - unit, function, regression, performance, system, and acceptance testing - without preliminary static analysis is only about 85%.
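To see why putting static analysis in front of testing can push defect removal efficiency past 96%, here is a back-of-the-envelope sketch. It rests on a simplifying assumption of ours, not a claim from the webinar: each stage independently removes a fixed fraction of whatever defects remain when it runs, using Jones's rough figures of about 85% for static analysis and about 85% for a full six-stage test sequence.

    # Back-of-the-envelope defect removal efficiency (DRE) sketch.
    # Simplifying assumption (ours, not the webinar's): each stage independently
    # removes a fixed fraction of the defects still present when it runs.

    def combined_dre(stage_efficiencies):
        """Fraction of the original defects removed after all stages run in order."""
        remaining = 1.0
        for efficiency in stage_efficiencies:
            remaining *= (1.0 - efficiency)
        return 1.0 - remaining

    # Jones's rough figures: ~85% for pre-test static analysis,
    # ~85% for a full six-stage test sequence on its own.
    testing_only = combined_dre([0.85])
    static_plus_testing = combined_dre([0.85, 0.85])

    print(f"Testing alone:             {testing_only:.1%}")        # about 85%
    print(f"Static analysis + testing: {static_plus_testing:.1%}") # about 97.8%

Under that simplifying assumption, adding static analysis ahead of an 85%-efficient test sequence leaves roughly 2% of defects instead of 15%, consistent with the 96%-plus figure cited above.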

A Quality Foundation

The main idea the co-hosts wanted attendees to take away was the importance of building structural quality in from the very start of application software development. One way to do this is to incorporate CAST’s Automated Analysis and Measurement into the development process.

CAST automates the analysis and measurement of applications. Covering a wide range of platforms, languages, and frameworks, CAST incorporates software engineering and application domain expertise into its algorithms. Subject matter experts use CAST’s objective quality metrics to quickly drill down to root causes and remediate quality hot spots. Improvements are then tracked against the same measures, making it possible to quantify the effectiveness of quality improvement activities and to satisfy the six essential ingredients of effective code reviews.
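One concrete way to build quality in from the start is to run static analysis as a gate in the build pipeline, so that defects surface before testing even begins. The following is a minimal, generic sketch of such a gate; the report file name, finding format, and thresholds are hypothetical illustrations and stand in for whichever analysis output and quality budget a team actually uses, not CAST's actual API or output.

    # Minimal sketch of a pre-test quality gate. The report file name, finding
    # format, and thresholds below are hypothetical illustrations, not CAST's
    # actual output or API.
    import json
    import sys

    MAX_CRITICAL_FINDINGS = 0   # hypothetical budget for "critical" severity
    MAX_TOTAL_FINDINGS = 25     # hypothetical budget for all findings

    def quality_gate(report_path="analysis-report.json"):
        # Expected shape (assumed): a JSON list of {"rule": ..., "severity": ...}
        with open(report_path) as report_file:
            findings = json.load(report_file)

        critical = sum(1 for f in findings if f.get("severity") == "critical")
        if critical > MAX_CRITICAL_FINDINGS or len(findings) > MAX_TOTAL_FINDINGS:
            print(f"Quality gate failed: {critical} critical, {len(findings)} total findings")
            sys.exit(1)  # non-zero exit stops the pipeline before test stages begin
        print("Quality gate passed: proceeding to testing")

    if __name__ == "__main__":
        quality_gate()

Because a gate like this runs on every build, structural defects are confronted when they are cheapest to fix rather than discovered during testing or, worse, in production.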

By incorporating CAST into their development processes, businesses can go a long way toward preventing high-profile outages and taking the risk out of that part of their business.


Jonathan Bloom Writer, Blogger & PR Consultant
Jonathan is an experienced writer with over 20 years writing about the Technology industry. Jon has written more than 750 journal and magazine articles, blogs and other materials that have been published throughout the U.S. and Canada. He has expertise in a wide range of subjects within the IT industry including software development, enterprise software, mobile, database, security, BI, SaaS/Cloud, Health Care IT and Sustainable Technology. In his free time, Jon enjoys attending sporting events, cooking, studying American history and listening to Bruce Springsteen music.