Reduce Software Risk through Improved Quality Measures with CAST, TCS and OMG

by Pete Pizzutillo

Webinar Summary

I had the pleasure of moderating a panel discussion with Bill Martorelli, Principal Analyst at Forrester Research Inc; Dr. Richard Mark Soley, Chairman and CEO of Object Management Group (OMG); Siva Ganesan, VP & Global Head of Assurance Services at Tata Consultancy Services (TCS); and Lev Lesokhin, EVP, Strategy & Market Development at CAST.

We focused on industry trends, and specifically discussed how standardizing quality measures can have a big impact on reducing software risk. This interactive format allowed attendees to hear four distinct perspectives on the challenges organizations face and the progress being made, both within organizations directly and at systems integrators.

Mr. Martorelli started the discussion by providing insight into four powerful dynamics reshaping our ecosystem:

  1. Innovation revolution
  2. As-a-Service as a norm
  3. Changing demographics
  4. Rise of social and mobile

Mr. Martorelli underscored the importance of preparing for these shifts by highlighting the impact poor quality can have on the business:

  • Poor performing, unstable applications
  • Diminished potential for brand loyalty, market share, revenues
  • Costly outages and unfavorable publicity

Dr. Soley from OMG built on Mr. Martorelli’s observations by discussing how standards bodies, such as OMG, SEI and CISQ, are helping the industry respond to these challenges by providing specific standards and guidance to gain visibility into business-critical applications, control outsourcers, and benchmark in-house and outsourced development teams.

Mr. Martorelli emphasized the focus he has seen at client organizations on shifting quality to the left, and how responsibility for quality is expanding to many new stakeholders.

Some of the trends covered during the discussion included:

  • Moving test and quality to the left of the waterfall
  • Addressing architectural sprawl with more architectural and engineering know-how
  • Seeing quality measurement become an important component of service levels
  • Emerging combined professional services/managed services offerings
  • Shifting responsibility for quality management to the business user
  • Favoring more results-driven approaches over conventional staffing-based testing services

Mr. Ganesan from TCS provided insight into how TCS Assurance Services is evolving to meet these new challenges. Mr. Ganesan explained TCS’s rationale for evolving beyond code checkers and simple code hygiene, and the need to employ automated structural analysis to provide world-class service to their clients and ensure more reliable, high-quality deliverables.

We’d like to thank each of our panelists for their time and insight. We received a high level of interest from attendees, with many questions submitted for our speakers. Please find a selection of these questions below. If you’d like to listen to the recording of the webinar, click here.

Q&A

It is clear how one might apply this to new development, but how does one approach applying a code quality metric to an existing portfolio? Would not the changes be overwhelming?

In truth, this concern is a significant non-starter for many organizations: the sudden accounting of every potential issue in an application portfolio can seem daunting, and many solutions tend to generate a lot of ‘noise’ during analysis. At CAST, we propose a risk-based approach: one that focuses on identifying the most critical violations rather than all possible violations, and on the new violations being added rather than the ones that have been sitting in your systems for years. Your critical path during an initial technical assessment of an application or portfolio should therefore focus on identifying the most critical risks. CAST AIP provides a Transaction-wide Risk Index that displays the different transactions of the application sorted by risk category (performance, robustness, or security); by focusing on these violations, you will improve the application’s critical transactions. Additionally, AIP generates a Propagated Risk Index to identify the object/rule pairings whose remediation will have the biggest impact on the overall health of the application or system. Any analysis without this level of detail and prioritization will create more obstacles than it removes.
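The risk-based triage described above can be sketched in a few lines. This is a minimal, hypothetical illustration (the violation records and fields here are invented for the example; the actual risk indices are computed by CAST AIP itself): filter the findings down to the critical ones, then put newly introduced violations ahead of long-dormant ones.

```python
# Hypothetical static-analysis findings; "severity" and "new" are
# illustrative fields, not the product's actual data model.
violations = [
    {"rule": "SQL injection risk", "severity": "critical", "new": True},
    {"rule": "Unbounded loop", "severity": "critical", "new": False},
    {"rule": "Naming convention", "severity": "low", "new": True},
    {"rule": "Missing index on join", "severity": "critical", "new": True},
]

# Risk-based approach: keep only critical violations, and among those,
# rank newly introduced ones ahead of long-standing ones.
critical = [v for v in violations if v["severity"] == "critical"]
backlog = sorted(critical, key=lambda v: not v["new"])  # new ones first

for v in backlog:
    print(("NEW  " if v["new"] else "     ") + v["rule"])
```

The point of the sketch is the ordering: the low-severity finding never enters the backlog at all, and the remediation queue starts with the violations the team just introduced.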

How do you see the use of Open Source code changing software risk?

Open Source, just like code developed by your own team or a partner, injects risk into systems, and as with any other code, the biggest risk is lack of visibility into it. Some studies have found that open source code is, in general, better than industry averages; others suggest that quality is a function of the community’s testing approach, since code that is tested continuously tends to have fewer defects. It is therefore hard to argue that Open Source is inherently more risky. What can be said is that accepting code from any source, open or contracted, without a proper and objective measure of that deliverable adds risk to your systems.

Bill Martorelli mentioned "Technical/Code Debt" as a quality metric; could you explain a little further, please?

The term “Technical Debt”, first defined by Ward Cunningham in 1992, is having a renaissance. A wide variety of ways to define and calculate Technical Debt are emerging.

While the methods may vary, how you define and calculate Technical Debt makes a big difference to the accuracy and utility of the result. Some authors count the need for upgrades as Technical Debt; however this can lead to some very large estimates. At CAST, our calculation of Technical Debt is data-driven, leading to an objective, conservative, and actionable estimate.

We define Technical Debt in an application as the effort required to fix only those problems that are highly likely to cause severe business disruption and remain in the code when an application is released; it does not include all problems, just the most serious ones.

Based on this definition, we estimate that the Technical Debt of an average-sized application of 300,000 lines of code is $1,083,000 – roughly a million dollars. For further details on our calculation method and results on the current state of software quality, please see the CRASH Report (CAST Report on Application Software Health).
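As a back-of-the-envelope check, the headline figure implies a debt density of $3.61 per line of code. The sketch below only reproduces that arithmetic; the CRASH methodology itself derives the estimate from counts of severe violations and their fix effort, so the flat per-line density here is a simplification for illustration:

```python
# $1,083,000 of Technical Debt on a 300,000-line application implies
# a density of $3.61 per line of code. (Illustrative only: the actual
# estimate is built up from severe-violation counts and fix effort.)
lines_of_code = 300_000
debt_per_loc = 3.61  # USD per line, implied by the report's figure

technical_debt = lines_of_code * debt_per_loc
print(f"${technical_debt:,.0f}")  # → $1,083,000
```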

Here’s a community dedicated to the awareness and education of the topic: http://www.ontechnicaldebt.com

Much of today’s discussion focused on quality, but what is this group’s perspective on the other component of CAST AIP, function point analysis?

In addition to measuring a system’s quality, the ability to measure the number of function points as well as precise measures of the changes in the number and complexity of all application components makes it possible to accurately measure development team productivity.  Employing CAST AIP as a productivity measurement solution enables:

  • The calculation of a productivity baseline for either in-house or offshore teams
  • The tracking of productivity over time, by month or by release
  • The ability to automatically generate measures of quality and complexity
  • The identification of the root causes of process inefficiencies
  • The ability to measure the effectiveness of process improvements
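A productivity baseline of this kind boils down to function points delivered per unit of effort, tracked release over release. A minimal sketch, with hypothetical release data and field names (any real measurement platform would export richer detail):

```python
# Hypothetical per-release data: automated function point counts and
# team effort, as a productivity-measurement tool might export them.
releases = [
    {"name": "R1", "function_points_delivered": 120, "person_months": 15},
    {"name": "R2", "function_points_delivered": 140, "person_months": 14},
]

# Productivity baseline: function points delivered per person-month.
for r in releases:
    productivity = r["function_points_delivered"] / r["person_months"]
    print(f'{r["name"]}: {productivity:.1f} FP per person-month')
```

Comparing these ratios across releases, or between in-house and offshore teams, is what turns a one-off function point count into a trend you can act on.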

CAST AIP and the CISQ Automated Function Point Specification: The CISQ Automated Function Point Specification produced by the CISQ team led by David Herron of the David Consulting Group has recently passed an important milestone. CISQ has worked with the OMG Architecture Board to get the specification properly represented in OMG’s existing meta-models. This specification was defined as closely as possible to the IFPUG counting guidelines, while providing the specificity required for automation. This fall it was approved for a 3-month public review on the OMG website. All comments received will be reviewed at the December OMG Technical Meeting, and the relevant OMG boards will vote on approving it as an OMG-supported specification (OMG’s equivalent of a standard). From there, it will undergo OMG’s fast-track process with ISO to have it considered for inclusion in the relevant ISO standard.  We believe this standard will expand the use of Function Point measures by dramatically reducing their cost and improving their consistency.

Is the industry average one production incident per week and one outage per month? Are these major incidents and outages for the enterprise?

Here’s a site that provides additional insight into the impact of outages.

Pete Pizzutillo, VP Corporate Marketing at CAST

Pete Pizzutillo is Vice President of Corporate Marketing at CAST. He is responsible for leading integrated marketing strategies (digital and social media, public relations, partners, and events) to build client engagement and generate demand. He passionately believes that the industry has the knowledge, tools, and capability such that no one should lose customers, revenue, or damage their brand (or career) due to poor software. Pete also oversees CAST’s product marketing team, whose mission is to help organizations understand how Software Intelligence supports this belief. Prior to CAST, Pete oversaw product development and product management for an estimating and planning software company in the Aerospace and Defense market. He has worked in several industries in various marketing roles and started his career as an advertising agency art director. He is a graduate of The Pennsylvania State University with degrees in Business Administration and Art. Pete lives in New Jersey with his wife and their four children. You can connect with Pete on LinkedIn or Twitter: @pizzutillo.