I had the pleasure of moderating a panel discussion with Bill Martorelli, Principal Analyst at Forrester Research, Inc.; Dr. Richard Mark Soley, Chairman and CEO of Object Management Group (OMG); Siva Ganesan, VP & Global Head of Assurance Services at Tata Consultancy Services (TCS); and Lev Lesokhin, EVP, Strategy & Market Development at CAST.
We focused on industry trends, and specifically on how standardizing quality measures can significantly reduce software risk. The interactive format gave attendees four distinct perspectives on the challenges organizations face and the progress being made, both within enterprises directly and at systems integrators.
Mr. Martorelli started the discussion by providing insight into four powerful dynamics reshaping our ecosystem:
Mr. Martorelli underscored the importance of preparing for these shifts by highlighting the impact poor quality can have on the business:
Dr. Soley of OMG built on Mr. Martorelli’s observations by discussing how standards bodies such as OMG, SEI, and CISQ are helping the industry respond to these challenges, providing specific standards and guidance to gain visibility into business-critical applications, control outsourcers, and benchmark in-house and outsourced development teams.
Mr. Martorelli emphasized the focus he has seen at client organizations on shifting quality to the left, and how responsibility for quality is spreading to many new stakeholders.
Some of the trends covered during the discussion included:
Mr. Ganesan from TCS provided insight into how TCS Assurance Services is evolving to meet these new challenges. He explained TCS’s rationale for moving beyond code checkers and simple code hygiene, and the need to employ automated, structural analysis to provide world-class service to clients and ensure more reliable, higher-quality deliverables.
We’d like to thank each of our panelists for their time and insight. We received a high level of interest from attendees, with many questions submitted for our speakers. Please find a selection of these questions below. If you’d like to listen to the recording of the webinar, click here.
In truth, this is very possible, and it is a significant non-starter for many organizations: a sudden accounting of every potential issue in an application can seem daunting, and many solutions generate a great deal of ‘noise’ during analysis. At CAST, we propose a risk-based approach: one that focuses on identifying the most critical violations rather than all possible violations, and on the new violations being added rather than those that have sat in your systems for years. The critical path of an initial technical assessment of an application or portfolio should therefore concentrate on the most critical risks. CAST AIP provides a Transaction-wide Risk Index that lists the application’s transactions sorted by risk category (performance, robustness, or security); focusing on these violations improves the application’s critical transactions. AIP also generates a Propagated Risk Index that identifies the object/rule pairings whose remediation will have the biggest impact on the overall health of the application or system. Any analysis without this level of detail and prioritization will certainly create more obstacles than it removes.
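The risk-based triage described above can be sketched in a few lines. This is an illustrative model only: the data structure, field names, and severity ranking below are assumptions for the example, not CAST AIP’s actual data model or API.

```python
from dataclasses import dataclass

# Hypothetical violation record; field names are illustrative, not CAST AIP's.
@dataclass
class Violation:
    rule: str
    severity: str   # "critical", "high", "medium", or "low"
    category: str   # "performance", "robustness", or "security"
    is_new: bool    # introduced since the previous analysis snapshot

SEVERITY_RANK = {"critical": 0, "high": 1, "medium": 2, "low": 3}

def prioritize(violations):
    """Risk-based triage: keep only critical violations and newly
    introduced ones, then order by severity so the remediation
    backlog starts with the highest-impact fixes."""
    shortlist = [v for v in violations if v.severity == "critical" or v.is_new]
    return sorted(shortlist, key=lambda v: SEVERITY_RANK[v.severity])

violations = [
    Violation("sql-injection", "critical", "security", is_new=False),
    Violation("unbounded-loop", "high", "performance", is_new=True),
    Violation("dead-code", "low", "robustness", is_new=False),  # old and minor: filtered out
]
backlog = prioritize(violations)
# The long-standing low-severity finding is excluded; the critical
# security violation leads the backlog, followed by the new one.
```

The point of the sketch is the filter, not the sort: by dropping old, non-critical findings, the initial assessment produces a short, actionable list instead of an overwhelming inventory.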
Open source code, just like code developed by your own team or a partner, injects risk into systems, and as with any other code, the biggest risk is a lack of visibility into it. Some studies have found that open source code is, on average, better than industry norms; others suggest that its quality depends on the testing practices of the community behind it, since code that is tested continuously tends to have fewer defects. It is therefore difficult to claim that open source is inherently riskier. What can be said is that accepting code from any source, open or contracted, without a proper, objective measure of the deliverable adds risk to your systems.
The term “Technical Debt”, first defined by Ward Cunningham in 1992, is having a renaissance. A wide variety of ways to define and calculate Technical Debt are emerging.
While the methods vary, how you define and calculate Technical Debt makes a big difference to the accuracy and utility of the result. Some authors count the need for upgrades as Technical Debt; however, this can lead to very large estimates. At CAST, our calculation of Technical Debt is data-driven, producing an objective, conservative, and actionable estimate.
We define Technical Debt in an application as the effort required to fix only those problems that are highly likely to cause severe business disruption and remain in the code when an application is released; it does not include all problems, just the most serious ones.
Based on this definition, we estimate that the Technical Debt of an average-sized application of 300,000 lines of code is $1,083,000 – roughly a million dollars. For further details on our calculation method and results on the current state of software quality, please see the CRASH Report (CAST Report on Application Software Health).
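The shape of such a calculation can be sketched as follows. Note that the violation densities, fix fractions, effort per fix, and hourly rate used here are assumed placeholder values chosen for illustration, not CAST’s published CRASH parameters.

```python
# Illustrative technical-debt estimate: debt is the cost of fixing only
# the fraction of violations serious enough to warrant remediation.
# All numeric parameters below are assumptions for the example.

def technical_debt(kloc, density_per_kloc, fix_fraction,
                   hours_per_fix=1.0, hourly_rate=75.0):
    """Debt = (violations expected to be fixed) * effort * cost.
    density_per_kloc and fix_fraction are dicts keyed by severity."""
    debt = 0.0
    for severity, density in density_per_kloc.items():
        violations = density * kloc
        debt += violations * fix_fraction[severity] * hours_per_fix * hourly_rate
    return debt

# A 300 KLOC application, with the (assumed) convention of fixing half
# the high-, a quarter of the medium-, and a tenth of the low-severity
# violations found:
estimate = technical_debt(
    kloc=300,
    density_per_kloc={"high": 20, "medium": 50, "low": 100},
    fix_fraction={"high": 0.5, "medium": 0.25, "low": 0.1},
)
# estimate == 731250.0 (dollars) for these placeholder inputs
```

The key property of this style of estimate is its conservatism: because only a fraction of the most serious violations enter the sum, the result stays actionable rather than ballooning to include every conceivable improvement.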
Here’s a community dedicated to the awareness and education of the topic: http://www.ontechnicaldebt.com
In addition to measuring a system’s quality, the ability to count function points automatically, combined with precise measures of changes in the number and complexity of all application components, makes it possible to measure development team productivity accurately. Employing CAST AIP as a productivity measurement solution enables:
CAST AIP and the CISQ Automated Function Point Specification: The CISQ Automated Function Point Specification produced by the CISQ team led by David Herron of the David Consulting Group has recently passed an important milestone. CISQ has worked with the OMG Architecture Board to get the specification properly represented in OMG’s existing meta-models. This specification was defined as closely as possible to the IFPUG counting guidelines, while providing the specificity required for automation. This fall it was approved for a 3-month public review on the OMG website. All comments received will be reviewed at the December OMG Technical Meeting, and the relevant OMG boards will vote on approving it as an OMG-supported specification (OMG’s equivalent of a standard). From there, it will undergo OMG’s fast-track process with ISO to have it considered for inclusion in the relevant ISO standard. We believe this standard will expand the use of Function Point measures by dramatically reducing their cost and improving their consistency.
Here’s a site that provides additional insight into the impact of outages.