Technical Debt Measurement Webinar: Reversal Strategy Q&A Follow Up

by Lev Lesokhin

Last Wednesday we had an excellent and very interactive webinar discussion with David Sisk and Scott Buchholz, Directors at Deloitte Consulting LLP. David and Scott are experts on technical debt -- both at a hands-on technical level and in IT strategy and governance. We talked about the symptoms and causes of technical debt in large IT environments, as well as the organization and processes that need to be put in place to reverse the normal trend of technical debt accrual.

One of the topics that came up repeatedly was how to get the business on board. Our guest presenters offered some very interesting approaches to making the case, even when the immediate symptoms of the debt are not evident to business stakeholders. That discussion alone is worth listening to.

Another recurring topic in the Q&A, asked in several different ways, was how to set up a technical debt measurement program. As in our last webinar, we ran a couple of minutes over our timeslot to address some of the questions, but we had to leave many unanswered due to time. The goal here is to answer some of those questions on our blog. If anyone wants to get into a more detailed discussion on any of these points, please contact us and we'll be happy to talk. So, here goes:

1. How do we establish a technical debt measurement program?

There were a number of questions on this topic, asked in different ways. This is a topic unto itself, and I believe we should organize a webinar dedicated to it -- perhaps later this year. We have already seen some best practices in this regard, so we could share some anonymized dashboards and reports, as well as some process examples. Suffice it to say, the outlines of such a program should include automation for consistent measurement of technical debt, a process that defines a consistent set of points at which technical debt is actually measured, and reporting that gives senior IT and business stakeholders actionable data they can affect through management decisions.
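To make that outline concrete, here is a minimal sketch in Python of the automation piece: an analysis run at a small set of agreed gates, with the result appended to a trend file that can feed stakeholder reporting. The gate names, the CSV layout, and the run_analysis() stand-in are assumptions for illustration, not any particular tool's API.

```python
# Minimal sketch of "automation + consistent measurement points + reporting".
# run_analysis() is a stand-in for whatever analyzer you use; the gate names
# and file layout are invented for this example.
import csv
import datetime

MEASUREMENT_GATES = {"release-candidate", "production-release"}  # agreed measurement points

def run_analysis(codebase_path: str) -> dict:
    # Stand-in for a real code scan; replace with a call to your analysis tool.
    return {"critical": 12, "major": 340, "minor": 4800}

def record_measurement(app_name: str, gate: str, codebase_path: str,
                       trend_file: str = "tech_debt_trend.csv") -> None:
    # Only measure at the agreed points so the trend stays comparable release to release.
    if gate not in MEASUREMENT_GATES:
        raise ValueError(f"measure only at agreed gates: {sorted(MEASUREMENT_GATES)}")
    counts = run_analysis(codebase_path)
    with open(trend_file, "a", newline="") as f:
        csv.writer(f).writerow([datetime.date.today().isoformat(), app_name, gate,
                                counts["critical"], counts["major"], counts["minor"]])

record_measurement("billing-app", "release-candidate", "/path/to/source")
```

Keeping the gates few and fixed is what makes the resulting trend line defensible in front of business stakeholders.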

2. What tools are available on the market to measure technical debt?

Quite naturally, one of the reasons we were interested in hosting this webinar is that CAST provides technology for estimating technical debt and for measuring it precisely in a continuous manner. CAST's Highlight rapid portfolio analyzer provides a quick, code-scan-based estimate of technical debt, along with measures of software size, complexity, and estimated technical risk. CAST's Application Intelligence Platform provides precise measurement of technical debt that can be benchmarked against industry standards and trended from release to release -- reliable enough to put in a sourcing contract.

There are other tools on the market that estimate technical debt, which you should be able to find easily. The only caution I would offer is to check whether the measure is consistent enough for trending, and whether you can dissect the technical debt into items that are truly risky versus simple hygiene. Those of you who asked for "rigorous" and "repeatable" measures of technical debt have clearly experimented with some of the simpler open source tools.
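When comparing estimators, it can help to see the shape of the arithmetic most of them use: a remediation-cost model that multiplies flaw counts by an assumed fix effort, the fraction of flaws you would actually remediate, and a blended hourly rate. The figures below are made-up illustrations, not CAST's model or any vendor's published numbers.

```python
# Illustrative remediation-cost estimate of technical debt. The effort figures,
# fix fractions, and hourly rate below are made-up for illustration only.
HOURS_TO_FIX = {"critical": 4.0, "major": 2.0, "minor": 0.5}    # assumed effort per flaw
FRACTION_FIXED = {"critical": 1.0, "major": 0.5, "minor": 0.1}  # assumed share actually remediated
COST_PER_HOUR = 75.0                                            # assumed blended rate, in dollars

def estimate_debt(flaw_counts: dict) -> float:
    """Return an estimated technical debt figure in dollars."""
    return sum(count * HOURS_TO_FIX[sev] * FRACTION_FIXED[sev] * COST_PER_HOUR
               for sev, count in flaw_counts.items())

print(estimate_debt({"critical": 120, "major": 900, "minor": 4000}))  # -> 118500.0
```

If two tools disagree on the inputs to a formula like this (which flaws count, and at what effort), their debt figures will not be comparable -- which is exactly why consistency matters more than the absolute number.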

3. How do you avoid a situation where the technical debt metrics are satisfied, but quality stays poor?

There is a lot we can say in response to this question. In our experience, it comes down to three key elements of the technical debt measurement program: analysis depth, prioritization, and controls.

  1. Analysis depth. It's easy to find spurious issues in code and call them technical debt. Clearly, the value of the technical debt measure depends on the analysis being aggregated into it. If the analysis includes integration-level and architecture-level issues, with a multi-component view, it bears more directly on the actual quality of the software. If the analysis is simplistic and superficial, the resulting technical debt measure will correlate poorly with the actual quality of the software.
  2. Prioritization. A typical analysis of a half-million lines of code (LOC) application will reveal over 5,000 software engineering flaws. Most of these will be relatively minor, but some will be quite significant, depending on the context. In order to prioritize, the analysis needs to have depth (back to point #1). If the analysis includes only simple issues -- comment density, cyclomatic complexity measures, quality of variable initializations, itemization of empty catch blocks, and so on -- these all have some impact, but they are not as critical as finding an entire transaction chain with no error handling, or instances of architecture bypass. A useful measure of technical debt lets the team see the weights of the various flaws, so they can start with the most significant (see the sketch after this list). It is also useful to measure technical debt along categories such as resilience, performance, security, and maintainability.
  3. Central controls. There are two ways to set up a technical debt measurement system -- distributed and centralized. In the distributed model, every individual runs their own analysis and a dashboard aggregates the results. In the centralized model, all the source code goes through a central server, so the measurement model and the inclusions/exclusions are managed by a central administrator. It's important to have a consistent set of rules and flaws being measured, with a consistent exclusion policy. This way you can make sure that any technical debt reduction that takes place has a meaningful impact on software quality, cost, and risk.
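As referenced in point #2, here is a small sketch of flaw prioritization by weight. The categories, weights, and example flaws are assumptions chosen to show the idea, not a standard scoring scheme.

```python
# Illustrative flaw prioritization by weight; the categories, weights, and
# example flaws below are assumptions, not a standard scoring scheme.
from dataclasses import dataclass

CATEGORY_WEIGHT = {"resilience": 3.0, "security": 3.0, "performance": 2.0, "maintainability": 1.0}
SEVERITY_WEIGHT = {"critical": 5.0, "major": 2.0, "minor": 0.5}

@dataclass
class Flaw:
    description: str
    category: str
    severity: str

    @property
    def weight(self) -> float:
        return CATEGORY_WEIGHT[self.category] * SEVERITY_WEIGHT[self.severity]

flaws = [
    Flaw("Transaction chain with no error handling", "resilience", "critical"),
    Flaw("Empty catch block", "resilience", "minor"),
    Flaw("Data access layer bypassed from the UI", "maintainability", "major"),
]

# Work the highest-weight items first; summing weights per category also gives
# a breakdown along resilience, performance, security, and maintainability.
for flaw in sorted(flaws, key=lambda f: f.weight, reverse=True):
    print(f"{flaw.weight:5.1f}  {flaw.description}")
```

The point of a weighting scheme like this is not the specific numbers but that the team agrees on them centrally, so "debt paid down" means the risky items went first, not just the easy ones.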

4. How do you compare TCO and technical debt?

Inherently, the total cost of ownership (TCO) of an application and its technical debt are two different things. TCO typically includes maintenance cost and operating cost, and some will also include enhancement cost. Strictly speaking, technical debt is the cost of the development or architecture effort that would be required to bring an application to a healthy state. Though these are different concepts, TCO is very much driven by the amount of technical debt that accrues in an application.

According to Gartner, for typical project work the cost of development is about 8% of TCO and the remaining 92% is incurred afterward. That means for an average project, the organization spends $92 in ownership cost for every $8 spent on the project to build that functionality. So, if a project carries more technical debt than average, that $8 of functionality can end up costing much more than $92 over its lifetime. Some technical debt has a higher impact on TCO than other technical debt; we have models here at CAST that show the percentage impact.
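As a quick worked example of that split (the 1.5x debt multiplier below is a hypothetical illustration, not a published figure):

```python
# Worked example of the 8% / 92% split; the 1.5x debt multiplier is a
# hypothetical illustration, not a published figure.
build_cost = 8.0        # dollars spent on the project to build the functionality
average_tco = 92.0      # ownership cost for an average project over its life

debt_multiplier = 1.5   # assumed: this application carries 50% more technical debt than average
debt_heavy_tco = average_tco * debt_multiplier

print(f"Average project:    ${build_cost:.0f} build -> ${average_tco:.0f} TCO")
print(f"Debt-heavy project: ${build_cost:.0f} build -> ${debt_heavy_tco:.0f} TCO")
```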

This concludes our answers to some of your questions. There were more questions asked, and we will get back to you individually on those. There are also some threads of discussion -- about technical debt prioritization, governance, and measurement -- to which I'm sure our colleagues from Deloitte will have much to contribute, perhaps in a future blog post. Please send us comments and let us know if this would be of interest.

Lev Lesokhin EVP, Strategy and Analytics at CAST
Lev spends his time investigating and communicating ways that software analysis and measurement can improve the lives of apps dev professionals. He is always ready to listen to customer feedback and to hear from IT practitioners about their software development and management challenges. Lev helps set market & product strategy for CAST and occasionally writes about his perspective on business technology in this blog and other media.