Quantifying Technical Debt: Beware of Your Assumptions

by Lev Lesokhin

Our colleagues at Gartner have caused a bit of a stir in the media with their findings on IT debt. Almost every industry publication, and more than a few bloggers, have opined on it by this point. Here is a sampling of the recent articles and posts on the topic:

  1. Computerworld, by Mitch Betts, chronicling some of the recent controversy
  2. Israel Gat, on how Technical Debt relates to SaaS, Mobile, and to toxic code!
  3. Computerworld, by Pat Thibodeau, surveying the research into technical debt and offering an end-user perspective
  4. NetworkWorld, by John Dix, describing the Gartner report together with the CAST study on the subject
  5. Vinnie Mirchandani, saying that Gartner’s estimates may be overstated
  6. Dennis Moore, describing tech debt and how it relates to purchased software

By the way, Israel Gat has been writing about the topic as a practitioner for a long time. I highly recommend his thoughtful posts.

The key question in defining technical debt – for pundits and practitioners alike – is scope: what's in and what's out. The canonical definition focuses on the tradeoffs made during the implementation (i.e., coding) of software. The analysts at both Gartner and Forrester sometimes mix in things like upgrades of Microsoft Windows and network infrastructure. Actually, the Gartner definition of IT debt sounds a lot like technical debt, and Forrester's technical debt seems more like IT debt. That's the subject of another post.

For most business executives, upgrading to the latest version of SAP or the Windows OS is not a debt at all. It's an option to take at some point when it makes sense for business operations. The corners we cut when coding in the heat of project pressure – architectural shortcuts, inefficient or dangerous implementations, and rewrites instead of reuse – all seem to qualify as debt in the Ward Cunningham sense. These are all structural quality issues that will at best slow down future development, and at worst explode in test or in live use.

My perspective is that Gartner's macroeconomic numbers are probably about right. I don't know their calculation methodology, but the order of magnitude holds up. At CAST, we disassembled and analyzed our sample of about 300 mission-critical custom applications and came out with a conservative estimate of about $1 million of technical debt per application, at a cross-industry average size of roughly 375 KLOC. Most Global 2000 companies will have over a hundred apps that fall into this category. That's over $200 billion already. Most big companies will also have a long tail of hundreds of custom apps that are not as large or mission critical. And #2000 on the Forbes Global 2000 list has revenues of $2.69 billion, so there's another long tail of technical debt beyond this select group of companies. Of course, our estimate only quantified the most important engineering problems buried inside these apps – those that will have to be fixed one way or another for the apps to continue to support the business.
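For those who want to check the arithmetic, here is a minimal back-of-envelope sketch in Python. The per-app debt figure and app count come from the numbers above; the long-tail values are purely illustrative assumptions, not results from the CAST study.

```python
# Back-of-envelope estimate of aggregate technical debt across the
# Global 2000, using the rough averages quoted above.

DEBT_PER_CRITICAL_APP = 1_000_000   # ~$1M per ~375 KLOC mission-critical app
CRITICAL_APPS_PER_COMPANY = 100     # "over a hundred apps" per Global 2000 firm
NUM_COMPANIES = 2000                # Forbes Global 2000

core_estimate = DEBT_PER_CRITICAL_APP * CRITICAL_APPS_PER_COMPANY * NUM_COMPANIES
print(f"Core estimate: ${core_estimate / 1e9:.0f}B")   # -> $200B

# Hypothetical long tail: hundreds of smaller custom apps per company,
# each carrying a fraction of the debt of a mission-critical app.
# These two values are assumptions for illustration only.
TAIL_APPS_PER_COMPANY = 400
TAIL_DEBT_PER_APP = 250_000

tail_estimate = TAIL_DEBT_PER_APP * TAIL_APPS_PER_COMPANY * NUM_COMPANIES
print(f"With long tail: ${(core_estimate + tail_estimate) / 1e9:.0f}B")
```

Under these assumptions the long tail alone matches the core estimate, which is why the headline number is best read as a floor rather than a ceiling.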

As a side note, I get the sense that most of the folks offering their opinions on the Gartner research didn't actually have access to Andy Kyte's research. It's a bit cynical of Gartner to court the press without making the research available to non-subscribers. But even if most of the frenzy has been speculation, it doesn't take anything away from the importance of the issue. This is a valuable discussion and, whether their numbers are accurate or not, we can thank Gartner for raising the temperature on the problem and placing some focus on the work we need to do to improve the situation.

Filed in: Technical Debt
Lev Lesokhin EVP, Strategy and Analytics at CAST
Lev spends his time investigating and communicating ways that software analysis and measurement can improve the lives of apps dev professionals. He is always ready to listen to customer feedback and to hear from IT practitioners about their software development and management challenges. Lev helps set market & product strategy for CAST and occasionally writes about his perspective on business technology in this blog and other media.