CRASHing Into Technical Debt

by Jonathan Bloom

Without going into specific finances, I make twice as much money as I did just 10 years ago. You would think this would be an indication that times, for me anyway, are good; yet the week before every payday I still find myself asking the same question: “Where did all my money go?”

It really isn’t rocket science, though. While my income has more than doubled, my debts have gone up at least that much, if not more. Besides the obvious inflation factors (food, gas and entertainment costs have all gone way up in the last decade), there are many other things for which I am indebted. I now own a home, have a child and, whereas a decade ago I drove a used car that I paid for in cash, I now drive a nice SUV…one that has almost three years’ worth of installment payments left on it.

Were I to suddenly lose my job or have to take a pay cut like so many others in this economy, there are areas where I would need to cut back. Obviously, I would cut my entertainment budget first, followed by other non-critical things. The deeper the cuts went, though, the more I would need to sit down and calculate just which debt could be cut and which debt was necessary.

Companies need to look at their technical debt in much the same way, and that is exactly what CAST had in mind when it performed the calculations for its recent CAST Report on Application Software Health (CRASH) study.

Looking for Not-So-Easy Money

When calculating technical debt in its recent CRASH study, CAST set out to establish a realistic, true-to-business approach rather than a merely authoritative one. Building on the methodology of its original software quality study in 2010, CAST made adjustments to enhance the calculation…and, according to the architect of the study, Dr. Bill Curtis, the company remains open to ways to improve it.

“Our goal is to provide an automated and repeatable process for our many clients to use technical debt as an indicator,” says Curtis. “We provide this information in combination with many technical, quality and productivity measures to provide guidance to our clients.”

As cited in the CRASH report, CAST’s approach to calculating technical debt can be defined by the following:

  1. The density of coding violations per thousand lines of code (KLOC) is derived from source code analysis using the CAST Application Intelligence Platform (AIP). The coding violations highlight coding issues around the five health factors of application software: Security, Performance, Robustness, Transferability and Changeability.
  2. Coding violations are categorized into low, medium and high severity violations. In developing the estimate of technical debt, it is assumed that only 50% of high severity violations, 25% of medium severity violations and 10% of low severity violations will ultimately be corrected in the normal course of operating the application.
  3. Conservative estimates of time and cost were used all around. To be conservative, it is assumed that low, medium and high severity problems would each take one hour to fix, although industry data suggest these numbers should be higher – in many cases much higher – especially when the fix is applied during operation. The estimated rate for the developer who fixes the problem is also conservatively estimated at an average burdened rate of $75 per hour.
  4. Technical debt is therefore calculated by taking the sum of 10% of low severity violations, 25% of medium severity violations and 50% of high severity violations, multiplying that sum by the number of hours needed to fix each problem (one hour) and multiplying that product by the cost per hour ($75). A brief worked sketch of this calculation follows the list.
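
To make the arithmetic concrete, here is a minimal sketch of that calculation in Python. The function name and the example violation counts are hypothetical illustrations; the 10%/25%/50% correction rates, the one-hour fix time and the $75 burdened hourly rate are the figures described above.

```python
def technical_debt(low, medium, high, hours_per_fix=1.0, cost_per_hour=75.0):
    """Estimate technical debt in dollars, CRASH-style.

    Only a fraction of the violations found is assumed to ever be fixed:
    10% of low-, 25% of medium- and 50% of high-severity violations.
    """
    violations_to_fix = 0.10 * low + 0.25 * medium + 0.50 * high
    return violations_to_fix * hours_per_fix * cost_per_hour


# Hypothetical example: an application with these violation counts.
print(technical_debt(low=12_000, medium=4_000, high=800))
# (0.10*12000 + 0.25*4000 + 0.50*800) violations * 1 hour * $75 = 2600 * 75 = $195,000
```

Since step 1 expresses violations as a density per thousand lines of code, the same arithmetic can just as easily be run per KLOC to compare applications of different sizes.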

Pay Me My Money Down

Great, so now we know how to calculate technical debt…but what’s next?

Technical debt can identify just how much issues with application software are costing a company, but then the question arises: “What do we do with that information?” The answer is to develop a technical debt action plan that determines how much technical debt can be absorbed before the application (or applications) in question begins to lose its value to the business.

It sounds complicated, but CIOs and heads of applications can start by using an automated system to evaluate the structural quality of their five most mission-critical applications. As each of these applications is built, measure its structural quality at every major release; if the applications are already in production, measure their structural quality every quarter.

In particular, keep a watchful eye on the violation count; monitor the changes in the violation count and calculate the technical debt of the application after each quality assessment. Once you have a dollar figure on technical debt, compare it to the business value to determine how much technical debt is acceptable versus how much is too much based on the marginal return on business value. (A framework for calculating the loss of business value due to structural quality violations can be found in “The Business Value of Application Internal Quality” by Dr. Bill Curtis.)
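
As a rough illustration of that monitoring loop, the sketch below recomputes the debt figure after each quarterly assessment, using the same formula as the earlier sketch, and flags the point at which debt exceeds an acceptable share of the application's business value. The violation counts, the business-value figure and the 10% threshold are hypothetical placeholders, not values from the CRASH report or from Dr. Curtis's paper.

```python
# Quarterly structural-quality assessments for one application
# (all counts are hypothetical).
assessments = [
    {"quarter": "Q1", "low": 10_000, "medium": 3_500, "high": 600},
    {"quarter": "Q2", "low": 11_200, "medium": 3_900, "high": 750},
    {"quarter": "Q3", "low": 13_000, "medium": 5_000, "high": 1_400},
]

business_value = 2_000_000   # hypothetical business value of the application ($)
acceptable_ratio = 0.10      # hypothetical policy: debt should stay under 10% of value

for a in assessments:
    # Same CRASH-style formula: fraction of violations expected to be fixed,
    # one hour per fix, $75 per hour.
    debt = (0.10 * a["low"] + 0.25 * a["medium"] + 0.50 * a["high"]) * 1.0 * 75.0
    status = "acceptable" if debt <= acceptable_ratio * business_value else "action needed"
    print(f"{a['quarter']}: debt = ${debt:,.0f} ({status})")
```

The interesting output here is not any single quarter's dollar figure but the trend: a violation count that keeps climbing release after release is the signal to schedule remediation before the debt crosses the line the business has agreed to tolerate.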

Calculating technical debt and how to manage it is not that different from managing personal debt; in fact, it can be easier because it is more formulaic.

I sure wish I could just as easily find a framework for calculating the loss of value of my SUV.

Jonathan Bloom, Technology Writer & Consultant
Jonathan Bloom has been a technology writer and consultant for over 20 years. During his career, Jon has written thousands of journal and magazine articles, blogs and other materials addressing various topics within the IT sector, including software development, enterprise software, mobile, database, security, BI, SaaS/cloud, Health Care IT and Sustainable Technology.