How To Calculate Technical Debt: A Top-Down Approach


As business leaders become more involved with IT investment decisions, many CIOs have found it harder to secure funding for the maintenance of applications and infrastructure. As a result, technical debt has become an even more useful term for explaining to business stakeholders why IT maintenance investments matter. This post goes into detail on how to calculate technical debt.

Technical debt is the accumulated cost built up within an organization through poor coding techniques, trade-offs between quality and time, and suboptimal standards. Trading quality for short-term advantage can lead to serious long-term issues such as application outages, security vulnerabilities, and increased maintenance costs. Technical debt began as a term for quick-and-dirty coding practices but has since been extended to cover issues with architecture, infrastructure, integration, and processes. CIOs are now trying to calculate its actual cost, and the benefit of paying it down, in order to justify core renewal projects, increase maintenance budgets, or identify the most pressing targets for maintenance work.

The procedures for measuring code quality have become more “advanced and quantitative than the techniques typically used for measuring the technical debt associated with IT infrastructure, architecture, integration, and processes”. Even in those latter areas, however, CIOs can achieve directionally accurate estimates of the cost of technical debt.

Before evaluating the levels of technical debt within an organization, the CIO should decide what the goals of the evaluation are. For example, if the CIO is looking to replace a 40-year-old system, not every small defect within the system needs to be recorded; what should be calculated in this case is how much the system costs to run and how much would be saved once the system is retired. In contrast, a CIO who wants to fix a system will need to identify its problems, the severity of those problems, and the cost to fix them.
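The run-cost-versus-retirement comparison above can be sketched as a simple calculation. All of the figures below are illustrative assumptions, not data from the post:

```python
def retirement_savings(annual_run_cost: float,
                       annual_target_cost: float,
                       one_time_migration_cost: float,
                       years: int) -> float:
    """Net savings over `years` from retiring a legacy system and
    replacing it with a cheaper target system."""
    saved_per_year = annual_run_cost - annual_target_cost
    return saved_per_year * years - one_time_migration_cost

# Hypothetical example: a system costing $2.0M/yr, replaced by one
# costing $0.5M/yr, with a $3.0M one-time migration, over 5 years:
print(retirement_savings(2_000_000, 500_000, 3_000_000, 5))  # 4500000.0
```

If the result were negative over the CIO's planning horizon, carrying the system (and its debt) might be the cheaper option for now, which is exactly the trade-off the goal-setting step is meant to surface.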

To calculate code quality debt in a core system such as claims processing or a trading platform, a code-scanning tool can be a good first step. These tools identify quality, security, and performance defects within an application and estimate the effort needed to resolve them. Organizations should therefore assess the available tools to see which best meet their needs. After identifying the issues that make up the code quality debt, CIOs can calculate the cost of those issues, the cost of carrying them as they are, and the potential cost if the debt reaches a level that disrupts the business. Once these costs and potential costs are determined, the cost of paying down the debt must also be assessed.
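Turning scanner findings into a dollar figure typically means multiplying defect counts by remediation effort and a labor rate. A minimal sketch, where the defect categories, effort figures, and hourly rate are all assumptions rather than output from any particular tool:

```python
# Hypothetical findings, as a code-scanning tool might categorize them.
findings = [
    {"type": "security",    "count": 12,  "hours_to_fix": 6},
    {"type": "performance", "count": 30,  "hours_to_fix": 2},
    {"type": "quality",     "count": 150, "hours_to_fix": 1},
]
HOURLY_RATE = 95  # assumed blended developer cost, $/hour

# Estimated cost to pay down the code quality debt:
remediation_cost = sum(f["count"] * f["hours_to_fix"] * HOURLY_RATE
                       for f in findings)
print(f"Estimated remediation cost: ${remediation_cost:,}")  # $26,790
```

The carrying cost (extra maintenance hours per year caused by the defects) and the potential disruption cost would be estimated separately, then compared against this pay-down figure.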

The method outlined above is a top-down approach to calculating technical debt; it is more subjective than a bottom-up approach, which relies more on benchmarks, estimates, and proxies. Calculating technical debt bottom-up across an entire application portfolio is a significant project that could take several months. In comparison, a “bottom-up analysis of a single system or a small set of systems, or a top-down estimate of infrastructure technical debt take considerably less time”. Such analysis allows CIOs to prioritize the areas that present the greatest risk of business disruption or that cost the most to carry.
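The prioritization step at the end can be sketched as a ranking of systems by carrying cost weighted by disruption risk. The system names, costs, and risk probabilities below are hypothetical, purely to illustrate the triage:

```python
# Hypothetical portfolio: annual carrying cost of each system's debt
# and an assumed probability of business disruption within a year.
systems = {
    "claims_processing": {"annual_carry_cost": 800_000, "disruption_risk": 0.30},
    "trading_platform":  {"annual_carry_cost": 600_000, "disruption_risk": 0.45},
    "hr_portal":         {"annual_carry_cost": 150_000, "disruption_risk": 0.10},
}

# Rank by expected annual exposure: carrying cost x disruption risk.
ranked = sorted(systems.items(),
                key=lambda kv: kv[1]["annual_carry_cost"] * kv[1]["disruption_risk"],
                reverse=True)
for name, s in ranked:
    exposure = s["annual_carry_cost"] * s["disruption_risk"]
    print(f"{name}: ${exposure:,.0f} expected annual exposure")
```

Under these assumed numbers the trading platform ranks first despite a lower carrying cost, because its higher disruption risk dominates; that is the kind of result the top-down estimate is meant to surface quickly.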

To read the full post go to:

Filed in: Technical Debt