The Risks of Measuring Technical Debt

Measuring technical debt has become a common practice in organizations - but how often do you think about why you are measuring it? What will you and your team do once you know how much technical debt is in your software? What risks have you uncovered in the process? Simply because you can measure technical debt doesn't mean you necessarily should.

Measuring technical debt can be extremely valuable, but only in the right context. Your organization first needs a working definition of what it means for a software component to be complete; in other words, a measurable definition of "done". What teams need to keep in mind is that the gap revealed by measuring technical debt (the gap between the work being released and that definition of "done") has far-reaching consequences.
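
To make "done" measurable in practice, it helps to encode it as a small set of explicit thresholds that a build either meets or misses. The sketch below is a minimal, purely illustrative Python example; the metric names and limits are assumptions made for the sake of the example and are not drawn from the post or from any particular tool.

# Hypothetical example: a measurable "definition of done" expressed as thresholds.
# Metric names and limits here are illustrative assumptions, not from a specific tool.

DEFINITION_OF_DONE = {
    "test_coverage_pct": ("min", 80.0),        # at least 80% of code covered by tests
    "critical_violations": ("max", 0),         # no critical static-analysis findings
    "documented_public_apis_pct": ("min", 90.0),
}

def debt_gap(metrics: dict) -> dict:
    """Return the gap between measured metrics and the definition of done."""
    gap = {}
    for name, (kind, limit) in DEFINITION_OF_DONE.items():
        value = metrics.get(name)
        if value is None:
            gap[name] = "not measured"
        elif kind == "min" and value < limit:
            gap[name] = f"{value} (needs >= {limit})"
        elif kind == "max" and value > limit:
            gap[name] = f"{value} (needs <= {limit})"
    return gap

# A release candidate that falls short of "done" on two criteria:
print(debt_gap({"test_coverage_pct": 72.5, "critical_violations": 3}))

The specific thresholds matter less than the fact that the gap becomes explicit and countable - which, as the rest of this post argues, is exactly what can come back to haunt you.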

The scenario drawn out in this post is an organization that has gone through the process of measuring technical debt. At some point its software fails, causing physical and financial damage to the organization. This is where the measurement of technical debt can become an issue, especially if your processes are not well equipped to deal with such a failure.

After this failure, let's say you are questioned by a jury about what went wrong, and you mention that technical debt was present in the version of the software that was released and then failed; it could then be argued that you deliberately released a flawed and faulty product. Even if the technical debt came from poor practices unrelated to functionality (such as poorly documented code), these nuances are likely to be lost in a case about the business's liability for the failure.

Software today is easy to update continuously, which creates an environment where it is common for organizations to release imperfect or incomplete software. Therefore, if your development team is deliberately measuring and documenting the debt in its code, you could be building a case against yourself in the event of a technical failure.

However:

If you have a sound risk management policy in place, processes that ease the identification and measurement of technical debt, and contingency plans to deal with the risk of shipping software that carries technical debt, you are in a better position than you would be without them.

Another downside of measuring technical debt is that it can impede innovation. If technical debt measures show a persistent increase, they can become yet another excuse for developers not delivering features by their deadline. Ultimately, if a business does not consider work left undone to be a pressing business concern, then it should not be labeled technical debt. Instead, it should simply be stored in the backlog and documented through robust testing and automation.

So what, then, is the difference between measuring technical debt and recording defects or publishing known issues? The word "debt" inherently carries negative connotations; it sounds deliberate and risky. Technical debt is, by definition, both of these things, but if you understand the technical nuances of taking on debt, you know that this is not necessarily a bad thing.

Technical debt measurement is best utilized when business and development teams share an understanding of what it means for software to meet predefined quality standards. If you plan to release software with quantified technical debt present, every part of the business must be aware of the risks of putting speed before quality.

Ensuring that your organization has a set of best practices and follows them closely is imperative to using and managing technical debt to the benefit of your product.

To read the full post, visit here.
