Technical Debt Gets the Message Across

A couple of weeks back I read the most understated opening line I’ve seen in the six months since I began blogging here on OnQuality.

Blogger @tadanderson, a .NET architect by trade, recently opened a post on his Real World Software Architecture blog by noting, “Finding the perfect balance of influence between IT and the Business Owners… is not easy.”

That isn’t particularly insightful – in fact, it’s right up there with “water is wet” and “diamonds are expensive” – yet it very simply spells out a sad truth about the political difficulties facing IT departments. The central question is this: how can an IT department justify the allocation of financial resources for its efforts to a group of non-technical – or at best, only slightly tech-savvy – individuals who hold the purse strings for IT projects?

An Inconvenient Truth

Outside of IT vendors, it’s a fairly well-accepted fact that business owners are unfamiliar with the challenges their IT departments face. All too often, the “guys in charge” seem to believe that technology “just happens,” that the tools provided have little to no bearing on the result, or that IT is just a money pit and the folks “in the basement” had better just “make do.”

As a .NET architect, @tadanderson has likely found himself on the short end of this stick. He points out:

“The business feels that technology should not be a factor in making sound business decisions. In the business owner's eyes, whatever the solution is, the IT department should be able to support the technology that comes with that solution.”

Where’s the love here, folks? We live in an age where the stability of a company’s information technology system can be the difference between a solid business reputation and a public relations disaster. As such, you would think the front office types would give the back office team the tools they need to get things done right the first time.

Instead, businesses too often choose to slap together something that is just "good enough," thinking they are saving money up front and will worry about problems IF they occur. Well, as we’ve seen quite often this year, these problems DO occur, and when they do, they occur on a grand scale – witness Sony, Sega, Citi, the London Stock Exchange, West End Rail, the Pentagon, etc., etc., etc. And when these failures have occurred, not only have the organizations had to spend a lot to fix the failures, they’ve had to pump money into repairing their reputations as well.

Wait! Isn’t that the very definition of “Technical Debt?”

Technically Speaking

In previous blogs, we have identified technical debt as the cost to fix the structural quality problems in an application that, if left unfixed, put the business at serious risk. This also reveals where IT departments and the business owners can find common ground on which to discuss technology needs. By monetizing technical debt and revealing the true cost of risking failure, an IT department can demonstrate how much it costs to do something the “good enough” way versus “the right way.”

Historically, assessing technical debt was left to guesswork, which almost invariably led to showdowns between IT and corporate financial leaders. But through the use of static analysis during the build process – typically used to identify issues with the structural quality of software before they become problems – monetizing technical debt is now a scientific process.

By assessing structural quality through an automated analysis and measurement platform, a company can calculate the number and density of violations in the source code before deploying an application. The calculation takes into account the number of violations, their severity, the hours it takes to fix them, and the average hourly cost of a software developer’s time.
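The arithmetic behind that calculation can be sketched in a few lines. The violation counts, per-severity fix times, and hourly rate below are illustrative assumptions, not figures from any real analysis tool:

```python
# Hypothetical sketch of monetizing technical debt: violations found by
# static analysis, weighted by the effort to fix them and developer cost.
# All numbers here are assumed for illustration.

AVG_HOURLY_RATE = 75.0  # assumed average cost of a developer hour, in dollars

# Violations grouped by severity, each with an estimated fix time per violation.
violations = {
    "high":   {"count": 40,  "hours_to_fix": 4.0},
    "medium": {"count": 150, "hours_to_fix": 1.5},
    "low":    {"count": 600, "hours_to_fix": 0.5},
}

def technical_debt(violations, hourly_rate):
    """Dollar cost of remediating all structural-quality violations."""
    return sum(
        v["count"] * v["hours_to_fix"] * hourly_rate
        for v in violations.values()
    )

print(f"Estimated technical debt: ${technical_debt(violations, AVG_HOURLY_RATE):,.2f}")
# → Estimated technical debt: $51,375.00
```

With these assumed inputs, 685 hours of remediation work at $75 an hour yields a debt of $51,375 – exactly the kind of concrete dollar figure the next paragraph describes.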

The result of this calculation provides an IT department with a dollar figure it can show to corporate management as concrete evidence of the need to address issues before the software is deployed. And since money is the international language of business, being armed with a definitive dollar figure is sure to get the IT department’s point across.

Jonathan Bloom Technology Writer & Consultant
Jonathan Bloom has been a technology writer and consultant for over 20 years. During his career, Jon has written thousands of journal and magazine articles, blogs and other materials addressing various topics within the IT sector, including software development, enterprise software, mobile, database, security, BI, SaaS/cloud, Health Care IT and Sustainable Technology.