It’s been over 20 years since Ward Cunningham introduced the debt metaphor in an experience report at OOPSLA, the conference on object-oriented programming. At the time, Cunningham was arguing that debt could be a good idea -- you could get the software out faster by taking shortcuts, collect additional revenue, and come back later to pay the debt off.
The risk with this kind of strategy, of course, is that debt carries interest, and if you don’t pay it off, eventually all your payments go to servicing the interest. For technical debt, the interest slows forward progress. He argued that one should stop avoiding debt and instead get good at paying it back, and he mentioned the process we now call refactoring. Refactoring would be paying back some of the principal -- at least, that’s the metaphor.
I like to think that I took over where Ward left off by serving as the lead organizer of a Peer Conference on Technical Debt in 2008, serving as a peer reviewer for an SEI-sponsored workshop, and then writing a feature magazine article on the topic. Today I would like to revisit the concept and ask -- is technical debt still relevant? Has the metaphor helped people? What do we know, and how should we respond?
How Technical Debt Shows Up
Today, when I work on software teams that talk about technical debt, the programmers say something like this: “The new features will take more time to develop than we would hope; we are dealing with a lot of technical debt here.” In other words, the new team is slowed down by poor decisions the older, now-gone team made to hit the original deadline. I also hear this from managers when they encourage teams to “take on a little technical debt” in order to hit a deadline. The rare third case is a software engineering group or firm that wants to quantify the debt, using a measure like cyclomatic complexity or code coverage; I will come back to this later.
In the first two cases above, the staff is talking about taking on bad work from someone else; the term becomes a sort of slur, the way ‘legacy’ can be. It is also a sort of trick, used to argue either for slower work (by the programmers) or faster work (by management). A management team, motivated by deadlines and willing to take shortcuts to hit today’s date, is unlikely to suddenly decide to pay off the debt on the next project. Instead, since the shortcut appeared to work, the management team is likely to try the same trick again next time. Elisabeth Hendrickson’s board game, the “shortcut game,” allows teams to experience the consequences of this kind of thinking without having to live them, and that’s a good thing.
The problem is that technical debt, as a quality, is hard to measure -- like stress. Just as a stressed person can still perform, a team carrying debt may still ship on time, feature-complete, and with few defects. But that impacts our long-term sustainability.
Let’s talk about what stress does.
Personally, when I’m stressed out, my physical desktop becomes cluttered. Measuring the clutter on my desk and controlling it would give me a clean desktop, but might actually add to my stress. Instead, I want to observe the clutter, to use it as an advance warning -- not to control it. When people talk about measuring debt with the intent to understand what is going on -- turning a metaphor into something real -- we often find a lot of agreement.
My counsel in this case is to be aware of Goodhart’s Law -- that when you start to control indicators of performance, they cease to be good indicators. Note that my concern here is not about measurement, but about control.
The Final Analysis
My final concern with the term is that it is so abstract. Saying “we can take on some debt here” is not specific. The manager means “go faster,” the programmer hears “take shortcuts,” and the result is a mess... but that’s not really what Ward meant when he introduced the term.
When I interviewed him for this piece, Ward strongly objected to this merging of smart debt (it was a good thing, remember?) with shoddy code written to hit a deadline.
That shoddy code we can analyze. We can calculate how complex it is -- when code involves more variables at one time than one person can keep track of, defects follow. We can identify objects that have multiple responsibilities, which will make maintenance hard.
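To make "we can calculate how complex it is" concrete, here is a minimal sketch of one common measure, cyclomatic complexity -- roughly, one plus the number of branch points in the code. This is a simplified illustration written against Python's standard `ast` module, not the method of any particular analysis tool; real tools like linters count more node types and report per-function scores.

```python
import ast

def cyclomatic_complexity(source: str) -> int:
    """Rough cyclomatic complexity: 1 + the number of branch points.

    A simplified sketch: counts if/for/while/except/with/assert and
    boolean operators. Real tools are more thorough.
    """
    tree = ast.parse(source)
    complexity = 1
    for node in ast.walk(tree):
        if isinstance(node, (ast.If, ast.For, ast.While,
                             ast.ExceptHandler, ast.With, ast.Assert)):
            complexity += 1
        elif isinstance(node, ast.BoolOp):
            # each extra operand in `a and b and c` adds another path
            complexity += len(node.values) - 1
    return complexity

code = """
def classify(n):
    if n < 0:
        return "negative"
    elif n == 0:
        return "zero"
    for i in range(n):
        if i % 2 == 0 and i > 2:
            pass
    return "positive"
"""
print(cyclomatic_complexity(code))  # prints 6
```

A number like this turns "that function feels tangled" into something a team can track over time and compare across modules, which is exactly the move from opinion to evidence discussed below.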
Without that analysis, any comment about the code can be dismissed with a wave of the hand and the comment “well, that’s your opinion.”
There’s a lot more to say here, but for now, let me just say this: the ability to get beyond “that’s your opinion,” to evaluate systems and find the areas of greatest risk (and greatest improvement opportunity)?
That will get you farther than any opinion ever will.