Welcome to the OnTechnicalDebt Expert Interview Series. We’re kicking off this year with an interview featuring Ward Cunningham, who coined the term “Technical Debt,” and Capers Jones, a specialist in software engineering methodologies. The conversation contrasts Cunningham’s original framing of the metaphor with Jones’s much wider economic approach to the topic.
1. What problem does the Technical Debt metaphor solve?
Carolyn Seaman: Ward, you first coined the term “technical debt” in 1992 as a software design metaphor that highlights the compromise between the short-term gain of getting software out the door and the release done on time versus the long-term viability of the software system. When you first used that term in 1992, was there a problem you were trying to solve or an issue you were trying to clarify? What was your motivation for using that term?
Ward Cunningham: I was very aware of that exact tradeoff and I felt that in commercial software development being able to find the right balance and, in particular, having some choices when finding that balance was important. So I was drawing the analogy between, say, venture-funded enterprise, where going into debt (building a company with other people’s money) was a good strategy but I was also pointing out the notion that part of that strategy is planning to pay back that debt. At the time, I was thinking that refactoring was kind of a new concept. People would say, “If it isn’t broken, why fix it?” and of course there are very good reasons. Even if something seems to be working right you might want to go back and improve it with the knowledge that you’ve gained through developing software. So it was really an explanation for the economics of refactoring – the continuous learning process of developing software.
2. How are you using the term Technical Debt today?
Carolyn Seaman: And of course that concept has taken off, especially in the developer community and even in the software economics community, and, to some extent more recently, in the research community as well. Its definition and scope has evolved a bit. Capers, I wonder if you could say a little bit about how you see that term being used now in its more popular sense?
Capers Jones: Well, it’s used in the sense Ward already articulated. If you skimp on quality before you deliver software, you end up paying heavy interest downstream, after the software is released, for things you could have gotten rid of earlier had you been more careful. I’ve been measuring quality since the 1960s. I come at it from a slightly different angle. I believe most companies don’t have a clue about how to get rid of bugs before release. So they’re not making conscious choices; they’re acting out of ignorance because they don’t know what they’re doing. If you look at the companies that do know what they’re doing, such as IBM, ITT and Motorola, they don’t have much technical debt and they don’t have to pay very much up front either, because they’re using a synergistic combination of defect prevention, pretest removal, static analysis and inspections, and really good testing. So they have minimum upfront cost and they have minimum technical debt because they know what they’re doing. My take is that a lot of the real technical debt, the high cost after delivery, is really ignorance rather than deliberate choice.
Carolyn Seaman: Ward, talk a little bit about your current working definition of technical debt, whether it’s evolved since 1992, and what you think of what Capers just said about technical debt now coming out of ignorance and neglect on the part of the developer community.
Ward Cunningham: Of course, I think of software development as a learning process. Even with highly skilled developers, if they’re developing new software, there’s an aspect of learning what’s required to, in a sense, satisfy the customer and how to best express that with contemporary technology. That learning happens in the act of programming. My work in agile software development is to say, let’s take part of what we understand and express that. And then react to that and express more. And react to that and express more. It’s an incremental approach. I think it’s important, if you’re going to take an incremental approach, that you might learn something tomorrow that influences design decisions you’ve made today. That should be considered a good thing – to be able to take tomorrow’s learning and not discard it because you’ve made a decision today. That requires a set of practices that were unfamiliar early on, especially in something like assembly language programming, where you had to make a bunch of decisions up front: how your registers were going to be used and so forth. The power of modeling in a computer now, especially object-oriented programming, lets you delay those decisions. In fact, they’re taken care of for you. So to take advantage of object-oriented programming, and to have a lot of choices in a competitive environment, you go out with maybe not a poorly written solution but an incomplete solution. People will call it “the minimum viable product.” What’s the least you could produce to get you into a learning loop with your own customer? That’s important.
3. What should or shouldn’t be included in its definition?
Carolyn Seaman: Capers, some people view two types of technical debt that are sort of opposite sides of the same coin. One is suboptimal design decisions that are made to help get the software out the door. The other is actual flaws or defects that are already in the software that are not fixed and are left to be latent in the code. Are both of those things, in your view, correctly labeled technical debt?
Capers Jones: I would call them both aspects of technical debt…and I’d add a third one, too, which is the security flaw in software. When someone exploits them they can generate enormous costs from things like recovery from denial of service or recovery from data theft. So you have poor design decisions, you have quality bugs that cause the software to misbehave and produce incorrect results and then you have security flaws that don’t do much of anything until some clever person finds them and exploits them and then you have a big bucket of expense.
Carolyn Seaman: Ward, what do you think about pushing a little bit on the definition of technical debt, what it includes and what it doesn’t include?
Ward Cunningham: As a metaphor, it was comparing technical issues (which in my mind were the things computer programmers were necessarily concerned with), relating their ability to wield the program that they had written so far, and equating that ability to a financial situation. Certainly when you borrow money, that enables you to buy things, and you can have things now that you haven’t, in a sense, really paid for. I see it as very similar: you can have things early in a development and gain experience, and that’s a good strategy, as long as you have a plan to pay it back. If you don’t pay it back then you get a compounding – learning that you’ve consciously avoided comes back to hurt you. So you’ve got to learn a lot to get a good program written… and the question is what order are you going to learn it in?
4. How is Technical Debt linked to venture financing?
Carolyn Seaman: Let me push a little bit on the metaphor again, and see if we can stretch it into the finance domain a little bit more. Some people have said that technical debt is a little like venture financing. Does that extension of the metaphor into venture financing make sense? How would that work?
Capers Jones: Well, sort of. But in a sense I hope it’s not a good analogy, because in venture financing only about 10% of the companies that get venture funding are successful and the other 90% fail. Venture companies are willing to put up with that because the returns on the ones that succeed are so high compared to the ones that fail. They end up putting money into questionable things. I would prefer to see a better plan for what the market really needs and a better way of building it so that you don’t have so many failures – so that instead of a 10% success rate for venture funding, you have an 80% success rate, which we don’t have yet. I’d like to see the same for software: more careful planning up front so that we have fewer failures, and if we do it right we’ll have reduced costs up front too.
Carolyn Seaman: Ward, do you have any thoughts on venture financing and if that extension of the metaphor works in any way?
Ward Cunningham: That was what I had in mind. I was, in a sense, promoting well-managed debt. One thing we like to say is if we’re developing value every day and that value is persistent, the customer might decide that they have enough value now even though they might have aspirations later, they are ready to stop paying. If we do that, then you want to know that the value is real. I think Capers has some very interesting statistics on what was expected when a project was started versus what was actually delivered with the first release and ultimately what was included in a project by the time it reaches end of life. Those are three very important points: when the project was conceived, when it was first delivered and the lifetime of the project. And of course Capers is very good at counting out features in function points and that’s a fine metric to normalize all the numbers he had. Capers, can you maybe mention that breadth of projects and the long term growth of projects over time?
Capers Jones: I’ve been measuring the rate at which projects grow during development and after release. A typical project, from the end of requirements until its release, will grow at an average of 2% per calendar month – about 10% per calendar month if it’s an agile project – and once it’s released into the field, it will grow at about 8% per calendar year for as long as anybody is using it. And I know that because that’s the rate of growth of the IBM operating system, measured in studies started back in the 1970s by Dr. Manny Lehman and some colleagues looking at how fast the IBM operating system was growing, and Microsoft Windows has been growing at about the same rate. We’re looking at 2% per month and 8% per year, forever, as long as anybody is using it, and that never seems to stop.
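Those growth rates compound. As a quick illustration (my own sketch, not from the interview – the 1,000-function-point starting size is an arbitrary example), here is how the figures Capers quotes play out:

```python
def grown_size(function_points, rate, periods):
    """Application size after compound growth at `rate` per period.

    Illustrative only: Capers Jones quotes ~2% per month during
    development and ~8% per year after release. The 1,000-FP
    starting size below is an arbitrary example.
    """
    return function_points * (1 + rate) ** periods

# One year of requirements growth during development at 2%/month:
dev = grown_size(1000, 0.02, 12)    # ≈ 1268 function points

# Five years of post-release growth at 8%/year:
field = grown_size(1000, 0.08, 5)   # ≈ 1469 function points
```

Even at these modest-sounding rates, a system roughly half again its original size after five years in the field is the normal case, which is why the maintenance side of technical debt never really goes away.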
Ward Cunningham: So I would say that the curse of success is that you have to keep working on the program and often you have to keep working on the program adding features that were unanticipated at the project inception, and that’s what we consider success. Let me sharpen that a little bit. Let’s say when we’re adding those features that it’s still a sound economic proposition – the cost of adding the new features doesn’t escalate because the program has somehow not turned into an architectural mess and that’s what we’re trying to avoid. I would say when the time is right we can take what we’ve learned so far and fold that back into the program to keep the program healthy and fit throughout its lifetime. And we want these programs to be successful and we want them to grow so that we don’t have to throw them away prematurely and start over from scratch, and repeat all the same mistakes again.
5. Can we monetize or quantify Technical Debt?
Carolyn Seaman: Let me ask a question that goes beyond the metaphor or using the technical debt concept as more than a metaphor and a communication vehicle. What do you think of the idea of actually putting hard numbers on technical debt – on the costs and benefits – and actually figuring that in a very quantitative way to planning and software development? Ward, I’ll start with you. If we could do that (and a lot of people say we can’t yet but we might someday), if we could actually put good, hard numbers that have meaning on technical debt, would that be a useful thing?
Ward Cunningham: Well, that’s taking the term beyond the metaphor and saying let’s take it out of the notion of software development and learning, and turn it into finance. So instead of being the CTO’s responsibility, it becomes the CFO’s responsibility. I think when you do that you’re throwing a much broader net, and the CFO should be concerned with that because he’s worried about the profitability of the company. The CTO has some responsibility there, but he also has to make sure he’s capable of delivering what he’s going to be asked to deliver in the future. In other words, he also has some banking of talent and experience, and it’s a different kind of bank. Can we attach a dollar value to the experience of the team? Well, probably if we step back far enough, statistically we can. But if we’re talking about a particular team and a particular set of experiences and a particular style of learning, quantifying it in that way becomes harder. It’s like if you’re the CTO and you have to make decisions, you can step back and take a statistical point of view or you can step forward and get close to your team and understand where they are.
Carolyn Seaman: Capers, what do you think about that idea of quantifying the concepts related to technical debt?
Capers Jones: There’s an older metric called “cost of quality” with quantified data that’s been around for quite a few years. One of the things I’ve been measuring at IBM, ITT and dozens of other companies is what it really costs to achieve various levels of quality. Let me just give you some industry numbers as an example. The average cost of building a piece of software in the United States is about $1,000 per function point, and about half of that – $500 per function point – is the cost of finding and fixing bugs during development. Then once the software is released, in the first year companies spend about $200 per function point finding and fixing bugs in the delivered software, and about $250 per function point adding features and enhancements. After five years, if you’ve done it right, you’re down to about $50 per function point in bug repairs, but what you spend on enhancement stays pretty constant. So if you’ve done it right, your first-year defect-repair cost declines quickly over time. On the other hand, if you botched it up – if you didn’t develop the software well and were careless – you’re going to spend $1,200 per function point to build it, with $600 per function point of that going to fixing bugs, and $300 per function point after release fixing bugs. And instead of that number going down after five years, it’s going to either stay constant or go up. So after five years you can have a really bad project where you’re spending $350 per function point finding and fixing bugs at a time when it should have been down to $50 per function point. Actually, that kind of data – cost of quality – is relatively widely known.
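To see how those figures diverge over a product’s life, here is a rough sketch using the numbers Capers quotes. The arithmetic is mine, and it makes one simplifying assumption not stated in the interview: that the annual repair cost changes linearly between the first-year and fifth-year figures.

```python
def five_year_repair_cost(year1, year5, years=5):
    """Total per-function-point bug-repair cost over `years`,
    assuming the annual cost changes linearly from `year1` to
    `year5`. (Linear interpolation is an illustrative assumption,
    not a claim from the interview.)"""
    step = (year5 - year1) / (years - 1)
    return sum(year1 + step * i for i in range(years))

# Well-built project: $1,000/FP to build, repairs fall $200 -> $50.
good = 1000 + five_year_repair_cost(200, 50)    # 1625.0 per FP

# Careless project: $1,200/FP to build, repairs rise $300 -> $350.
bad = 1200 + five_year_repair_cost(300, 350)    # 2825.0 per FP
```

Under these assumptions the careless project costs roughly 75% more per function point over five years – the “interest” on the debt dwarfs the $200/FP that skimping appeared to save up front.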
6. How do different development approaches (Agile, Scrum, RUP, etc.) have an effect on Technical Debt?
Ward Cunningham: Capers, I know that you break down information in lots of different ways. Just a minute ago you quoted some numbers and adjusted them for agile practices. I wonder, from your perspective, if you see agile as delivering any value or if there is some value anywhere close to what proponents of agile have promised?
Capers Jones: You get savings, compared to waterfall, of about 25% on schedules and about 30% on quality with agile, and that holds for most flavors of agile, including XP, Scrum and so forth. On the other hand, there are other methods that are just as good. Team Software Process (TSP), developed by Watts Humphrey, and Rational Unified Process (RUP) generate results that are just as good as agile. So agile is not the only successful method, but it’s a lot better than waterfall.
Ward Cunningham: I think there’s something we both share – an awareness of the inclination to assume the naïve perspective that the first time something runs it’s done and you can move on, when there are actually several different approaches to getting to a completeness that you would be happy with. I would say that naiveté or foolishness has led to a lot of costs, unnecessary risks, and – you know I’m a big fan of computers – to unrealized value. Computers can do things that people are not willing to attempt simply because they’re afraid of them, because they’ve seen so many disappointments. Of course my interest is in what we can do today that was hard to do yesterday, so agile is certainly part of that. But it comes down to this: you’re going to have to pay at some time, and if you pay at the right time, you’re better off. That’s really the CTO’s job – to manage that timing and spend people’s attention and the company’s dollars well.
Capers Jones: I agree with that. And if you are careful and prudent in the way you build the software upfront (you use static analysis and inspections), you’ll find that you can lower your development costs, shorten your development schedules and then have very little technical debt when it’s released because you did such a good job in the beginning.
Ward Cunningham: Yeah, that’s pay up front. Don’t take the risk of spending money you don’t have, or shipping code you haven’t thought through, or shipping half the function points when you know you have more to do that could impact your design. Because there are, of course, a lot of approaches to software that work. One that kind of surprises even me now is this notion that people who run big websites and are capturing logins as fast as they can are inclined to say that they’re less concerned with quality than they thought they needed to be and more concerned with resilience – that is, the ability to push forward even in the presence of a mistake. That usually means that when a mistake turns up, all the systems are in place to control the costs of that mistake and to get backed up to a prior version. A lot of this is associated with A/B testing and the gradual rollout of new functionality, and when we say quality, resilience is another kind of quality. There are more choices on the CTO’s table now than there might have been just a few years ago.
7. Technical Debt vs. End-of-Life?
Carolyn Seaman: I think part of what you’re saying is there are costs of quality and there are choices about how much quality you want to pay for and which qualities. That makes sense and technical debt fits well into that because you could trade off savings in the short term versus costs in the long term and fold into that your expectations for quality. But one place where you might run into trouble, I think, with that idea is when the quality gets to the point that a project is cancelled and never delivered. So then making these subtle choices between the costs of different types of quality and different levels of quality breaks down because then you’ve lost everything. Is there a way that the technical debt idea can help us think about that or help us manage the risk of that doomsday scenario?
Ward Cunningham: I know Capers has some numbers. So let’s let Capers take that first and paint the landscape for us.
Capers Jones: As applications get big, the percentage of projects that are cancelled and never released goes up – above 10,000 function points, it’s about 35% of projects that are started and never released. The most common reason they don’t get released is because they had so many bugs they never worked. There is a huge cost of quality connected with not doing things right and ending up with cancelled projects.
Ward Cunningham: I know as a developer that it’s a real heartbreak to put your heart and soul into something, see the project struggle, and watch it get cancelled; it’s a real personal blow. I think it does psychic damage to the industry. But I would like to carry on this venture funding notion and look at how the venture capitalists manage their financial portfolios. They’re less concerned about a company failure than they are about what’s called the “living dead” – a company that just won’t die but fails to succeed. In their logic, time is in the denominator. If you make a lot of money over ten years, that’s not as good as making a lot of money over two years, because time is in the denominator. A fast failure is something they appreciate. Big wins they appreciate much more. If you can afford to take a portfolio strategy… and I would say that all these little web startups doing this, that and the other really amount to a portfolio strategy. It’s not something like equipping a submarine… submarine software has to work. If you’re throwing up some new little trinket on the internet, then this venture strategy could be valuable, as is sensitivity to size. If you’re the CTO of a big corporation and you’re trying to figure out how to deploy some new internal systems, and you can figure out how to do it in ten small projects instead of one big project – with some resilience in there so that if project number four fails you can reconceive it three months later – that’s a good approach. Of course, ultimately, at the highest level there’s an architecture that provides that resilience. How are you going to organize those ten parts to be a whole? There are lots of choices in front of us now in that way. That’s what I would encourage everyone to explore – to know how many different ways you can deal with that, and then match the approach to the needs of your software and the capability of your people.
8. What are the correct measurements for tracking Technical Debt?
Carolyn Seaman: Related to that, as far as choosing your approach to dealing with technical debt… Capers, I have a question for you. A lot of your work is in numbers and you’re the measurement guy. What if your organization has a very mature understanding of technical debt but a very immature measurement culture? A lot of companies have very low maturity in terms of measuring things, collecting data, building models and having historical baselines but it doesn’t take all that to understand technical debt and how it manifests itself. So are there ways an organization like that can track or manage or do something about technical debt without a big measurement infrastructure?
Capers Jones: There are some measures that don’t take much effort that are useful and play into the concepts of technical debt. There’s a comparatively straightforward metric called “defect removal efficiency.” You keep track of the bugs you find during development, and then for 90 days after the product is released you keep track of the bugs the customers report, and you calculate the percentage of the defects found internally. For example, if you found 90 bugs inside and the customers reported 10 bugs, you were 90% efficient in removing defects. That’s not a particularly burdensome measurement, but it generates enormous value and it plays nicely with technical debt, because the leading projects (the ones where people really know what they’re doing) are going to achieve something like 99% defect removal efficiency, so only 1% of the bugs get released into the field. The US average is only 85% defect removal efficiency, and for a software organization to release 15% of all defects to an excited public is, to my mind, hovering near malpractice. If you don’t know how many bugs you find inside, and the only thing you know is how many bugs are found after release, you have no ability up front to make it better. So if you don’t measure anything, you have very little ability to get better over time.
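The calculation Capers describes is simple enough to sketch directly (the function name and the zero-defect edge case below are my own additions):

```python
def defect_removal_efficiency(found_internally, found_by_customers):
    """Fraction of known defects removed before release.

    `found_by_customers` conventionally counts bugs reported in the
    first 90 days after release, per the convention described above.
    """
    total = found_internally + found_by_customers
    if total == 0:
        return 1.0  # no known defects yet; treat as fully efficient
    return found_internally / total

# Capers' example: 90 bugs found inside, 10 reported by customers.
dre = defect_removal_efficiency(90, 10)   # 0.9, i.e. 90% efficient
```

Note that the metric can only be computed if internal bug counts are recorded at all, which is exactly Capers’ point: without that one habit, there is no baseline to improve against.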
Carolyn Seaman: Ward, do you have any thoughts on how the technical debt concept doesn’t really need quantification to be a useful thing. But if people really want to manage it, how would they go about that on a day-to-day management basis without some good measures?
Ward Cunningham: Let me take two steps back. You want to look at risks – what risks you’re assuming and so forth. As a business you want to know where your money is coming from and where you’re spending it, and do a scenario-analysis sort of thing – what could go wrong, and what could go wrong probably will go wrong – so you want to remove the possibility of things going wrong. That seems to me to be good business. When it comes to talking about a development organization, I like to take two steps closer. I like to actually sit down with people and program at the limits of our abilities together – this is pair programming – and you can kind of see into each other’s heads and see learning strategies and test strategies and so forth. I would ask myself the question: in that hour of programming together, how many minutes were we grappling with problems that we could explain to the customer? And our customer would say, I’m glad you were grappling with those; I’m happy to be paying you to think through that kind of decision. And how many minutes of that hour were spent just dealing with foolish, crazy things that just happen when using the computer, things that if you had thought twice you could have prevented? But you didn’t think twice, so you just have this nuisance level, and it’s so complicated you can never explain it to your customer. It isn’t about the customer’s business, it’s about your business, and you’re letting your business get in the way of doing the customer’s work. That’s an intuitive ratio, but I think that computers and computer programming are filled with those little nuisance things, and if you could beat that back to the point where you do a significant amount of grappling with the customer’s problems every hour, then you’re in good shape. If you can do something for the customer every hour, then you’re moving forward.
Carolyn Seaman: I like that ratio of how much time you spend in the solution space versus how much time you spend in the problem space, I guess is another way to say what you said.
Ward Cunningham: I’ve heard stories of new managers walking down the hall, stopping people and asking, “What do you do here?” And then listening to the answer and deciding if that has any true value creation or if they’re just doing busy work because of the bureaucracy of the organization. We can talk about the bureaucracy of the computer programs people make one way or the other, and if the computer program is turned into its own little bureaucracy where you can’t actually on a daily basis create value, then you know that the technical debt – the debt I was talking about in my original metaphor – has piled up and you’re paying that interest.
9. Can financial concepts be adapted to Technical Debt?
Carolyn Seaman: My next question is again back to expanding the metaphor into the finance domain. A number of us have had long conversations about which financial concepts can fit into this technical debt framework. I’d like to ask both of you what you think of things like litigation expenses, liabilities, and opportunity costs? Are there any other concepts or tools, really, from the finance domain that we could adapt in the software domain under the framework of technical debt?
Capers Jones: I think the generally accepted accounting principles – the standard practices for financial accounting used by all American companies – could be merged, modified and used as the basis for technical debt accounting standards. I think if you don’t have accounting standards for technical debt, every company will do it differently, and then you can’t compare one company’s technical debt to another’s.
Ward Cunningham: I hear what Capers is saying… What I might have called technical debt is kind of at a personal level – commitment on the individual to do a good job – but as you move up in the organization there becomes more responsibility. I compared the CFO to the CTO. We know that the CTO as a financial officer has financial responsibilities (everybody at that level has financial responsibilities) and they should be held accountable to stakeholders and so forth to be considering that. Whether that’s called technical debt or risk analysis or something like that, I think that technical debt is a new name for an old thing. Certainly at the CTO level that matters.
Carolyn Seaman: A little while ago, Capers, you brought up the issue of security flaws and security issues in software and what costs those could incur. I wonder if you could talk just a little bit more about how the technical debt metaphor can be used to think about security flaws that either cause data theft or denial service attacks or some other bad thing down the road. Can we use the technical debt idea to think about how to manage those risks?
Capers Jones: Well you can, but security flaws are a special breed. You need security experts, as well as thinking about the long-range costs of fixing things once the software is released. There’s one alarming security problem that just surfaced in the news… I’ll give you an example of how that might work. You know those plastic cards that hotels use to open door locks? It turns out that there’s a hacking device available, and people have been stealing computers, jewelry and everything else out of hotel rooms because people can hack into those locked doors, and the hotels are being sued for not having adequate locks. Is the company that put that software in the door locks liable for the theft? How would technical debt handle that particular kind of security flaw – that the door locks of hotel rooms aren’t secure anymore?
Ward Cunningham: That’s a scary thought and we know that most criminals are pretty dumb but there’s a few that are pretty smart and we do see situations where crime pays.
Capers Jones: I’d like to raise another issue too, if I may. A lot of the problems that end up being technical debt after release really originate with the clients; they don’t originate with the programmers. I worked as an expert witness in a lawsuit where a company was suing a software vendor over a Y2K problem – you know, the two-digit Y2K thing. It turned out in the deposition that the vendor had warned the client not to use two digits. The vendor told the client they would get in trouble and should use four digits. The client refused to accept the vendor’s advice and insisted on two digits. When this came up in the lawsuit, the plaintiff obviously dropped the suit because it was their own fault. The programmers didn’t want to do it, told the client not to do it, but were overruled. So when the clients themselves are the source of the problem, that’s another aspect of technical debt. It isn’t just the developers; sometimes it’s the clients themselves who are insisting on doing the wrong things.
10. Concluding on Technical Debt
Carolyn Seaman: Are there any final thoughts that either of you would like to add before we sign off?
Capers Jones: It’s a great metaphor. I think the concept of technical debt has reverberated through the industry, and it’s making people think about design issues and quality in a fashion they never did before. So I see it as a significant value-added concept for the industry.
Ward Cunningham: I feel the same way and I’m proud that something that kind of started as an off-handed conversation with my own CEO has legs beyond what I had even intended. My only hope in this process is that we don’t take it so high that it stops being useful at the level where I wanted it to be useful, which was basically encouraging developers to not only be fortune tellers and anticipate the future but when the future turns out to be different than anticipated, to just take the extra effort to keep their work current with their understanding. It’s a learning process for them and the software should learn too. If there’s a little cost to doing that “housekeeping” then learn how to keep that cost down so we can produce more quality software and get more return on our investments and let computers grow to be everything they can be.