The concept, to me, seems simple, intuitive, and obvious: Technical shortcuts lead to a slight increase in value today at the expense of speed tomorrow.
Then Ron Jeffries, a co-author of the Agile Manifesto, got up to speak, along with his partner, Chet Hendrickson. Ron and Chet had served as part of the team that invented Extreme Programming in 1999.
What they had to say turned the workshop upside down.
A Radical Proposition
Ron and Chet began by pointing out that every time we build software, we are spending money to develop an asset. The asset might not be carried on the books as having value, but it does -- we develop software to eliminate pain, improve customer service, create sales opportunities, or even run an entire line of business. Every night, when the bank runs transactions, the insurance company pays claims, and the e-commerce company ships books, the software is making money for the company.
Each and every day, the process runs, and the software generates money. That is, after all, why the company wrote it; it is the reason that updating the software is worth doing, even as making changes gets slower.
Even if all a change does is allow the money machine to keep running, even if the price of the change, due to something like 'technical debt', seems too expensive, keeping that core system running generates a million dollars a day. As Chet put it in the workshop, "We want to talk about increasing the things that are good -- assets -- instead of decreasing the things that are bad -- debt."
Ron and Chet suggest that code should be a library that is cheaper to extend than starting from scratch.
The idea was just a little bit too wild for our workshop in 2008. Even Ron, when he introduced it, prefaced it by admitting that while teams should get faster over time, "it never does, but it should."
Our focus, at the time, was on reducing loss through technical debt.
It may be time to bring gaining value back.
Assets in Action
Say you start as a new programmer at Google. After a one-week training program, you have access to a piece of code called MapReduce that allows you to search large, unstructured databases for simple terms, which essentially gives you search as an API. You also get a massively distributed cluster of computers to run it on, along with simple, code-library like access to databases with maps, businesses, and the internet. Combine that with 20% time, and you’ve just created a recipe to crack out innovative products like Gmail, Google News, Google Sky, and AdSense, and to do it reliably.
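As a rough illustration -- this is the MapReduce programming model in miniature, not Google's implementation -- the core idea fits in a few lines of Python: a map phase emits key/value pairs from each document, and a reduce phase folds each group of values into a single result.

```python
from collections import defaultdict

def map_phase(documents, mapper):
    """Apply the mapper to every document, collecting (key, value) pairs."""
    pairs = []
    for doc in documents:
        pairs.extend(mapper(doc))
    return pairs

def reduce_phase(pairs, reducer):
    """Group pairs by key, then reduce each group to a single result."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return {key: reducer(key, values) for key, values in groups.items()}

# Word count, the classic MapReduce example.
def word_mapper(doc):
    return [(word, 1) for word in doc.split()]

def count_reducer(word, counts):
    return sum(counts)

docs = ["the cat sat", "the cat ran"]
result = reduce_phase(map_phase(docs, word_mapper), count_reducer)
# result["the"] == 2
```

The point of the model is that the programmer writes only the mapper and reducer; the platform handles distributing the work across thousands of machines.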
Many of the 'products' that Google makes, from Maps to Image Search, are a smattering of code built on top of a strong core system. Perhaps maintaining the core system does get more expensive over time, but that's not the point. The point is that extending the core system creates new revenue opportunities. You just have to find out where.
It's not just Google's MapReduce. Years ago, before Amazon was selling cloud computing, I was talking to my friend Goranka Bjedov, who was a performance tester at Google. Goranka was explaining how all the hard problems in performance-testing massive websites were social; creating the grid itself was two or three lines of Python. I asked her about the folks who do not work at Google, who don't have access to an API for creating a grid of computers at will -- what did she recommend? Goranka did not have an answer for me, but I suspect it would involve a lot of re-doing work the folks at Google had already done for her.
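Goranka's internal grid API isn't public, but the shape of the idea -- fan the work out, gather the results, in two or three lines -- can be sketched with Python's standard library, using local threads as a stand-in for a real cluster:

```python
from concurrent.futures import ThreadPoolExecutor

def timed_request(url):
    # Placeholder for real work; an actual performance test would
    # fetch the URL and return its response time.
    return len(url)

urls = ["https://example.com/%d" % i for i in range(100)]

# Fan the work out and collect the results -- the "two or three lines"
# that having a grid as an API makes possible.
with ThreadPoolExecutor(max_workers=10) as pool:
    results = list(pool.map(timed_request, urls))
```

The hard part, of course, is the platform underneath that makes those lines mean "a thousand machines" instead of "ten threads on my laptop."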
At Facebook, the power is the social graph, combined with a very powerful screen-drawing library. Programmers can add a 'like' button, comments, or a list of your friends who also like something, with just a few lines of code. LinkedIn can do the same thing with your business contacts, or your resume. Amazon is famous for taking its grid computing framework and offering it as a utility, but it also turned the internal project it used to get small human tasks done, like entering CD jacket information, into a public service -- collecting a small percentage of every transaction.
Amazon also has the ability to change the front page, on the fly, and to build that front page based on what the customer has viewed lately, what customers purchased who also purchased those books, what products have been highly rated by people who rated the same things highly as you did, and more.
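The "customers who bought this also bought" feature can be sketched -- in a deliberately naive form; Amazon's real recommendation system is far more sophisticated -- as co-occurrence counting over orders:

```python
from collections import Counter
from itertools import combinations

# Each order is the set of products one customer purchased together.
orders = [
    {"book-a", "book-b"},
    {"book-a", "book-b", "book-c"},
    {"book-b", "book-c"},
]

# Count how often each pair of products appears in the same order.
co_counts = Counter()
for order in orders:
    for x, y in combinations(sorted(order), 2):
        co_counts[(x, y)] += 1
        co_counts[(y, x)] += 1

def also_bought(product, top_n=2):
    """Products most often purchased alongside `product`."""
    scores = Counter({y: n for (x, y), n in co_counts.items() if x == product})
    return [y for y, _ in scores.most_common(top_n)]

# also_bought("book-a") -> ["book-b", "book-c"]
```

Once purchase history is exposed as a core service, a feature like this is a few lines of code on top, rather than a project of its own.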
Once you start looking for core functionality that can drive value, you start to see it everywhere. But where do you start looking?
Find The Core
Most large IT shops have a spreadsheet lying around somewhere with a list of the 120 applications the team supports. When a new audit or ERP upgrade comes along, someone looks at the list, checks what is in scope and out of scope, then starts ticking off compliance.
That spreadsheet may be a pain point, but it is not a liability; it's a gold mine.
The thing to look for in the spreadsheet is duplicate behaviors that happen again and again. At the insurance company where I spent years, the core repeat functionality was pulling member and claims data. When I started, an eligibility extract might be a three-month job; when I left, the technical pieces could usually be done in a week.
That is core functionality at work -- algorithms that can be re-used.
When I finished at the insurance company, we had some code reuse -- a single API the programmers could use to get lists of data.
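As a hypothetical sketch -- the names and data here are invented, not the insurance company's actual API -- a single data-access function of that kind might look like this: every extract job calls one function instead of rewriting the query logic.

```python
# Invented sample data standing in for a real member database.
MEMBERS = [
    {"id": 1, "plan": "gold", "active": True},
    {"id": 2, "plan": "silver", "active": False},
]

def get_members(**filters):
    """Return members matching all of the given field filters."""
    return [m for m in MEMBERS
            if all(m.get(field) == value for field, value in filters.items())]

active_gold = get_members(plan="gold", active=True)
```

One entry point means every extract benefits from the same fixes and optimizations -- and, as the next point makes clear, it also concentrates risk.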
It was also a single point of failure.
Ken Schwaber, who helped popularize Scrum, refers to this as the Core Systems Problem; I tend to think of it as the Sand pile problem.
Imagine the code in the core system is a bit like a pile of sand. Over time you are both relying on it while adding new features. The pile gets bigger, and eventually the base is too small to support its weight. The pile becomes unstable and there is a small avalanche.
With real sand piles, the avalanche makes the pile a little wider, and little harm is done. With software, the unstable system means one bug fix introduces two (or four! or six!) new bugs in production. The choices available to management become taking huge risks in production, letting testing grind the project to a halt, or, perhaps, rolling back to an earlier stable system.
Except -- if we have exposed the core system to the business as an API, we can’t roll back. Other teams are using this current version of the system to enable their business processes. Rolling back an exposed core system means stopping everyone who depends on you.
Search at Google is the heartbeat of the business in a number of ways. If it stops, the business really is dead.
If the first rule of technical assets is to find them and turn them into services, the second is to protect those services. That is a great deal of what Ron and Chet were talking about -- building the software with craftsmanship, avoiding the needless complexity, keeping the platform easy to change and evaluate. (Yes, you can make a codebase easier to evaluate. One way is to develop it in components that separate concerns. If you can test, or at least understand, the components in isolation, they’ll be much easier to change than code that has logical elements combined into a big ball of mud.)
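A minimal sketch of that separation-of-concerns idea: keep the calculation pure and pass the storage dependency in from outside, so each piece can be tested in isolation. The names here are illustrative, not from any real codebase.

```python
def price_with_tax(subtotal: float, tax_rate: float) -> float:
    """Pure calculation -- no I/O, so it tests on its own."""
    return round(subtotal * (1 + tax_rate), 2)

def checkout(cart, tax_rate, save_order):
    """Orchestration: wires the pure logic to storage supplied by the caller."""
    total = price_with_tax(sum(cart.values()), tax_rate)
    save_order({"items": cart, "total": total})
    return total

# The pure component can be checked in isolation:
assert price_with_tax(100.0, 0.06) == 106.0

# The orchestration can be checked with a stand-in for the real database:
saved = []
total = checkout({"book": 100.0}, 0.06, saved.append)
```

Because `checkout` receives its storage as a parameter, nothing here is welded to a database -- which is precisely what keeps the pile from becoming a big ball of mud.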
Yesterday, Today, And Tomorrow
The overall message I took from Ron and Chet’s presentation was that there is "gold in them thar' codebase" in the form of code reuse. Five years later, when I watch the video, though, I am struck by how much time they spend talking about doing the work to keep the system running cleanly -- to prevent the sand pile avalanche.
What are the hidden opportunities for value in your business, and where are the sand piles about to fall over? Isn't it time to find out?