I spend some of my time with CEOs or CFOs, and time and again they tell me that IT is a black box that’s difficult, if not impossible, to measure. They can’t measure productivity. They can’t measure output. They can’t measure outcomes. They can’t measure risk. But, the thing they can measure is the IT cost.
Just this week the CEO of a well-known financial services company told me: “I have 2,000 people working in IT with a budget of $200 million a year, and yet I have no idea how the development teams are doing in relation to the competition, or if I’m even getting my money’s worth. And if I ask my CIO what’s going on, he just tells me he’s putting processes in place but it’s taking time -- that creating software is a difficult art -- and eventually making me understand I should let him manage his team because IT folks might not like the idea of having their work measured.”
I’m certain any CEOs reading this are nodding in agreement. The fact is, CEOs can do more than simply ask their CIOs or CTOs for status updates on major projects and initiatives, and gauge success based on whether deadlines are hit. Like any other business function, IT should have its performance, productivity, and value measured. The secret is to move away from status updates and toward scientific measurement. As physicist John Grebe said, “If you cannot measure it, you cannot control it.” The good news is that today, the software aspect of the IT black box can be turned into a glass box: software development can be measured in several ways, and KPIs can be established and benchmarked.
The bad news? Development teams, and sometimes IT leaders, reject measurement on the grounds that they should be judged by their outcomes -- does the system work, complying with functional specs and end-user expectations, or doesn’t it -- and not by their performance. Performance measurement for software development has not matured the way it has in marketing, finance, or manufacturing.
That’s too bad, because enterprise software development is no different than any complex industrial process. As an industrial process, it can, should, and must be measured. Testing, functionality, and outcomes are certainly valid measurements, but you can’t improve those outcomes and optimize their production if you are not visualizing the process in ways that are meaningful to both the technical and executive leadership. It’d be like waiting for the end of the assembly line to test if the manufactured products work.
And development teams can’t visualize the process until the CEO steps in and demands transparency into IT. CEOs must stop allowing their IT organizations to get away with a wait-and-see approach, and measure success based solely on outcomes.
IT is not only a huge business expense. Today, it’s the DNA of the organization.
Even though most CEOs can’t decode this DNA, they must understand how it evolves, and how that evolution impacts their organization. Louis Pasteur said, “A science is as mature as its measurement tools.” In computer science, the tools for measurement are available. And at CAST, the tools for visualizing those measurements and making them useful for CEOs have never been better. It’s now just a matter of getting CIOs and development teams to see measurement as a boon rather than a hindrance.
That isn’t always easy. Most CIOs are dealing with one of two scenarios: 1) they have a small group of coding gurus making their apps who have direct access to the business; or 2) they have huge development teams (in-house or outsourced) where a single programmer focuses on a big block of code, unaware of how it interacts with the rest of an application system.
Of course, coding gurus offer the greater pushback against measurement. Small teams of experts will insist they’re lone wolves who do things in their own super-powered way. This defense is known as the “coding cowboy” argument. It’s a red flag that must be eliminated if an organization is to reduce its spending on application development and maintenance and maximize overall system quality. Even gurus must come to understand that transparency ultimately works in their favor.
For larger, distributed teams, the pushback is typically much smaller, and productivity programs are even welcomed by some software factories that want to show they do a better job than others. In both cases, the main, legitimate question concerns the measures themselves and their reliability. When one gets measured, one expects high precision.
The Consortium for IT Software Quality (CISQ) has defined a series of software characteristics and attributes that must be taken into account to offer a measurement framework that really works. CISQ is supported by the Software Engineering Institute, the Object Management Group, and dozens of large IT organizations. It is a complex framework, but it offers precise measurement that can raise credibility and reduce pushback. The trap to avoid here is believing that you can measure productivity and quality by counting lines of code and applying some quality checks offered by freeware or cheap code-quality tools. That approach is appealing because it’s simple and sounds like a “good start,” but it yields wrong indications, unfair measures, and unwanted behavior, making the situation even worse.
A CIO needs to get development teams to see software analytics and measurement as a way to quickly improve their work by visualizing workflows, practices, productivity, and quality. Moreover, they need to understand that the most sophisticated measurement systems today help make their work understandable to the CEO, and even more valuable to the organization. Once these changes are in place, CEOs will no longer see IT as a black box. It will become as manageable and measurable as any other part of the business.