Don’t Give Up on Software Metrics!

Is software management obsolete? Yes, according to Tim Berry, President and Founder of Palo Alto Software.

You can read my comment on his blog. Here's a fuller version.

I read Tim's post along with Jeff Atwood's post on Coding Horror and Tom DeMarco's article. There are a number of cool ideas in this set of articles, and it's helpful to separate them.

So here goes.

1) Building software is a creative activity.

2) Most projects, despite the controls, metrics, and process, end up being completed by the sheer passion, heroics, and expertise of a few. (I'm reading a bit more between the lines on this one, but it's a charitable interpretation.)

3) In a typical software project "nobody can see the whole elephant."

4) A software metrics paradox: only the projects that don't deserve or need the scrutiny end up getting it.

5) To build software without a lot of metrics fanfare, just pick the projects with very high ROI -- the return on these will dwarf the investment, so you don't really need to pay very close attention to the investment (i.e. the cost to build).

6) It's about time we abandoned this charade and stopped using software metrics (on most or all of our projects).

1, 2, and 3 are factual claims, 4 is descriptive (like Moore's Law), and 5 and 6 are prescriptive.

I agree with 1, 2, and 3. As Fred Brooks puts it in his book "The Mythical Man-Month": "The complexity of software is an essential property, not an accidental one." He goes on to explain how this complexity manifests, and why it is essentially different from, say, building a skyscraper.

But despite that, I think it's possible to impose the right controls throughout the software development life cycle. Let me expand on this just a bit.

Throughout the course of development, a large number of developers, architects, IT workers, business users, and managers make hundreds of independent decisions which need to come together in a coherent fashion for the end product to work well.

So, “software design is not an orderly top-down process. It is a collection of multiple decisions made at varying levels of abstraction, brought together to satisfy an overall business goal.” (This is Dr. Bill Curtis, co-author of the CMM framework.)

As the code base grows linearly, architectural complexity grows exponentially! Performance bottlenecks multiply and become very hard to detect. No human, and no team of humans, is capable of holding a comprehensive end-to-end view of how the application is put together to satisfy its long list of continually evolving requirements.
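
To see why, here's a back-of-the-envelope illustration of my own (not a measurement from any of the articles): if every module can potentially interact with every other, the number of things a reviewer has to hold in their head grows far faster than the module count -- quadratically if you only count pairwise interactions, exponentially if you count the combinations in which modules can interact.

    # Back-of-the-envelope: module count grows linearly, but the places where
    # independent decisions can clash grow much faster.
    for modules in (10, 20, 40):
        pairs = modules * (modules - 1) // 2   # pairwise interactions
        subsets = 2 ** modules                 # possible combinations of modules
        print(f"{modules:>3} modules -> {pairs:>4} pairwise interactions, "
              f"{subsets:>16,} possible combinations")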

What's a project manager to do? You can't blame them for doubling down on process! But process metrics alone are the wrong instrument. What you need in this case, in addition to process metrics (time spent, adherence to coding standards, etc.), are PRODUCT metrics -- in particular, measures of the QUALITY of the evolving product.
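
To make "product metric" concrete, here's a minimal sketch of the idea, assuming a Python codebase and using a crude cyclomatic-complexity count. This is an illustration of the concept only -- it is not how CAST measures systems.

    import ast
    import sys
    from pathlib import Path

    # AST node types that add a decision point (a crude proxy for cyclomatic complexity).
    BRANCH_NODES = (ast.If, ast.For, ast.While, ast.Try, ast.With, ast.BoolOp, ast.ExceptHandler)

    def complexity(func: ast.AST) -> int:
        """1 plus the number of branching constructs inside the function."""
        return 1 + sum(isinstance(node, BRANCH_NODES) for node in ast.walk(func))

    def report(root: str) -> None:
        """Print a per-function complexity figure for every .py file under root."""
        for path in Path(root).rglob("*.py"):
            tree = ast.parse(path.read_text(encoding="utf-8"), filename=str(path))
            for node in ast.walk(tree):
                if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
                    print(f"{path}:{node.lineno} {node.name} complexity={complexity(node)}")

    if __name__ == "__main__":
        report(sys.argv[1] if len(sys.argv) > 1 else ".")

Run something like this on every build and you get a trend line for the product itself, not just for the activity around it.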

Think about this for a second. Software is the only product where quality is equated with meeting functional specifications.

You don't make this mistake with a Toyota. Its quality comes not just from having four wheels and a transmission. Chevy Cobalts have those too. The quality of a Toyota comes from something more -- it's HOW WELL the four wheels and the transmission work when you drive the car. How well do they hang together when you hit nasty potholes? How do the parts wear with time?

How well something works is closely tied to the process by which that something is made. There's usually a process for putting things together, and software engineering has grasped at process as a panacea.

Here's the mantra: I have a repeatable process in place, so I will end up with a high-quality software product. I have a repeatable process in place, so I will end up with a high-quality software product...

Unfortunately, it doesn't work this way. Think of all of the FDA-approved products that have killed people, the planes that had passed all their maintenance checks but crashed, and the Airbus A380 that couldn't be put together because parts made by different suppliers wouldn't fit.

Having a repeatable process in place isn't by itself enough. In real life, most processes outside of manufacturing lines are difficult to make repeatable.

Repeatable means same inputs result in same outputs. But in a software engineering process, cost and time pressure, variance in expertise levels, the (un)availability of expertise, the (un)availability of input information -- these can and will lead to a wide variance in the output of a process.

Making software is like making the Airbus A380 -- hundreds of people making independent decisions that have to come together in the end as the finished product. Indeed, it's even harder when it comes to software because the product is not a tangible thing like an airplane part. And the kicker: when it all comes together, there's no foolproof way to know how this thing is going to work in the real world!

I work for a software company called CAST -- we define and precisely measure the quality, size, and complexity of large-scale software systems as they evolve. No human can see the whole thing end to end, but CAST can. Write me if you like; I can show you exactly how we do it. Bottom line: There is a way you CAN see the whole elephant.

The lesson is PRODUCT metrics, not PROCESS metrics. And humans can't do this -- but an automated system can.
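
As a sketch of what that automation might look like in practice -- the thresholds, file names, and metric format here are hypothetical, not CAST's or any particular tool's -- imagine feeding per-function metrics like the ones above into a gate that fails the build when the product itself degrades:

    import json
    import sys

    MAX_ALLOWED = 15     # absolute complexity ceiling for any single function (illustrative)
    MAX_REGRESSION = 3   # how much a function may grow between builds (illustrative)

    def gate(baseline_path: str, current_path: str) -> int:
        """Return 0 if the product metrics pass, 1 if anything regressed."""
        with open(baseline_path) as f:
            baseline = json.load(f)   # e.g. {"billing.compute_invoice": 7, ...}
        with open(current_path) as f:
            current = json.load(f)
        failures = []
        for name, value in current.items():
            if value > MAX_ALLOWED:
                failures.append(f"{name}: complexity {value} exceeds ceiling {MAX_ALLOWED}")
            elif value - baseline.get(name, value) > MAX_REGRESSION:
                failures.append(f"{name}: regressed from {baseline[name]} to {value}")
        for failure in failures:
            print("FAIL", failure)
        return 1 if failures else 0

    if __name__ == "__main__":
        sys.exit(gate(sys.argv[1], sys.argv[2]))

The specific thresholds don't matter; what matters is that the control is applied to the product, automatically, on every change.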

Point 4 may be right, but unfortunately, it doesn't lead to 5. We don't get to choose the projects we work on, and DeMarco's guidance is simply not practical.

Finally, 1+2+3 is not reason enough to believe 6. We can get complete visibility over the entire software system, and we can measure it the way engineers measure their output.

Give up on process metrics if you like (and if you can in your company), but don't give up on controlling software projects -- use product metrics to take control!
