Output-Based Application Management

You need an application delivered fast. And you’re willing to pay more to get it done quickly. But how much more should you pay?

That depends, of course, on your supplier’s productivity. The more productive they are, the more they can charge on a per-hour basis, because higher productivity lets them deliver the same size application in fewer hours than a less productive supplier would need. Which means their cost to deliver a function point ($ per function point) might actually be lower than that of a supplier whose labor rates are much lower! In other words, a supplier with a higher labor cost can actually be more cost efficient – and at the end of the day, this output metric – cost efficiency – is what matters. (Along with application quality, as we'll see a bit later.)

There’s nothing surprising about it, but a picture definitely helps sort out the relationship between labor cost, productivity, and cost efficiency.

How Labor Cost and Productivity Drive Cost Efficiency

Each curve above tracks a productivity level. For example, if one supplier’s productivity is 10 function points per hour (that’s not realistic, but stick with me), he can charge $70 per hour and still be just as cost efficient (about 0.14 function points per dollar) as another supplier who charges only $40 per hour but delivers roughly 5.7 function points per hour.
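Since cost efficiency is just productivity divided by the hourly rate, the comparison is easy to check in a few lines of Python (the supplier figures below are illustrative, not taken from the chart):

```python
# Cost efficiency in FP per dollar: (FP / hour) / ($ / hour).
def cost_efficiency(fp_per_hour: float, rate_per_hour: float) -> float:
    """Function points delivered per dollar spent."""
    return fp_per_hour / rate_per_hour

# Supplier A: highly productive but expensive labor.
a = cost_efficiency(fp_per_hour=10, rate_per_hour=70)

# Supplier B: cheaper labor, lower productivity.
b = cost_efficiency(fp_per_hour=5.7, rate_per_hour=40)

print(f"A: {a:.3f} FP/$  B: {b:.3f} FP/$")  # both come out near 0.14
```

Despite a 75% higher hourly rate, supplier A ends up just as cost efficient as supplier B – which is exactly why the hourly rate alone tells you so little.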

Try setting cost efficiency and productivity thresholds in your contracts. Once you’ve done a few of these projects, you’ll have enough to create your own productivity-quality curves.

With an output measure like cost efficiency (Function Points delivered per dollar spent), you can make decisions based on the output (FP per $) rather than on the input ($ per hour). After all, what matters is not what someone's charging per hour but what the job as a whole ends up costing you. Fixed-price contracts are supposed to get you there, but they’re not a panacea. The problem is they’re usually missing a handle on the critical output measures you need to effectively manage the development or enhancement of an application.

One missing output measure is supplier productivity. Another missing output measure is application quality. Your supplier can give you a bushel of function points per dollar, but it won’t matter if what they turned out was of poor quality, leading to erratic application behavior, continuous headaches in production, and an application that’s just a bear to enhance.

So what does the picture look like with application quality thrown in the mix? Just to make the bubbles quite distinct, quality here is measured on a scale of 1 (low quality) to 100 (very high quality).

Application Quality in the Mix

Interesting in a Jackson-Pollock-meets-Klimt sort of way, isn’t it? What I did was randomly assign a quality between 1 and 100 to each point on each of the productivity curves. You don’t always get what you pay for: it would be nice if application quality increased as labor cost increased, but unfortunately that’s not always the case. So a random assignment of quality is what we have here – purely for simulation purposes.
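For the curious, bubble data like this takes only a few lines to simulate. The productivity levels and labor rates below are my own stand-ins, not the chart’s actual figures:

```python
import random

random.seed(42)  # make the "random" quality assignment reproducible

# Hypothetical curves: productivity in FP/hour crossed with labor rates in $/hour,
# each point getting a randomly assigned quality score from 1 to 100.
points = [
    {
        "labor_rate": rate,
        "productivity": p,
        "cost_efficiency": p / rate,        # FP per dollar
        "quality": random.randint(1, 100),  # random, as in the text
    }
    for p in (2, 4, 6, 8, 10)
    for rate in range(10, 101, 10)
]

print(len(points), "simulated supplier data points")
```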

Now, the quality of an application is hard to define, let alone quantify. But there are good software engineering guidelines on how to do it, and there are automated solutions out there that measure application quality. One caveat: application quality is not just a matter of summing up the quality of an application’s different components – how those components are linked together is a critical factor in the overall quality of the application – so avoid quality measurement solutions that treat application components as independent entities. (By the way, this applies to most desktop-based code checkers.) Use these automated solutions to get a grip on the quality of a supplier’s output – not just the number of hours they put into the job.

What next? Set productivity and quality thresholds. What if cost efficiency had to always be above 0.2 function points per dollar? A quick glance at the bubble chart above shows that setting this floor on cost efficiency pretty much rules out labor costs that are $60 per hour or greater. When you need cost efficiency to be over a certain value, you naturally put a ceiling on labor cost.

And what if quality always had to be above 75? With these two thresholds set, we get the simplified diagram below which makes it easier to make the right tradeoffs between labor cost, cost efficiency and quality.
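Applying both thresholds is a simple filter. Here is a sketch using simulated data in place of the chart’s actual points (the curves and rates are my own assumptions):

```python
import random

random.seed(0)

# Simulated bubble data: productivity (FP/hour) crossed with labor rate ($/hour),
# each point given a random quality score between 1 and 100.
points = [
    {"labor_rate": r, "productivity": p,
     "cost_efficiency": p / r, "quality": random.randint(1, 100)}
    for p in (2, 4, 6, 8, 10)
    for r in range(10, 101, 10)
]

MIN_COST_EFFICIENCY = 0.2   # FP per dollar (contract floor)
MIN_QUALITY = 75            # quality floor

acceptable = [pt for pt in points
              if pt["cost_efficiency"] > MIN_COST_EFFICIENCY
              and pt["quality"] > MIN_QUALITY]

# The cost-efficiency floor implies a labor-cost ceiling: rate < productivity / 0.2.
# Even on the top curve here (10 FP/hour), the rate must stay below $50/hour.
assert all(pt["labor_rate"] < 50 for pt in acceptable)
print(len(acceptable), "points survive both thresholds")
```

The assertion makes the text’s point concrete: a floor on the output metric (cost efficiency) automatically caps the input metric (labor cost).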

Filtering by Quality and Cost Efficiency Thresholds

When it comes to managing application development or enhancement, conventional wisdom gets it backwards – it focuses on the input metric of labor cost and not on the output metrics of productivity and cost efficiency. That’s why it is, relatively speaking, much easier to manage a complex manufacturing production line or an airline routing system than it is to manage even moderately complex software projects.

An advantage of cost efficiency as I’ve defined it is the following. Suppose you define the value delivered to the business in terms of dollars per function point – either dollars of revenue or dollars of cost savings, no matter which. Now, when you multiply cost efficiency (FP/$) by value delivered ($/FP), you get (drumroll…) a dimensionless quantity – ooh, spooky! Why not call this leverage? It’s the degree to which a bit of functionality in your application powers the engine of your business.
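A tiny sketch of the arithmetic, with made-up numbers purely for illustration:

```python
def leverage(cost_eff_fp_per_dollar: float, value_dollars_per_fp: float) -> float:
    """(FP/$) x ($/FP) -> a dimensionless leverage ratio."""
    return cost_eff_fp_per_dollar * value_dollars_per_fp

# Illustrative: 0.2 FP delivered per dollar spent, each FP worth $30 in
# revenue or cost savings. A leverage of 6 means every dollar spent on the
# application powers six dollars of business value.
print(leverage(0.2, 30))  # 6.0
```

The units cancel, which is what lets you compare this number across applications – and, if the benchmarks existed, across companies.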

So, what’s the leverage of your mission-critical applications? Wouldn’t it be cool to benchmark this number against your peers?

 
