All too often, software projects exceed their budgets and are labeled too slow by stakeholders. What is the root of this problem? To isolate the cause of this phenomenon, and to fix it, project managers need a new approach.
Enter trend-tracking metrics. Good software project management starts with selecting relevant variables and measuring them consistently. The first step is to standardize measurement, whether your concern is cost, effort, time, quality or any combination of those factors. But the second step must be to use that standardization to measure applications over time and illuminate trends.
Standardizing metrics is a prerequisite for trend tracking, both within an application and, eventually, when benchmarking against other applications. Conversely, trend tracking is what makes standardizing metrics a useful practice in the first place. The two are separate processes, but they are best understood as codependent tools to maximize application—and, thus, business—improvement.
Let’s say your software (application X) is 1/3 as productive, 1/2 as expensive, and 1/4 as large as an application you want to compare it to (application Y). How are you supposed to measure X’s technical defects against Y’s? Enter the Application Mass Index (AMI). An application’s AMI is the ratio of its number of technical defects to its size, where size is measured in Function Points. Remember in elementary school math, when you couldn’t believe that 3/4 was equal to 12/16 was equal to 300/400? By measuring the density of defects rather than the raw number of defects, AMI equalizes applications against past versions of themselves and against each other.
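The defect-density idea can be sketched in a few lines of Python. The function name and the defect and size figures below are illustrative, not taken from any real application:

```python
def ami(defect_count: int, function_points: int) -> float:
    """Application Mass Index: technical defects per Function Point."""
    return defect_count / function_points

# Two applications of very different sizes (illustrative numbers):
small_app = ami(30, 400)    # 30 defects across 400 Function Points
large_app = ami(120, 1600)  # 120 defects across 1,600 Function Points

# Just as 3/4 equals 12/16, both densities reduce to the same value:
print(small_app, large_app)  # 0.075 0.075
```

The raw defect count of the larger application is four times higher, yet the two applications are equally "dense" in defects, which is exactly the comparison AMI is built to expose.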
Unlike other standardization metrics you may be familiar with, such as Body Mass Index (BMI), AMI lets you, the project manager, decide which variables it comprises. To standardize measurement, you need to set the parameters of the process: the boundaries of the application. Using function point-based analysis as a standardized metric is the optimal way to maintain control over which variables you’re assessing without injecting human bias. And to preempt questions that will arise about the use of Story Points: yes, there are cases in which their human specificity is preferable to the agnostic measurement Function Points offer.
Story Points account for variables that cannot be standardized, like a team’s knowledge of the existing application, the complexity of the code to be implemented, the complexity of the existing code to be modified, and the technical solutions the team must use in terms of language and framework. While this may be useful for a dev team measuring its own internal progress, Story Points cannot serve management-level inquiries on comparative success. A measure that is independent of the human factor is ideal for a comparison between two teams or even two projects. Thus, for our purpose of guiding you to a better tracking practice by making comparative measurement more accessible, Function Points are the clear answer.
But before you can jump to the comparative step of X against Y, you need to ensure you can compare X from last year to X from this month. This, too, is possible thanks to AMI standardization, even if your application has tripled or quadrupled in size over the past year. Once enough internal AMI analyses have matured your application, you may be ready to test against other applications. AMI enables close examination of the characteristics you might be interested in adjusting in the target application by illuminating what those factors look like across the compared applications: factors like technology, development methodology, geographic area of development and more.
Interested in learning how to use AMI? Stay tuned for Part 2 of this blog series, which will post next week!
Erik Oltmans, an Associate Partner from EY, Netherlands, spoke at the Software Intelligence Forum on how the consulting behemoth uses Software Intelligence in its Transaction Advisory services.
Erik describes the changing landscape of M&A. Besides the financial and commercial aspects, PE firms now place equal value on technical assessments, especially for targets with significant software assets. He goes on to detail how CAST Highlight makes these assessments possible with limited access to the target’s systems, customized quality metrics, and insight into the liability implications of open source components, all three of which are critical for M&A due diligence.