Now that Part One has made the case for Application Mass Index (AMI) as a tool for standardizing applications and drawing comparisons, it’s important to outline the key steps for using this metric successfully.
I can’t stress enough how crucial it is to establish a comprehensive scope to get the most useful comparison. Take a simple feature—your bank account balance, for example. To the naked eye, the feature seems incredibly basic. Money goes in, and the account shows you the corresponding number. But as IT-ers, we must remember that applications are not static. We must account for necessary transformations in the live environment.
In the case of the bank account balance, consider everything that must happen for the correct number to be displayed. There are constraints that secure the flow of information throughout the broadcast chain, manage connectivity for mobile devices, ensure an acceptable response time for a large mass of users via multithreading, and the list goes on. Every potential externality translates into development effort and, therefore, an induced cost. As such, when measuring cost, it is necessary to account for both the primary functionality and the cost of technical constraints. Because cost reflects software density, it is essential that size assessment also incorporates the functional and technical sizes. It is no wonder then that the latest OMG standards finally incorporate the entire development effort, including technical realization.
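As a rough illustration of the point above, a size assessment that ignores technical constraints undercounts the real development effort. The function names, the simple additive model, and all figures below are hypothetical sketches, not the actual OMG measurement rules:

```python
# Hypothetical sketch: combining functional and technical size into one
# assessment. The additive model and all numbers are illustrative
# assumptions, not the OMG standards' measurement procedure.

def total_size(functional_size: float, technical_size: float) -> float:
    """Overall size is the functional size (what the feature does) plus
    the technical size (constraints such as securing the information flow,
    mobile connectivity, and multithreading for response time)."""
    return functional_size + technical_size

def cost_estimate(size: float, cost_per_unit: float) -> float:
    """Induced cost scales with total size at some rate per size unit."""
    return size * cost_per_unit

# A bank-balance feature: small functional size, larger technical size.
size = total_size(functional_size=3.0, technical_size=9.0)
print(cost_estimate(size, cost_per_unit=500.0))  # 6000.0
```

Measured on functional size alone, the feature would look a quarter of its true cost; the technical constraints dominate.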
The last essential element of an effective comparative analysis is understanding the expected levels of software quality and complexity. Standards of quality and complexity depend on the industry, development lifecycle, and operating environment of the application. But to compare applications properly, you need to normalize their quality levels to an equal footing. Accounting for quality is crucial, because improving the quality of your software (code and architecture) decreases costs by cutting the amount of back and forth between dev and QA teams.
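One simple way to put two applications on an equal footing is to rescale each raw quality score against its own industry baseline. This min-max normalization, and the baseline figures, are illustrative assumptions rather than a prescribed method:

```python
# Hypothetical sketch: normalizing quality scores to a common 0-1 scale
# so applications measured against different industry baselines can be
# compared. All baseline bounds and scores are made-up illustrations.

def normalize(score: float, worst: float, best: float) -> float:
    """Map a raw quality score onto [0, 1] relative to the worst/best
    scores observed in that application's own context."""
    return (score - worst) / (best - worst)

# Two applications scored on different scales, now directly comparable:
app_a = normalize(score=72.0, worst=40.0, best=90.0)  # e.g. a 0-100 scale
app_b = normalize(score=3.2, worst=1.0, best=5.0)     # e.g. a 1-5 scale
print(round(app_a, 2), round(app_b, 2))  # 0.64 0.55
```

On the raw numbers the two scores are incomparable; after normalization the first application is visibly the higher-quality one.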
The information above may seem obvious, but it is easy to make mistakes when choosing a measurement metric. Metrics must represent exactly what we are trying to measure, an intuitive fact that can be lost in the speed and high energy of the SDLC. Metrics must be chosen to answer specific questions, and they must account for the scope and governance of the application before any improvements can be made.
So what’s the big picture here? While tracking application benchmarks and reducing development costs are important first steps, a continuous improvement process should be the long-term goal. This can be achieved by implementing low-cost monitoring for technical and functional features while monitoring their evolution against a maturity spectrum and stimulating rapid improvement.
The new OMG standards mentioned above, which include Automated Enhancement Points, automatically measure both the functional and technical aspects of an application, keeping the cost of analysis very low. An application portfolio can also be rationalized quickly through a software risk assessment, but conducting a proper analysis each time boils down to standardizing your metrics and knowing your defect density.
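Defect density is conventionally computed as defects found divided by software size, often expressed per thousand lines of code (KLOC) or per function point. A minimal sketch, with made-up portfolio figures:

```python
# Defect density: defects divided by size. Expressing it per KLOC (or per
# function point) standardizes the metric across applications of very
# different sizes. The numbers below are made-up illustrations.

def defect_density(defects: int, kloc: float) -> float:
    """Defects per thousand lines of code."""
    return defects / kloc

# Two portfolio applications, standardized on the same metric: the
# smaller application turns out to be the denser in defects.
print(defect_density(defects=45, kloc=120.0))  # 0.375
print(defect_density(defects=30, kloc=25.0))   # 1.2
```

Without normalizing by size, the first application's higher absolute defect count would have hidden the fact that the second one is the riskier asset.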
Erik Oltmans, an Associate Partner from EY, Netherlands, spoke at the Software Intelligence Forum on how the consulting behemoth uses Software Intelligence in its Transaction Advisory services.
Erik describes the changing landscape of M&A. Beyond the financial and commercial aspects, PE firms now place equal value on technical assessments, especially for targets with significant software assets. He goes on to detail how CAST Highlight makes these assessments possible with limited access to the target's systems, customized quality metrics, and visibility into the liability implications of open source components, all three of which are critical for M&A due diligence.