Know Your Defect Density: Part Two

by Philippe Guerin

Now that we’ve made the case in Part One for the Application Mass Index (AMI) as a tool to standardize applications and enable comparisons, it’s important to outline the key steps for using this metric successfully.

I can’t stress enough how crucial it is to establish a comprehensive scope to get the most useful comparison. Take a simple feature—your bank account balance, for example. To the naked eye, the feature seems incredibly basic: money goes in, and the account shows you the corresponding number. But as IT professionals, we must remember that applications are not static; we must account for the transformations required in the live environment.

In the case of the bank account balance, consider everything that must happen for the correct number to be displayed. There are constraints that secure the flow of information along the delivery chain, manage connectivity for mobile devices, keep response times acceptable for a large mass of users via multithreading, and the list goes on. Every such constraint translates into development effort and, therefore, an induced cost. When measuring cost, it is thus necessary to account for both the primary functionality and the cost of the technical constraints. And because cost tracks software size, a size assessment must incorporate both the functional and the technical size. It is no wonder, then, that the latest OMG standards finally cover the entire development effort, including its technical realization.
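To make the sizing idea concrete, here is a minimal sketch of a size assessment that combines functional and technical size into one measure, and derives a unit cost from it. All function names and numbers are hypothetical illustrations, not part of any OMG standard:

```python
# Hypothetical sketch: total application "mass" as functional size plus
# technical size, and development cost per size unit. Values are illustrative.

def total_size(functional_size: float, technical_size: float) -> float:
    """Combine functional and technical sizes into one overall size."""
    return functional_size + technical_size

def cost_per_unit(total_cost: float, size: float) -> float:
    """Unit cost: total development cost divided by total size."""
    return total_cost / size

size = total_size(functional_size=500.0, technical_size=200.0)
unit_cost = cost_per_unit(total_cost=350_000.0, size=size)
print(size, unit_cost)
```

Measuring only the functional size (500 units here) would overstate the unit cost; including the technical size (200 units) yields a fairer basis for comparing applications.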

The last essential for an effective comparative analysis is to understand the expected levels of software quality and complexity. Standards of quality and complexity depend on the industry, development lifecycle, and operating environment of the application. To properly compare applications, you need to normalize their quality levels so they are assessed on equal footing. Accounting for quality is crucial, because improving the quality of your software (code and architecture) decreases costs by cutting the amount of back and forth between dev and QA teams.
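The normalization step above can be sketched in a few lines. In this illustration, raw defect density is divided by an expected baseline for the application's context, so applications from different industries become comparable; the baseline figures and dictionary are invented for the example, not published benchmarks:

```python
# Hypothetical sketch: defect density normalized by an expected quality level,
# so applications from different contexts can be compared on equal footing.

EXPECTED_DEFECTS_PER_UNIT = {  # illustrative baselines, not real benchmarks
    "banking": 0.5,
    "e-commerce": 1.0,
    "internal-tools": 2.0,
}

def defect_density(defects: int, size: float) -> float:
    """Raw defect density: defects per size unit."""
    return defects / size

def normalized_density(defects: int, size: float, industry: str) -> float:
    """Density relative to the expected level for the application's context.

    1.0 means "as expected"; above 1.0 means worse than expected.
    """
    return defect_density(defects, size) / EXPECTED_DEFECTS_PER_UNIT[industry]

print(normalized_density(defects=350, size=700.0, industry="banking"))  # 1.0
```

Two applications with the same raw density can land on opposite sides of 1.0 once their operating context is taken into account, which is exactly why comparisons must be normalized first.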

Tracking as a Stepping Stone to Improvement

The information above may seem obvious, but it is easy to make mistakes when choosing a measurement metric. A metric must represent exactly what we are trying to measure, an intuitive requirement that can get lost in the fast-paced, high-energy SDLC. Metrics must be chosen to answer specific questions, and they must account for the scope and governance of the application before any improvements can be made.

So what’s the big picture here? While tracking application benchmarks and reducing development costs are important first steps, a continuous improvement process should be the long-term goal. This can be achieved by implementing low-cost measurement of both technical and functional features, tracking their evolution against a maturity spectrum, and stimulating rapid improvement.

The new OMG standards mentioned above, which include Automated Enhancement Points, automatically capture both the functional and technical aspects of an application, keeping the cost of analysis very low. An application portfolio can also be rationalized quickly through a software risk assessment, but conducting a proper analysis each time boils down to standardizing your metrics and knowing your defect density.

Philippe Guerin is a Software Analytics & Risk Prevention specialist and a domain expert in ADM sizing and productivity measurement. A well-rounded technologist, he has over 15 years of leadership experience spanning ADM productivity measurement, product development and management, program management, solution architecture, sales, and services, and more than 5 years of experience leading teams.