We just wrapped up the 30-minute webinar in which Andrew Agerbak, Associate Director at BCG, described some of the ways IT executives use software measurement to drive transformational programs. Andrew cited four case studies in which output metrics helped drive transformation, or at the very least measure its results. A number of questions came up during the webinar -- more than we could get to, even though the Q&A ran 15 minutes past the 30-minute time slot -- and not all of you could stay for the Q&A session. The main point of this post is to document some of the more important questions, along with my summary of the answers Andrew provided, especially for those of you who could not stay on past the half hour.
The most typical stakeholders we see taking an interest are, of course, the top management of the company -- the CFO, CEO, CIO and other executives. But it's important not to forget that a measurement process should also serve the rest of the IT organization. If there's a way for development leaders and architects to get metrics about their code quality and architectural integrity, that can be tremendously valuable. As we all know, the same problem that costs $1 to fix during development will cost $10 to fix in test or $100 in production.
There are many possible definitions of a function point out there and it’s easy to get mired in “religious” debate about counting rules and which function point definition to use. The more important thing is to pick one definition that’s consistent across the organization being measured. CAST happens to automate the OMG definition, but the mere fact of automation might remove some of the heated debates about specific counting rules or definitions.
Let’s not let “the best” be the enemy of “the good.” Even if you pick just a couple of metrics off the measurement menu and start to deploy them consistently, that's a good start. You don’t have to be a top-maturity organization that dives into a big measurement program right away. The key is to pick the right metrics to start with -- the ones with the most meaning and impact. Cost per function point is a good start, or compliance to a target architecture. Don’t try to boil the ocean and overbuild the measurement program right away.
We would not recommend starting a measurement program just for the sake of having measurement. As with any other area of the business, the measurement needs to support specific business objectives. As we saw in the examples we discussed in the webinar, in some cases the metrics can be used to guide an IT transformation program and ensure successful outcomes. In other cases, the purpose might be to make specific quality improvements, or to measure risk. Then the business case will depend on the extent to which the measurement capability helps achieve business objectives.
In the case of package applications such as the large ERP vendors, you’re really only concerned with the customizations you’ve done to the code shipped by the vendor. There’s not much direct business value in measuring the size or quality of the codebase your vendor maintains – beyond the curiosity to see how good the code quality might be. In terms of your own customizations, you would use the same approach with function points to measure the effectiveness of the development and maintenance activities.
CAST comment: Our product can analyze most major ERP systems, and Andrew is correct that the same approach applies to analyzing package customizations for function points or structural quality. Interestingly, in the benchmark repository from which we produce our CRASH reports, the average size of custom applications is around 500 KLOC. The average size of ERP customizations? Over 800 KLOC. Those of us dealing with custom SAP or Oracle systems know how much pain is involved -- but these numbers put the size of the problem into sharp perspective.
You want to make sure that your measurement is aligned to your business objectives, and that the metrics are as meaningful and actionable as possible. If they involve analysis of application assets at the source code level, make sure you’re measuring the entire system rather than aggregating the sum of its parts -- that's the only way to ensure you can prioritize the findings and use them to drive decisions and improvement. It’s also important that your stakeholders can trust these metrics, because measurement quickly becomes obsolete if nobody believes the data, so all aspects of your measurement system need to be stable and calibrated. Lastly, measurement that relies on standards has a higher level of credibility than ad hoc or proprietary measures.
Those were the high points of the Q&A session. There were also a number of questions asking for tool recommendations. Our apologies to the listeners who asked for that during the webinar -- Andrew is an expert in IT governance and measurement, but comparing tools is outside his expertise. Of course, CAST is happy to offer our opinion on which tools are best for structural quality and function point measurement, and we will be very pleased to talk about our measurement platform with any and all of you. But we’ll do it offline, to spare the details for those of you who are interested only in the concepts and the business case.
Erik Oltmans, an Associate Partner from EY, Netherlands, spoke at the Software Intelligence Forum on how the consulting behemoth uses Software Intelligence in its Transaction Advisory services.
Erik describes the changing landscape of M&A. Besides the financial and commercial aspects, PE firms now place equal value on technical assessments, especially for targets with significant software assets. He goes on to detail how CAST Highlight makes these assessments possible even with limited access to the target’s systems, with customized quality metrics, and with insight into the liability implications of open source components -- all three of which are critical for M&A due diligence.