In this post, we wanted to take a step back and break down exactly what a function point is and how an IT organization can use them to measure application development productivity, improve IT project planning and estimating, and better manage application service providers.
A function point -- first defined by Allan Albrecht of IBM in 1979 -- is a measure of the amount of functionality in a software application. But while Albrecht defined the attributes of functionality, he did not define a method or process for measuring them. To be fair, that's because function points, as originally defined, aren't computable, meaning they cannot be captured with an algorithm: human judgment is necessary at certain stages of the process to determine the function point count.
This made measuring function points a tedious, and therefore expensive, process. The de facto standard for counting function points was developed by the International Function Point Users Group (IFPUG), and relied on the combined work of function point experts and subject matter experts within the company to calculate counts manually. With that much human intervention, the results of the function point counts were often heavily biased and couldn't be used to benchmark measures over time.
The short answer is: they’re the best measure we have. The value that function points provide is real; it’s the manual counting process that’s costly and time consuming. That’s why we developed a capability that complies with the OMG CISQ standard for counting function points. This automated technique can be used to make objective, repeatable, and cost-effective function point counts. The following table lists use cases and illustrates the applicability of manual and automated counting techniques.
Function point counts can be used in three ways: to measure software assets or activities, to estimate effort, and to measure developer productivity. Measuring software assets is by far the most common use: you simply count the number of function points in an application under maintenance, the number of defects per function point, or the development cost per function point. To estimate developer effort, you translate function points into man-hours: once you know the number of function points that need to be developed, you can estimate the effort required to complete the project. Finally, to measure developer productivity, you divide the number of function points in an application by the number of man-hours it took to complete the build. For more information on how automated function points support enterprise-wide productivity measurement, please see the Modern Software Productivity Measurement Guide.
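The three uses above reduce to simple ratios. As a rough sketch -- all figures here are hypothetical, chosen only to illustrate the arithmetic:

```python
# Sketch of the three function-point-based measures described above.
# All input figures are hypothetical, for illustration only.

function_points = 500        # counted size of the application (FPs)
defects_found = 40           # defects logged against the application
total_cost_usd = 750_000     # development cost
effort_man_hours = 6_000     # total build effort

# 1. Asset measures: normalize defects and cost by functional size.
defects_per_fp = defects_found / function_points      # 0.08
cost_per_fp = total_cost_usd / function_points        # 1500.0

# 2. Effort estimation: translate FPs into man-hours using an
#    assumed historical delivery rate (hours per FP).
hours_per_fp = 12.0                                   # assumed rate
estimated_effort = function_points * hours_per_fp     # 6000.0 hours

# 3. Productivity: function points delivered per man-hour.
productivity = function_points / effort_man_hours     # ~0.083 FP/hour

print(defects_per_fp, cost_per_fp, estimated_effort, productivity)
```

The only non-trivial input is the delivery rate (`hours_per_fp`), which in practice comes from an organization's own historical project data rather than an industry constant.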
The old way of measuring function points relied heavily on the availability and quality of supporting documentation and subject matter experts. But when counting is automated and brought in-house, we’ve seen project managers rely on more detailed information -- how the number of components and their complexity change over time -- to calibrate their counts. At the end of the day, it doesn’t matter so much what you measure; it matters how that measure informs critical decision making. Once you have a consistent way of measuring function points, the amount of change from one count to the next can reliably inform critical decisions at virtually no cost.
There’s currently no accepted standard for measuring development output, which leaves businesses with very little leverage at the negotiation table with their software vendors, or even their own internal development teams. But because automated function point counts are generated by a computable algorithm, they produce reliable, repeatable, and consistent results time and time again. That consistency makes automated measures an ideal benchmarking standard: they quickly establish a baseline against which changes in productivity can be readily measured. In this way, intra-company benchmarks are equally, if not more, valuable to IT decision makers than external benchmarks.

With these insights, IT executives have the vital information they need for strategic planning, roadmap creation, budgeting, and initiative prioritization. They can better align resources, taking into account not just the size but the complexity of an application to match skills and expertise to the appropriate development team. Beyond IT management, a detailed map of component interdependencies, size, and complexity enables program and project managers to better sequence work, helping cut costs. And sourcing managers can optimize their onshore-offshore resourcing mix by matching size, complexity, and productivity data to vendor capabilities.

Without a baseline benchmark and automated function point counting, IT organizations are leaving themselves in the dark about the true nature of their application portfolio. The value of function point metrics is well established, and as barriers such as manual effort, cost, and scalability are eliminated, high-performing organizations are applying measurement to business areas that have traditionally been difficult to measure.
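To make the baseline idea concrete, here is a minimal sketch of an intra-company productivity benchmark built from repeated automated counts. The release names and figures are hypothetical:

```python
# Sketch: repeated automated FP counts as an intra-company benchmark.
# Release names and figures below are hypothetical.

releases = [
    # (release, function_points_delivered, effort_man_hours)
    ("R1", 300, 4_000),
    ("R2", 360, 4_200),
    ("R3", 330, 3_600),
]

# Productivity per release: function points delivered per man-hour.
productivity = {name: fp / hours for name, fp, hours in releases}

# The first release serves as the baseline; later releases are
# expressed as a percentage change against it.
baseline = productivity["R1"]
for name, value in productivity.items():
    change_pct = (value / baseline - 1) * 100
    print(f"{name}: {value:.4f} FP/hour ({change_pct:+.1f}% vs baseline)")
```

Because the same counting algorithm produces every data point, period-over-period changes reflect real shifts in delivery rather than counting variance -- which is exactly what makes the internal baseline credible in vendor negotiations.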
With the standards set forth by OMG and the adoption of automated function point counting solutions, the industry will see increased use of functional sizing at the application and portfolio level. This in turn will lead to more effective and efficient management of IT portfolios, improved vendor management, better valuation of software assets, improved ADM performance management, and reduced ADM costs. As David Herron of DCG explains, “One of the major advantages of automated software sizing over manual sizing approaches will be the consistency of results produced by repeated sizing exercises on the same set of inputs.” Consistent measures, regardless of the standard or method, are one of the pillars of effective measurement. IT professionals who rely on automated function point counting for accurate, precise, and consistent measures understand that consistency breeds confidence and credibility in the guidance they provide to their leadership teams.
Erik Oltmans, an Associate Partner from EY, Netherlands, spoke at the Software Intelligence Forum on how the consulting behemoth uses Software Intelligence in its Transaction Advisory services.
Erik describes the changing landscape of M&A. Besides the financial and commercial aspects, PE firms now place equal value on technical assessments, especially for targets with significant software assets. He goes on to detail how CAST Highlight makes these assessments possible with limited access to the target’s systems, customized quality metrics, and insight into the liability implications of open source components -- all three of which are critical for M&A due diligence.