Function Points — A Primer


This post covers function points in general and then takes a deeper dive into the CAST approach to automated function point counting.

1. What is a Function Point?

2. How are function points used?

3. What is a CAST-Computed Function Point?

4. How does CAST’s automated approach to counting function points compare with IFPUG’s manual approach?

5. Which approach is better? Which one should I use?

6. When should I use CAST-computed function points?

7. Can CAST-Computed Function Points be used for benchmarking?

8. Can CAST-Computed Function Points be used to measure productivity?

9. How can CAST’s automated function point counts inform key management decisions?

10. What steps can I take today to improve my organization’s productivity?

11. Where can I get more information?

1. What is a Function Point?

A function point, first defined by Allan Albrecht of IBM in 1979, is a measure of the amount of functionality in a software application. While Albrecht defines the attributes of functionality, his definition does not specify a particular method or process for measuring these attributes. Hence, two function point counts may differ because they follow different procedures for measuring or summing the constituents of functionality (the attributes) to come up with a total count of function points.

The de facto standard for counting function points is specified by the International Function Point Users Group (IFPUG). The IFPUG counting specification is not computable. In other words, it cannot be captured by an algorithm that can be programmed into a computer. Human judgment is necessary at certain stages of the process to determine the function point count.

2. How are function points used?

Function points are used in three ways:

1. As a measure of software assets or activities. For example, the number of function points under maintenance, the number of defects per function point developed, the development cost per function point, or the maintenance cost per function point.

2. To estimate effort. Using databases of completed projects, researchers have published formulas that translate function points into effort measured in man-hours. Hence, if you know the number of function points that need to be developed, and the formula is accurate, you have a method for estimating the development effort.

3. To measure productivity. Because function points measure the amount of functionality an application contains, the number of function points divided by the amount of effort in man-hours to build the application can serve as a measure of productivity. However, there are situations when legitimate effort is applied without a commensurate increase in the function point count (indeed, there might even be a decrease!). If not corrected for, productivity will be underestimated in these situations. Question 8 below covers this issue in detail. A brief sketch of uses 2 and 3 follows this list.
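Here is a minimal Python sketch of uses 2 and 3. The power-law form and the coefficients a and b below are hypothetical placeholders, not a published model; in practice such coefficients are calibrated against databases of completed projects.

```python
# Hypothetical effort-estimation and productivity formulas; the power-law
# shape and the coefficients below are illustrative only.

def estimate_effort_hours(function_points: float,
                          a: float = 10.0, b: float = 1.05) -> float:
    """Translate a function point count into estimated man-hours.

    In practice, a and b would be calibrated from completed projects.
    """
    return a * function_points ** b

def productivity(function_points: float, effort_hours: float) -> float:
    """Work output (function points) per man-hour of work input."""
    return function_points / effort_hours

# Example: a hypothetical 500-FP development.
hours = estimate_effort_hours(500)          # about 6,800 man-hours
print(round(productivity(500, hours), 3))   # FP delivered per man-hour
```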

3. What is a CAST-Computed Function Point?

Over the course of 5 years of intensive R&D, CAST has developed a computable standard for counting function points. This computable standard enables the CAST Application Intelligence Platform (AIP) to automatically generate function point counts. CAST-Computed Function Points (CCFPs) are objective, repeatable, and cost-effective.

4. How does CAST's automated approach to counting function points compare with IFPUG's manual approach?

As the table below shows, there's very little difference. The CAST AIP follows the same process as IFPUG-certified counters, except for the last step.

[Table: CAST Closely Replicates IFPUG]

Because of this variation in the last step of the procedure, CAST-computed function points will vary from counts made by an IFPUG-certified professional. In detailed calibration studies done with function point experts at the David Consulting Group (DCG), we have found that the difference between the IFPUG manual approach and the CAST automated approach falls within ± 11%. For IFPUG manual counts, the accepted variance between certified practitioners generally falls within ± 10%. However, variances can significantly exceed this range depending on the availability and quality of supporting documentation and subject matter experts. The CAST automated approach does not rely on any documentation or on the knowledge of subject matter experts.

When customers have their own in-house function point teams, we've seen their function point experts rely on CAST's more detailed information (how the number of components and their complexity change over time) to calibrate their counts.

5. Which approach is better? Which one should I use?

There is nothing inherently good (or bad) about either the CAST-computed or the IFPUG-defined function point measure. As with any measure, what matters is its ability to inform critical decisions.

The value of every metric must be evaluated along three dimensions:

a) Reliability - the extent to which measuring the metric yields the same results on repeated trials.

b) Validity or Verifiability - the extent to which the metric is a good indicator of what it purports to measure - in this case, a measure of work output (not man-hours, which is a measure of work input).

c) Cost Effectiveness - the cost of producing the measurement relative to the value of the decisions it informs.

How much control a metric provides over a desired outcome depends on its reliability and validity. This in turn determines the role and significance of that metric in informing important decisions.

Critical decisions usually depend not on the absolute number of function points in an application or a group of applications, but on the change in the function point count from one time to another. Once you have a consistent way of counting function points, the amount of change from one count to the next can reliably inform critical decisions.

CAST's automation means unparalleled consistency - every business-critical application in the portfolio is counted in exactly the same way. Once CAST is set up to do a count, any number of additional counts can be done at virtually zero additional cost.

Because successive counts are virtually cost free, CAST can generate large datasets that form the basis of powerful data-driven insights into how your software factory operates.

6. When should I use CAST-computed function points?

Use CAST-computed function points at the start and the end of any major change to a business-critical application.

In general, CAST's automated approach works well when you have very little time to do the count, want to do it as inexpensively as possible, have little or no documentation, or don't have access to experts who know how the application works. Indeed, under such conditions, CAST is the only viable solution.

7. Can CAST-Computed Function Points be used for benchmarking?

Absolutely. Because CAST-computed function points are generated by a computable algorithm, they produce reliable, repeatable, consistent results time and again. There is currently no objective worldwide standard for measuring the output of application development (AD) or application maintenance (AM) software teams. Because any non-computable standard requires expert human intervention, counts taken at two different times, even within the same company's portfolio, can be difficult to compare objectively. The problem is exacerbated when such non-computable comparisons are made across companies and across industries.

The consistency of CAST-computed function points makes them an ideal benchmarking standard. Because the count is made in exactly the same way every time, our customers (AT&T, AXA, and Allianz, among others) use it to quickly create a baseline benchmark against which changes in productivity can be readily measured. Benchmarking experts agree that these intra-company benchmarks are at least as valuable to IT decision makers as external benchmarks across companies, industry sectors, or geographies.

8. Can CAST-Computed Function Points be used to measure productivity?

Yes. CAST-computed function points are a solid baseline against which improvements in productivity can be measured.

Productivity is defined as the quantity of output divided by the quantity of the input. If I invest $100 today and get back $105 in one year, the productivity of my investment is 105/100 = 1.05. In software development (AD) and maintenance (AM), the inputs are the man-hours of effort logged by AD and AM teams. The output is the software product itself. The function point count of this end product - the result of the sum total of the effort logged - can be used as the output measure in the productivity equation. So in software development and maintenance, a potential measure of productivity is the number of function points per hour of effort.

However, when the effort made to improve an application does not accurately register as a commensurate change in the function point count, productivity is underestimated because legitimate input (effort applied) has increased without a commensurate change to the output (number of function points).

The Problem

For example, a sequence of Requests for Change (RFCs) might first increase and subsequently decrease the function point count, as when one RFC adds functionality and a later RFC consolidates or simplifies it. Real effort is spent on both changes, but because some of that work does not register as a net increase in output, productivity is underestimated.

[Figure: Change in Function Points Due to Change Requests]

These are problems that any function point counter must face, automated or manual. CAST provides detailed information about how the number of components and their complexity change over time. This detailed information is used to correct for productivity underestimation.

The Solution: Using Complexity Change to Correct for Productivity Underestimation

Even when the function point velocity is zero, CAST measures how each of the Data and Transaction functions has changed since the previous CAST-computed sizing. This is done at two levels: at the level of the Data and Transaction component, and at the level of the sub-elements of each Data and Transaction component. This additional information is automatically used by the CAST AIP to correct for productivity underestimation.

Let's look at the types of additional information captured by the CAST AIP.

For each Data and Transaction function element, CAST measures the following (a sketch of such a per-element record appears after the list):

  • The number of function points added since the previous CAST-computed size measurement.
  • The way in which the element has been modified since the previous CAST-computed size measurement.
  • The complexity of the Data or Transactional function and the change in its complexity since the previous CAST-computed size measurement.
  • The number of violations that have been added or deleted against the element since the previous CAST-computed size measurement.
  • The number of elements that have been deleted since the previous CAST-computed size measurement.
  • The number of sub-elements added, modified, or deleted since the previous CAST-computed size measurement, and the function points of each prior to the change. When a sub-element has been added or modified, CAST also measures:
    • its current complexity and the change in its complexity since the previous CAST-computed size measurement;
    • its number of lines of code and the change in that number since the previous CAST-computed size measurement.
  • An estimate of the effort to add or modify the Data and Transaction function elements.
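To make the shape of this data concrete, the per-element measurements above can be pictured as a record like the following. This is a hypothetical sketch: the field names are illustrative and are not the CAST AIP's actual schema.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SubElementDelta:
    """Change to one sub-element since the previous sizing (illustrative)."""
    change_kind: str         # "added", "modified", or "deleted"
    fp_before: float         # function points prior to the change
    complexity: float        # current complexity (for added/modified)
    complexity_delta: float  # change in complexity since previous sizing
    loc: int                 # current lines of code
    loc_delta: int           # change in lines of code

@dataclass
class FunctionElementDelta:
    """Changes to one Data or Transaction function element (illustrative)."""
    fp_added: float                # function points added since last sizing
    modification_kind: str         # how the element was modified
    complexity: float              # current complexity
    complexity_delta: float        # change since previous sizing
    violations_added: int          # rule violations added against the element
    violations_deleted: int        # rule violations removed
    elements_deleted: int          # elements deleted since previous sizing
    estimated_effort_hours: float  # estimated effort to add/modify
    sub_elements: List[SubElementDelta] = field(default_factory=list)
```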

By keeping track of the function points that have been added, modified, and deleted over the course of a project, CAST generates a more accurate measure of project productivity. A sketch of one way to compute such a corrected measure follows.
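As a minimal sketch (not CAST's actual formula), one way to correct for underestimation is to credit modified and deleted function points toward output with partial weights. The weights below are hypothetical calibration parameters.

```python
def project_productivity(fp_added: float, fp_modified: float,
                         fp_deleted: float, effort_hours: float,
                         w_modified: float = 0.5,
                         w_deleted: float = 0.25) -> float:
    """Function points of work output per man-hour.

    Unlike a naive net-change count, this credits the legitimate effort
    spent modifying and deleting function points. The weights are
    hypothetical, not CAST's actual values.
    """
    output = fp_added + w_modified * fp_modified + w_deleted * fp_deleted
    return output / effort_hours

# Example: 100 FP added, 60 modified, 40 deleted in 2,000 man-hours.
# Naive:     100 / 2000 = 0.050 FP/hour (ignores modify/delete work)
# Corrected: (100 + 30 + 10) / 2000 = 0.070 FP/hour
print(project_productivity(100, 60, 40, 2000))  # 0.07
```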

[Figure: Project Productivity]


9. How can CAST's automated function point counts inform key management decisions?

CAST's ability to generate size (in CAST-computed function points), complexity, and change in size and complexity from one point in the life cycle to another provides IT executives and managers with unique control over strategic and operational decisions.

  • Having information about size and complexity, and how these change over time, enables CAST to generate accurate estimates of the effort required for these changes. While this information arrives too late to drive forward-looking estimates, it is indispensable for answering a fundamental question of software AD and AM, often posed by CFOs: "I see this [AD or AM] initiative took 2,000 man-hours to complete. Should it have taken 2,000 man-hours?"
  • CAST's detailed, quantified X-ray of size and complexity changes of the critical applications in the portfolio gives IT executives the vital information they need for strategic planning, roadmap creation, budgeting and initiative prioritization.
  • IT executives and managers can better align resources, taking into account not just the size of a project but its complexity as well, to match skill sets and expertise levels to the right tasks at the right time.
  • A detailed map of component interdependencies, size, and complexity enables program and project managers to better sequence work, compressing delivery times, increasing business value, and cutting cost.
  • Sourcing managers are better able to set their Onshore-Offshore resourcing mix by matching size, complexity, and productivity data to vendor capabilities. CAST's large customers have reported savings of $500,000 per year by shifting the Onshore-Offshore ratio by 10 percentage points in the Offshore direction (e.g., from a 30-70 Onshore-Offshore blend to a 20-80 blend). The arithmetic behind such savings is sketched below.
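For intuition, here is the blended-rate arithmetic behind that kind of saving. The hourly rates and annual volume below are hypothetical, chosen only to show how a 10-point shift can plausibly yield roughly $500,000 per year.

```python
def blended_rate(onshore_rate: float, offshore_rate: float,
                 onshore_share: float) -> float:
    """Average hourly cost for a given onshore/offshore mix."""
    return onshore_share * onshore_rate + (1 - onshore_share) * offshore_rate

# Hypothetical figures: $100/hr onshore, $30/hr offshore, ~71,000 hrs/year.
hours_per_year = 71_000
cost_30_70 = blended_rate(100, 30, 0.30) * hours_per_year  # ~$3.62M
cost_20_80 = blended_rate(100, 30, 0.20) * hours_per_year  # ~$3.12M
print(round(cost_30_70 - cost_20_80))  # ~497,000 (~$500K/year)
```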


10. What steps can I take today to improve my organization's productivity?

Through our in-house expertise and our close collaboration with industry experts, we stand ready to serve your sizing, benchmarking, and productivity measurement needs today. Our engagements are typically structured in the following way.

What We Do

1. Establish a baseline value of the number of function points per man-hour against which improvement in productivity can be measured.

2. Calculate the number of function points per man-hour for every major release of a mission-critical application and compare it with the established baseline. The difference translates into an increase or decrease in productivity (a sketch of this comparison follows the list).

3. Generate detailed measures of quality and complexity for all application components and aggregate these measures up to the level of the entire application.

4. Analyze differences in productivity to highlight points of process inefficiency. Having quality and complexity information in addition to size makes it easier to find and quickly fix the root causes of inefficiency.

5. Recommend corrective action to improve productivity.

6. Measure productivity once again (by repeating steps 2 and 3) to quantify the effectiveness of the process improvement.
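A minimal sketch of steps 1 and 2, assuming effort is logged per release; the numbers are hypothetical.

```python
def productivity_change(baseline_fp_per_hour: float,
                        release_fp: float, release_hours: float) -> float:
    """Fractional change in productivity versus the baseline.

    Positive means the release was more productive than the baseline.
    """
    release_fp_per_hour = release_fp / release_hours
    return (release_fp_per_hour - baseline_fp_per_hour) / baseline_fp_per_hour

# Hypothetical: baseline 0.10 FP/hour; a release delivers 240 FP in 2,000 hrs.
print(f"{productivity_change(0.10, 240, 2000):+.0%}")  # +20%
```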

Key Deliverables

1. A map of productivity hotspots in your key applications and actionable advice to prevent productivity sinks.

2. Quarterly reports on productivity that are critical inputs to portfolio prioritization, resource allocation, and vendor management.


What You Can Do

1. Measure and communicate improvements in operational efficiency to business partners (and your CFO).

2. Use productivity results to improve estimation and resource allocation.

3. Measure and improve the effectiveness of your operational processes and controls.

11. Where can I get more information?

Call us at the CAST Customer Information Center (North America: +1 212-871-8330; Europe: +33 1 46 90 21 00). One of our technical experts will be in touch with you right away.

Select Articles on Software Productivity Measurement:

  • IT Measurement: Practical Advice from the Experts, Addison-Wesley. Capers Jones: "It is fair to assert that function point metrics are rapidly becoming the dominant metric of the software world."
  • Cutter IT Journal, June 2003. "Hitting the Sweet Spot: Metrics Success at AT&T", J. Cirone. Based on a 12-quarter trend line, reports a 1.4% quarter-over-quarter productivity improvement using an FP-based metrics program.
  • "Practical Lessons for Software Measurement", D.L. Suppin, 2003. Excerpt: "Similar improvements have been achieved on software application development projects. During a three-year period, the completion statistics of over 400 application development projects were analyzed. During that time, teams working on projects of similar application size and scope reduced their development time between 18% and 50%."
