6 Best Software Productivity Measurements You Need to Implement (and 3 Worst You Need to Avoid) in 2019

Sep 24, 2019

Software productivity measurement has long eluded the software design and engineering industry, which, according to Investopedia, is the fastest-growing industry in the world.  Yet the software industry is also notorious for having some of the worst metrics of any industry.  In a recent article, Capers Jones, VP and CTO of Namcook Analytics, discussed the best and worst software metrics used in the industry.

Software Productivity Measurement Foundation: Function Points

Function points were invented at IBM and placed in the public domain in the late 1970s.  They measure the economic value of software by quantifying the business functionality it delivers.  Many newer software productivity measurements have been introduced since, such as story points and use case points, but none of them truly captures software size, remains technology agnostic, and lends itself to international standardization the way function points do.
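
For readers who have never seen a function point count, here is a minimal sketch of how a simplified, unadjusted IFPUG-style count comes together. The complexity weights are the commonly published IFPUG values; the component counts are an invented example, and a certified count involves far more rules than this.

```python
# Minimal sketch of an unadjusted IFPUG-style function point count.
# The weights are the commonly published IFPUG complexity weights;
# the component counts below are invented for illustration only.

WEIGHTS = {
    "external_input":          {"low": 3, "average": 4,  "high": 6},
    "external_output":         {"low": 4, "average": 5,  "high": 7},
    "external_inquiry":        {"low": 3, "average": 4,  "high": 6},
    "internal_logical_file":   {"low": 7, "average": 10, "high": 15},
    "external_interface_file": {"low": 5, "average": 7,  "high": 10},
}

def unadjusted_function_points(components):
    """components: list of (component_type, complexity) tuples."""
    return sum(WEIGHTS[ctype][complexity] for ctype, complexity in components)

# Hypothetical application: 10 average inputs, 6 average outputs,
# 4 low inquiries, 3 average internal files, 2 low interface files.
app = ([("external_input", "average")] * 10 +
       [("external_output", "average")] * 6 +
       [("external_inquiry", "low")] * 4 +
       [("internal_logical_file", "average")] * 3 +
       [("external_interface_file", "low")] * 2)

print(unadjusted_function_points(app))  # 122 unadjusted function points
```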

There are two main international standards for function points:

  1. International Function Points User Group (IFPUG)

  2. Object Management Group (OMG) - Automated Function Points

Because of these well-established standards, function point-based benchmarks are abundant, allowing organizations to set internal targets and compare themselves against the rest of the industry.

Top 6 Software Productivity Measurements

To deploy a successful software metrics program in your enterprise, you need to include the following metrics; a short sketch after the list shows how the arithmetic works for each one.

  1. Work Hours per Function Point
    Measures the number of work hours needed to develop or maintain one function point.  This productivity metric lets organizations track the efficiency of their software delivery over time.  Benchmarks from organizations like Namcook Analytics and Gartner are also available.

  2. Defect Potentials Using Function Points
    The sum of defects likely to be found in requirements, architecture, design, source code, and documents.  Invented by IBM, defect potential is a prediction of the defects that will be introduced across the SDLC.  The number should be adjusted against actuals, but it serves as a strong indicator of development productivity.

  3. Defect Removal Efficiency (DRE)
    The original IBM version of DRE measured internal defects found by developers and compared them to external defects found by clients in the first 90 days following release.  If developers found 90 bugs and clients reported 10 bugs, DRE is 90% (The Economics of Software Quality).  

  4. Delivered Defects per Function Point
    Delivered defects per function point measures how many defects are present in delivered software, normalized by function points.  This is an important overarching software quality metric.

  5. High-Severity Defects per Function Point
    Not all defects are created equal!  High-severity defects are typically those that cause (or can cause) production incidents.  By measuring high-severity defects per function point, organizations can track and predict the work needed based on the type of release.

  6. Security Flaws per Function Point
    Security flaws are those that expose your software to potential exploitation, data breaches, and, worst of all, a PR disaster.  Tracking the level of security vulnerabilities over time, and ensuring that it trends downward, is an important activity in software quality assurance.
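
To make the arithmetic behind these six metrics concrete, here is a minimal sketch.  Every figure (function points, hours, defect counts) is invented for illustration; only the formulas matter, with DRE computed as internal defects divided by internal plus external defects, per The Economics of Software Quality.

```python
# Illustrative calculation of the six function point-based metrics above.
# All numbers are hypothetical.

function_points   = 1000   # size of the delivered release
work_hours        = 12500  # total development and maintenance effort
defect_potential  = 3200   # predicted defects across requirements, design, code, docs
internal_defects  = 2900   # defects found by the team before release
external_defects  = 120    # defects reported by clients in the first 90 days
delivered_defects = 150    # defects present in the delivered software
high_severity     = 18     # delivered defects that caused (or could cause) incidents
security_flaws    = 6      # delivered defects that are exploitable vulnerabilities

work_hours_per_fp        = work_hours / function_points                              # 12.5
defect_potential_per_fp  = defect_potential / function_points                        # 3.2
dre                      = internal_defects / (internal_defects + external_defects)  # ~96%
delivered_defects_per_fp = delivered_defects / function_points                       # 0.15
high_severity_per_fp     = high_severity / function_points                           # 0.018
security_flaws_per_fp    = security_flaws / function_points                          # 0.006

print(f"Work hours per FP:         {work_hours_per_fp:.2f}")
print(f"Defect potential per FP:   {defect_potential_per_fp:.2f}")
print(f"Defect removal efficiency: {dre:.1%}")
print(f"Delivered defects per FP:  {delivered_defects_per_fp:.3f}")
print(f"High-severity per FP:      {high_severity_per_fp:.3f}")
print(f"Security flaws per FP:     {security_flaws_per_fp:.3f}")
```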

The 3 Worst Software Productivity Measurements

Don’t fall into the same trap that many organizations have.  Avoid relying heavily on the following metrics in your software metrics program.

  1. Cost per Defect
    There are several issues with the cost per defect metric.  First, cost per defect is always lowest where the most defects are found: the cost of writing and running test cases is largely fixed, so as quality improves and fewer defects remain, that fixed cost is spread over fewer defects and each one appears more expensive (see the first worked example after this list).  For the same reason, because more bugs are found early in the SDLC than at the end, the apparent rise in cost per defect late in the SDLC is largely artificial.  Finally, the metric ignores the savings from shorter development schedules and the overall boost in productivity that better quality brings.  As a result, cost per defect understates the true value of quality.

  2. Any Metrics based on Lines of Code (LOC)
    Metrics based on LOC penalize high-level languages and give low-level languages an artificial advantage (see the second worked example after this list).  LOC metrics also ignore the software complexity that comes from architecture and design.  Further, there is no standard for counting LOC: about half of all publications that use LOC metrics count physical lines and half count logical statements, and the two can differ by as much as 500% on the same piece of software.

  3. Use Case Points
    Use case points are used by projects whose designs are based on “use cases,” which often follow IBM’s Rational Unified Process (RUP).  There are no standards for use cases; they are ambiguous, and counts vary by over 200% from company to company and project to project.
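
Here is a minimal worked example of the fixed-cost effect described in point 1.  The dollar figures and defect counts are invented; the point is only that the same fixed testing cost spread over fewer defects makes each defect look more expensive, even though the total cost went down.

```python
# Hypothetical illustration of why cost per defect penalizes quality.
# Assume the cost of writing and running the test suite is fixed,
# and each defect found costs a flat amount to repair.

FIXED_TEST_COST = 10_000   # $ to write and run the test cases (invented)
REPAIR_COST     = 100      # $ to fix one defect (invented)

def cost_per_defect(defects_found):
    total = FIXED_TEST_COST + REPAIR_COST * defects_found
    return total / defects_found, total

low_quality  = cost_per_defect(500)   # buggy release: many defects found
high_quality = cost_per_defect(50)    # cleaner release: few defects found

print(f"Low-quality release:  ${low_quality[1]:,} total, ${low_quality[0]:,.0f} per defect")
print(f"High-quality release: ${high_quality[1]:,} total, ${high_quality[0]:,.0f} per defect")
# Low-quality release:  $60,000 total, $120 per defect
# High-quality release: $15,000 total, $300 per defect
# The cleaner release costs far less overall, yet looks worse on cost per defect.
```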
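
And a similarly hedged sketch of the language penalty in point 2.  The line counts and effort figures are invented, but they mirror the classic argument: a higher-level language delivers the same functionality in fewer lines and fewer hours, yet scores worse on LOC per hour.

```python
# Hypothetical illustration of how LOC metrics penalize high-level languages.
# The same feature set implemented twice; all numbers invented.

projects = {
    "low-level language (e.g. C)":     {"loc": 12_000, "hours": 1_500},
    "high-level language (e.g. Java)": {"loc":  4_000, "hours":   700},
}

for name, p in projects.items():
    loc_per_hour = p["loc"] / p["hours"]
    print(f"{name}: {loc_per_hour:.1f} LOC/hour over {p['hours']} hours")

# low-level language (e.g. C):     8.0 LOC/hour over 1500 hours
# high-level language (e.g. Java): 5.7 LOC/hour over 700 hours
# The high-level project finished in less than half the time, but LOC/hour makes
# it look less productive: the metric rewards writing more code, not delivering value.
```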

What software productivity measurements are you using today? Talk to a software metrics expert at CAST to learn better ways to measure software with the CAST Application Intelligence Platform (AIP), the only solution that offers OMG-standard Automated Function Points measurement.