Five Reasons You MUST Measure Software Complexity

Mar 24, 2015

There’s an old adage in the IT industry – you can’t manage what you can’t measure. Knowing how complex an organization’s application portfolio is provides insight into how to manage it best. The problem is that the factors that make up software complexity – legacy system remnants, antiquated code, overwritten and rewritten code, the integration of formerly proprietary applications, and so on – are the same things that make it difficult to measure.

With multiple system interfaces and complex requirements, the complexity of software systems sometimes grows beyond control, rendering applications and portfolios too costly to maintain and too risky to enhance. Left unchecked, software complexity can run rampant in delivered projects, leaving behind bloated, cumbersome applications. In fact, Alain April, an expert in the field of IT maintenance, has stated, “the act of maintaining software necessarily degrades it.”

Complexity Metrics

Fortunately, there have been many methods developed for measuring software complexity. While some may differ slightly from others, most break down software complexity according to the following metrics:

Cyclomatic Complexity: measures how much control flow exists in a program - for example, in RPG, operation codes such as IF, DO, and SELECT. Programs with more conditional logic are more difficult to understand, so measuring cyclomatic complexity reveals how much there is to manage. Using cyclomatic complexity by itself, however, can produce misleading results. A module can be complex yet have few interactions with outside modules. A module can also be relatively simple yet highly coupled to many other modules, which substantially increases the overall complexity of the codebase. In the first case, the complexity metrics will look bad; in the second, they will look good, but the result will be deceptive. It is therefore important to also measure the coupling and cohesion of the modules in the codebase in order to get a true system-level measure of software complexity.
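
The idea can be sketched in a few lines of Python: McCabe cyclomatic complexity is, roughly, the number of decision points plus one. The set of syntax nodes treated as decision points below is an assumption for illustration - real tools such as radon or lizard classify more carefully (for example, how `and`/`or` are counted).

```python
import ast

# Node types treated as decision points. This list is a simplifying
# assumption; production tools differ on which constructs add a path.
DECISION_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.BoolOp)

def cyclomatic_complexity(source: str) -> int:
    """Approximate McCabe complexity: decision points + 1 entry path."""
    tree = ast.parse(source)
    decisions = sum(isinstance(node, DECISION_NODES)
                    for node in ast.walk(tree))
    return decisions + 1

snippet = """
def grade(score):
    if score >= 90:
        return "A"
    elif score >= 80:
        return "B"
    return "C"
"""
print(cyclomatic_complexity(snippet))  # 3: the entry path plus two branches
```

The `elif` counts as a second `if` node in the syntax tree, which is why the example scores 3 rather than 2.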

Halstead Volume: measures how much “information” is in the source code and needs to be learned. This metric looks at how many variables are used and how often they are used in programs, functions and operation codes. All of these are additional pieces of information programmers must learn, and they all affect data flow.
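
A back-of-the-envelope version of Halstead volume is V = N x log2(n), where N is the total count of operators and operands and n is the number of distinct ones. The Python sketch below uses the standard tokenizer; the way tokens are classified as operators versus operands here is a simplifying assumption, and real implementations differ.

```python
import io
import keyword
import math
import tokenize

def halstead_volume(source: str) -> float:
    """Rough Halstead volume: V = N * log2(n). Classification of
    operators vs. operands is a simplifying assumption."""
    operators, operands = [], []
    for tok in tokenize.generate_tokens(io.StringIO(source).readline):
        if tok.type == tokenize.OP or (tok.type == tokenize.NAME
                                       and keyword.iskeyword(tok.string)):
            operators.append(tok.string)
        elif tok.type in (tokenize.NAME, tokenize.NUMBER, tokenize.STRING):
            operands.append(tok.string)
    N = len(operators) + len(operands)            # total occurrences
    n = len(set(operators)) + len(set(operands))  # distinct symbols
    return N * math.log2(n) if n else 0.0

# "x = a + b" has 2 operators (=, +) and 3 operands (x, a, b),
# all distinct, so V = 5 * log2(5).
print(round(halstead_volume("x = a + b\n"), 2))
```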

Maintainability index: formulates an overall score of how maintainable a program is. Unlike Cyclomatic Complexity and Halstead Volume, the Maintainability Index is more of an empirical measurement, having been developed over a period of years by consultants working with Hewlett-Packard and its software teams. The index weighs Cyclomatic Complexity and Halstead Volume against the number of lines of source code and number of lines of comments to give an overall picture of software complexity.
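
The two metrics above feed directly into one widely cited form of the Maintainability Index - the classic four-factor formula with a comment bonus term. The exact constants, scaling, and how the comment percentage is expressed all vary between tools, so the function below is a sketch under those assumptions, not a standard.

```python
import math

def maintainability_index(halstead_volume: float,
                          cyclomatic_complexity: float,
                          lines_of_code: int,
                          comment_fraction: float = 0.0) -> float:
    """One common variant of the Maintainability Index. Higher scores
    indicate easier maintenance. comment_fraction is assumed here to be
    the fraction of comment lines (0.0-1.0); tools differ on this."""
    return (171
            - 5.2 * math.log(halstead_volume)
            - 0.23 * cyclomatic_complexity
            - 16.2 * math.log(lines_of_code)
            + 50 * math.sin(math.sqrt(2.4 * comment_fraction)))

# A small module with modest volume and complexity scores well;
# the comment term rewards documented code.
print(round(maintainability_index(120.0, 4, 60, 0.2), 1))
```

Note how the logarithms dampen the raw metrics: doubling Halstead volume or lines of code lowers the score by a constant amount rather than halving it.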

Organizations that have this information can capitalize in a number of ways. Here are the top five:

  1. Greater Predictability: knowing the level of complexity of the code being maintained makes it easier to predict how much maintenance a program will need.
  2. Software Risk Mitigation: managing software complexity lowers the risk of introducing defects into production.
  3. Reduced Costs: being proactive when it comes to keeping software from becoming excessively or unnecessarily complex lowers maintenance costs because an organization can be prepared for what is coming.
  4. Extended Value: as illustrated in the CRASH report from past years, excessively complex applications cause issues. Organizations can preserve the value of their software assets and prolong their usefulness by keeping complexity in check.
  5. Decision Support: sometimes code can be so complex that it just is not worth saving. With evidence of how much a rewrite would cost, a decision can be made whether it is better to maintain the existing code or rewrite it.

Fred Brooks, in his landmark paper, No Silver Bullet — Essence and Accidents of Software Engineering, asserts that there are two types of complexity. Essential complexity is the unavoidable complexity required to fulfill the functional requirements. Accidental complexity is the additional complexity introduced by poor design or a lack of complexity management. Left unchecked, non-essential complexity can get out of hand, leaving behind a poor total cost of ownership (TCO) equation and additional risk to the business.

Excess software complexity can negatively affect developers’ ability to manage the interactions between layers and components in an application. It can also make specific modules difficult to enhance and to test. Every piece of code must be assessed to determine how it will affect the application in terms of robustness and changeability. Software complexity is a major concern among organizations that manage numerous technologies and applications within a multi-tier infrastructure.

The Benefits of Software Complexity Analysis

Without the use of dependable software complexity metrics, it can be difficult and time-consuming to determine the architectural hotspots where risk and cost emanate. More importantly, continuous software complexity analysis enables project teams and technology management to get ahead of the problem and prevent excess complexity from taking root.

When measuring complexity, it is important to look holistically at coupling, cohesion, SQL complexity, use of frameworks, and algorithmic complexity. It is also important to have an accurate, repeatable set of complexity metrics, consistent across the technology layers of the application portfolio to provide benchmarking for continued assessment as changes are implemented to meet business or user needs. A robust software complexity measurement program provides an organization with the opportunity to:

  • Improve Code Quality
  • Reduce Maintenance Cost
  • Heighten Productivity
  • Increase Robustness
  • Meet Architecture Standards

Automated analysis based on defined software complexity algorithms provides a comprehensive assessment regardless of application size or frequency of analysis. Automation is objective, repeatable, consistent and cost effective. A software complexity measurement regime should be implemented for any organization attempting to increase the agility of software delivery.

When it comes to measuring, the importance can best be summed up by paraphrasing Muhammad Ali’s famous saying, “You can’t hit what you can’t see.” Measuring software complexity gives organizations the insight they need to hit (i.e., perform application portfolio management) effectively.