Estimating the Hidden Costs of Cost Estimation

by Pete Pizzutillo

A recent Government Accountability Office (GAO) report found that most federal agencies, with the exception of the Department of Defense, are not properly equipped to produce accurate cost estimates for their IT infrastructure. There are many reasons for this, but the problem starts with the data being fed into most cost estimation practices and models.

For any organization, federal or commercial, the ability to credibly estimate the time and budget required to bring a project to a successful conclusion is crucial. The many benefits of good estimating have been documented over and over.

According to the GAO, federal agencies are not setting a good example in estimating their IT projects. Most have weak processes that rely on expert opinion, while some employ tools such as parametric models. At the root of any process, whether parametric or opinion-based, agencies need access to information about the systems they are supporting or seeking to develop, and this is precisely where the process begins to break down.

Incomplete, Bad and Unattainable Data

Collecting data is not cheap; it takes time and effort to do properly. When budgets are tight, data collection is often cut from programs, leaving agencies with an incomplete view of their systems. The data agencies do have is often 'dirty,' meaning that poor timekeeping or project-tracking practices have rendered it effectively meaningless. In many cases, the system integrators performing the work hold the data, but the agencies don't have access to it.

So, in lieu of data, agencies rely on expert opinion to provide the basic inputs to their estimating process. But depending on the day your expert is having, they may give you some statistics about the applications (or not) and send you on your way. How can you be sure those numbers are reliable?

The short answer is that you can't. Front-load your estimation process or model with uncertain data, and any result that comes out will be just as unreliable: garbage in, garbage out.

Shrinking the Cone of Uncertainty

Federal organizations would benefit greatly from automated software analysis and measurement systems that generate unbiased metrics about their applications. Injecting fact-based measures into the front end of an estimating process greatly shrinks the Cone of Uncertainty.

The Cone of Uncertainty describes the evolution of uncertainty in a project. In the beginning, when little is known, estimates are subject to large uncertainty. As more information is learned, the uncertainty decreases.
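The narrowing the cone describes can be made concrete with numbers. A minimal sketch, using the phase-by-phase estimate multipliers popularized by Barry Boehm and Steve McConnell (the 1,000 staff-hour point estimate is a hypothetical, not a figure from this article):

```python
# Classic Cone of Uncertainty multipliers (Boehm/McConnell): at project
# inception an estimate may be off by a factor of 4 in either direction;
# by code-complete the spread has shrunk to roughly +/-10%.
PHASES = [
    ("Initial concept",       0.25, 4.00),
    ("Approved definition",   0.50, 2.00),
    ("Requirements complete", 0.67, 1.50),
    ("Design complete",       0.80, 1.25),
    ("Code complete",         0.90, 1.10),
]

def estimate_range(point_estimate: float):
    """Return (phase, low, high) effort bounds for each project phase."""
    return [(phase, point_estimate * lo, point_estimate * hi)
            for phase, lo, hi in PHASES]

# Hypothetical 1,000 staff-hour point estimate.
for phase, low, high in estimate_range(1000):
    print(f"{phase:22s} {low:7.0f} - {high:7.0f} staff-hours")
```

The first row is the sobering one: with nothing but an early guess, a "1,000-hour" project could plausibly cost anywhere from 250 to 4,000 hours, which is exactly the spread that fact-based size measures are meant to collapse.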

Injecting an accurate measure of a system's size greatly reduces the amount of uncertainty. Supporting that size data with measures of the system's technical and functional complexity, and with an objective assessment of its underlying structural quality, reduces uncertainty even further. An estimate with little uncertainty, and therefore a high degree of confidence, is the foundation for accurately predicting a development team's productivity. This matters because planning and budgeting are, at bottom, exercises in deciding how to allocate resources and when new capabilities will be available to your clients: How many developers will I need? How long will I need them? When will they be finished?
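Those three questions reduce to simple arithmetic once size and productivity are measured quantities rather than guesses. A back-of-the-envelope sketch (the function, its parameters, and the numbers are illustrative assumptions, not the author's model):

```python
import math

def staffing_plan(size_fp: float,
                  productivity_fp_per_dev_month: float,
                  deadline_months: float):
    """Estimate total effort and the headcount needed to hit a deadline.

    size_fp: measured system size, in function points
    productivity_fp_per_dev_month: measured team productivity
    deadline_months: target delivery window
    """
    effort_dev_months = size_fp / productivity_fp_per_dev_month
    developers = math.ceil(effort_dev_months / deadline_months)
    return effort_dev_months, developers

# Hypothetical inputs: a 1,200-function-point system, a team that
# delivers 10 function points per developer-month, a 12-month deadline.
effort, devs = staffing_plan(1200, 10, 12)
print(f"{effort:.0f} developer-months -> {devs} developers for 12 months")
```

The arithmetic is trivial; the hard part, as the article argues, is that the two inputs on the left are worthless unless they come from measurement rather than opinion.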

Several sources, most notably the Standish Group's CHAOS reports, document the IT industry's legacy of poor delivery, and there are many reasons why IT projects continue to fail.

We know that most IT budgets, both federal and commercial, are spent maintaining and supporting existing systems, and it is clear that the agencies that own these systems suffer from a lack of visibility into their complexity. Without this information, any planning and budgeting effort is handicapped. The IT-intensive programs that require the most planning to deliver systems on time and on budget would improve if we could shed some light on these systems and arm agencies with objective, fact-based insight.

Making the Invisible Visible

Through static code analysis, you can measure your application in real time and gather unbiased metrics to share both internally and externally.
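A toy illustration of the idea, using Python's standard `ast` module (the metric names and the sample snippet are assumptions for the sketch, not CAST's methodology): metrics derived mechanically from the code itself come out the same no matter who runs the analysis.

```python
import ast

# Node types treated as branch points for this simple sketch.
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.Try, ast.With)

def measure(source: str) -> dict:
    """Collect a few unbiased structural metrics from Python source."""
    tree = ast.parse(source)
    return {
        "logical_lines": sum(1 for ln in source.splitlines() if ln.strip()),
        "functions": sum(isinstance(n, ast.FunctionDef)
                         for n in ast.walk(tree)),
        "branches": sum(isinstance(n, BRANCH_NODES)
                        for n in ast.walk(tree)),
    }

sample = """
def pay(amount):
    if amount > 0:
        return amount
    return 0
"""
print(measure(sample))
```

Industrial tools measure far richer properties (complexity, architectural coupling, structural quality), but the principle is the same: the numbers come from the code, not from anyone's recollection of it.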

Because these metrics come straight from the product being managed and worked on, in real time, the data is consistent across all programs. This independent, unbiased data can then be used to support program decisions about the ongoing management of the application.

When a racing team is tuning a car's engine, it doesn't ask an engineer what he thinks and run the race based solely on that opinion. It instruments the engine with sensors and monitors every metric it can capture. If your organization approaches software estimation the same way, you will build a repository of useful data that shows how your IT infrastructure has evolved and what it will take to bring it to the next level.

Be sure to flip over to Dan Galorath’s article on data driven estimation for more information on this topic.

Filed in: Risk & Security
Pete Pizzutillo is Vice President at CAST and has spent the last 15 years working in the software industry. He passionately believes Software Intelligence is the cornerstone to successful digital transformation, and he actively helps customers realize the benefits of CAST's software analytics to ensure their IT systems are secure, resilient and efficient to support the next wave of modern business.