Software Benchmarks and Benchmarking

by Pete Pizzutillo

Reifer Consultants LLC’s recent white paper, Software Benchmarks and Benchmarking, discusses the software benchmarking process and provides information on the industry software benchmarking services and products that are available. Donald J. Reifer is recognized as one of the leading figures in the fields of software engineering and management, with more than 40 years of progressive management experience in both industry and government.

[Image: Benchmark on Dean Street railway bridge. Copyright Roger Templeman; licensed for reuse under the Creative Commons Attribution-ShareAlike 2.0 license.]

Software development processes have evolved over the past decades as people have learned what works and what doesn't. Although we don't have centuries of experience as fields like house building do, we have learned a great deal. Pushback against long, formal software development processes (waterfall) has driven the shift to Agile software development, but some formal processes are still needed to support that agility. Among them is the important concept of software benchmarking.

What is software benchmarking?

Software benchmarking is the collection and comparison of data from multiple sources. Benchmarking in general doesn't necessarily have anything to do with software, and recognizing that helps you build a formal plan for software benchmarking. On the other hand, you might be benchmarking the software development process itself to determine whether teams are working as efficiently as possible.

Consider the situation of comparing insurance rates. You might make a few phone calls, visit a few websites, and gather up the data. But before you can compare that data, you need to normalize it to make sure you're not comparing apples to oranges. Are you comparing monthly rates to semi-annual rates? Are the rates for the same services? Similarly, when benchmarking software development processes, you have to make sure your data is normalized. To get the data normalized, you need to use a formal process, just as you do with software quality in general. In a sense, designing a software benchmark is similar to architecting software itself.
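To make the idea concrete, here is a minimal sketch in Python of what normalizing those insurance quotes might look like. The insurer names, rates, and billing periods are made-up assumptions for illustration, not data from the white paper:

```python
# Hypothetical quotes gathered from different insurers. Each insurer
# reports its rate over a different billing period, so the raw numbers
# are not directly comparable ("apples to oranges").
quotes = [
    {"insurer": "Acme",    "rate": 95.00,   "period": "monthly"},
    {"insurer": "Globex",  "rate": 540.00,  "period": "semi-annual"},
    {"insurer": "Initech", "rate": 1100.00, "period": "annual"},
]

# Normalize every quote to an annual cost before comparing.
PERIODS_PER_YEAR = {"monthly": 12, "semi-annual": 2, "annual": 1}

def annualize(quote):
    return quote["rate"] * PERIODS_PER_YEAR[quote["period"]]

for q in sorted(quotes, key=annualize):
    print(f'{q["insurer"]}: ${annualize(q):,.2f} per year')
```

Only after this conversion does a side-by-side comparison mean anything; the same principle applies to any software metric you collect from more than one source.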

Mr. Reifer explains the software benchmarking process as:

  1. Needs Assessment. What are you trying to accomplish? Create the questions you need answered, and determine whether those questions cover everything you need to know. Are you factoring in cost? Often software developers think of software benchmarking strictly in terms of time, i.e., which program runs faster. But benchmarking is much broader and includes cost, quality, and so on.
  2. Industry/Domain Classification. You need to determine what type of benchmarking you need based on the industry and domain. From there you can determine whether you can use standard benchmarking tests developed by experts, or if you need to develop your own, specific benchmarking tests.
  3. Data Collection. This step is, of course, vital, yet it's easy to take it too lightly. In the previous steps, you determined what data you need; in this step, you decide not just what to collect but how you're going to collect it. Some data might be collected automatically by software; other data might be collected through user surveys. The data collection process might take only minutes, or it might take months, depending on your needs. In either case, you'll want to review the data as it comes in to make sure it's consistent and provides the information you need.
  4. Data Normalization and Purification. As the data comes in, you need to make sure it's normalized. A trivial example: if some data is in US dollars and other data is in euros, you need to convert it all to a single currency. You also need to remove data that might be inaccurate or could improperly skew the results. This will likely involve standard statistical models and methods (see the sketch after this list).
  5. Data Comparison and Benchmark Preparation. After the data is collected, you have your benchmarks. More analysis takes place here, however. If the data seems incorrect, then perhaps it is; do some more digging to find out why and to decide whether the tests need to be performed again.
  6. Benchmark Reporting. The process of reporting the data is also not trivial. The results need to be presented in an accurate but usable fashion targeted at a specific audience. Business managers need different information than technical managers.
  7. Improvement Recommendations. As with any process, your benchmarking process will likely have room for improvement. Are you gathering the necessary data? Are you packaging the results into reports that are useful?
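As promised in step 4, here is a minimal sketch of what normalization and purification might look like in practice. The project costs, the fixed exchange rate, and the two-step cleanup are assumptions for illustration only; a real benchmark would use the statistical methods Mr. Reifer describes:

```python
import statistics

# Hypothetical per-project cost data gathered during data collection.
# Figures and currencies are made up for illustration.
raw = [
    {"project": "A", "cost": 120_000, "currency": "USD"},
    {"project": "B", "cost": 95_000,  "currency": "EUR"},
    {"project": "C", "cost": 130_000, "currency": "USD"},
    {"project": "D", "cost": 9_000,   "currency": "USD"},  # suspiciously low
    {"project": "E", "cost": 110_000, "currency": "EUR"},
]

EUR_TO_USD = 1.10  # assumed fixed rate, just for the sketch

# Normalization: convert everything to a single currency.
costs = [r["cost"] * (EUR_TO_USD if r["currency"] == "EUR" else 1.0)
         for r in raw]

# Purification: drop points more than 1.5 standard deviations from the
# mean -- a crude stand-in for a proper statistical outlier test.
mean, stdev = statistics.mean(costs), statistics.stdev(costs)
clean = [c for c in costs if abs(c - mean) <= 1.5 * stdev]

# Comparison and benchmark preparation: the cleaned, normalized
# figures become the benchmark you report against.
print(f"benchmark median cost: ${statistics.median(clean):,.0f}")
print(f"dropped {len(costs) - len(clean)} outlier(s)")
```

The design point is that normalization happens before purification: an outlier test run on mixed currencies would flag the wrong projects.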

Benchmarking is a form of engineering and architecture in its own right. If you want to learn more, visit Reifer Consulting, download the Software Benchmarks and Benchmarking white paper, or visit Application Measurement & Benchmarking.

 

About Reifer Consultants LLC:  Reifer Consultants LLC is a small business that specializes in software productivity benchmarking. In addition to the referenced semi-annual productivity benchmarking report that we publish, we offer the benchmarking products and services listed below. To inquire about these and other services, please get in touch with us using the contact information provided below.

Pete Pizzutillo, VP Corporate Marketing at CAST
Pete Pizzutillo is Vice President of Corporate Marketing at CAST. He is responsible for leading the integrated marketing strategies (digital and social media, public relations, partners, and events) to build client engagement and generate demand. He passionately believes that the industry has the knowledge, tools, and capability such that no one should lose customers or revenue, or damage their brand (or career), due to poor software. Pete also oversees CAST's product marketing team, whose mission is to help organizations understand how Software Intelligence supports this belief. Prior to CAST, Pete oversaw product development and product management for an estimating and planning software company in the Aerospace and Defense market. He has worked in several industries in various marketing roles and started his career as an advertising agency art director. He is a graduate of The Pennsylvania State University with degrees in Business Administration and Art. Pete lives in New Jersey with his wife and their four children. You can connect with Pete on LinkedIn or Twitter: @pizzutillo.