How to Estimate Software Health and Software Size

by Michael Muller

The “Application Trends” feature, also known as delta analysis, dramatically increases the value of using CAST Highlight in an Agile context. In a nutshell, Highlight now computes software health scores and metrics for scanned source files based on their status, i.e., whether they were added or modified during the last iteration. This post explains how to work with this feature.

Added/Modified Files vs. The Cold Part of the Software

The mechanics behind the feature are pretty simple. For each file Highlight scans, a unique signature (a CRC – Cyclic Redundancy Check – technically speaking) is calculated. When a new scan occurs for the same file (same name and path), Highlight calculates a new signature. If both signatures are identical, it means that the file hasn’t changed between the first and the second scans. If the signatures are different, it means that the file has been modified between the scans. If a file wasn’t there in the first scan, it obviously means it has been added between the two scans.
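To make these mechanics concrete, here is a minimal sketch in Python of how this kind of delta detection can work. It is an illustration under simple assumptions, not CAST's actual implementation: it computes a CRC32 signature for every file under a scan root, then classifies each file of the new scan as added, modified, or unchanged by comparing against the previous scan, keyed on the file's relative path.

```python
import zlib
from pathlib import Path

def snapshot(root):
    """Map each file's path (relative to root) to a CRC32 of its contents."""
    sigs = {}
    for path in Path(root).rglob("*"):
        if path.is_file():
            sigs[str(path.relative_to(root))] = zlib.crc32(path.read_bytes())
    return sigs

def classify(previous, current):
    """Label every file of the current scan against the previous one."""
    status = {}
    for rel_path, signature in current.items():
        if rel_path not in previous:
            status[rel_path] = "added"       # wasn't there in the first scan
        elif previous[rel_path] != signature:
            status[rel_path] = "modified"    # signatures differ between scans
        else:
            status[rel_path] = "unchanged"   # identical signature: unchanged code
    return status
```

Files that appear in the previous scan but not in the current one would be deletions; the categories that matter for the KPIs below are added and modified.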

Within the specific scope of files added and modified between two sprints/iterations/scans, Highlight computes each KPI for every file-status category: Software Resiliency, Agility, Elegance, Lines of Code, and Back-Fired Function Points.
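Continuing the sketch above, this per-category computation can be pictured as a simple grouping step. The metric names and structure below are hypothetical placeholders, just to show how scores and volume counts could be rolled up per file status:

```python
from collections import defaultdict
from statistics import mean

def aggregate(status, metrics):
    """Summarize per-file metrics within each file-status category.

    `metrics` maps a relative path to per-file measurements; the keys
    ("resiliency", "loc") are hypothetical placeholders. Scores are
    averaged, while volume counts such as lines of code are summed.
    """
    buckets = defaultdict(list)
    for rel_path, file_status in status.items():
        if rel_path in metrics:
            buckets[file_status].append(metrics[rel_path])
    return {
        file_status: {
            "resiliency": mean(row["resiliency"] for row in rows),
            "loc": sum(row["loc"] for row in rows),
        }
        for file_status, rows in buckets.items()
    }
```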

How to Use Delta Analysis in Agile-Driven Code Bases

In the example below, you can see that the 28 new files introduced in the codebase during the last sprint were made up of 993 lines of code (equivalent to a total of 19 BFPs) and had very high scores in Resiliency (92.2 out of 100) and Agility (74.7), but also that the development team took advantage of working on existing files to reduce code complexity (Software Elegance on modified files is 79.8, a 20% improvement over the last scan). Visually, you can see that the activity during this sprint increased the health scores, while total BFPs grew by only 1.2%.

[Screenshot: Highlight-Code-Lines]

This delta analysis is especially appreciated by contributors to large legacy applications who want to measure the impact of their efforts on software health. Since fixing 100 issues within a 1-MLOC application is barely tangible in the overall score variation, having KPIs available only on modified and added files lets development and maintenance teams concentrate on the “living” (i.e., changed) part of their code base.

[Screenshot: Highlight-Software-Health]

Quick tip: when scanning your code with the Local Agent or the command line, ensure the folder structure remains the same across versions, as the file path is used to identify scanned files between scans.

For example, C:\src\version1.0\myfile.java will be considered a different file from D:\src\version1.5\myfile.java, and myfile.java will be counted as a new file in the second scan.
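In terms of the earlier sketch, this pitfall corresponds to signatures being keyed on paths that embed a changing folder name; scanning from a stable root (or keeping the structure identical across versions) preserves each file's identity. The drive letters and folder names below simply mirror the example above:

```python
# Scanning each version from its own root keeps relative paths stable:
old = snapshot(r"C:\src\version1.0")   # key: "myfile.java"
new = snapshot(r"D:\src\version1.5")   # key: "myfile.java"  -> comparable

# Scanning from the parent folder bakes the version into the key:
old = snapshot(r"C:\src")              # key: "version1.0\myfile.java"
new = snapshot(r"D:\src")              # key: "version1.5\myfile.java"
# These keys never match, so every file looks "added" in the second scan.
```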

That’s all folks! I’m confident you’ll be happy with this feature. Don’t hesitate to share your feedback and product experience with our product team!

Filed in: DevOps
Michael Muller, Product Owner of Cloud-Based Software Analytics & Benchmarking at CAST
Michael Muller is a 15-year veteran in the software quality and measurement space. His areas of expertise include code quality, technical debt assessment, software quality remediation strategy, and application portfolio management. Michael manages the Appmarq product and benchmark database and is part of the CAST Research Labs analysis team that generates the industry-renowned CRASH reports.