Risk Detection and Benchmarking -- Feuding Brothers?


Risk detection is the most valid justification for the Software Analysis and Measurement activity: identifying any threat that can severely and negatively impact the behavior of applications in operations, as well as the application maintenance and development activity.

"Most valid justification" sounds great, but it is also quite difficult to manage. Few organizations keep track of software issues that originate from the source code and architecture, so it is difficult to define objective target requirements that could support a "zero defects" approach. Without clear requirements, it is all too easy to invest one's time and resources in the wrong place: removing too few or too many non-compliant situations in the source code and architecture, or fixing the wrong part of the application.

One answer is to benchmark analysis and measurement results so as to build a predictive model: "this application is likely to be OK in operations for this kind of business, because all these similar applications show the same results."

Different needs?

On the one hand, by nature, benchmarking requires comparing apples with apples and oranges with oranges. In other words, the measurement needs to be applicable to all benchmarked applications -- and stable over time -- so as to get a fair and valid benchmarking outcome.

On the other hand, risk detection for any given project:

  1. benefits from the use of state-of-the-art "weapons", i.e., any means available to identify serious threats, kept up to date every day (as with antivirus signature lists)
  2. should not care about fair comparison -- it is never a good excuse to say that the trading application failed but showed better results than average
  3. should heed contextual information about the application to better identify threats (an acquaintance of mine -- a security guru -- once told me there are two types of software metrics: generic metrics and useful ones), i.e., information that cannot be automatically found in the source code and architecture but that would turn a non-compliant situation into a major threat. For instance: In which part of the application is it located? How much data is stored in the accessed database tables -- in production, not only in the development and testing environments? What is the functional purpose of this transaction? Which is the officially vetted input validation component?
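To make the third point concrete, here is a minimal sketch of how contextual information could adjust the severity of an automatically detected finding. All names (`Finding`, `escalate_severity`), the thresholds, and the point scheme are illustrative assumptions, not any particular tool's API:

```python
# Hypothetical sketch: adjusting a static-analysis finding's severity with
# contextual information that cannot be derived from the source code alone.
from dataclasses import dataclass

@dataclass
class Finding:
    rule: str      # e.g. "SQL injection risk"
    severity: int  # base severity from the analyzer, 1 (low) .. 4 (critical)

def escalate_severity(finding, *, prod_table_rows=0,
                      in_vetted_validator=False, business_critical=False):
    """Return a severity adjusted with application context (assumed scheme)."""
    severity = finding.severity
    if prod_table_rows > 1_000_000:  # large production data volume raises the stakes
        severity += 1
    if business_critical:            # e.g. a trading transaction
        severity += 1
    if in_vetted_validator:          # input already sanitized by the vetted component
        severity -= 1
    return max(1, min(severity, 4))  # clamp to the analyzer's 1..4 scale

# A medium finding in a business-critical path touching a large production table
# becomes critical; the same finding behind the vetted validator drops to low.
f = Finding("SQL injection risk", severity=2)
print(escalate_severity(f, prod_table_rows=5_000_000, business_critical=True))  # 4
print(escalate_severity(f, in_vetted_validator=True))                           # 1
```

The point is not the particular weights, but that the same non-compliant pattern yields very different risk once the production data volume, the business criticality, and the vetted validation component are taken into account.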


Is this ground for a divorce on account of irreconcilable differences?

Are we bound to keep the activities apart with a state-of-the-art risk detection system and a common-denominator benchmarking capability?

That would be a huge mistake, as management and project teams would use different indicators and draw different conclusions. Worst-case scenario: project teams identify a major threat they need resources to fix, but management indicators say the opposite, so management denies the request.

Now what?

Although not so simple, there are steps that can be taken to bridge the gap.

It means making sure:

  1. that "contextual information" collection is part of the analysis and measurement process
  2. that a lack of such information shows in the results (to reuse the officially vetted input validation component example: not knowing which component it is should be reported as a problem that impacts the results, not accepted as an excuse for poor results, as is much too often the case)
  3. that the quality of the information is also assessed by human auditing
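The second step above can be sketched as a completeness check: a required piece of context that was never collected penalizes the score instead of being silently ignored. The field names and the 10%-per-missing-field penalty are assumptions for illustration only:

```python
# Hypothetical sketch of step 2: missing contextual information degrades the
# assessment result rather than disappearing from it.
REQUIRED_CONTEXT = ("vetted_input_validator", "prod_data_volume", "functional_purpose")

def assess_with_context(base_score, context):
    """Penalize the score for each required context field that was not collected."""
    missing = [field for field in REQUIRED_CONTEXT if context.get(field) is None]
    penalty = 0.1 * len(missing)  # assumed: 10% per missing field
    return round(base_score * (1 - penalty), 2), missing

# Only the functional purpose was collected; the two other fields are unknown.
score, missing = assess_with_context(3.5, {"functional_purpose": "order entry"})
print(score, missing)  # 2.8 ['vetted_input_validator', 'prod_data_volume']
```

Reporting the `missing` list alongside the degraded score gives both project teams and management the same, honest picture of what the measurement does and does not know.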

Are your risk detection and benchmarking butting heads? Let us know in a comment. And keep your eyes on the blog for my next post about the benefits of a well-designed assessment model.

Philippe-Emmanuel Douziech, Principal Research Scientist
Philippe Emmanuel Douziech is a Principal Research Scientist at CAST Research Labs and is the Head of European Science Directorate at CISQ. He has worked in the software industry for more than 20 years and is skilled at assessing software risk and quality.
