Risk Detection and Benchmarking -- Feuding Brothers?


Risk detection is the most valid justification for the Software Analysis and Measurement activity: identify any threat that can severely and negatively impact the behavior of applications in operations, as well as the application maintenance and development activity.

"Most valid justification" sounds great, but it’s also quite difficult to manage. Few organizations keep track of software issues that originate from the software source code and architecture so that it is difficult to define objective target requirements that could support a "zero defects" approach. Without clear requirements, it is the best way to invest one's time and resources in the wrong place: removing too few or too much non-compliant situation in the software source code and architecture, or in the wrong part of the application.

One answer is to benchmark analysis and measurement results so as to build a predictive model: this application is likely to be OK in operations for this kind of business because all these similar applications showed the same results.
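
To make that concrete, here is a minimal sketch in Python -- with entirely hypothetical scores and peer data -- of what positioning one application against a benchmark of similar applications could look like:

```python
# Minimal sketch with hypothetical data: position one application's measurement
# result within a peer group of similar applications.

def percentile_rank(value, peer_values):
    """Share of peer applications scoring at or below `value` (higher is better here)."""
    return sum(v <= value for v in peer_values) / len(peer_values)

# Hypothetical robustness scores for applications serving the same kind of business.
peer_robustness = [2.1, 2.4, 2.6, 2.8, 2.9, 3.0, 3.1, 3.3, 3.4, 3.6]
my_app_robustness = 3.2

rank = percentile_rank(my_app_robustness, peer_robustness)
print(f"Application sits at the {rank:.0%} mark of its peer group")
# Predictive reading: peers with comparable results behaved well in operations,
# so this application is likely to behave well too.
```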

Different needs?

On the one hand, by nature, benchmarking requires comparing apples with apples and oranges with oranges. In other words, the measurement needs to be applicable to all benchmarked applications and stable over time, so as to get a fair and valid benchmarking outcome.

On the other hand, risk detection for any given project:

  1. benefits from the use of state-of-the-art "weapons", i.e., any means to identify serious threats, which should be kept up to date every day (as with software virus lists)
  2. should not care about fair comparison. It is never a good excuse to say that the trading application failed but that it showed better results than average
  3. should heed contextual information about the application to better identify threats (an acquaintance of mine -- a security guru -- once said to me that there are two types of software metrics: generic metrics and useful ones), i.e., use information that cannot be automatically found in the source code and architecture but that would turn a non-compliant situation into a major threat, as sketched below. For instance: In which part of the application is the situation located? How much data is stored in the accessed database tables -- in production, not only in the development and testing environments? What is the functional purpose of this transaction? Which is the officially vetted input validation component?
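
As a rough illustration -- all names, fields, and thresholds below are hypothetical -- contextual information can be modeled as data attached to each finding, so that the same non-compliant situation is graded differently depending on where it lives and what it touches:

```python
# Minimal sketch with hypothetical names: enrich a static-analysis finding with
# contextual information that cannot be derived from the code alone.
from dataclasses import dataclass

@dataclass
class Finding:
    rule: str          # e.g. "SQL query built by string concatenation"
    component: str     # where in the application the situation is located

@dataclass
class Context:
    business_critical: bool             # functional purpose of the transaction
    production_rows: int                # data volume of the accessed tables, in production
    uses_vetted_input_validation: bool  # goes through the officially vetted component?

def severity(finding: Finding, ctx: Context) -> str:
    """Turn a generic non-compliant situation into a contextual threat level."""
    if ctx.business_critical and ctx.production_rows > 1_000_000 \
            and not ctx.uses_vetted_input_validation:
        return "major threat"
    if ctx.business_critical or not ctx.uses_vetted_input_validation:
        return "significant risk"
    return "housekeeping"

print(severity(
    Finding(rule="SQL query built by string concatenation", component="trading/order-entry"),
    Context(business_critical=True, production_rows=50_000_000,
            uses_vetted_input_validation=False),
))  # -> "major threat"
```

The only point of the sketch is that the grading function takes the context as an explicit input instead of hard-coding one severity per rule.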


Is this grounds for a divorce on account of irreconcilable differences?

Are we bound to keep the two activities apart, with a state-of-the-art risk detection system on one side and a lowest-common-denominator benchmarking capability on the other?

That would be a huge mistake, as management and project teams would use different indicators and draw different conclusions. Worst-case scenario: project teams identify a major threat they need resources to fix, but management indicators say the opposite, so management denies the request.

Now what?

Although it is not so simple, there are steps that can be taken to bridge the gap.

The key is to make sure:

  1. that "contextual information" collection is part of the analysis and measurement process
  2. that a lack of such information shows in the results (with the officially vetted input validation component example: not knowing which component is vetted is a problem that should impact the results, not an excuse for poor results, which is much too often the case) -- see the sketch after this list
  3. that the quality of the information is also assessed by human auditing
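
A minimal sketch of point 2, assuming a hypothetical scoring rule: when the contextual information is missing, the gap itself degrades the result instead of being silently ignored.

```python
# Minimal sketch with hypothetical scoring rules: a missing piece of contextual
# information degrades the assessment instead of being quietly skipped.
def assess_input_validation(inputs, vetted_component=None):
    """Return an assessment where "unknown vetted component" is itself a problem."""
    if vetted_component is None:
        # The gap in contextual information shows up directly in the result.
        return {"score": 0.0, "note": "vetted input validation component not identified"}
    unvalidated = [i for i in inputs if i.get("validator") != vetted_component]
    score = 1.0 - len(unvalidated) / max(len(inputs), 1)
    return {"score": score, "note": f"{len(unvalidated)} inputs bypass {vetted_component}"}

inputs = [
    {"name": "order_form", "validator": "SafeInputFilter"},
    {"name": "search_box", "validator": None},
]
print(assess_input_validation(inputs))                     # context missing -> worst score
print(assess_input_validation(inputs, "SafeInputFilter"))  # context known -> measured score
```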

Are your risk detection and benchmarking butting heads? Let us know in a comment. And keep your eyes on the blog for my next post about the benefits of a well-designed assessment model.

Philippe-Emmanuel Douziech, Principal Research Scientist at CAST Research Lab