Do I look like someone who needs representative measures?

by Philippe-Emmanuel Douziech, Principal Research Scientist at CAST Research Lab

No offense, but I'm not addicted to representative measures. In some areas, I am more than happy to have them -- like the balance of my checking and savings accounts, where I want representative measures to the nearest cent.

But I don't need representative measures 100 percent of the time. On the contrary, in some areas I genuinely need non-representative measures to give me efficient guidance.

Risk level assessment

One of those areas is software analysis and measurement, especially when you're dealing with risk levels. Telling me that one application is a billionth better than another doesn't help me. I need to know quickly whether its risk level is acceptable or not. And when it's not, I need to know when it's back to normal. I don't need to know -- at first, at least -- whether I have merely improved, but not solved, the problem at hand.

It may sound "tough and unfair" to the people who were involved in the improvement, but productivity is not what I am talking about here; this is about risk, such as security- or performance-related risk -- that is, the way apps will behave in operations. In some cases, this can decide the future of your organization.

When discussing this topic with a security officer at an insurance company, I was -- rightly -- told that a component will fail its tests as long as one security vulnerability remains, even if a dozen vulnerabilities were removed. The status of this component will change only when the last occurrence of the vulnerability is removed. As a customer of this organization, I am quite happy with this "tough and unfair" assessment. If my private data were compromised, I would not accept apologies such as, "We understand your identity was stolen, but you should be pleased to know that most of the other vulnerability occurrences were removed, simply not the one the hacker just used to steal from you."
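
To make that rule concrete, here is a minimal sketch in Python. It assumes a per-component vulnerability count is available; the function name security_status is my own illustration, not any real CAST or vendor API.

```python
# A minimal sketch of the security officer's pass/fail rule described above.
# Hypothetical names: a component passes only when zero occurrences of the
# vulnerability remain, no matter how many were removed along the way.

def security_status(vulnerability_count: int) -> str:
    """Non-representative measure: any remaining occurrence means FAIL."""
    return "PASS" if vulnerability_count == 0 else "FAIL"

# Removing 11 of 12 occurrences changes nothing in the reported status:
assert security_status(12) == "FAIL"
assert security_status(1) == "FAIL"   # still fails after the cleanup effort
assert security_status(0) == "PASS"   # only the last removal flips the status
```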

Should we always overlook the progress made in mitigating the risk? Of course not. But that is a different context and a different use case. All I'm saying is that decision-making processes benefit from non-representative measures, which lead me straight to the point without the burden of mathematical "representativity" purity.

What about benchmarking and monitoring?

Benchmarking and monitoring use cases seem to dictate the need for representative measures. But, as a Frenchman, I would say "yes and no."

I won't detail the "yes" part, as it's the obvious answer; I'd rather detail the "no."

Simply because it is mathematically possible, some people will chase precision that is needless when dealing with risk mitigation. Some will focus on variations of representative measures that are mathematically correct but completely irrelevant when running a business. For example, when the ratio or number of violations is already beyond unacceptable, what good is it to compare them more accurately?

Is this obsession specific to the software analysis and measurement of security-, performance-, robustness-, changeability-, or transferability-related risk levels? If not, it might be time to re-evaluate your benchmarking as well. Other fields have already embraced capped scales: if I am to believe Wikipedia, Ireland's grading system includes an "NG" grade for situations beyond failure, "unworthy of marking."

What does it mean concretely?

That I'm okay with using non-representative measures for software risk-level management, because they offer me more options.

Some options, illustrated in a short sketch after this list, are:

  • non-linear behaviors, to model an exponential response to a risk assessment measure;
  • limited range, to provide an assessment answer that is not likely to be questioned. (Indeed, virtually unlimited values always invite the question "what is too much?" With thresholds and range limits, the result itself is the answer, such as "Unworthy of marking.")
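
Here is a hedged sketch of both options in Python. The thresholds, grade labels, and function names (risk_grade, risk_response) are illustrative assumptions of mine, not an actual CAST scale.

```python
# Two non-representative measure designs, with hypothetical thresholds.
import math

def risk_grade(violations: int) -> str:
    """Limited range: a capped grade scale ending in an 'NG'-style floor."""
    if violations == 0:
        return "A"
    if violations <= 5:
        return "B"
    if violations <= 20:
        return "C"
    return "NG"  # beyond failure, 'unworthy of marking': no finer comparison

def risk_response(violations: int) -> float:
    """Non-linear behavior: the score reacts strongly to the first violations
    and saturates toward 1.0, so later ones barely move an already-bad score."""
    return 1.0 - math.exp(-violations / 3.0)

# 100 vs. 1,000 violations both land in "NG"; comparing them more accurately
# adds nothing to the decision.
assert risk_grade(100) == risk_grade(1000) == "NG"
# The first violation moves the score far more than the hundred-and-first:
assert risk_response(1) - risk_response(0) > risk_response(101) - risk_response(100)
```

The design intent of both functions is the same: past a threshold, finer distinctions stop informing the decision.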

With detailed information available on demand, one can still get to the bottom of software risk, but the initial interpretation of results is quicker and unequivocal.
