Philippe-Emmanuel Douziech - Principal Research Scientist at CAST Research Lab

There’s no question that Cloud is no longer a passing phase. In the span of a few years, Cloud has moved from an interesting concept to a useful business tool. What began as a creative tool for testing has moved into the mainstream as a way to improve hardware utilization and expand capacity. The benefits of Cloud are well established, and more customers are moving to consumption-based models, either with captive or public Cloud solutions. Many tools exist to help with Cloud migrations, but few have the flexibility to “see through the Cloud” to the application code and make that code fit this new world.

See Through the Cloud!

It’s no surprise that organizations are moving more and more of their business-critical systems to the cloud because of its availability, speed, and ease of use. But how does this affect an organization’s ability to properly test and maintain the quality of those systems?

SODA, Anyone?

When dealing with Software Analysis and Measurement benchmarking, people’s behavior generally falls into one of the following two categories:

Why are there so many hurdles to efficient SAM benchmarking?

In my last post, I shared my opinion on the benefits of non-representative measures for some software risk mitigation use cases. But does that mean I am always better served by non-representative measures? Of course not.

No bipolar disorder here, just a pragmatic approach to different use cases, each best handled with suitably adapted information.

Representative vs. non-representative measures: Bipolar disorder?

No offense, but I’m not addicted to representative measures. In some areas, I am more than happy to have them. Like when talking about the balance of my checking and savings accounts. In that case, I’d like representative measures, to the nearest cent.

But I don’t need representative measures 100 percent of the time. On the contrary, in some areas I strongly need non-representative measures to provide me with efficient guidance.

Do I look like someone who needs representative measures?

Making technical debt visible already proves to be quite a challenge, as it’s all about exposing the underwater part of the iceberg.

But how deep underwater does it go? To know for sure, you would need the right diving equipment. To go just below the surface, you would start with a snorkel. But to go far down, you need a deep-sea exploration submersible.

Technical Debt: Principal but no interest?

Many software solutions feature the detection of duplicated source code. Indeed, this is one cornerstone of software analysis and measurement.
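
To make the idea concrete, here is a minimal sketch (in Python, not any particular vendor’s algorithm) of the most basic, purely textual form of duplication detection: hash every window of consecutive normalized lines and report the windows that occur in more than one place. The window size and the whitespace-only normalization are assumptions made for illustration.

```python
# Minimal sketch of textual code-duplication detection (illustrative only):
# collect every window of N consecutive normalized lines and report windows
# that appear in more than one location.
from collections import defaultdict
from typing import Dict, List, Tuple

WINDOW = 5  # assumed minimum number of consecutive lines to count as a duplicate


def normalize(line: str) -> str:
    """Strip indentation and trailing whitespace so layout changes don't hide duplicates."""
    return line.strip()


def find_duplicates(files: Dict[str, str]) -> Dict[Tuple[str, ...], List[Tuple[str, int]]]:
    """Map each duplicated window of lines to the (file, line number) places where it occurs."""
    occurrences = defaultdict(list)
    for name, text in files.items():
        lines = [normalize(line) for line in text.splitlines()]
        for i in range(len(lines) - WINDOW + 1):
            window = tuple(lines[i:i + WINDOW])
            if any(window):  # skip windows that are entirely blank
                occurrences[window].append((name, i + 1))
    return {w: locs for w, locs in occurrences.items() if len(locs) > 1}
```

Real analyzers go much further, normalizing identifiers and literals or comparing token and syntax-tree structures, which is exactly why not every duplication detector sees the same duplicates.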

There is code duplication detection and code duplication detection

I recently found myself in yet another endless discussion about how bug fixes and extra capacity impact the results of a Software Analysis and Measurement (SAM) assessment.

Would you be so nice as to not tell me the truth?

In my last post, we discussed the complementary nature of remediation cost and risk level assessment. As a follow-up, I wanted to dwell on objective risk level assessment. Is it even possible? If not, how close to it can we get? How valuable is an estimation of the risk level? Could it be the Holy Grail of software analysis and measurement? Or is it even worth the effort?

The Holy Grail: Objective risk level estimation

While working in a CISQ technical work group to propose the "best" quality model, one that would efficiently provide visibility into application quality (mostly to ensure reliability, performance, and security), we discussed two approaches for expressing that exposure. The first is a remediation cost approach, which measures the distance to the required internal quality level. The other is a risk level approach, which estimates the impact internal quality issues can have on the business.
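
As a rough illustration of the contrast, here is a small Python sketch with entirely hypothetical findings and weights; the Finding fields, effort figures, and critical-path multiplier are invented for the example and are not part of the CISQ model.

```python
# Illustrative contrast between the two views (hypothetical data, not the CISQ model).
from dataclasses import dataclass
from typing import List


@dataclass
class Finding:
    rule: str
    fix_hours: float        # estimated effort to remediate the violation
    severity: int           # 1 (low) .. 4 (critical)
    in_critical_path: bool  # does the flawed code sit on a business-critical transaction?


def remediation_cost(findings: List[Finding]) -> float:
    """Remediation-cost view: distance to the required quality level, expressed as effort."""
    return sum(f.fix_hours for f in findings)


def risk_level(findings: List[Finding]) -> float:
    """Risk-level view: potential business impact, amplified for critical transactions."""
    return sum(f.severity * (3.0 if f.in_critical_path else 1.0) for f in findings)


findings = [
    Finding("empty-catch-block", fix_hours=0.5, severity=4, in_critical_path=True),
    Finding("unused-variable", fix_hours=0.1, severity=1, in_critical_path=False),
    Finding("sql-query-inside-loop", fix_hours=4.0, severity=3, in_critical_path=True),
]
print(remediation_cost(findings))  # 4.6 hours of effort
print(risk_level(findings))        # 22.0 risk points, dominated by the critical-path findings
```

The point of the sketch is that a violation that is cheap to fix can still dominate the risk view when it sits on a business-critical path, so the two measures can rank the very same findings quite differently.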

Remediation cost versus risk level: Two sides of the same coin?

Risk detection is about identifying any threat that can negatively and severely impact the behavior of applications in operations, as well as the application maintenance and development activity. Then, risk assessment is about conveying the result of the detection through easy-to-grasp pieces of information. Part of this activity is about highlighting what it is you’re seeing while summarizing a plethora of information. But as soon as we utter the word "summarizing," we risk losing some important context.
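
A toy illustration of that loss of context, with made-up violation densities: a plain average over components looks almost reassuring, while a single business-critical component concentrates most of the risk.

```python
# Hypothetical violation densities per component (violations per KLOC).
violations_per_kloc = {
    "billing-engine": 42.0,   # business-critical component, heavily flawed
    "admin-ui": 3.0,
    "report-export": 2.0,
    "batch-archiver": 1.0,
}

plain_average = sum(violations_per_kloc.values()) / len(violations_per_kloc)
worst_component = max(violations_per_kloc, key=violations_per_kloc.get)

print(f"average density: {plain_average:.1f}")  # 12.0 -- the summary hides the outlier
print(f"worst component: {worst_component} "
      f"({violations_per_kloc[worst_component]:.1f} violations per KLOC)")
```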

Is Every Part of the Application equal when Assessing the Risk Level?

Risk detection is the most valid justification for the Software Analysis and Measurement activity: identifying any threat that can negatively and severely impact the behavior of applications in operations, as well as the application maintenance and development activity.

Risk Detection and Benchmarking -- Feuding Brothers?