Is Every Part of the Application Equal When Assessing the Risk Level?

by Philippe-Emmanuel Douziech, Principal Research Scientist at CAST Research Lab

Risk detection is about identifying any threat that can severely and negatively impact the behavior of an application in operations, as well as its maintenance and development activity. Risk assessment, in turn, is about conveying the results of that detection through easy-to-grasp pieces of information. Part of this activity consists of highlighting what you are seeing while summarizing a plethora of information. But as soon as we utter the word "summarizing," we risk losing important context.

Splitting the application: a strength in risk assessment

An application can be considered as a whole, in its purpose of serving one area of the business, yet it is composed of multiple technical and functional parts. In other words, an application is not one single feature.

The ability to split an application into its main features, groups of features, or functional domains is critical to mapping where risky situations occur. Treating every single piece of code or software construct as equivalent with regard to the risk an application incurs is valuable for objective comparison. Yet it misses the point that these pieces serve different features, and that these features are not equal should they fail in operations. For instance:

  1. The very location where a violation occurs is key: it might sit in a piece of code or construct supporting a non-critical feature that handles no sensitive data, or, on the contrary, in one supporting a mission-critical feature that handles sensitive data behind a customer-facing front-end on the Internet.
  2. Likewise, a piece of code or software construct involved in many such critical features creates a much higher risk, even though it occurs in only one location.
  3. Taking this context into account provides a better assessment than a purely objective count, as the sketch after this list illustrates.
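
To make the idea concrete, here is a minimal sketch in Python, not any vendor's actual algorithm: each violation is weighted by the criticality of the features its code location supports, so a flaw shared by several critical features outweighs one in a low-stakes corner. All names and weights below are invented for illustration.

    # Hypothetical criticality weights per functional domain (invented values).
    FEATURE_CRITICALITY = {
        "reporting": 1,         # internal, non-critical, no sensitive data
        "order_processing": 5,  # mission-critical, customer-facing
        "payments": 10,         # mission-critical, handles sensitive data
    }

    # Each violation is (code_location, features_it_supports).
    violations = [
        ("ReportBuilder.render", ["reporting"]),
        ("OrderService.validate", ["order_processing"]),
        ("CryptoUtils.hash", ["order_processing", "payments"]),  # shared construct
    ]

    def context_weighted_risk(violations, criticality):
        """Sum each violation's weight across every feature it supports,
        so one flaw shared by several critical features counts more."""
        return sum(
            criticality[feature]
            for _location, features in violations
            for feature in features
        )

    # A raw count treats all three violations as equal...
    print(len(violations))                                         # 3
    # ...while the weighted score surfaces the shared, critical construct.
    print(context_weighted_risk(violations, FEATURE_CRITICALITY))  # 1 + 5 + 15 = 21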

The same issue holds true for application upgrades. I once faced a situation where the team in charge of evolving an application complained about the huge difficulty of performing their task, calling it "terrible to maintain." Paradoxically, the compliance ratio with the applicable coding and architectural practices was pretty good: violations affected less than one-tenth of a percent of the code. The real issue was that the few occurrences of non-compliance were located in the very part of the application they had to evolve regularly in response to business requirements. It all made sense once they knew that this small fraction of the code was the one that mattered.
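
The same intuition can be sketched in a few lines. This is a hedged illustration only: the file names and change counts are invented, and a real analysis would derive change frequency from version-control history (for example, git log).

    # Files flagged with coding/architectural violations (invented names).
    violating_files = {"pricing/rules.py", "legacy/batch.py"}

    # Hypothetical change counts over the last year, e.g. derived from git log.
    change_frequency = {
        "pricing/rules.py": 120,  # evolved constantly for business requests
        "legacy/batch.py": 1,
        "ui/dashboard.py": 40,
    }

    total_changes = sum(change_frequency.values())
    changes_hitting_violations = sum(
        count for path, count in change_frequency.items()
        if path in violating_files
    )

    # The violation ratio by volume is tiny, yet roughly 75 percent of all
    # change activity lands on violating code: hence "terrible to maintain."
    print(changes_hitting_violations / total_changes)  # ~0.75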

As it is critical to know the kind of application we are dealing with in order to adapt the risk assessment accordingly, this mapping ability provides context to the findings; it will (or should I say must) change the resulting risk level assessment.

A walkthrough

Let’s look at the following situation: four applications, each composed of 10 components:

The color is designed to provide a risk assessment of each component of these applications, with green being the right place to be and red the wrong one. Would you say the risk level is the same in these four cases?

Then, let us look at another situation:

And now this one:

They all look different, and I assume you would like to be responsible for the application shown in the first row, and would dread responsibility for the application in the third row.

And yet:

  1. All of them are based on the same defect rate (10 percent)
  2. Sample #1 uses a linear scale from green to red mapping the defect percentage from 0 to 100
  3. Sample #2 uses a linear scale from green to red mapping the defect percentage from 0 to 50, then a red plateau beyond 50 percent of defects
  4. Sample #3 uses the same scale as Sample #2, but with 3 modules that are more critical than the others (see the sketch after this list)
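
Here is a minimal sketch of the three scales, assuming the green-to-red gradient can be expressed as a single "redness" value between 0.0 (pure green) and 1.0 (pure red); the criticality factor in the third scale is an invented illustration:

    def scale_1(defect_pct):
        """Sample #1: linear from 0 to 100 percent of defects."""
        return defect_pct / 100.0

    def scale_2(defect_pct):
        """Sample #2: linear from 0 to 50 percent, solid red above."""
        return min(defect_pct / 50.0, 1.0)

    def scale_3(defect_pct, criticality=1.0):
        """Sample #3: same plateau scale, with critical modules
        amplified by a hypothetical criticality factor."""
        return min(defect_pct * criticality / 50.0, 1.0)

    # The same 10 percent defect rate renders very differently:
    print(scale_1(10))                   # 0.1 -> nearly green
    print(scale_2(10))                   # 0.2 -> yellow-green
    print(scale_3(10, criticality=5.0))  # 1.0 -> solid red for a critical module

The underlying data never changes; only the mapping from defect rate to color does, which is the whole point of the walkthrough.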

Does this mean there is no truth in any of them?

As for me, I see an opportunity to deliver better risk assessment results.

What do you look for when assessing risk in an application?