Reduce False Positives in Application Security Testing

by Srinivas Kedarisetty

Security testing of software is critical to the business, but it is only useful if teams can act on the issues it finds. What good are findings if their sheer volume prevents you from resolving them? It is therefore imperative that any testing, and security testing in particular, produce results that are accurate and actionable.

Let us look at how accuracy is defined in security testing. When we verify the results of a scan, each finding (or non-finding) falls into one of four scenarios:

  1. True Negative (TN) – the result reports no vulnerability, and none is present.
  2. True Positive (TP) – the result reports a vulnerability that is actually present.
  3. False Negative (FN) – the result reports no vulnerability, but an undetected vulnerability is present.
  4. False Positive (FP) – the result reports a vulnerability where none exists.

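To make these categories concrete, here is a minimal Python sketch that tallies them from manually verified scan results and derives the usual accuracy metrics. The data structure and field names are illustrative assumptions, not any particular tool's output format.

```python
# Minimal sketch: computing accuracy metrics from verified scan results.
# Each entry pairs the tool's verdict with the ground truth established
# during manual verification; the field names are illustrative.

findings = [
    {"tool_flagged": True,  "actually_vulnerable": True},   # TP
    {"tool_flagged": True,  "actually_vulnerable": False},  # FP
    {"tool_flagged": False, "actually_vulnerable": True},   # FN
    {"tool_flagged": False, "actually_vulnerable": False},  # TN
]

tp = sum(f["tool_flagged"] and f["actually_vulnerable"] for f in findings)
fp = sum(f["tool_flagged"] and not f["actually_vulnerable"] for f in findings)
fn = sum(not f["tool_flagged"] and f["actually_vulnerable"] for f in findings)
tn = sum(not f["tool_flagged"] and not f["actually_vulnerable"] for f in findings)

precision = tp / (tp + fp)            # how many reported issues are real
recall = tp / (tp + fn)               # how many real issues get reported
false_positive_rate = fp / (fp + tn)  # noise among the clean cases

print(f"precision={precision:.2f} recall={recall:.2f} fpr={false_positive_rate:.2f}")
```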
The objective of any tool is to maximize True Positives and True Negatives while minimizing False Negatives and False Positives.

While this objective looks obvious, it is not easy to achieve. When a tool is tuned to reduce False Positives, there is a side effect: an increase in False Negatives. This trade-off between False Positives and False Negatives needs to be clearly understood by IT management in an enterprise. Business stakeholders may prefer one type of error over the other, so it is important for the IT and business teams to work through it together.

What can help you decide?

Cost of a False Positive (resources spent investigating a non-issue) {<, >, =} Cost of a False Negative (repercussions of not addressing a real flaw).

Often, in industries such as defense, healthcare and financial services, the cost of a False Negative is much higher than the cost of a False Positive. For instance, the cost of a criminal hacking into a bank's systems through an ignored flaw far exceeds the cost of a few analysts validating non-issues that were incorrectly reported as vulnerabilities. Hence, most tools are skewed toward producing false positives rather than false negatives.
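As a back-of-the-envelope way to frame this comparison, consider the sketch below; all figures are made-up assumptions for illustration, not industry benchmarks.

```python
# Back-of-the-envelope comparison of error costs.
# All numbers are illustrative assumptions, not benchmarks.

cost_per_false_positive = 500        # analyst time spent triaging a non-issue
cost_per_false_negative = 500_000    # expected breach cost from a missed flaw

expected_false_positives = 200       # per release, say
expected_false_negatives = 1

fp_cost = expected_false_positives * cost_per_false_positive
fn_cost = expected_false_negatives * cost_per_false_negative

# If missed flaws dominate, tune the tool toward sensitivity (more FPs);
# if triage cost dominates, tune it toward precision (fewer FPs).
print("tune for sensitivity" if fn_cost > fp_cost else "tune for precision")
```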

Implications of a huge number of False Positives

Once a tool is known for crying wolf, testers learn to ignore even the real issues. If a person must manually check every reported vulnerability to determine whether it is a False Positive (FP) or a True Positive (TP), then the testing is only as good as that person. This defeats the purpose of using a tool built on years of research.

We know that False Positives cannot be eliminated entirely. So, let us see what we can do to minimize them:

  • One way is to couple your security testing with software intelligence. By knowing which parts of the application are "dead code" or libraries that are never invoked, and which parts are actually used by the rest of the application, we can ignore the flaws reported in unused code (see the sketch after this list).
  • Another option is for the tool to automatically verify its findings by exploiting the identified flaws and presenting the user with a proof of exploitation.
  • Machine learning algorithms can be used to classify whether a finding is a False Positive or a True Positive. Even where a clear-cut classification is difficult, the probability that a result is a False Positive can be published. As with any ML algorithm, the accuracy of the results improves as more data is added to the system.
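Here is a minimal sketch of the first idea: suppressing findings that sit in code the application never reaches. The reachable_functions set stands in for the output of a call-graph or software-intelligence tool and is hard-coded purely for illustration.

```python
# Sketch: suppress findings located in code the application never invokes.
# "reachable_functions" would come from a software-intelligence / call-graph
# analysis; here it is hard-coded as an illustrative assumption.

reachable_functions = {"app.login", "app.search", "app.checkout"}

scan_findings = [
    {"rule": "SQL Injection",  "function": "app.search"},
    {"rule": "XSS",            "function": "legacy.report_export"},  # dead code
    {"rule": "Path Traversal", "function": "app.checkout"},
]

actionable = [f for f in scan_findings if f["function"] in reachable_functions]
suppressed = [f for f in scan_findings if f["function"] not in reachable_functions]

print(f"{len(actionable)} actionable, {len(suppressed)} suppressed as dead code")
```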
Srinivas Kedarisetty, Security Product Owner
Srinivas has more than 18 years of experience in leading IT delivery teams across India, the U.S. and Europe while managing product security, microservices and SDK. Highly skilled in developing and driving products from conception through the entire product lifecycle, Srinivas has a track record of improving products and teams to create value for customers.