Software Risk: 3 Things Every IT Manager Must Know About A Risk-Based Testing Model


Because the world of software development is so complex and modular, quality assurance and testing for software risk have become costly, time-consuming, and at times inefficient. That’s why many organizations are turning to a risk-based testing model that can identify problem areas in the code before it moves from development to testing. But be careful: hidden risks can still exist if you don’t implement the model properly throughout your organization.

What is a risk-based testing model? It’s a method of prioritizing functional tests based on the likelihood of failure, the importance of the functionality, and the weighted impact to the business if a failure occurs. The hidden risk, however, is that risk-based testing often ignores two crucial aspects of application development: the structural quality of the system, and which complex components were changed during development.
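To make that definition concrete, here is a minimal sketch of how a team might score and rank functional tests. The 1-5 rating scales, the multiplicative score, and the example test names are illustrative assumptions on my part, not a formula the model prescribes.

```python
# Minimal sketch of risk-based test prioritization (illustrative assumptions:
# 1-5 rating scales and a simple multiplicative risk score).
from dataclasses import dataclass

@dataclass
class TestCandidate:
    name: str
    likelihood_of_failure: int  # 1 (stable code) .. 5 (changes often, defect-prone)
    importance: int             # 1 (cosmetic feature) .. 5 (core functionality)
    business_impact: int        # 1 (minor annoyance) .. 5 (outage, lost revenue)

    @property
    def risk_score(self) -> int:
        # Combine the three factors; a higher score means test earlier and deeper.
        return self.likelihood_of_failure * self.importance * self.business_impact

candidates = [
    TestCandidate("login flow", likelihood_of_failure=2, importance=5, business_impact=5),
    TestCandidate("report export", likelihood_of_failure=4, importance=3, business_impact=2),
    TestCandidate("admin settings page", likelihood_of_failure=1, importance=2, business_impact=2),
]

# Run the riskiest functionality first; defer or trim the long tail.
for test in sorted(candidates, key=lambda t: t.risk_score, reverse=True):
    print(f"{test.name}: risk score {test.risk_score}")
```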

I recently gave a presentation at the TesTrek Toronto 2013 conference about some of the problem areas in risk-based testing and how to implement it in an organization. The reception was so positive that the South Western Ontario Software Quality Group asked me to present a special webinar version of the talk, which you can view on-demand here. Be sure to download the presentation and slides if you want to find the holes in your testing strategy.

Some of what we cover includes:

  1. Why managers need to know and understand the tools their development teams are using.
  2. How different parts of an application, and its integration points, introduce hidden risk into an organization.
  3. Why a piece of code that is only a small part of the overall application can carry far more risk than a single bug fix would suggest, because of how it integrates with the rest of the application (see the sketch after this list).
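As a rough illustration of point 3, the sketch below weights the risk of a change by the number of integration points the changed component has. The component names, base risk scores, and the multiplier are hypothetical; the point is only that coupling amplifies the blast radius of a small change.

```python
# Hypothetical sketch: the same size of change carries more effective risk
# when the changed component is wired into many other parts of the system.
changed_components = {
    # component name: (base risk of the change, number of callers / integration points)
    "billing-calculator": (3, 12),
    "date-formatter": (3, 2),
}

for name, (base_risk, integration_points) in changed_components.items():
    # Fan-in amplifies the consequences of a defect in this component.
    effective_risk = base_risk * (1 + integration_points)
    print(f"{name}: effective risk {effective_risk}")
```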

Let’s face it: software development organizations have become too siloed. Management rarely has any insight into how the application actually gets made, what types of tools its developers are using, or whether they’re using any at all. And it’s commonplace now for organizations to use developers from countries all over the globe, each building their individual components with no collaboration with the other teams. So when the “final” application gets sent off to testing, the testers have no idea what they’re getting. And because the application is so complex, addressing every bug quickly becomes an insurmountable task.

Because of this, when an organization begins implementing a risk-based testing strategy, there’s usually a make-or-break moment when it realizes the strategy is either going to work or fail miserably. Recently, I witnessed one of those ‘aha!’ moments first-hand.

I was doing a software risk assessment on an application that included a Java component and a COBOL mainframe component, and these components talked to each other a lot. The amusing thing was that the assessment was the first time the two programmers had ever met, even though their components were integral to each other. It was also the first time they had actually seen how their components interact with each other.

So when we presented them with the list of problems in their components, we were expecting the usual back-and-forth screaming match about whose code was at fault. But it never happened. Instead, we had a holistic conversation about how the components interacted and how the two developers could optimize their code to get it to run perfectly.

This is what makes risk-based functional analysis tools, like our Application Intelligence Platform, so powerful. They give organizations the ability to understand how each piece of the application fits into the development process as a whole. Now, rather than simply throwing the application over the wall, developers can create an outline for testers showing them exactly where the most risk lives in the codebase. It’s like giving your QA team its own drone program.

Don’t get stuck trying to use 20th century technology to fix a 21st century problem. Download the webinar, “Your Risk-Based Testing Is Missing The Real Risks”, and learn how to keep your organization’s application portfolio free of hidden risk.
