If you read the news these days, you would think that software security is something layered on top of existing software systems. The truth, however, is that security needs to be woven into the very fabric of every system, and that begins with eliminating vulnerabilities by measuring software quality as the system is built.
During the CAST Software Quality Fall Users Group, Dr. Carol Woody, senior member of the technical staff at the Software Engineering Institute (SEI) at Carnegie Mellon University, whose research focuses on cyber security engineering, discussed the importance of software quality as a basis for security.
In her presentation, which she reprised during a webinar hosted by CISQ, Woody identifies security as a “lifecycle challenge.” Security, she says, is not something that can be layered on as an afterthought; it needs to be integrated into every stage of the software building process, from design to coding to implementation, by ensuring software quality at each step.
“The SEI has quality data for over 100 Team Software Process (TSP) development projects used to predict operational quality,” offered Woody in her presentation. “Data from five projects with low defect density in system testing reported very low or zero safety critical and security defects in production use.”
Woody cites this as evidence that a lower measured defect count predicts a decreased risk of security vulnerabilities. She also notes that only one to five percent of defects are vulnerabilities, yet 50 to 70 percent of security vulnerabilities stem from software defects.
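Those percentages translate into concrete numbers quickly. A minimal sketch of the arithmetic, using Woody's one-to-five-percent figure (the defect count here is a hypothetical example, not from her data):

```python
def estimate_vulnerabilities(total_defects, low_frac=0.01, high_frac=0.05):
    """Estimate how many defects are likely security vulnerabilities,
    using the cited figure that 1-5% of defects are vulnerabilities."""
    return (round(total_defects * low_frac), round(total_defects * high_frac))

# Hypothetical codebase with 2,000 known defects
low, high = estimate_vulnerabilities(2000)
print(f"Expected vulnerabilities: {low} to {high}")  # Expected vulnerabilities: 20 to 100
```

Even a modest defect count implies dozens of latent vulnerabilities, which is why driving down overall defect density pays off for security.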
That one to five percent doesn’t seem like it should create much trouble, but Woody points to Ross Anderson’s 2001 paper “Why Information Security is Hard,” where he states, “it’s reasonable to expect a 35,000,000 line program like Windows 2000 to have 1,000,000 bugs, perhaps only 1% of them are security-critical.”
However, even if only one percent of those one million bugs are security critical, that still leaves 10,000 possible vulnerabilities in the software. That’s 10,000 lines of code in a 35,000,000-line program, or roughly 0.03 percent of the lines of code. Talk about finding a needle in a haystack! But that’s what it can come to if software quality is not measured from the outset. If left until after the fact, organizations are stuck trying to figure out which of those three-hundredths-of-one-percent of lines of code caused a security breach of their system.
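Anderson's back-of-the-envelope figures are easy to verify directly. A quick sketch of the arithmetic used above:

```python
lines_of_code = 35_000_000   # Anderson's size estimate for Windows 2000
total_bugs = 1_000_000       # his estimated bug count
security_fraction = 0.01     # "perhaps only 1% of them are security-critical"

security_bugs = int(total_bugs * security_fraction)
share_of_code = security_bugs / lines_of_code

print(f"{security_bugs:,} security-critical bugs")  # 10,000 security-critical bugs
print(f"{share_of_code:.2%} of all lines of code")  # 0.03% of all lines of code
```

The point of the exercise: a vanishingly small fraction of the code carries the security risk, which is exactly what makes finding it after the fact so expensive.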
“If you have a quality problem then you have a security problem,” said Woody. “Quality does not happen by accident and neither does security. Neither quality nor security can be ‘tested in.’”
Woody’s answer to this kind of software risk is to ensure software quality from the start: mitigate security risk by implementing software quality approaches that focus on personal accountability at each stage of the life cycle:
- Clearly define what “right” looks like
- Measure and reward the right behaviors
- Reinforce by training, tracking and independent review
Achieving these goals, however, requires the right tools. Performing risk assessments or static analysis manually wastes resources, time, and money. What is needed is an automated source code analysis solution designed for enterprise use, capable of assessing multiple languages across complex, multi-tier infrastructures. Tools like the CAST Application Intelligence Platform can detect the poor structural quality, insufficient practices, and missed vulnerabilities that create future problems for organizations.
So the choice is yours – perform automated analysis and measurement to ensure software quality from the outset, or search for that needle of vulnerability in the haystack of millions of lines of code after security has been breached.
Erik Oltmans, an Associate Partner at EY Netherlands, spoke at the Software Intelligence Forum on how the consulting behemoth uses Software Intelligence in its Transaction Advisory services.
Erik describes the changing landscape of M&A. Beyond the financial and commercial aspects, PE firms now place equal value on technical assessments, especially for targets with significant software assets. He goes on to detail how CAST Highlight makes these assessments possible: it works with limited access to the target’s systems, provides customized quality metrics, and surfaces the liability implications of open source components, all three of which are critical for M&A due diligence.