Individual Code Quality in an Enterprise Software Development World

by Pete Pizzutillo

Finding the right tools for the right challenge

The growing cost of most software development efforts can be traced back to one underlying cause: a lack of visibility into the software. As the size and complexity of business-critical applications grow -- along with the complexity of sourcing environments -- app owners, architects, and developers increasingly need to truly understand their codebases. Without visibility into the implementation, a developer cannot grasp all the nuances of the code, which explains the disproportionate amount of time developers spend identifying the root cause of defects.


Beyond wasted time, the lack of visibility leads to poor overall code quality and hinders the development team in several ways:

  • Myopic design decisions made locally that produce an inconsistent, fragile system architecture
  • New code placed in the wrong modules, eroding the architecture
  • Inability to test new code effectively

Code Quality is a Cumulative Problem

It’s natural for code quality to deteriorate over the life cycle of an application. On systems with poor code quality, updating the architecture, adding new features, and fixing bugs all take longer and introduce more defects along the way. This phenomenon echoes James Q. Wilson and George Kelling’s Broken Windows theory: if you’re working in clean code, you’ll keep it clean, but if it’s already a mess, you’ll probably jury-rig your changes so you can finish as quickly as possible.

Dr. Bill Curtis explores this idea further in “Modern Software Productivity Measurement”: “Software productivity usually declines across subsequent releases of an application, caused in part by continuing enhancements and modifications that degrade the architectural integrity and overall quality of the application since these changes frequently inject new defects or make the application more complex. This is why it is imperative to evaluate the productivity of the current release in the context of how it may affect the productivity of future releases.”

Code quality is everyone’s problem -- and everyone’s responsibility

While code reviews are one of the most popular techniques for improving code quality, doing them effectively requires that the development team have good visibility into the codebase. Without it, effective reviews and discussions of potential remediation are impossible.
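As one illustration, even a lightweight automated check can restore some of that visibility before a review begins. The sketch below uses only the Python standard library; the branch-counting heuristic and the threshold are assumptions invented for this example, not CAST’s tooling:

```python
# Sketch: flag likely review hotspots by counting branching constructs
# per function. A crude proxy for cyclomatic complexity, meant only to
# tell reviewers where to look first.
import ast
import sys

THRESHOLD = 10  # assumed cutoff; tune per team


def decision_points(func: ast.AST) -> int:
    """Crude complexity proxy: 1 + the number of branching nodes."""
    branches = (ast.If, ast.For, ast.While, ast.Try, ast.With, ast.BoolOp)
    return 1 + sum(isinstance(node, branches) for node in ast.walk(func))


def hotspots(path: str):
    """Yield (function name, line, score) for functions over the threshold."""
    tree = ast.parse(open(path).read(), filename=path)
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            score = decision_points(node)
            if score > THRESHOLD:
                yield node.name, node.lineno, score


if __name__ == "__main__":
    for path in sys.argv[1:]:
        for name, line, score in hotspots(path):
            print(f"{path}:{line} {name} complexity~{score}")
```

Run against the files in a change set, a check like this tells reviewers where the riskiest discussion should start instead of leaving them to skim the whole diff.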

Further complicating the issue, code quality improvement on large, complex systems is not sustainable using manual methods. Gaining deep visibility into legacy code or into new code written halfway around the world requires standardized, scalable, and automated code quality processes and tools.

The answer lies in enterprise software analysis and measurement, which couples automated code quality analysis, automated blueprinting, and architectural compliance. The ability to automate code reviews while documenting new and legacy components -- and defining and monitoring adherence to architecture specifications -- is the only way enterprise-class development can break the bad code quality cycle and improve both developer efficiency and overall product quality. It doesn’t take a team, it takes a village.
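To make “monitoring adherence to architecture specifications” concrete, here is a minimal sketch of an automated layering check. The layer names and allowed-dependency rules are assumptions invented for this example; real platforms derive or configure these from the actual architecture:

```python
# Sketch: an automated architectural-compliance check for a simple
# layered rule set. Layer names and allowed dependencies below are
# illustrative assumptions, not a real project's specification.
import ast
import pathlib
import sys

ALLOWED = {                        # assumed layering: ui -> service -> data
    "ui": {"ui", "service"},
    "service": {"service", "data"},
    "data": {"data"},
}


def violations(root: pathlib.Path):
    """Yield (file, line, source layer, target layer) for illegal imports."""
    for path in root.rglob("*.py"):
        src_layer = path.relative_to(root).parts[0]
        if src_layer not in ALLOWED:
            continue
        tree = ast.parse(path.read_text(), filename=str(path))
        for node in ast.walk(tree):
            if isinstance(node, ast.Import):
                targets = [alias.name for alias in node.names]
            elif isinstance(node, ast.ImportFrom) and node.module:
                targets = [node.module]
            else:
                continue
            for name in targets:
                top = name.split(".")[0]
                if top in ALLOWED and top not in ALLOWED[src_layer]:
                    yield str(path), node.lineno, src_layer, top


if __name__ == "__main__":
    bad = list(violations(pathlib.Path(sys.argv[1])))
    for file, line, src, dst in bad:
        print(f"{file}:{line}: layer '{src}' may not depend on '{dst}'")
    sys.exit(1 if bad else 0)
```

Run on every commit, a gate like this catches the “new code placed in the wrong modules” problem from the list above before it erodes the architecture, rather than months later in a manual audit.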

Fragmented analysis = fragmented results

Many organizations and developers have incorporated code quality analysis into their work streams. Yet, as noted above, the systems they work on have grown too large and complex, while their teams have evolved toward more sophisticated development processes and sourcing models. As a result, these individual code quality efforts fail to improve the overall codebase; there are simply too many moving parts. Development organizations must shift to a more holistic approach that scales to match the size and complexity of the systems they support.

Pete Pizzutillo, Vice President

Pete Pizzutillo is Vice President at CAST and has spent the last 15 years working in the software industry. He passionately believes Software Intelligence is the cornerstone of successful digital transformation, and he actively helps customers realize the benefits of CAST's software analytics to ensure their IT systems are secure, resilient, and efficient enough to support the next wave of modern business.