Finding the right tools for the right challenge
The growing cost of most software development efforts can be traced back to one underlying cause -- a lack of visibility into the software. As the size and complexity of business-critical applications grow -- along with the complexity of sourcing environments -- app owners, architects, and developers increasingly need to truly understand their codebases. Without visibility into the implementation, it is hard for a developer to grasp all the nuances of the code, which explains the disproportionate amount of time developers need to identify the root cause of defects.
Beyond wasting time, the lack of visibility leads to poor overall code quality and hinders the development team in several ways:
- Myopic, local design decisions that lead to an inconsistent and fragile system architecture
- New code placed in the wrong modules, eroding the architecture
- Inability to test new code effectively
Code quality is a cumulative problem
It’s natural for code quality to deteriorate over the life cycle of an application. On systems with poor code quality, updating the architecture, adding new features, and fixing bugs all take longer and introduce more defects. This phenomenon echoes James Q. Wilson and George L. Kelling’s Broken Windows theory: if the code you’re working on is clean, you’ll keep it clean; if it’s already a mess, you’ll probably jury-rig your updates so you can finish as quickly as possible.
As Dr. Bill Curtis explains in “Modern Software Productivity Measurement”: “Software productivity usually declines across subsequent releases of an application, caused in part by continuing enhancements and modifications that degrade the architectural integrity and overall quality of the application, since these changes frequently inject new defects or make the application more complex. This is why it is imperative to evaluate the productivity of the current release in the context of how it may affect the productivity of future releases.”
Code quality is everyone’s problem -- and responsibility
While code reviews are one of the most popular techniques for improving code quality, doing them effectively requires that the development team have good visibility into the codebase. Without it, effective review and discussion of potential remediation are impossible.
Further complicating the issue, improving code quality on large, complex systems is not sustainable with manual methods alone. Gaining deep visibility into legacy code -- or into new code written halfway around the world -- requires standardized, scalable, and automated code quality processes and tools.
The answer lies in enterprise software analysis and measurement, which couples automated code quality analysis, automated blueprinting, and architectural compliance. The ability to automate code reviews, while documenting new and legacy components -- as well as defining and monitoring adherence to architecture specifications, as sketched below -- is the only way enterprise-class development can break the cycle of bad code quality, improve developer efficiency, and raise overall product quality. It doesn’t take a team; it takes a village.
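To make the idea of automated architectural compliance concrete, here is a minimal, hypothetical sketch in Python of the kind of check such tools perform: verifying that modules only import from the layers an architecture specification allows. The layer names, rules, and directory layout are invented for illustration; real platforms analyze far more than import statements.

```python
# Toy architectural-compliance check: flag imports that cross layers
# the (hypothetical) architecture specification does not allow.
import ast
from pathlib import Path

# Hypothetical layering rule: which layers each layer may import from.
ALLOWED_IMPORTS = {
    "ui": {"services"},
    "services": {"data"},
    "data": set(),  # the data layer must not depend on layers above it
}

def layer_of(relative_path: Path) -> str | None:
    """Infer a file's architectural layer from its top-level package name."""
    parts = relative_path.parts
    return parts[0] if parts and parts[0] in ALLOWED_IMPORTS else None

def imported_layers(source: str) -> set[str]:
    """Collect the top-level packages a module imports, keeping known layers."""
    tree = ast.parse(source)
    found = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            found.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            found.add(node.module.split(".")[0])
    return found & set(ALLOWED_IMPORTS)

def check(root: Path) -> list[str]:
    """Report every import that violates the layering rules."""
    violations = []
    for file in root.rglob("*.py"):
        layer = layer_of(file.relative_to(root))
        if layer is None:
            continue  # file is outside any known layer
        for target in imported_layers(file.read_text()):
            if target != layer and target not in ALLOWED_IMPORTS[layer]:
                violations.append(f"{file}: layer '{layer}' must not import '{target}'")
    return violations

if __name__ == "__main__":
    for violation in check(Path(".")):
        print(violation)
```

Run from the repository root, a script like this can act as a gate in continuous integration, failing the build when a change would erode the intended architecture -- one small piece of the automated code review described above.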
Fragmented analysis = fragmented results
Many organizations and developers have incorporated code quality analysis into their work streams. Yet, as mentioned above, the systems they work on have grown too large and complex, while their teams have evolved toward more sophisticated development processes and sourcing models. As a result, these individual code quality analysis efforts fail to improve the overall codebase; there are simply too many moving parts. Development organizations must shift to a more holistic approach that scales to match the size and complexity of the systems they support.
Erik Oltmans, an Associate Partner at EY Netherlands, spoke at the Software Intelligence Forum about how the consulting behemoth uses Software Intelligence in its Transaction Advisory services.
Erik describes the changing landscape of M&A: besides the financial and commercial aspects, private equity (PE) firms now place equal value on technical assessments, especially for targets with significant software assets. He goes on to detail how CAST Highlight makes these assessments possible with limited access to the target’s systems, customized quality metrics, and insight into the liability implications of open source components -- all three critical for M&A due diligence.