Improving Software Quality – Developer Notes

by Shibin Michael

A common challenge IT functions face, in enterprises large and small, is managing Software Quality. While most organizations acknowledge the need for quality, very few succeed at implementing it, despite procuring best-in-class software quality tools.

Based on my experience as a developer and my current conversations with CAST customers, here are a few pointers for a successful implementation of software quality tools.

As in any change management initiative, three factors need to be considered – People, Process and Technology. The Process and Technology should enable the People (in this case, developers) to enhance and maintain the quality of the software. Quite often these three aspects do not align with each other, preventing the quality initiative from succeeding.

Technology

One can’t stress enough the importance of selecting the right tool to ensure software quality. We should look beyond basic checks of the tool’s usability and reliability; the crucial question is whether it helps developers achieve their end objectives. As a developer, my main objective was to ensure that my code didn’t fail or cause an outage in production. However, the developer-level code analysis tool that my team and I used hardly helped us find these outage-causing bugs. Eventually we lost interest and stopped using it.

It is important to realize that most outage-causing, critical defects are structural, system-level issues that can be detected only by tools that analyze inter-technology and inter-tier linkages, not by tools that perform unit-level analysis.
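A classic illustration of this is the “N+1 query” pattern sketched below (a hypothetical example; the table and function names are mine, not from any specific tool). Each statement is locally correct, so a unit-level analyzer finds nothing wrong, yet under production load the per-row round trips across the application-to-database tier can degrade into an outage. Only analysis that follows that inter-tier linkage sees the structural flaw.

```python
import sqlite3

def fetch_order_totals_n_plus_one(conn):
    """One query for the orders, then one query PER order: N+1 round trips.

    Every individual query passes unit-level checks, but the loop couples
    the application tier to the database tier once per row.
    """
    totals = {}
    for (order_id,) in conn.execute("SELECT id FROM orders"):
        row = conn.execute(
            "SELECT SUM(amount) FROM order_items WHERE order_id = ?",
            (order_id,),
        ).fetchone()
        totals[order_id] = row[0]
    return totals

def fetch_order_totals_joined(conn):
    """Structural fix: one aggregated query, a single round trip."""
    return dict(conn.execute(
        "SELECT order_id, SUM(amount) FROM order_items GROUP BY order_id"
    ))
```

Both functions return the same result; only a tool that reasons about the code and the data access pattern together can tell that the first one is a production risk.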

Furthermore, the tool must be automated and seamlessly integrated with other project management tools such as JIRA. Any manual step in the code analysis process will be seen as overhead by developers.
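As a minimal sketch of what such an integration involves, the function below maps an analyzer finding to the request body of JIRA’s REST API v2 “create issue” endpoint (`POST /rest/api/2/issue`). The finding fields and the project key are illustrative assumptions, not part of any particular quality tool.

```python
def jira_issue_payload(finding, project_key="QUAL"):
    """Map one analyzer finding to a JIRA 'create issue' request body.

    The payload shape follows JIRA's REST API v2; the 'finding' dict
    (rule, file, line, message) is a hypothetical analyzer output format.
    """
    return {
        "fields": {
            "project": {"key": project_key},
            "summary": f"[{finding['rule']}] {finding['file']}:{finding['line']}",
            "description": finding["message"],
            "issuetype": {"name": "Bug"},
        }
    }

# Posting is then a single authenticated HTTP call, e.g.:
#   POST {base_url}/rest/api/2/issue  with this payload as the JSON body
```

Wiring this into the pipeline means developers see findings as ordinary backlog items, with no manual export or copy-paste step.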

Lastly, the veracity of the results matters most and should be the top criterion during selection. Even if the tool is built into the delivery pipeline, a flood of false positives or ambiguous findings will only dissuade developers from using it.

Process

The entire process – Analysis, Review, Prioritization and Remediation – should be well defined. The processes can be defined, and the tools configured, by a dedicated shared-services team for software quality tools. That team’s expertise can then be shared across multiple projects, reducing the burden on developers.

The entire feedback loop on software quality should be as frictionless as possible. Analysis should be followed by regular meetings to review and prioritize the findings. Many quality tools communicate with project management tools like JIRA to create defects, but the onus is on the project manager to augment this feature with a remediation plan that addresses those defects.

A broken feedback loop with ad hoc planning disincentivizes developers from remediating existing code and following quality guidelines.

 

People

First and foremost, all stakeholders – developers, architects and ADM managers – must be convinced that quality improvement and maintenance is a priority for executive management.

Top-level management should track software intelligence metrics such as the overall Robustness, Efficiency, Security and Maintainability of the software, alongside tangible outcomes such as a reduction in the number of incidents or outages due to improved software quality. Improvement in these metrics, and acknowledgement of it from management, create strong buy-in among developers.

Other, more tactical changes would be to tie part of a developer’s performance appraisal to the quality of their delivered code, and to ensure that project managers have enough bandwidth to drive and monitor quality initiatives within their teams. The latter may seem trivial, but I have seen teams where managers are so bogged down with delivery deadlines that ensuring software quality becomes the least of their priorities.

While the above suggestions do not fully account for the complexities inside an organization, they will certainly help in your Software Quality journey.

 
Shibin Michael Product Marketing Manager, CAST
Shibin started his career as a developer and has spent close to a decade in the tech industry across a wide range of roles. He is passionate about using Software Intelligence to help IT practitioners.