DevOps and Agile adoption continues to accelerate and scale across organizations, yet the question many executives and researchers are asking is:
Are DevOps and Agile Practices Improving Software Quality?
To offer deeper insight, Carmine Vassallo, Fabio Palomba, and Harald C. Gall of the Department of Informatics at the University of Zurich have released several studies that examine DevOps and Agile outcomes through the impact of continuous refactoring on software quality.
Their paper, An Exploratory Study on the Relationship between Changes and Refactoring, analyzes refactoring during the evolution of a software system to help understand which code components are likely to be refactored. In their follow-on paper, Continuous Refactoring in CI: A Preliminary Study on the Perceived Advantages and Barriers, the authors examine how developers perform refactoring and the pros and cons of adopting Continuous Refactoring. Specifically, they dig into the common perception that developers understand the value of refactoring but are reluctant to do it.
The authors further enhanced their findings by adding a qualitative research component that analyzed source code extracted from an open source repository, identifying projects that employed both a Continuous Integration and a Continuous Code Quality platform, specifically Travis CI and SonarQube.
Why Developers Do and Don’t Refactor
The Zurich team identified that removing duplicated code, improving readability and addressing technical debt were the most popular reasons why developers refactored.
The authors go on to classify refactoring motivations into three areas: software quality improvement, better code comprehension and avoidance of quality gate failures.
Developer Perceptions on Refactoring
These results align with previous findings regarding developers’ “refactoring to understand” attitude. Essentially, developers tend to focus on documenting and re-organizing code to improve readability and comprehension, rather than on addressing software quality issues or concerns.
The team identified two major factors behind developers’ reluctance to refactor: (1) the risk associated with restructuring a portion of source code and (2) the effort required to apply the transformation, since “continuous program transformations can decrease the understandability of the overall architecture of the system.”
Refactoring, Automation and Understandability
While refactoring techniques vary based on the language and context developers are working in, the fundamental definition of code refactoring is the process of restructuring existing computer code without changing its external behavior. Refactoring is intended to improve non-functional attributes of the software, make code more readable, and reduce complexity, all in the name of creating more maintainable software and establishing a more expressive internal architecture or object model. As Martin Fowler explains, “Refactoring isn’t another word for cleaning up code - it specifically defines one technique for improving the health of a code-base.”
While there are many refactoring techniques, such as allowing for more abstraction and breaking code apart into more logical pieces, the Zurich team’s research illustrates that most refactoring in practice focuses on techniques to improve the names and location of code. They propose that one reason for this is that developers lack proper automated refactoring tools, i.e., tools that help overcome the fear of breaking the code or introducing bugs while refactoring.
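To make the idea concrete, here is a minimal, hypothetical sketch of the kind of refactoring the study found most common: renaming and extracting code without changing external behavior. The function names, tax rate, and domain are illustrative assumptions of mine, not taken from the papers.

```python
# Before: a terse name with logic packed inline.
def calc(p, q):
    return round(p * q * 1.21, 2)

# After: a rename plus an extracted helper. Behavior is unchanged;
# only the names and the location of the logic improve.
VAT_RATE = 0.21  # assumption for illustration

def apply_vat(amount):
    """Add VAT to a net amount."""
    return amount * (1 + VAT_RATE)

def gross_price(unit_price, quantity):
    """Total price including VAT, rounded to cents."""
    return round(apply_vat(unit_price * quantity), 2)

# External behavior is preserved, which is the defining property
# of a refactoring:
assert calc(10.0, 3) == gross_price(10.0, 3)
```

An automated tool can apply a rename or an extract-method transformation like this mechanically, which is exactly the safety net the authors suggest developers are missing.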
One of the more promising DevOps trends is the growing practice of Continuous Integration (CI), a development practice aimed at continuously building new software, which makes it easier to identify bugs and improve software quality. CI is promising because, through tooling, developers can define and automate software quality gates as part of their delivery pipeline.
A software quality gate is simply a set of constraints, determined by the organization, on the quality of the software that are expressed through thresholds on certain metrics. Software quality gates are a well-known way to control software degradation; if a newly committed change fails a software quality gate, the developer can take action to resolve the issue quickly.
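The mechanics of a quality gate can be sketched in a few lines. The metric names and thresholds below are illustrative assumptions, not the conditions of any particular tool such as SonarQube:

```python
# A minimal sketch of a software quality gate: organization-defined
# thresholds on quality metrics, checked against a newly built commit.

# Each gate condition: metric name -> (comparator, threshold).
GATE = {
    "coverage_pct":         (">=", 80.0),  # minimum test coverage
    "duplicated_lines_pct": ("<=", 3.0),   # maximum code duplication
    "new_blocker_issues":   ("<=", 0),     # no new blocker issues
}

def check_quality_gate(metrics, gate=GATE):
    """Return a list of failed conditions; an empty list means the gate passes."""
    failures = []
    for name, (op, threshold) in gate.items():
        value = metrics[name]
        ok = value >= threshold if op == ">=" else value <= threshold
        if not ok:
            failures.append(f"{name}={value} (required {op} {threshold})")
    return failures

# A commit whose coverage dips below the threshold fails the gate,
# giving the developer an immediate, actionable signal.
build_metrics = {"coverage_pct": 72.5, "duplicated_lines_pct": 2.1,
                 "new_blocker_issues": 0}
failed = check_quality_gate(build_metrics)
if failed:
    print("Quality gate FAILED:", "; ".join(failed))  # pipeline can stop here
```

In a real pipeline the metrics would come from an analysis platform and a failed gate would break the build; the point of the sketch is simply that a gate is nothing more than thresholds evaluated per commit.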
Should the Pipeline Stop for Bad Software?
While the Zurich team suggests that a lack of effective refactoring tools prevents wider adoption of Continuous Refactoring, my experience is that there is also organizational resistance to ‘breaking’ the DevOps pipeline or slowing down release cycles. This mindset is driven by current thinking about key DevOps metrics, specifically deployment frequency and lead time for changes.
As Mr. Fowler points out, these are IT delivery-centric measures. While they have value to an organization, including product-specific measures would give it a basis for deciding when to ‘break’ the pipeline, especially when the software being built does not meet the organization’s stated security, reliability, and overall quality goals.
High-performing organizations are quickly embracing the idea of including product-based measures and instituting software quality gates within DevOps pipelines. In IEEE Software Magazine, Fannie Mae demonstrates how automated structural quality analysis within their DevOps-Agile methodology resulted in 21 times (+19,000) more builds per month with half the previous staffing, along with a 30-48% improvement in overall software quality and a 28% improvement in team productivity.
While DevOps and Agile have certainly revolutionized how software is developed and delivered, there is still much work to do to answer the question: Are we actually getting better and improving software quality?
As we near the next phase of maturity across the industry, more organizations and business leaders will be looking for this answer, and of course asking the next question: How can we get even better?
The Zurich team has started an investigation into how to answer this question, but more work is to be done. In my next post, I will continue to share how the Zurich team evolved their investigation into DevOps and Agile processes and their impact on Continuous Code Quality.
In the meantime, I look forward to hearing about your experience with refactoring: Are teams doing it? What is working best? What obstacles have you overcome? What obstacles remain?