Whether it’s in sports, medicine, music or even a military operation, I’m a firm believer in the “best man for the job” concept. This is why Agile, or more specifically, Scrum development, sounds to me like a smart play for an organization.
But even with the “best man” on the job, the process needs to be planned from the beginning, with defined goals and roles in place, to ensure that the quality of the software being developed does not take a back seat to the speed with which it is developed - speed that can lead to oversights, omissions and errors. In addition, there need to be stringent checks and balances that ensure each scrum has performed its duties proficiently and that the interfaces between the scrum-built pieces do not bring down otherwise well-written code.
I’m not the only one who feels strongly about this last point. Recently over at Agile.DZone.com, Daniel Ackerson commented on the importance of software testing. He says the longer a company waits to test its software, the worse it gets, noting:
“The later you test, the more effort you’ve got to spend fixing bugs introduced weeks ago. And as the code is changing during the testing weeks, every test cycle you do has to be repeated. In the end, your software is no more stable than it was before the test cycle.”
He’s right. Scrums cannot wait until the end of the development process to test because problems beget problems. However, Ackerson’s solution still is not enough. He says, “The only way to support a rapid cadence of releases is to automate testing.”
Unfortunately, even automated testing of Agile-developed software is not enough to ensure software quality. In fact, waiting until the software is ready to be tested is too late to be fully effective.
The beauty of Agile-developed software is also part of the reason it is hard to ensure optimal application quality. Bits and pieces of functionality that will eventually become interdependent are created and tested separately in different scrums. New functionality is often added on top of old, which further muddies the architectural waters, threatens reliability and performance, and increases the cost to modify and maintain the software. Moreover, as the number of lines of code grows, architectural complexity grows exponentially.
At this point, performance bottlenecks and structural quality lapses become very hard to detect and measure. Being able to reliably find and fix critical architectural bottlenecks in a rapidly evolving code base is the key to developing high-quality applications with Agile techniques.
In his 2009 book, “Applied Software Measurement,” Capers Jones wrote, “In terms of defect removal, testing alone has never been sufficient to ensure high quality levels.” To back up this statement, he offers some compelling statistics:
So Ackerson is correct in one respect: waiting until the end of an Agile cycle is not the best way to catch defects. But testing alone doesn’t work, either. There needs to be a combination of testing and static analysis, which, as noted, can catch 96% or more of defects.
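To make the distinction concrete, here is a toy sketch (the function and checker are hypothetical, not any particular product): a function with a latent structural defect passes its happy-path unit test, while a simple static pass over the source flags the flaw without ever running the code.

```python
import ast

# A function with a latent structural defect: the mutable default argument
# accumulates state across calls, even though any single call looks correct.
def average(values, history=[]):
    history.append(sum(values) / len(values))
    return history[-1]

# A happy-path unit test passes -- dynamic testing alone misses the flaw.
assert average([2, 4, 6]) == 4.0

# A minimal static check inspects the source text instead of executing it,
# in the spirit of the automated analysis discussed above.
SOURCE = """
def average(values, history=[]):
    history.append(sum(values) / len(values))
    return history[-1]
"""

def find_mutable_defaults(source):
    """Flag functions whose default arguments are mutable containers."""
    flagged = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):
            for default in node.args.defaults:
                if isinstance(default, (ast.List, ast.Dict, ast.Set)):
                    flagged.append(node.name)
    return flagged

print(find_mutable_defaults(SOURCE))  # ['average']
```

The point is not this particular check but the division of labor: the test exercises one execution path, while the static pass reasons about the structure of all paths at once - which is why the two together catch far more than either alone.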
So for Scrum and other forms of Agile development designed to speed the process, a system of automated analysis and measurement needs to be employed - one that provides comprehensive visibility into component interconnections and assesses the structural quality of each scrum-built component as well as the application software as a whole.
To answer the question above, it is possible to guarantee top-quality software developed in Agile, but only if automated analysis and measurement are used to assess the application software during the build process, with automated testing then employed to hone the quality further.