Time to market is a major consideration in software development these days. Feeling competitive pressure, companies realize they need to move swiftly and cannot rest on their laurels if they wish to stay ahead and be the company that sets the trend rather than follows it.
But the pressure to produce software in short order can lead to software with the quality of a meal prepared by a short-order cook – it might serve its purpose, but the quality is far from top-notch.
To alleviate some of the pressures of producing software so fast, more and more companies – particularly smaller shops with limited IT staffs – have turned to Agile development to speed deliverables. By utilizing Agile and breaking projects into smaller pieces, or Sprints (single iterations in a Scrum-based process), companies can get multiple layers of an application developed simultaneously, cutting final development time to a fraction.
In an ideal world, this is a fabulous idea! Giving developers a smaller piece of the puzzle to work on and then bringing each section of that puzzle together to create the final application should cut development time by a factor of four, six, eight or however many groups are working on the project.
But we don’t work in an ideal, hermetically sealed little world. As Jan Stafford wrote recently on SearchSoftwareQuality.com, Agile has its issues:
“The long laundry list of software development pros’ problems with the Agile methodology includes inadequate training, poor leadership, rigid adherence to Agile principles that don’t fit the project, and more. That said, there are Agile problem areas that are slammed more often than these, including required meetings, inadequate documentation and issues related to short iterations.”
She goes on to report:
“The problem for developers, however, is that there’s little time in Agile’s short-iteration scheme to write enough documentation, development pros told us. Iteration cycles are every three weeks, 'boom, boom, boom,' and it’s hard to fit doing documentation into that cycle, said Huckleberry Carignan, lead QA engineer at Vistaprint, a print services provider.”
Add to these issues things like teams distributed across the globe, short iteration cycles and the burnout they cause, and Agile seems pretty decrepit for a pre-teen.
When software is developed through Agile, or in any Scrum-based process, bits and pieces of functionality that will eventually become interdependent are created and tested separately in different sprints and new functionality is often added on top of old. The result is sometimes a morass of murky architectural waters, questionable reliability and exponential increases in software maintenance costs.
Because these pieces were each tested separately, performance bottlenecks and structural quality lapses become very hard to detect, making it very difficult to see and measure the structural quality of the application as a whole. Most issues of robustness and performance are not hidden behind one specific artifact of code but exist in the interaction between multiple components created in separate sprints.
What Agile needs to overcome these ailments is a good dose of automated analysis and measurement, which provides comprehensive visibility into component interconnections and assesses the structural quality of the application as a whole rather than each part individually. This is important because even if each sprint develops a perfect portion of the project, the entire project is doomed if these modules do not interconnect properly. Together, visibility and quantification give software engineers the information they need to ensure high performance and reliability, no matter how rapidly the code base evolves.
Automated analysis and measurement is by no means a panacea but, as a diagnostic test, it can identify what ails Agile-developed products. And as any good doctor knows, the key to a quick cure is a fast and accurate diagnosis.
Erik Oltmans, an Associate Partner at EY Netherlands, spoke at the Software Intelligence Forum on how the consulting behemoth uses Software Intelligence in its Transaction Advisory services.
Erik describes the changing landscape of M&A. Besides the financial and commercial aspects, PE firms now place equal value on technical assessments, especially for targets with significant software assets. He goes on to detail how CAST Highlight makes these assessments possible despite limited access to the target's systems, with customized quality metrics and insight into the liability implications of open source components – all three of which are critical for M&A due diligence.