The software industry is moving quickly from the traditional waterfall model to the agile methodology. We’re certainly producing software faster, but is the software we’re producing any better? Before we get into that, though, let’s look at the reasons for this shift in mindset from waterfall to agile.
First, several concerns with the waterfall approach surface time and again. They include:
- The inability to accommodate changes during the development phase because of the initial scope freeze. (If the design phase has gone wrong, things can get very complicated in the implementation phase.)
- Key decisions are taken with little knowledge of the project and product.
- Resource planning is inaccurate because the full scope is unclear early in the cycle.
- Critical performance and integration issues are identified only at the end of the release cycle (and the cost of fixing a problem at that point is very high).
- Working software is available only when testing is completed at the end of the release cycle.
- Feedback from stakeholders and customers is received very late, resulting in features that don’t meet their expectations. (Feedback is available only during UAT, which is too late and expensive to act on.)
- Deployment is possible only when all work is finished.
- Quality is usually addressed very late in the cycle, resulting in poor delivery.
The agile methodology makes it easy to receive recurring feedback from customers early in the release cycle, which has a positive impact on the overall quality of the product.
The feedback comes from intermediate releases or quality checks before going to production. It also comes from more tests, build cycles, and early dialogue with customers.
Figure A: Acceptance of agile and waterfall methodologies based on success rate
The above results are based more on analysis of functional quality than of structural quality.
However, structural quality is an integral part of any software product or project. Using static analysis tools, which test and validate the software’s inner structure, source code, and design, we can detect major architectural issues or design flaws in time.
Based on my experience as a Scrum Master, I have seen that in enterprise ADM (application development and maintenance), it is not always easy to reconcile the agile method with architectural constraints placed on legacy system components.
Therefore, introducing static analysis checks can make a big difference. The true value lies in the ability to track the evolving architecture of an agile project and how that fits with the overall application landscape.
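To make the idea concrete, here is a minimal, hypothetical sketch of a rule-based static check, written with Python’s standard `ast` module. The rule, its threshold, and the function names are all invented for illustration; real platforms such as CAST apply far richer rule sets across languages and architectural layers.

```python
# Toy static analysis check: flag functions whose bodies exceed a length rule.
# The MAX_FUNCTION_LENGTH threshold is a hypothetical rule, not a CAST setting.
import ast

MAX_FUNCTION_LENGTH = 20  # illustrative threshold, in source lines


def find_long_functions(source: str, max_len: int = MAX_FUNCTION_LENGTH):
    """Return (name, length) pairs for functions spanning more than max_len lines."""
    tree = ast.parse(source)
    violations = []
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            length = node.end_lineno - node.lineno + 1
            if length > max_len:
                violations.append((node.name, length))
    return violations


# A small sample: one compliant function, one that violates the rule.
sample = "def ok():\n    pass\n\ndef long_one():\n" + "    x = 1\n" * 25
print(find_long_functions(sample))  # → [('long_one', 26)]
```

A check like this can run on every build in an agile pipeline, which is exactly how recurring structural feedback arrives early rather than at the end of the release cycle.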
As a Scrum Master, I was always looking for a solution that ensured complete quality assurance by:
- Performing code and architectural reviews on the application or product being tested based on a defined set of rules.
- Prioritizing issues based on their impact on business areas, functional features, modules, and code.
- Giving a clear view of key quality indicators such as security, performance, architecture, robustness, maintainability, transferability, and more.
I couldn’t find anything that came close to what CAST offered, so I knew CAST was the right fit for me.