On the night of his ship’s maiden (and only) voyage, the skipper of the Titanic saw the top of an iceberg, swerved to avoid it, and in doing so piloted his ship’s hull directly into the monstrous portion of the iceberg that lay unseen beneath the surface of the ocean, tearing apart the “unsinkable” ship. Had he known what lay beneath the surface, his reaction would likely have been very different and might have produced a very different, possibly happier, result.
The Titanic's experience underscores the essential difference between testing and software quality assessment: addressing the seen versus the unseen. uTest, a company that specializes in software testing, recently questioned whether software quality comes from testing. The blogger laments that QA testers are saddled with software that is already “buggy” and lacking in quality, sympathizes with them, and encourages them to work with developers to communicate issues and promote better development practices. uTest also notes that since developers create the problematic software, they may not be the optimal choice for ensuring quality.
So in answer to uTest’s question, "Does software quality come from testing?"...we would say, “No.”
Testing can address only an application’s “external quality”: testers can effectively address only visible symptoms such as correctness, efficiency or maintenance costs. What lies beneath the surface, however, the internal quality, directly impacts the external quality and can lead to even greater issues. These characteristics (program structure, complexity, coding practices, coupling, testability, reusability, maintainability, readability and flexibility) are the invisible root of the software quality iceberg, and they can do far more damage to a company’s reputation and IT maintenance budget than the visible issues.
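To see why internal quality is invisible to testing, consider a minimal, hypothetical sketch: two functions with identical external behavior, one cleanly structured and one tangled. The function names and the discount logic are invented for illustration; the point is that a black-box test cannot tell them apart.

```python
# Hypothetical sketch: two functions with identical external behavior.
# A test suite observes only the visible result; it cannot distinguish them.

def discount_clean(price: float, is_member: bool) -> float:
    """Readable, low-complexity: the structure is visible at a glance."""
    rate = 0.10 if is_member else 0.0
    return round(price * (1 - rate), 2)

def discount_tangled(price, is_member):
    # Same answer, but duplicated literals, a redundant branch, and
    # needless nesting: structural flaws no external test will surface.
    if is_member:
        if price >= 0:
            p = price - price * 0.10
        else:
            p = price - price * 0.10
    else:
        p = price
    return round(p, 2)

# Both pass the same external checks:
assert discount_clean(100.0, True) == discount_tangled(100.0, True) == 90.0
assert discount_clean(100.0, False) == discount_tangled(100.0, False) == 100.0
```

The second version costs more to maintain, review and extend, yet every visible symptom a tester can measure looks identical.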
But how can you fix problems you can’t see? uTest is correct insofar as developers probably should not be the ones responsible for finding the issues. First, time, business and cost pressures all push developers toward sub-optimal choices that impact the quality and future performance of critical applications. Second, and more important, there is simply too much that needs to be reviewed for any individual developer, or even a group of developers, to review efficiently and find the issues that could lead to application malfunction.
Managing the risk of poor software quality requires a thorough understanding of the structural quality of critical applications. However, assessing the health, structural quality, complexity, maintainability and functional size of an application is a daunting manual task that takes time and expert resources.
Accomplishing an effective internal quality review requires automated application assessment. More and more companies are automating this process for all critical applications as the occasional manual review becomes increasingly obsolete. Such a service provides continual, automated visibility that helps companies ensure quality is built into their systems with every developer contribution, whether the software is being built from scratch or customized. And that visibility into the internal quality of application software is the difference between a company enjoying a successful voyage and suffering a Titanic disaster.
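The flavor of such automation can be sketched in a few lines. The example below is not any particular vendor's product; it is a toy structural-quality gate, assuming a simple McCabe-style complexity estimate built on Python's standard `ast` module, with an invented `budget` threshold. Real assessment services apply far richer metrics across whole codebases, but the principle is the same: measure structure mechanically on every contribution instead of relying on occasional manual review.

```python
# Toy structural-quality gate (illustrative only): estimate each
# function's cyclomatic complexity by counting decision points in its
# syntax tree, and flag functions that exceed a complexity budget.
import ast

DECISION_NODES = (ast.If, ast.For, ast.While, ast.Try, ast.BoolOp, ast.IfExp)

def cyclomatic_complexity(func: ast.FunctionDef) -> int:
    """1 + number of decision points: a rough McCabe estimate."""
    return 1 + sum(isinstance(node, DECISION_NODES) for node in ast.walk(func))

def assess(source: str, budget: int = 10) -> list:
    """Return the names of functions that exceed the complexity budget."""
    tree = ast.parse(source)
    return [node.name
            for node in ast.walk(tree)
            if isinstance(node, ast.FunctionDef)
            and cyclomatic_complexity(node) > budget]

sample = """
def simple(x):
    return x + 1

def gnarly(x):
    if x > 0:
        for i in range(x):
            while i:
                if i % 2 and i % 3:
                    i -= 1
    return x
"""
print(assess(sample, budget=4))  # flags only the deeply nested function
```

Wired into a build pipeline, a check like this runs on every commit, surfacing structural problems long before they reach testers.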