In 2015, a slew of headlines was dedicated to software failures at major companies (United Airlines, the Wall Street Journal, the New York Stock Exchange, and the Royal Bank of Scotland, to name a few), which prompted a discussion of best practices for software development. This post focuses on UK banking and software quality, but the principles apply across all industries. IT visibility, which is imperative to managing technical debt, has come to the forefront of the software industry.
Many systems used by large organizations are built on decades-old legacy systems that have become hyper-convoluted. To meet the fast-paced demands of consumers, organizations have bolted new features and functionality onto these creaky, complex old systems. New services like online banking, contactless payments, and Apple Pay (in the banking industry) are a must for customers who expect convenient products at their disposal. Each addition increases the risk of system failure, and as systems grow more complex, discovering possible vulnerabilities becomes more difficult. Software assurance has therefore revolved around functional testing, load testing, and manual code reviews.
These approaches, however, do not safeguard against system-level structural flaws. We have discussed issues like this before: legacy systems burdened with technical debt are difficult to modify or test because any change could cause an unwanted breakage. Without the ability to test for structural faults, those faults can go undetected for years; teams keep working under risks they are unaware of until the day a new piece of functionality brings the whole system tumbling down. The traditional methods for measuring software quality (including technical debt) work well during active development but not for the maintenance of aging platforms.
As we have mentioned many times, the first step in dealing with any software quality issue, technical debt included, is to measure and quantify it. Without knowing where your vulnerabilities lie and how prevalent they are in your system, it is nearly impossible to manage the risk of system outages. Numerical values for software quality and technical debt are essential for objective decision making: deciding which work must be handled now and which can wait. Comparing your numbers against industry benchmarks is extremely helpful here. CISQ (the Consortium for IT Software Quality) publishes code quality standards; measuring your source code against them lets you detect structural weaknesses and assess your technical debt relative to the rest of the industry. This comparison should be repeated for every release until your code quality and technical debt are under control.
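To make "quantify it" concrete, here is a minimal sketch of tracking a technical-debt number release over release. The severity weights, violation counts, and benchmark threshold are all illustrative assumptions, not official CISQ figures; in practice the violation counts would come from a static-analysis platform.

```python
from dataclasses import dataclass

# Hypothetical hours-to-fix weights per violation severity (illustrative only).
REMEDIATION_HOURS = {"critical": 8.0, "high": 3.0, "medium": 1.0, "low": 0.25}

# Illustrative benchmark: remediation hours per thousand lines of code (KLOC).
BENCHMARK_HOURS_PER_KLOC = 3.0

@dataclass
class ReleaseScan:
    release: str
    kloc: float        # thousands of lines of code scanned
    violations: dict   # severity -> number of violations found

    def debt_hours(self) -> float:
        """Total estimated remediation effort, in hours."""
        return sum(REMEDIATION_HOURS[sev] * n for sev, n in self.violations.items())

    def debt_density(self) -> float:
        """Remediation hours per KLOC, comparable across releases and systems."""
        return self.debt_hours() / self.kloc

# Hypothetical scans of two consecutive releases.
scans = [
    ReleaseScan("2015.1", kloc=420.0,
                violations={"critical": 12, "high": 90, "medium": 600, "low": 1500}),
    ReleaseScan("2015.2", kloc=430.0,
                violations={"critical": 7, "high": 60, "medium": 520, "low": 1400}),
]

for s in scans:
    status = ("OVER benchmark" if s.debt_density() > BENCHMARK_HOURS_PER_KLOC
              else "within benchmark")
    print(f"{s.release}: {s.debt_hours():.0f}h total, "
          f"{s.debt_density():.2f} h/KLOC ({status})")
```

Repeating this per release, as the post recommends, turns technical debt from a vague worry into a trend line you can manage.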
Automation is the next key step in managing software risk. Platforms like CAST AIP, which incorporates the CISQ standards into its analysis, can run these measurements automatically. Several practices then help maintain software quality: structural testing before every deployment, so that flaws are caught before they force a rewrite, and proceeding cautiously with updates and patches.
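The automation step can be sketched as a quality gate that runs on every build before deployment. The per-severity limits and the report format below are hypothetical stand-ins for whatever your analysis platform (such as CAST AIP) produces; the gating logic, not the tool interface, is the point.

```python
# Hypothetical per-severity limits a build must satisfy before it can ship.
GATE_LIMITS = {"critical": 0, "high": 10}

def evaluate_gate(report, limits=GATE_LIMITS):
    """Check a scan report (severity -> violation count) against the limits.

    Returns (passed, failures), where failures lists each exceeded limit.
    """
    failures = [
        f"{sev}: {report.get(sev, 0)} found, limit {limit}"
        for sev, limit in limits.items()
        if report.get(sev, 0) > limit
    ]
    return (not failures, failures)

# Example: a scan with new critical violations blocks the deployment.
report = {"critical": 2, "high": 4, "medium": 310}
passed, failures = evaluate_gate(report)
for f in failures:
    print("GATE FAIL ->", f)
print("deploy allowed" if passed else "deploy blocked")
```

In a continuous-integration pipeline, a script like this would exit non-zero on failure so the pipeline stops, which is how "structural testing before deployment" becomes enforced rather than optional.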
Ultimately, as software development and customer expectations advance rapidly, it is of paramount importance that your practices and code quality do not fall behind. Allowing that to happen leads to a heavy build-up of technical debt and to complexity, scalability, and reliability problems.
To read the full post, visit here.