This post carries a distinct warning for banks in the UK: another banking outage, similar to RBS's major failure in 2012, is on its way. The warning rests on research into the amount of legacy code in the banking sector's systems.
According to CAST, the average critical banking application contains about 600,000 lines of code, but applications at UK banks average between 800,000 and 900,000 lines. This higher level of complexity makes it more difficult to get a full view of an organization's architecture, and so glitches occur.
Lev Lesokhin, CAST's senior vice president of strategy and analytics, says that consumer banks run on core components that have been in place for years, even decades; some of these components include Java code written 20 years ago. He also notes that the UK has taken a more passive approach to employing software engineering techniques, which could be part of the problem.
Banks typically experience 20-30 incidents a month, with no evidence that this rate has changed over the past decade. Given that, it is only a matter of time before another major incident occurs. UK banking in particular has seen a series of outages caused by IT failures within the past five years.
In 2012, the RBS and NatWest outage affected about 6.5 million UK customers and resulted in a £56m fine from regulators in 2014, who said the crisis could have affected the overall stability of the financial system. Just seven months after that failure, and despite the major fine, RBS suffered another IT failure that affected around 600,000 customers. While smaller in scale, the repetition only reinforces the view that banking outages are going to become more common.
Beyond the IT failures themselves, the 2016 CAST CRASH Report found that the UK delivers applications with the lowest security scores (the greatest risk), while continental Europe consistently delivers the best. Poor system performance in the UK banking sector may therefore be linked to software security and code quality.
The answer is greater quality assurance in development. While application teams are nominally responsible for a system's complexity, that complexity is usually driven by business needs: stakeholders worried about the competition demand new features from development teams, leaving no time to pay down pre-existing technical debt.
The issue is not only that the technical debt already lurking in legacy code will persist, but that in an environment where business pressures outweigh software standards, more technical debt is likely to be incurred.
In short, the high code complexity in UK banking's legacy systems is not only spurring new technical debt but also increasing the risk of failures that affect customers and the business.
Erik Oltmans, an Associate Partner at EY Netherlands, spoke at the Software Intelligence Forum about how the consulting giant uses Software Intelligence in its Transaction Advisory services.
Erik described the changing landscape of M&A. Beyond the financial and commercial aspects, PE firms now place equal value on technical assessments, especially for targets with significant software assets. He went on to explain how CAST Highlight makes these assessments possible despite limited access to the target's systems, provides customized quality metrics, and surfaces the liability implications of open source components - all three of which are critical for M&A due diligence.