Modern software systems have become so complex, with software components interacting across multiple application layers, that no single developer can hope to conceptualize how it all fits together. A National Research Council study found that as we demand higher levels of assurance, traditional testing cannot deliver the dependability required at a reasonable cost. At the intersection of these two realities lies the biggest problem facing software development today: architecturally complex violations.
Architecturally complex violations are structural flaws involving interactions among components that reside in different application layers. Although they constitute only 8% of the vulnerabilities in an application, they represent 52% of the repair effort, require 20 times more changes to fix, and are 8 times more likely to escape into testing and 6 times more likely to escape into operations.
Organizations are learning that throwing more functional testing and testers at this problem simply is not paying off. System size and complexity are growing exponentially while the pace of delivery accelerates, leading to an increase in production failures, system outages, and rework rates. Many organizations now understand that to effectively address these issues they must attack the problem at the root cause -- finding and eliminating architectural complexity.
However, finding and eliminating architectural complexity is hard to tackle for three reasons. First, the defects themselves are often structural rather than functional, meaning you can't write a specific test case to find them. Second, static analysis tools at the IDE/developer level cannot detect them without the system-level context that is only available when the system is analyzed as a whole. Third, as mentioned earlier, applications have become too large and complex for any single individual or team to fully understand, so developers make assumptions about technologies with which they are less familiar.
So how can IT combat architecturally complex violations? Organizations first need to equip themselves with tools that can evaluate an application with system-level context -- like our Application Intelligence Platform -- before they can begin finding and eliminating architectural complexity. But once the tools are in place, organizations can quickly establish baseline application health parameters and put processes in place to start reducing their architectural complexity in just three easy steps.
1. Identify -- Run an initial structural analysis across critical applications to identify structural weaknesses and architectural hotspots (components centrally located in the paths of several defective interactions). This insight informs your resource planning, focusing it on areas of highest impact, not highest volume.
2. Stabilize -- Baseline applications along the most important application health parameters. Ensure they don’t deteriorate, especially with respect to critical violations.
3. Optimize -- Implement a continuous measure of structural quality, with a direct feedback loop to developers. Track asset improvement and identify risk as early as possible in the SDLC.
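The Stabilize and Optimize steps can be wired into a build pipeline as a simple quality gate that compares each new analysis against the baseline. The sketch below is purely illustrative: the report format, field names, and gate logic are assumptions for the example, not the actual output or API of the Application Intelligence Platform.

```python
# Hypothetical quality gate for the "Stabilize" step: fail the build if the
# count of critical violations has grown since the agreed baseline.
# The JSON report shape ({"critical": N, "high": N, ...}) is an assumption.
import json


def load_violation_counts(path):
    """Load per-severity violation counts from a JSON report file."""
    with open(path) as f:
        return json.load(f)


def check_quality_gate(baseline, current, severity="critical"):
    """Return (passed, delta). The gate passes only if violations at the
    given severity have not increased relative to the baseline."""
    delta = current.get(severity, 0) - baseline.get(severity, 0)
    return delta <= 0, delta


if __name__ == "__main__":
    # In practice these would come from load_violation_counts() on the
    # baseline and latest analysis reports; inlined here for illustration.
    baseline = {"critical": 12, "high": 40}
    current = {"critical": 14, "high": 38}
    passed, delta = check_quality_gate(baseline, current)
    print("PASS" if passed else f"FAIL: +{delta} critical violations")
```

Running this in CI after each analysis gives developers the direct feedback loop the Optimize step calls for: the build breaks the moment critical structural quality deteriorates, rather than months later in operations.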
Every day, architecturally complex violations cause crashes, glitches, and system outages that cost IT organizations millions in revenue, along with the trust and respect of their customers. Unless you want your organization to end up on a list alongside Home Depot, Target, American Airlines, and other high-profile outage victims, IT leaders can't waste any time ridding their application portfolios of needless complexity.
Erik Oltmans, an Associate Partner from EY, Netherlands, spoke at the Software Intelligence Forum on how the consulting behemoth uses Software Intelligence in its Transaction Advisory services.
Erik describes the changing landscape of M&A. Beyond the financial and commercial aspects, PE firms now place equal value on technical assessments, especially for targets with significant software assets. He goes on to detail how CAST Highlight makes these assessments possible with limited access to the target's systems, customized quality metrics, and insight into the liability implications of open source components -- all three of which are critical for M&A due diligence.