Modern software systems have become so complex, with components interacting across multiple application layers, that no single developer can hope to conceptualize how it all fits together. A National Research Council study found that as we demand higher levels of assurance, traditional testing cannot deliver the required dependability at a reasonable cost. At the intersection of these two realities lies the biggest problem facing software development today: architecturally complex violations.
Architecturally complex violations are structural flaws involving interactions among components that reside in different application layers. Although they constitute only 8% of the vulnerabilities in an application, they represent 52% of the repair effort, require 20 times more changes to fix, and are 8 times more likely to escape into testing and 6 times more likely to escape into operations.
Organizations are learning that throwing more functional testing and testers at this problem simply is not paying off. System size and complexity are growing exponentially while the pace of delivery accelerates, leading to an increase in production failures, system outages, and rework rates. Many organizations now understand that to effectively address these issues they must attack the problem at the root cause -- finding and eliminating architectural complexity.
However, finding and eliminating architectural complexity is hard for three reasons. First, the defects are structural rather than functional, so you cannot write a specific test case to find them. Second, static analysis tools at the IDE/developer level cannot detect them without the system-level context that is only available when the system is analyzed as a whole. Third, as mentioned earlier, applications have become too large and complex for any single individual or team to fully understand, so developers make assumptions about the technologies they know less well.
So how can IT combat architecturally complex violations? Organizations first need to equip themselves with tools that can evaluate an application with system-level context – like our Application Intelligence Platform – before they can begin finding and eliminating architectural complexity. Once the tools are in place, organizations can quickly establish baseline application health parameters and put processes in place to start reducing architectural complexity in just three easy steps.
3 Steps to combat architectural complexity
1. Identify -- Run an initial structural analysis across critical applications to identify structural weaknesses and architectural hotspots (components centrally located in the paths of several defective interactions). This insight informs your resource planning, focusing effort on the areas of highest impact, not highest volume.
- Key Success Factor: Prioritize output to only relevant software flaws by identifying the riskiest objects and transaction paths using:
- Propagated Risk Index (PRI) -- a measure of the riskiest artifacts or objects in the application along the health factors of robustness, performance, and security. PRI combines the intrinsic risk of a component with how heavily that object is used in transactions. It aggregates application risk in a relative manner, allowing for identification, prioritization, and ultimately remediation of the riskiest objects.
- Transaction Risk Index (TRI) -- an indicator of the riskiest transactions in the application. The TRI number reflects the cumulative risk of a transaction based on the risk of the individual objects contributing to it. TRI is calculated as a function of the rules violated, their weight/criticality, and the frequency of each violation across all objects in the transaction's path. It is a powerful metric for identifying, prioritizing, and ultimately remediating the riskiest transactions and their objects.
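To make the TRI description above concrete, here is a minimal sketch of how such an index could be computed from rule weights and violation frequencies. The actual CAST formula is not published here, so the function name and the simple weighted-sum aggregation are illustrative assumptions, not the platform's implementation.

```python
# Illustrative sketch of a Transaction Risk Index (TRI) style metric.
# Per the description: a function of the rules violated, their
# weight/criticality, and the frequency of each violation across the
# objects in the transaction's path. The weighted sum below is an
# assumption for illustration only, not CAST's proprietary formula.

def transaction_risk_index(violations):
    """violations: list of (rule_weight, occurrence_count) pairs
    collected from every object in the transaction's path."""
    return sum(weight * count for weight, count in violations)

# Example: a path whose objects violate one critical rule (weight 9,
# seen twice) and two minor rules (weights 2 and 1).
path_violations = [(9, 2), (2, 3), (1, 5)]
tri = transaction_risk_index(path_violations)  # 9*2 + 2*3 + 1*5 = 29
```

Ranking transactions by such a score is what lets teams remediate the riskiest paths first rather than working through violations in arbitrary order.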
2. Stabilize -- Baseline applications along the most important application health parameters. Ensure they don’t deteriorate, especially with respect to critical violations.
- Key Success Factor: Focus efforts on the business-relevant characteristics: stability and resilience, performance efficiency, security, and software risk. Leverage industry-standard definitions and guidance such as the OMG Consortium for IT Software Quality's Specifications for Automated Quality Characteristic Measures.
3. Optimize -- Implement a continuous measure of structural quality, with a direct feedback loop to developers. Track asset improvement and identify risk as early as possible in the SDLC.
- Key Success Factor: Rely on a measurement platform that remains consistent over time (for trending), across application portfolios, and with industry standards. The CAST Application Intelligence Platform (AIP) analyzes and measures the structural quality, complexity, and size of software applications. The insight it produces allows IT executives and their teams to measure, understand, and master the outcome of development activity -- the source code and software architecture of the application being produced -- and to diagnose the cost and risk rooted in that source code.
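The stabilize-and-optimize steps above amount to a quality gate: baseline the health parameters, then fail fast when critical violations increase. A minimal sketch of such a gate follows; the function and factor names are hypothetical and not part of any CAST API.

```python
# Hypothetical CI quality gate: compare the current count of critical
# structural violations per health factor against a stored baseline,
# and report any factor that deteriorated. Names are illustrative.

def quality_gate(baseline, current):
    """Return the health factors whose critical-violation count
    exceeds the baseline (i.e., the regressions)."""
    return [factor for factor, count in current.items()
            if count > baseline.get(factor, 0)]

baseline = {"robustness": 12, "performance": 7, "security": 3}
current  = {"robustness": 12, "performance": 9, "security": 2}

regressions = quality_gate(baseline, current)
if regressions:
    # In a real pipeline this would fail the build, closing the
    # feedback loop to developers described in step 3.
    print("FAIL: critical violations increased in:", ", ".join(regressions))
```

Running such a check on every build is one way to ensure applications "don't deteriorate" between full structural analyses.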
Every day, architecturally complex violations cause crashes, glitches, and system outages that cost IT organizations millions in revenue, along with the trust and respect of their customers. Unless they want their organizations to join Home Depot, Target, American Airlines, and other victims of high-profile outages, IT leaders can't waste any time in ridding their application portfolios of needless complexity.
Erik Oltmans, an Associate Partner from EY, Netherlands, spoke at the Software Intelligence Forum on how the consulting behemoth uses Software Intelligence in its Transaction Advisory services.
Erik describes the changing landscape of M&A. Besides the financial and commercial aspects, PE firms now place equal value on technical assessments, especially for targets with significant software assets. He goes on to detail how CAST Highlight makes these assessments possible with limited access to the target's systems, customized quality metrics, and insight into the liability implications of open source components -- all three of which are critical for M&A due diligence.