Empowering Developers with System-Level SAM Tools


The analogy between brick-and-mortar building architecture and software architecture is used quite often. Although the two disciplines are quite different, the analogy still reminds us that in software engineering everything is interdependent, with cause-effect chains that are thousands of times more sensitive than in physical construction.

It is fairly obvious that the quality of a building is a combination of the quality of the bricks, the quality of the assembly of the bricks in the wall, and the quality of the assembly of the walls (along with the electricity, plumbing, etc.). So it follows that assessing the quality of an application does require more than assessing the quality of the code solely at the program or component level.

But because the software analysis industry is still relatively new, AD delivery leaders (and sometimes even CTOs and Chief Architects) often consider software quality initiatives at the program level alone, adopting tools that perform non-contextual analysis. On the positive side, code analyzers that ignore context are cheap, good tools for code hygiene, easy to implement, and have long been part of IDEs (PMD in the Java world, for example). Quality at the program level certainly contributes to making a system easier to read and therefore easier to maintain.

However, for obvious reasons, most organizations don't derive a great deal of business value from them. Quality at the program level won't prevent improper software architecture, so the system will remain costly and complex to maintain despite the best possible code quality at the component level. And the belief that program-level checks alone might suffice to prevent efficiency, resiliency, or security issues goes against the growing evidence that program-level analysis can be misleading -- simply because in multi-tier IT systems, as in brick-and-mortar buildings, there is little correlation between the code quality at the program level and the quality of the whole system.

As a trivial yet very valid example, take the Java code below. Analyzed with a code quality tool, without context and without looking at the consequences of this code on the rest of the application, no issue will be highlighted: the tool will declare it a valid piece of code and suggest the author be granted a medal:

public void fetchGroupUserList(List&lt;LocalDate&gt; dateList) throws SQLException {

    // Get the data factory object
    IDataFactory dataFactory = DataFactory.getInstance();

    // Column "expiration_date" is not indexed
    String strQuery = "SELECT * FROM product_catalog WHERE expiration_date > ?";

    for (LocalDate oneDate : dateList) {
        PreparedStatement stmt = dataFactory.prepareStatement(strQuery);
        stmt.setDate(1, java.sql.Date.valueOf(oneDate)); // Bind the date parameter
        ResultSet myResult = stmt.executeQuery();        // Execute the query

        // [...] Do something with the result set
    }
}

But looking at the same piece of code in a broader architectural context, if there is no index defined on the 'expiration_date' column of the table 'product_catalog', the SQL statement must be highlighted, as it could result in many time-consuming table scans -- giving end users enough time to have a drink around the block while waiting for the results. This nice, tiny piece of code becomes a crappy program, and the proud, newly medaled developer is asked to start looking for his or her next gig.
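To make the cost concrete, here is a minimal in-memory analogy (an illustration only, not how any particular analyzer or database engine works internally): without an index, every row must be touched for each query in the loop, while an index, modeled here by a sorted map, visits only the qualifying rows.

```java
import java.util.List;
import java.util.TreeMap;

public class ScanVsIndex {
    // Full table scan: every row is inspected, whether it matches or not.
    static int rowsTouchedWithoutIndex(List<Integer> expirationDates, int cutoff) {
        int inspected = 0;
        for (int d : expirationDates) {
            inspected++; // the engine cannot skip non-matching rows
        }
        return inspected;
    }

    // Indexed access: a sorted structure jumps straight to the matching range.
    static int rowsTouchedWithIndex(TreeMap<Integer, Integer> index, int cutoff) {
        return index.tailMap(cutoff, false).size(); // only qualifying entries are visited
    }

    public static void main(String[] args) {
        List<Integer> table = List.of(1, 5, 9, 12, 20, 42);
        TreeMap<Integer, Integer> index = new TreeMap<>();
        for (int i = 0; i < table.size(); i++) index.put(table.get(i), i);

        System.out.println(rowsTouchedWithoutIndex(table, 10)); // 6: the whole table
        System.out.println(rowsTouchedWithIndex(index, 10));    // 3: only 12, 20, 42
    }
}
```

Multiply that whole-table cost by the number of dates in the loop and by millions of rows, and the "valid" code above becomes the slowest transaction in the system.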

This type of problem remains even within a complex system written in a single language. If a class or a function is used incorrectly elsewhere in the system, the consequences can be dramatic. A very basic example would be a class requiring a good portion of memory; on its own, that would not be flagged as an issue. But what if this class is then instantiated 10,000 times or more in the code? It could simply result in an interruption of the application from lack of memory. If the code quality tool only looks at one source file at a time, how could it know that? It cannot identify relationships across classes, follow the path of a variable through a program, or infer the actual type of an object at a given point in the execution. Like the syntax checker in your favorite text editor, it doesn't "understand" anything about the overall logic and semantics of the whole program.
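A back-of-the-envelope sketch makes the point (the class and its 2 MB buffer are hypothetical): each instance looks cheap in isolation, and only whole-system knowledge of how many instances are created reveals the problem.

```java
public class FootprintEstimate {
    // Hypothetical class footprint: each instance carries a 2 MB buffer.
    static final long BYTES_PER_INSTANCE = 2_000_000L;

    // Estimated heap demand, in megabytes, for n live instances.
    static long estimatedMegabytes(int n) {
        return n * BYTES_PER_INSTANCE / 1_000_000L;
    }

    public static void main(String[] args) {
        // One instance is harmless: 2 MB.
        System.out.println(estimatedMegabytes(1));
        // 10,000 instances demand ~20 GB -- far beyond a typical JVM heap.
        System.out.println(estimatedMegabytes(10_000));
    }
}
```

A per-file analyzer sees only the class definition; the 10,000 instantiations live in other files, so the multiplication never happens in its view.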

Moreover, when IT systems are big and made of heterogeneous layers written by different teams, each focused on the layers related to its technology expertise (who could pretend to master 5+ million lines of Java, C++, and SQL dealing with a giant data model?), the issue described in my small example above is very frequent.

It is now clear that the analysis at the program level done by code quality tools covers only a fraction of the work needed to truly assess such complex systems. When it comes to analyzing an entire GUI-LOGIC-DATA stack, these tools become totally ... silent. A holistic approach is required to understand software frameworks, middleware, database structure, transactions from front end to data, and more.

System-level analysis is a bit more complicated to implement, as it requires that the entire system's source code and files be ready for analysis. This can be done on a daily, weekly, or monthly basis at the build phase or at the end of a release -- and the value outcome can be considerable: the trickiest software defects, which cost IT projects so much and can put the business at real risk, are usually located in the architecture and in inter-layer dialogues.

Additionally, by "making the invisible visible" for management, such analysis brings enormous additional value: its analytics help IT execs and managers have fact-based discussions with their teams or suppliers, driving improved behavior while preventing IT project risks and derailment.

Ultimately, the biggest source of value is in the end product delivered: robust, high-performing, and secure apps carrying low technical debt, for better business performance at minimum TCO. There is a growing consensus on this, fully endorsed by experts, analysts, thought leaders, and hundreds of large-scale, real-life experiences.

Holistic, system-level analysis of structural quality seems an obvious choice, as it corresponds to what development teams really need. However, because such Software Analysis and Measurement (SAM) tools are not part of their own toolset, they are often seen as a top-down control system run by "Management." And this does not fit very well with the sacred status of the software engineer, who would like to keep his or her independence, not to mention freedom. The recently adopted agile approach has helped reinforce this sentiment. But what is true within small teams building new software applications has not been proven applicable at the enterprise level.

At this level, hundreds and sometimes thousands of developers contribute to develop and maintain interconnected enterprise information systems while still sitting at their desks in different time zones scattered across various continents. The visibility on each of these systems provided by SAM tools allows developers to step outside their stuffy cubicles once in a while to get some fresh air and a bird’s eye view of the landscape. Such a tool allows them to truly understand the impact of local programs across the whole architecture; gain knowledge into what actions might be taken to work around problems; fix x, y, and z architectural issues; suggest structural quality improvements to management; and enjoy an on-going training platform in software engineering. It’s an opportunity to connect the enterprise architects with the teams practicing agile or any other approach, in a more constructive way than we’ve done in the past.

Looking at measurement systems in a different way (inside out) empowers the developers to maintain continuous attention on technical excellence, which is one of the key pillars of the broadly acclaimed Lean Software Development movement.

We’ve written a technical paper describing application-level causes of IT system problems that are difficult to detect with the naked eye. In the 18 months since that paper was written, these issues seem to have been surfacing even more frequently.


Olivier Bonsignour EVP, Product Development
Olivier Bonsignour is Executive Vice President of Product Development at CAST, including the company's R&D efforts. He has more than 20 years of experience implementing software engineering best practices for software parsing, software quality, software security and technical debt. Prior to joining CAST, Olivier was the CIO for DGA, the advanced research division of the French Ministry of Defense.