Most organizations have started to realize that poor code quality is a root cause of many of their issues, whether it’s incident levels or time to value. The growing complexity of development environments in IT -- the outsourcing, the required velocity, the introduction of Agile -- has raised the issue of code quality, sometimes to an executive level.
Business applications have always been complex. You can go back to the 70s, even the 60s, and hear about systems that have millions of lines of code. But here’s the rub: In those days it was millions of lines of COBOL or some other language. But it was all one language. All one system. All one single application in a nice, neat, tidy package.
Today, millions of lines of code means applications built with several different programming languages -- some compiled, some object oriented, some interpreted, some SOA-enabled, some dynamic scripting languages, and some markup languages (not to mention database languages) -- with a complex data model controlled by hundreds, if not thousands, of business rules. It’s past time for developers to rethink how they test the quality of their applications.
Firstly, modern business applications are too complex for any single individual or team to understand the entire application and the technologies it interacts with. Even if developers master two or three programming languages and the tiers they reside on, they can’t hold a complete mental model of how their software will interact with other technologies. To compensate, they make assumptions that are difficult to verify and that, when wrong, can lead to damaging incidents (glitches, crashes, etc.).
Secondly, recent research on large software systems contradicts the old model that a defect can be traced to a single source. Of the faults that led to failures, 60 percent required changes to two or more files (the lowest-level system elements), and 30 percent involved three or more files. One third involved multiple system components, and 10 to 20 percent crossed major segments of the architecture.
Is there any application quality in your code quality?
So, when we examine code quality, we have to think of it in two stages. The first is basic code quality, which measures individual components or small collections of components written in a single language or occupying a single tier of an application. The second is application quality, which analyzes the software across all of the application’s languages, tiers, and technologies to measure how well all of the components come together to deliver operational performance and long-term maintainability.
We would be the first to shout that code quality is important, but high quality code by itself will not ensure a high quality application. Code quality can be checked component by component, almost as the code is written; application quality problems, however, are difficult to detect until components have been integrated with components from other tiers in the build process. This means they’re often detected at the last stage of integration testing, causing delays, frustration, and potentially, business losses.
Here are some typical application quality problems that can occur even with high quality code:
- Bypassing the architecture. Components in one tier of a multitier application are typically designed to access components in another tier only through an intermediate “traffic management” component. While the code might be sound, the developer might not be following this architectural requirement and might be accessing components in a tier of the application by bypassing the traffic management component. Great code can do this, and yet, that’s poor application quality.
- Failure to control processing volumes. Applications can behave erratically when they fail to control the amount of data or processing they allow -- caused by a failure to incorporate controls in each of several different architectural tiers. Again, the code that’s handling processing volumes might be beautiful. In fact, it might operate perfectly. And yet, from the application’s perspective, it might be allowing the processing volumes to run amok.
- Application resource imbalances. Elegant, kudos-worthy code can take database resources in a connection pool and mismatch them with the number of request threads from an application. As a result, resource contention will block the threads until a resource becomes available, tying up CPU resources with the waiting threads and slowing application response times to a crawl.
- Security weaknesses. We’ve all seen applications that operate flawlessly from the perspective of meeting the business requirements, and yet, end up being exploited because of a security weakness that went unchecked. The code quality was perfect, and yet the application quality was at risk.
- Lack of defensive mechanisms. However well the code performs, developers cannot anticipate every situation the executing code will face. They must implement defensive code that sustains the application’s performance in the face of stresses or failures affecting other tiers. Applications that lack these defensive structures are more fragile, because nothing protects them against problems in their interactions with other tiers.
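The last point lends itself to a concrete sketch. Below is a minimal, hypothetical Python example -- the names `call_with_defenses` and `TierUnavailable` are my own, not from any particular framework -- of defensive code at a tier boundary: a flaky cross-tier call is retried within a small time budget, and a fallback value is returned instead of letting the downstream failure cascade upward.

```python
# Hypothetical sketch of a defensive wrapper around a cross-tier call.
# The names here are illustrative; any real application would use its
# own error types and fallback strategy.
import time

class TierUnavailable(Exception):
    """Raised when a downstream tier (database, service, etc.) fails."""
    pass

def call_with_defenses(fn, retries=2, time_budget=1.0, fallback=None):
    """Call fn; retry briefly on failure, then fall back instead of crashing."""
    deadline = time.monotonic() + time_budget
    attempt = 0
    while True:
        try:
            return fn()
        except TierUnavailable:
            attempt += 1
            if attempt > retries or time.monotonic() >= deadline:
                return fallback  # degrade gracefully, don't propagate
            time.sleep(0.05)  # brief backoff before retrying

# Usage: a downstream call that fails once, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 2:
        raise TierUnavailable()
    return "ok"

result = call_with_defenses(flaky)  # succeeds on the retry
```

A real application would pair this with timeouts on the call itself, logging, and perhaps a circuit breaker, but even this much keeps a failure in one tier from becoming a failure of the whole application.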
Each of these problems can result in unpredictable application performance, business disruption, and data corruption, and can make it difficult to alter the application to respond to pressing business needs. An evaluation of application quality, rather than code quality alone, can detect these problems.
So how do I assess application quality?
There are many tools available that measure code quality. They’ve been available for many years and are increasingly becoming standard components in developers’ tool sets. Application quality tools, however, have only been introduced in recent years by various software vendors and consultancies. Indeed, organizations need the help of application quality diagnostic services because, given the complexity of modern development environments, this is not something that can be done manually.
The good news is, when organizations do start analyzing their IT systems for application quality, they will gain a variety of benefits:
- Visibility across application(s). Better manage the portfolio of applications and projects with the metrics from consistent and continuous analysis of all core business applications.
- Analysis of the internal quality of an application. Get continuous status on application quality and risk in the integrated software system by reviewing it for architectural and structural problems that hide in the interactions between tiers.
- Team enablement. Improve developer skills, the team’s breadth of application knowledge, and the efficiency of team performance by analyzing application quality.
Fundamentally, all the application weaknesses I mentioned earlier -- bypassing the architecture, failure to control processing volumes, application resource imbalances, security weaknesses, and the lack of defensive mechanisms -- can be found and addressed, along with many more.
Since even the most talented developers can no longer know all the nuances of the different languages, technologies, and tiers in an application, their capabilities need to be augmented by automated tools that evaluate the entire application. Without such tools, defects hidden in the interactions between application tiers will put the business at risk of outages, degraded service, security breaches, and corrupted data.