Most organizations have started to realize that poor code quality is a root cause of many of their issues, whether it’s incident levels or time to value. The growing complexity of IT development environments -- outsourcing, the required velocity, the introduction of Agile -- has raised the issue of code quality, sometimes to the executive level.
Business applications have always been complex. You can go back to the ’70s, even the ’60s, and hear about systems with millions of lines of code. But here’s the rub: in those days it was millions of lines of COBOL or some other single language. All one language. All one system. All one application in a nice, neat, tidy package.
We’ve made it a point on our blog to highlight that software glitches in important IT systems -- like those at NatWest and Google Drive -- can no longer be written off as “the cost of doing business.” Interestingly, we’re starting to see another concerning trend: more and more crashes blamed on faulty hardware or network problems, while the software itself is ignored. It’s telling that incident rates can differ by a factor of ten or more between applications with similar functional characteristics. Is it possible that the robustness of the software inside those applications has something to do with apparent hardware failures? I can picture a frustrated data center operator reading this and nodding vigorously.
The perimeter surrounding enterprise applications has expanded exponentially since the birth of mobile and cloud, and IT security professionals are looking in all the wrong places for a fix. Traditionally, organizations secured their data behind a walled-off perimeter -- like the walls of a medieval castle -- with multiple layers that helped mitigate the risk of data compromise or exposure. The advent of mobile has altered that landscape dramatically, essentially opening the castle’s front door and allowing that data to escape into unknown territory -- the mobile device.
While working in a CISQ technical work group to propose the “best” quality model -- one that would efficiently provide visibility into application quality (chiefly to ensure reliability, performance, and security) -- we discussed two approaches to quantifying exposure. The first is a remediation-cost approach, which measures the distance to the required internal quality level. The other is a risk-level approach, which estimates the impact internal quality issues can have on the business.
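To make the contrast concrete, here is a minimal sketch in Python -- with invented violation data, effort estimates, and weights, not the CISQ model itself -- of how the two approaches could score the same set of findings. Remediation cost sums the estimated effort to close each violation; risk level weights each violation’s severity by the business impact of the component it sits in.

```python
from dataclasses import dataclass

@dataclass
class Violation:
    rule: str               # quality rule that was broken
    fix_hours: float        # estimated effort to remediate
    severity: float         # technical severity, 0.0-1.0
    business_impact: float  # criticality of the affected component, 0.0-1.0

def remediation_cost(violations):
    """Distance to the required internal quality level,
    expressed as total estimated effort to fix."""
    return sum(v.fix_hours for v in violations)

def risk_level(violations):
    """Estimated business exposure: technical severity weighted
    by the business impact of the affected component."""
    return sum(v.severity * v.business_impact for v in violations)

# Hypothetical findings from a static analysis of one application.
findings = [
    Violation("sql-injection",  fix_hours=2.0, severity=0.9, business_impact=1.0),
    Violation("unbounded-loop", fix_hours=8.0, severity=0.6, business_impact=0.3),
    Violation("dead-code",      fix_hours=0.5, severity=0.1, business_impact=0.1),
]

print(f"Remediation cost: {remediation_cost(findings):.1f} hours")
print(f"Risk level:       {risk_level(findings):.2f}")
```

Note how the two views can diverge: the injection flaw is cheap to fix but dominates the risk score, while the loop dominates the cost score. That is precisely why the same application can look acceptable under one approach and alarming under the other.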
When we start talking about cloud, several common questions come to mind:
We all know testing is an essential step in the application development process. But sometimes testing can feel like your team is just throwing bricks at a wall and waiting to see when the wall breaks. Wouldn’t it make more sense to measure the integrity of the wall itself before chucking things at it?