I recently learned that my first boss in the tech industry, Clint Battersby, passed away a couple of months ago. Clint was a driven, highly motivated technologist. He was a creative individual with a number of patents to his name and several tech startups he founded.
It was in 1994 that Clint hired me to write marketing materials for his latest venture at Measurement Techniques. Until then, the company had produced some of the world’s most precise measuring devices; so precise that one model was used by several automakers to calibrate the timing of air bags to the millionth of a second. This new venture, however, was Clint’s first foray into software, and the product around which he planned the conversion was a network-based, non-volatile LAN cache designed to improve the performance of distributed clients accessing data over enterprise networks – what we might today call WAN optimization.
As this was my first foray into the tech industry, Clint took the time to teach me a great many things. One of the first things Clint taught me was why some software works and some doesn’t. To this day, I still have not found a better explanation than the one he offered on my first day at MTI. Quoting what he called an “age old tenet of the software industry,” he said, very simply, “Garbage in; garbage out.”
Could anything ever be more true? No matter what you say about the ability of technology to improve the work we do, application software quality still comes down to the work an individual human being puts into it. As Nari Kannan quite accurately says in his blog over at eBizQ, the first root cause of bad data quality is “bad software implementations.”
Nobody sets out to write a bad application…at least I would hope not. I honestly believe that every individual writing code for an application – whether independently, for a software vendor or for an IT department – truly wants to produce at least a robust piece of software, if not the next great application.
There are far more ways to end up with poor-quality application software than there are to develop high-quality software.
Developers often face time constraints, a “rush to market,” corners cut to meet deadlines and inferior tools to work with, all of which leads to sub-par applications. Sometimes the blame lies squarely with the person or persons writing the code. Developers may be too inexperienced, too rushed to write good code, or they may simply make a mistake. Some write multiple lines of code when one will do, while others “code in circles,” writing, counteracting and re-establishing functions, all of which makes the application software cumbersome and hard to maintain.
Then there are times when the error is not of their making, as when they are asked to build on top of existing software that carries either a latent error or code that does not meet current standards for structural quality.
The source of the “garbage” does not matter, though; all of these possible problems can and inevitably will lead to application failure.
When you consider all the things that could possibly go wrong when developing application software, you might think that if anything ever comes out right, it’s a small miracle. But it’s no miracle. It all comes down to performing the proper assessment of application software right from the start of the build so that errors can be seen before they fester and grow into a problem.
While manually assessing software can be very much like trying to find a needle in a haystack, automating the process of analysis and measurement identifies potential issues far more efficiently. Automated analysis and measurement can check 400,000 lines of code (the size of the average software application) against 1,000 different standards in a tiny fraction of the time it would take the human eye to do the same. In doing so, it can far more efficiently locate the flawed lines of code (roughly 100 in the average 400K-line application, according to studies) that lead to outages and security breaches. Performing this automated static analysis throughout the build process means companies can address issues before they become serious, gain an overall view of an application’s health and identify areas of risk that could jeopardize the application’s future.
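To make the idea of automated static analysis concrete, here is a minimal sketch in Python using the standard-library `ast` module. The two rules it checks (a bare `except:` clause and a mutable default argument) are my own illustrative examples, not rules drawn from any particular commercial tool; real analyzers apply hundreds of such checks the same way, by parsing the code once and flagging patterns known to cause defects.

```python
import ast

def analyze(source: str) -> list[str]:
    """Scan Python source and return a list of findings with line numbers."""
    findings = []
    tree = ast.parse(source)
    for node in ast.walk(tree):
        # Rule 1: a bare `except:` swallows every error, including real bugs.
        if isinstance(node, ast.ExceptHandler) and node.type is None:
            findings.append(f"line {node.lineno}: bare except hides failures")
        # Rule 2: mutable default arguments persist across calls.
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            for default in node.args.defaults:
                if isinstance(default, (ast.List, ast.Dict, ast.Set)):
                    findings.append(
                        f"line {default.lineno}: mutable default argument")
    return findings

# A deliberately flawed sample to scan.
sample = """
def append(item, bucket=[]):
    try:
        bucket.append(item)
    except:
        pass
    return bucket
"""

for finding in analyze(sample):
    print(finding)
```

Because the analyzer walks a parsed syntax tree rather than reading text line by line, adding another rule is just another `isinstance` check in the loop, which is what lets real tools scale to hundreds of standards over hundreds of thousands of lines.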
As Clint would have reminded me, unlike antiquing, where “one man’s garbage is another man’s treasure,” in the business of technology one developer’s garbage will cost a company its treasure.