Developers: They're Only Human

by Jonathan Bloom

Human beings are an odd animal. We’re the only animal that experiences embarrassment over mistakes; some say we’re the only animal that even realizes we make them. We also run a full gamut of emotions when we do, from frustration and self-deprecation to humor and acceptance.

Mistakes are so prevalent among humans that they’ve become ingrained in our culture. In music, “Ol’ Blue Eyes” Frank Sinatra sings, “Regrets, I’ve had a few, but then again, too few to mention,” while in more modern times we’ve heard Billy Joel croon, “You’re only human, you’re allowed to make your share of mistakes.”

If mistakes are a sign of humanity, then software developers most definitely qualify as human beings (not that I ever doubted they would). From the novice programmer to the senior engineer, from the best to the worst, all developers are apt to err now and then. The question is, when they do, will they catch the error immediately and go back and fix it, or will it remain, a thorn in the side of the software they are working on?

To Err is Human

Nearly every programmer will, at one point or another, wind up treating an error in each of the ways mentioned above. Some errors are easier to detect than others; they are caught right away, or eventually spotted with the naked eye and fixed on review. The real problem facing software development is the error that goes undetected because it is so out of the ordinary.

In an article posted on CodeGuru, Andrey Karpov discusses how to make “fewer errors at the code writing stage.” His article offers five quick tips on how to avoid errors, but in a sixth point he admits that not every coding error can be avoided up front:

“For many errors, there are no recommendations on how to avoid them. They are most often misprints both novices and skillful programmers make... However, many of these errors can be detected at the stage of code writing already, first of all with the help of the compiler and then with the help of static code analyzers' reports after night runs.”

Once again, static analysis, rather than testing, is credited as the key to detecting errors that are indiscernible to the naked eye.
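
To make the point concrete, below is a small, hypothetical example (not taken from Karpov’s article) of the kind of misprint he describes: a copy-paste slip that is perfectly legal code, easy for a reviewer to skim past, and exactly the sort of thing a compiler warning or a static analyzer’s self-comparison check is built to catch.

    class Rectangle:
        def __init__(self, width: float, height: float) -> None:
            self.width = width
            self.height = height

    def same_size(a: Rectangle, b: Rectangle) -> bool:
        # Copy-paste misprint: the second comparison checks a.height against
        # a.height (always true) instead of b.height. The function still runs
        # and may even pass casual tests, but an analyzer that looks for
        # comparisons of an expression with itself flags it immediately.
        return a.width == b.width and a.height == a.height

The bug never raises an exception and produces correct answers for many inputs, which is precisely why it tends to survive until a tool, rather than an eyeball, goes looking for it.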

To Forgive, Divine

As Capers Jones discusses in his 2009 book, Applied Software Measurement, “In terms of defect removal, testing alone has never been sufficient to ensure high quality levels.” He adds the following figures to bolster his stance:

  • Testing by itself is time-consuming and not very efficient. Most forms of testing find only about 35% of the bugs that are present.
  • Static analysis prior to testing is very quick and about 85% efficient. As a result, when testing starts there are so few bugs present that testing schedules drop by perhaps 50%. Static analysis will also find some structural defects not usually found by testing.
  • Formal inspections of requirements and design are beneficial too. They produce better documents for test case creation and, as a result, improve testing efficiency by at least 5% per test stage.
  • A synergistic combination of inspections, static analysis and formal testing can top 96% in defect removal efficiency on average, and 99% in a few cases (a rough model of how stages combine appears after this list). Better still, the overall schedule will be shorter than with testing alone.
  • The average for a combination of six kinds of testing – unit test, function test, regression test, performance test, system test and acceptance test – without preliminary static analysis is only about 85%.
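
To see how individual stages stack up to figures like the 96% above, a rough back-of-the-envelope model helps: if each stage removes its stated share of whatever defects are still present when it runs, the remaining fractions simply multiply. The sketch below uses that simplifying assumption with illustrative stand-in percentages; it is not Jones’s own methodology, just a way to check that the combined numbers are plausible.

    def combined_removal_efficiency(stage_efficiencies):
        """Fraction of original defects removed, assuming each stage catches
        its stated share of the defects still present when it runs."""
        remaining = 1.0
        for efficiency in stage_efficiencies:
            remaining *= (1.0 - efficiency)
        return 1.0 - remaining

    # Illustrative stand-in figures, not numbers taken from Applied Software Measurement.
    testing_only = combined_removal_efficiency([0.85])
    with_upstream = combined_removal_efficiency([0.50, 0.85, 0.85])
    print(f"Testing chain only: {testing_only:.2%}")                        # 85.00%
    print(f"Inspections + static analysis + testing: {with_upstream:.2%}")  # ~98.9%

Even with modest per-stage figures, putting inspections and static analysis in front of testing lands in the same territory Jones reports, while testing alone stalls in the mid-80s.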

Quicker, more complete and more efficient code analysis? It certainly sounds like employing static analysis is the way to go. Development organizations should be more proactive about software errors and address them through static analysis, and the best, most efficient way to do that is to employ a platform of automated analysis and measurement during the build phase of each project rather than relying on manual review by IT staff.
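
As a minimal sketch of what “automated analysis during the build” can look like (an assumption for illustration, not a description of any particular platform), the snippet below runs a couple of command-line analyzers as a build step and fails the build if any of them reports findings. A full software intelligence platform does far more, measuring and trending structural quality across releases, but even this small gate removes the dependence on someone remembering to review the code by hand.

    import subprocess
    import sys

    # Hypothetical analyzer commands; substitute whatever tools your build actually uses.
    ANALYZERS = [
        ["pylint", "src/"],
        ["mypy", "src/"],
    ]

    def run_quality_gate() -> bool:
        """Run each analyzer and report whether all of them came back clean."""
        clean = True
        for command in ANALYZERS:
            print("Running:", " ".join(command))
            result = subprocess.run(command)
            if result.returncode != 0:  # these analyzers exit non-zero when they find issues
                clean = False
        return clean

    if __name__ == "__main__":
        # Fail the build (non-zero exit) if any analyzer flagged a problem.
        sys.exit(0 if run_quality_gate() else 1)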

After all, they’re only human!

Filed in: Software Analysis
Jonathan Bloom, Technology Writer & Consultant
Jonathan Bloom has been a technology writer and consultant for over 20 years. During his career, Jon has written thousands of journal and magazine articles, blogs and other materials addressing various topics within the IT sector, including software development, enterprise software, mobile, database, security, BI, SaaS/cloud, Health Care IT and Sustainable Technology.