Would you be so nice as to not tell me the truth?

by Philippe-Emmanuel Douziech

I recently found myself in yet another endless discussion about how bug fixes and added capabilities impact the results of a Software Analysis and Measurement (SAM) assessment.

My interlocutor's first reaction was that it must be the computing configuration (i.e., the way quality findings are turned into an assessment score, status, etc.) that had changed. In his view, fixing bugs or adding capabilities should not have that kind of impact on assessment results; therefore, keeping the computing configuration stable keeps the results stable.

Then, after I explained that finding new or more accurate dependencies -- thanks to a better understanding of complex behaviors, for instance -- would indeed impact the SAM assessment results, my interlocutor reluctantly accepted that it could have a tiny impact, but by no means a dramatic one. His main argument was this: In real life, one would not lose a certification because of additional knowledge. And this is where I disagree with most people when dealing with risk.
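To make the mechanism concrete, here is a toy sketch of a computing configuration. Everything in it -- the rule names, severities, weights, and the 0-100 scale -- is an illustrative assumption, not CAST's actual scoring model. The point is that the configuration never changes; only the set of findings the analyzer can see does.

```python
# Hypothetical computing configuration: weights and scale are illustrative
# assumptions, not any vendor's actual model.
WEIGHTS = {"low": 1, "medium": 5, "critical": 20}  # penalty points per finding

def assessment_score(findings, max_score=100):
    """Turn a list of (rule, severity) findings into a 0-100 score."""
    penalty = sum(WEIGHTS[severity] for _, severity in findings)
    return max(max_score - penalty, 0)

# What the tool saw before the analysis improved.
baseline = [("unreachable-code", "low"), ("long-method", "medium")]
print(assessment_score(baseline))   # 94

# A more accurate dependency analysis now proves a CWE vulnerability that
# was always in the code but previously invisible to the tool.
improved = baseline + [("CWE-89-sql-injection", "critical")]
print(assessment_score(improved))   # 74
```

The score drops from 94 to 74 even though the weights and thresholds are untouched: the additional knowledge, not the configuration, moved the result.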

For example, when assessing the safety hazard of a plant:

  1. Would the knowledge that a given construction material is a carcinogen not change the assessment result?
  2. Couldn’t this cause a small or dramatic effect, depending on the amount of hazardous material found in the audited plant?
  3. And wouldn’t the results change in an unpredictable way as, up to this point, no one cared about measuring the amount of the hazardous material?

At this point, my interlocutor started to become evasive because he still could not accept such changes in the SAM world.

  1. What if I know that you have a proven CWE vulnerability in your code?
  2. Should I keep silent, as you would not accept a dramatic impact on your assessment?
  3. Should I minimize the risk, as you would only accept a tiny impact on the assessment outcome?

That is basically what 99 percent of people ask for (I should say 100 percent, but I would rather leave room for the few who remain rational in the digital world of IT).

Is a dramatic change disturbing? Yes, of course. But isn't it also disturbing in the real world? Knowing what it will cost you to remove asbestos from the 56 floors of the Montparnasse Tower must be disturbing. I read it could cost up to 800,000 EUR per floor.

But that doesn’t change the fact that asbestos is now known to be a health hazard. I understand that some people -- most likely the ones signing the checks -- would be willing to say that the tower is as safe a place to work in as it was before the world knew asbestos was a health hazard and before the asbestos level was measured in the tower. But that is not a reason to hide the truth.

So the question now becomes: How do we handle the change?

To answer this question, we can look to the non-IT world (let me call it "the real world" from now on).

I also happened to work on a roller-bearing assembly line. Whenever a cutting tooth on a CNC cutter needed to be changed, not a single person in the plant would assume you could fire up the cutter right away -- not before a proper re-calibration of the cutter had been done.

As for just-in-time strategies, productivity measurement in industrial processes, and basic housekeeping principles, the IT world seems to consider itself so different -- or even superior -- that the real world's principles would not even apply.

  1. How many people, even in the workplace, let their computers fill up with so many garbage files and programs that they end up buying a brand-new computer? It is as if they hoarded junk in their home or office, then moved to a new one when the first was full. (I know it does happen, but it usually ends up on reality TV shows.)
  2. How many IT professionals think that productivity measurement is only about the volume of code produced, and completely disregard the quality of the production? The industrial world knows for a fact that volume without quality is not the path to a competitive edge.
  3. How much effort does it take to convince IT professionals that just-in-time strategies and event-driven architecture can yield the same responsiveness to business requirements and the same resource-usage efficiency in delivering IT outcomes as they did on assembly lines? In fairness, it took many decades to convince the industrial world of these benefits. The pity is that IT does not have the excuse of being a pioneer in this domain; it has a huge amount of knowledge and experience to leverage. Yet it seldom does.

SAM is not the only measuring activity in the world. However, its practitioners need to reach the same level of maturity as their real-world counterparts.

Filed in: Software Quality
Philippe-Emmanuel Douziech Principal Research Scientist at CAST Research Lab