While former Secretary of Defense Donald Rumsfeld never spoke or wrote about software (as far as I know), his quip about unknown unknowns during the early months of the Iraq war is well known.
No matter what you think of Rumsfeld, his classification applies nicely to software and teaches us a lesson or two about building good software.
Some things you can test for right away. Some things you can anticipate and set aside to test for later. But the unknown unknowns -- the stuff in the top-right quadrant, in red -- are impossible to test for and not easy to plan for either. How an application and its environment will change is deeply uncertain.
How do you handle this uncertainty?
By starting with static analysis, but not stopping there. You have to go beyond static analysis in five ways:
- Analyze and measure the application as a whole, not just its component parts in isolation. This means going wide on technology coverage -- not just a plethora of languages, but frameworks and databases too. It also means putting your measurements in the context of the whole application, not just pieces of it.
- Generate a detailed architectural view that can be readily updated. This gives you the visibility to see what's changing.
- Make sophisticated checks of patterns and anti-patterns in software engineering to catch design and bad-fix problems that are otherwise impossible to find and eradicate.
- Provide actionable metrics that give IT teams a clear sense of what to change (and in what sequence) to improve quality.
- Automate, automate, automate! If you do 1 through 4 above, you are automating design and code reviews -- widely regarded as the most effective insurance against unknown unknowns.
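To make the third and fifth points concrete, here is a minimal sketch of an automated anti-pattern check. It is an illustrative toy, not any particular vendor's tool: it walks a Python syntax tree and flags bare `except:` clauses, a classic anti-pattern that silently swallows errors -- exactly the kind of bad fix that turns a known unknown into an unknown unknown. The `SAMPLE` snippet and the `find_bare_excepts` helper are hypothetical names for this sketch.

```python
import ast

# Hypothetical sample under review: the bare `except:` hides every
# failure mode, including ones nobody anticipated.
SAMPLE = """
def load(path):
    try:
        return open(path).read()
    except:
        return None
"""

def find_bare_excepts(source):
    """Return (line, message) pairs for every bare except handler."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        # A bare `except:` has no exception type attached to the handler.
        if isinstance(node, ast.ExceptHandler) and node.type is None:
            findings.append((node.lineno, "bare except swallows all errors"))
    return findings

for line, msg in find_bare_excepts(SAMPLE):
    print(f"line {line}: {msg}")
```

A real system would run hundreds of such checks across every language, framework, and database in the application, then fold the findings into the prioritized metrics described above -- but the principle is the same: encode the pattern once, then let the machine review every build.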