Back in August 2008, the FAA reported a significant software configuration problem when a glitch delayed dozens of national flights. The outage directly affected flight and ground personnel: cargo had to be loaded manually, and flight plan information entered into the system by hand by air traffic controllers. According to an FAA spokesperson, the source of the malfunction was a "packet switch" that "failed due to a database mismatch".
This was the second glitch of its kind; a third was to follow...
On November 19, 2009, Bloomberg reported that FAA systems were down for four hours due to a "software configuration problem" within the Federal Telecommunications Infrastructure.
These sorts of issues are commonly perceived as network problems, but in reality they stem from software that is overly complex and poorly engineered. The root cause is typically data access: a "supply-demand" mismatch in how components of the application use the database. Sudden spikes of activity push hardware over capacity because the application forces the network or the CPU to thrash.
These problems often occur because software quality depends on context; CAST's Chief Scientist Bill Curtis and Olivier Bonsignour discuss how context matters in this post.
Software quality issues are extremely serious in the aviation industry, especially given the immediate, widespread domino effect they have on personnel of all kinds and, more importantly, on consumers. Objectively assessing the quality of each moving part of the software system, and its contribution to the whole, improves how application systems load the hardware, and thus helps the system run smoothly.