From the earthquake and tsunami in Japan back in March to the tornadoes that have ripped through the Midwestern United States over the last two months, we have been witness to the violence and destruction Mother Nature can inflict without warning.
As we begin to move on from the shock of the destruction wrought by these natural disasters, we turn our attention to recovery, both in human terms and in business terms.
In the weeks following the Japanese earthquake in March, a number of top technology publications focused on disaster recovery and what that particular disaster meant to the IT community. eWEEK's Chris Preimesberger called the Japanese earthquake and tsunami a "Wakeup Call for IT Managers," while long-time tech media veteran Wayne Rash discussed "Truly Preparing for the Worst" over on CTO Edge. Both articles sent a clear message: disaster recovery plans not only need to be in place, they also need to be regularly assessed and updated.
Even before this year's spate of natural disasters, James Damoulakis, CTO of GlassHouse Technologies, an independent IT infrastructure services provider, highlighted the importance of periodically assessing a company's disaster recovery system, given how many complex systems must work together for it to function correctly. In a December 2009 article in SearchDisasterRecovery, Damoulakis wrote:
One of the primary challenges with disaster recovery is dealing with its inherent interdependencies. The coordination required between the various functional areas of IT requires an end-to-end perspective -- disaster recovery is only as effective as its weakest link.
That any system is only as strong as its weakest link is not, in and of itself, a revelation. However, when a business is damaged or destroyed by an act of nature, it doesn’t have time to think about weakest links. It needs the disaster recovery system to work right the first time to get the company’s systems back online and data flowing. The last thing a company that is already down needs to discover is a structural flaw within one of the applications controlling the recovery process.
Finding out there's a structural flaw somewhere in the disaster recovery process adds insult to injury. Like any application software, disaster recovery software needs its issues addressed before the system is called upon, not after the continuity and survival of a business depends on recovering files from a remote server. To ensure sound structural quality out of the gate, a company must periodically assess the application software embedded in its disaster recovery system using a platform of automated analysis and measurement.
Because automated analysis and measurement digs into the structure of an application, it not only surfaces the code issues that often accompany acquired software, layers built on top of old software, rapid development, and developer inexperience; it also grants significant visibility into the work being done by a company's own IT staff as it customizes software and links applications together.
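To make the idea concrete, here is a minimal sketch of one kind of structural check such a platform might automate: scanning source code for bare `except:` handlers, which can silently swallow failures during a recovery run. This is purely illustrative; the function and sample names are invented for the example, and commercial analysis platforms inspect far more than this single pattern.

```python
import ast

def find_bare_excepts(source: str, filename: str = "<string>"):
    """Return (filename, line) pairs for every bare 'except:' handler.

    A bare except is a classic structural weak link: it catches and
    hides every error, including the ones a recovery process must
    surface to its operators.
    """
    findings = []
    tree = ast.parse(source, filename)
    for node in ast.walk(tree):
        # ExceptHandler.type is None only for a bare "except:" clause.
        if isinstance(node, ast.ExceptHandler) and node.type is None:
            findings.append((filename, node.lineno))
    return findings

# Hypothetical snippet standing in for recovery-process code.
sample = """
def restore_files(server):
    try:
        server.pull_snapshot()
    except:
        pass  # failure silently ignored -- a structural weak link
"""

for fname, line in find_bare_excepts(sample, "recovery.py"):
    print(f"{fname}:{line}: bare except swallows errors")
```

Run against an entire codebase on a schedule rather than by hand, checks like this are how automated analysis flags structural risk long before the recovery system is ever exercised in anger.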
Regardless of whether the weak point lies in the application or the interface, the software or the work of the IT engineer, automated analysis and measurement exposes potential problems before they become failures. If areas of structural risk are identified before the system is needed, they can be addressed so they won't bring down the very system designed to rescue a company whose systems are already down. Failing to assess the structural quality of the disaster recovery system merely invites more disaster where disaster has already struck.