Everyone knows that protecting computer data is important, but how many can be completely confident that their backup systems will work when needed?
A study by the accounting firm McGladrey & Pullen estimated that one out of every 500 data centers will have a severe disaster this year, that 43 percent of the companies that experience a disaster never reopen, and that 29 percent of them close within two years. In his book "Disaster Recovery Planning: Managing Risk & Catastrophe in Information Systems", Jon Toigo wrote that half the companies that experience a computer outage lasting more than 10 days go out of business within five years, and that most never fully recover financially.
The two largest contributing factors in data loss are hardware or system malfunctions and human error; together, they account for almost 75 percent of all incidents. Software corruption, computer viruses and 'physical' disasters such as fire and water damage make up the rest.
There are three major trends in data loss today, representing industry-wide shifts in technology and market behavior.
First, because we are storing more data in smaller spaces, the impact of a data loss incident is magnified. Ironically, the very technological advances that allow us to do more with less contribute directly to the increasing severity of data loss.
The media that stores data is fragile, whether it is tape, diskette or hard drive. As hard drives get smaller and hold more data, the mechanical components in a hard drive must work with greater precision. The distance between the read/write head and the platter where data is stored is steadily decreasing. Today, that distance is 1-2 microinches (millionths of an inch). A speck of dust is 4-8 microinches and a human hair 10 microinches. Even a slight nudge, a power surge or a contaminant introduced into the drive can cause the head to touch the platter, producing a head crash. Data in the contact area may be permanently destroyed.
Second, data is more mission-critical. Users are storing greater amounts of critical personal and commercial data like bank accounts, hospital patient records and tax records on their desktops and networks.
By definition, loss of mission-critical data brings major business processes to a halt. In the worst case, that can mean bankruptcy. Certainly, system administrators can lose their jobs and companies can lose customers' trust when they fail to deliver promised goods and services. There are also financial and legal consequences that can put companies and individuals at risk.
Finally, much of today's backup technology and practice fails to protect data adequately. Most computer users rely on backups and redundant storage technologies, and for many users this is a successful backup strategy. Others are not so lucky.
Most companies and many individuals do back up their data, but those backups frequently turn out to be useless when it is time to restore. Backups and redundant storage systems fail because they rely on a combination of technology and human intervention for success. Backup systems not only assume that the hardware is working properly; they also assume that the user has the training and the time to back up properly. They further assume that the backup media is in good condition and that the backup software is doing its job.
Often there is a weak link. The key point of this article is that you can find that weak link now and significantly improve the security of your data and your business. The exercise is simple: restore randomly selected files from backup.
If you or your organization meets the challenge, that's great. Chances are good, however, that you will not. The good news is that now is the time to find out. This is an exercise: a low-risk way to test your systems and make improvements where needed.
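The exercise can be partly automated. The sketch below, in Python, assumes you have already used your backup software to restore a handful of files into a scratch directory; it then picks a random sample from the live tree and compares each file byte-for-byte (via SHA-256 hashes) against its restored copy. The directory layout and function names are illustrative, not part of any particular backup product.

```python
# Minimal random restore check: sample files from the live tree and verify
# that the copies restored from backup are byte-identical. Paths are
# placeholders; adapt them to your own backup and restore locations.
import hashlib
import random
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Return the SHA-256 hex digest of a file's contents."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def random_restore_check(live_root: Path, restore_root: Path,
                         sample_size: int = 5) -> list[str]:
    """Pick up to sample_size random files under live_root and compare them
    against the same relative paths under restore_root.
    Returns the relative paths that are missing or differ."""
    candidates = [p for p in live_root.rglob("*") if p.is_file()]
    sample = random.sample(candidates, min(sample_size, len(candidates)))
    failures = []
    for live_file in sample:
        rel = live_file.relative_to(live_root)
        restored = restore_root / rel
        if not restored.is_file() or sha256_of(restored) != sha256_of(live_file):
            failures.append(str(rel))
    return failures
```

An empty result means the sampled files restored cleanly; any entries in the list are the weak links worth investigating. Note that this only checks files that still exist on the live system; deleted-but-archived data needs a separate check.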
Organizations underestimate the length and fragility of the backup chain. First, the data has to exist in a form that can be backed up. The backup hardware and software must be functioning correctly, and the media must actually capture the information. Somebody must ensure that the backup is performed, and the backed-up data should, ideally, be properly documented and moved to an off-site location. Finally, the data must be restorable in a timely manner, so everybody can get back to work.
The list of things that can go wrong in that chain of events is almost endless. Ironically, the biggest enemy of regular, properly executed backups is the reliability and efficiency of today's IT systems. That dependability means that restoring data from backup is rarely necessary. Unfortunately, systems that are seldom used tend to fall into decay and that is the danger.
So, back up your data, test, and then test again. Make improvements and establish a regular program of random, unscheduled restore tests.
Midwest Data Recovery Inc.
312 907 2100
Copyright © Disaster Recovery Planning Forum. All Rights Reserved.