
Back Up Data Correctly to Avoid a Disaster, Even When Disaster Strikes

As the tech landscape evolves, data storage practices need to be carefully considered and amended to meet changing requirements

The convergence of growing volumes of data stored on company networks and stricter compliance regulations dictating how long that information must be retained has helped cloud storage explode in popularity. The cloud offers an easy-to-use, scalable and cost-effective solution for data storage. However, organizations must seriously consider how they manage that storage from a back-up and disaster recovery perspective. There is no doubt that cloud computing can speed disaster recovery: it reduces the time it takes to restore data, and because information is stored off-site, it alleviates the risk posed by natural disasters. Yet incorrectly managed storage can often prove more of a hindrance than a help. Whether an employee accidentally deletes a file or a more sinister hack on the company network takes place, it is inevitable that most organizations will need to recover data at some point. Planning for disaster is essential, and having an effective back-up and disaster recovery process in place can save headaches down the line.

As the tech landscape evolves and organizations increasingly have to adapt to new trends, such as virtualization and unstructured Big Data, data storage practices need to be carefully considered and amended to meet changing requirements. The multitude of available options can leave IT teams struggling to identify the best solution for their organizational needs. Companies often fail to consider future scenarios when making decisions and instead focus only on their current needs. This has the potential to cause problems down the line, particularly when it comes to back-up and disaster recovery strategies.

From hardware failure to network hacks, the potential for data loss is huge. A recent survey by independent research firm TechValidate* revealed that significant hardware failures occur far more frequently than many might believe: 52 percent of respondents had seen a failure within the last year, and of those, 37 percent had suffered the loss within the last six months. However, the same study also revealed that 81 percent of organizations do not have a tried-and-tested back-up and disaster recovery strategy in place. What is alarming about these statistics is that disaster recovery will inevitably be required at some point by almost every business, yet most have not prepared for the eventuality.

If more than three-quarters of U.S. companies have not tested their disaster recovery strategies, chances are they have no idea how long it would take to restore their business-critical data if disaster were to strike. Where data is stored makes all the difference. While storing all data in one place may once have been the norm, this need not be the case with a cloud solution. In fact, storing everything in one environment can contradict a number of the cloud's value propositions, with adverse financial and disaster recovery effects. Cloud storage is a relatively cheap commodity, but storing everything, from emails about company social events to key customer information, all in one place can rapidly become expensive, even in the cloud. From a practical point of view, much of the information a company stores will never be looked at again, and while compliance initiatives dictate that data be retained for a certain period of time, the location is up to the organization. There is therefore no reason to store everyday essential information in the same location as the 'never-again' information.

Further, if an outage occurs, any company will need to get its business-critical information back as close to immediately as possible. But if every piece of company information recorded over the last 10 years is being recovered at once, the process will take far longer than necessary, or feasible, for business operations. This will not only cause serious headaches for anyone who needs access to the data; it could also cost millions in lost revenue. Imagine a retail outlet unable to process payments because its server has gone down and cannot be restored quickly enough, held back by all the less essential information being restored alongside it. The lost revenue could be extremely damaging.

Storing by Importance
A new approach should be considered to determine where data should be stored. A key element of any disaster recovery plan is "tiering" the data to be recovered based on its overall business importance. This allows resources to be correctly proportioned against budget requirements and business impact.

The first step should be deciding which applications and data are business critical and which are not. The data can then be grouped by importance and a 'storage hierarchy' put in place. Data that does not need to be accessed frequently can be placed in lower-cost storage that may take days to recover, while business-critical information should be placed in more expensive storage where it can be recovered quickly. In the event that a system restore is necessary, irrelevant information will not slow the process down and everything can be returned at a speed appropriate to its importance.
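The grouping step above can be sketched in code. The following is a minimal, hypothetical example: the tier names, storage types and classification thresholds are illustrative assumptions, not part of any specific product or the survey's findings.

```python
from dataclasses import dataclass

# Hypothetical tiers: names, storage types and restore targets are
# illustrative assumptions only.
TIERS = {
    "critical": {"storage": "high-performance cloud storage", "target_restore": "minutes"},
    "standard": {"storage": "standard object storage", "target_restore": "hours"},
    "archive":  {"storage": "low-cost archive storage", "target_restore": "days"},
}

@dataclass
class DataItem:
    name: str
    business_critical: bool
    days_since_last_access: int

def assign_tier(item: DataItem) -> str:
    """Place an item in the storage hierarchy by importance and access recency."""
    if item.business_critical:
        return "critical"
    if item.days_since_last_access <= 90:  # assumed 90-day recency threshold
        return "standard"
    return "archive"

orders_db = DataItem("orders-database", business_critical=True, days_since_last_access=0)
party_pics = DataItem("holiday-party-photos", business_critical=False, days_since_last_access=1400)
print(assign_tier(orders_db))   # critical
print(assign_tier(party_pics))  # archive
```

In a real deployment the classification criteria would come from the business-impact analysis described above, not from a single hard-coded threshold.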

Most companies have vast amounts of data, and manually deciding what is stored where would be a laborious process for an individual, or even a team, once the initial segregation has taken place. Therefore, once the hierarchy is in place, it can be combined with an automated system that intelligently tracks and tags all data based on predefined rules and automatically diverts it to the correct location. Not only does this free IT teams to focus on more value-adding tasks, it also guarantees that all data is backed up, without concern that anything may have been missed.
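One simple way to automate that tagging is a first-match rule list, where each predefined rule maps a filename pattern to a backup destination. The patterns and destination names below are illustrative assumptions; a production system would typically match on richer metadata (owner, classification labels, access history) rather than paths alone.

```python
import fnmatch

# Illustrative rules: patterns and destinations are assumptions,
# not the policy of any particular backup product.
RULES = [
    ("*.sql",     "tier1-fast-restore"),  # database dumps: business critical
    ("finance/*", "tier1-fast-restore"),  # finance records: business critical
    ("*.pst",     "tier2-standard"),      # mailbox archives
    ("*",         "tier3-archive"),       # catch-all: lowest-cost storage
]

def route(path: str) -> str:
    """Return the backup destination of the first rule matching the path."""
    for pattern, destination in RULES:
        if fnmatch.fnmatch(path, pattern):
            return destination
    return "tier3-archive"  # unreachable given the catch-all, kept for safety

for f in ["finance/q4-report.xlsx", "crm-backup.sql", "party-photos/img01.jpg"]:
    print(f, "->", route(f))
```

Because rules are evaluated top to bottom, the most specific (and most critical) patterns should come first, with the cheap archive tier as the final catch-all so nothing is ever left unclassified.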

With these systems in place, businesses can test and tweak their strategies and be sure that, in the event of an outage, their applications, data and systems are only the touch of a button away. Planning, implementing and testing data recovery techniques helps ensure the actual disaster is the only disaster.

*Survey conducted by independent research firm, TechValidate, December 2012.

More Stories By Bob Davis

With more than 25 years of software marketing and executive management experience, Bob Davis oversees Kaseya’s global marketing efforts. He applies significant experience from marketing network and system management solutions to directing Kaseya’s strategy, product marketing, branding, public relations, design and social networking functions. One of the original founders of the company, Davis returned to Kaseya in 2010.
