First published on CloudBlogs on Apr 17, 2013
Several weeks ago I met with a customer who expressed a very common problem: “If we can’t access our data at any moment, from any location, we immediately begin losing money.”
This comment was made before any other topic we discussed that day – before concerns about app management, security, or even storage costs. For many businesses, the cost of data storage is minimal compared to the cost of not being able to access that data.
Several studies have been done on this very issue, and the results consistently show that close to 40% of companies that lose access to their data for 24 hours or more are irreparably damaged in the eyes of their customers. I don't take figures like these lightly. This reality makes continuous availability more than just a competitive advantage – it is a table-stakes requirement in today's hyper-competitive environment.
This mindset has been a long time coming. Just a few years ago, several hours of data loss or app downtime were acceptable (albeit incredibly unpleasant), but we've now advanced to the point where an enterprise environment expects zero to a few minutes of data loss and recovery time. For high-traffic or high-value data, even this minimal amount of downtime can be a huge problem.
There are many solutions geared toward the challenge of continuous availability, but most are so expensive that their use is limited to mission-critical workloads, which leaves much of the IT infrastructure exposed. Other services rely on data backup alone as the availability solution, or deploy a combination of HA, DR, and backup. In each of these cases the end result is complex, expensive, and still simply not good enough to satisfy enterprise-grade availability requirements.
The lines between HA, backup, and DR are getting increasingly blurry, and, to stay ahead of a disaster scenario, it is important for continuous availability to be woven into the fabric of cloud computing. These solutions should offer a range of protection and recovery options: zero to minimal data loss, recovery times of seconds to minutes (within and across data centers and clouds), and a single management interface to configure, deploy, and manage HA/DR/backup across a hybrid enterprise that spans multiple clouds.
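To make the "minimal data loss" target concrete, here's a quick back-of-the-envelope sketch of my own (not code from any product): with periodic replication or backup, the worst-case data loss is simply the age of the newest usable copy at the moment of failure.

```python
from datetime import datetime, timedelta

def recovery_point_age(replica_times, failure_time):
    """Age of the newest replica taken at or before the failure -
    i.e. the data-loss window actually realized in a disaster."""
    usable = [t for t in replica_times if t <= failure_time]
    if not usable:
        raise ValueError("no replica exists before the failure")
    return failure_time - max(usable)

# Replicas every 5 minutes; a failure at 12:03 means the newest
# usable copy is the 12:00 one, so 3 minutes of data are at risk.
replicas = [datetime(2013, 4, 17, 11, 50),
            datetime(2013, 4, 17, 11, 55),
            datetime(2013, 4, 17, 12, 0)]
print(recovery_point_age(replicas, datetime(2013, 4, 17, 12, 3)))
```

The takeaway: shrinking the replication interval from hours to minutes is what moves you from "a day of lost work" to "a couple of minutes of lost work."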
At Microsoft we have developed and released a number of solutions that are focused on continuous availability, including:
- Windows Server 2012 allows you to store critical application data (e.g., Hyper-V) on low-cost but continuously available SMB3 file shares.
- The Windows Server 2012 Clustered File Server backed by Storage Spaces delivers a reliable, available, manageable storage platform on cost-effective hardware for a variety of workloads, including Hyper-V, SQL Server, and information worker (IW) data.
- Within Windows Server 2012 we also offer Windows Server Backup, which enables you to back up your Windows Servers to another Windows Server at intervals as small as every few minutes. This kind of regular backup means that even if you lose an entire server or disk, your recovered data is never more than a couple of minutes old.
- Hyper-V Replica is another Windows Server 2012 capability that allows you to replicate all your VMs to another server and have a secondary copy that is also never more than a couple of minutes old.
- System Center Data Protection Manager is built on the same capabilities as Windows Server Backup and provides additional command, control, and reporting in a one-to-many environment.
- Finally, in November we acquired a company named StorSimple. StorSimple has done some amazing innovation in tiered storage that enables you to create policies on where and how you want to tier storage – locally and into Azure. This solution is incredibly exciting, and in a future post I'll discuss StorSimple in much greater detail.
Looking ahead, we have some work to do to more closely align these solutions, but the bottom line is that we offer a number of solutions today that help deliver continuous availability in Windows Server and System Center.
There are some great customer testimonials about this out there already, and we're continuing to make this kind of support even more valuable with cost-effective hardware and by adding automation and orchestration to every step of IT management.
Focusing on recovery options only after a disaster has occurred is no longer enough. By putting the emphasis on the continuous availability of data and applications – even during a disaster – we are really leaping ahead towards the future of computing.