Sizing for Local Backup Repositories

Proper sizing for local backup repositories is a critical step in building a complete backup and disaster recovery as a service (DRaaS) solution. Below, we describe the most accurate way to determine your data change rate and how large the repository should be. While no method can anticipate every future change in your data, we have found that following these three steps leads to the best possible estimate for local backup repository sizing.

The main factors that determine your storage requirement are the size of your VMs, the frequency of your backups, and your retention policy. You will also need to estimate your compression and deduplication ratios, which usually result in a 40% to 60% reduction, and to know your approximate daily data change rate; Veeam offers a great tool to address this.

Using this information, you can plan the required disk space. You should always allow a safe margin for future growth and for moving or restoring VMs.

Step One – Run Veeam ONE

Veeam ONE is a free reporting and planning tool included in the Veeam Availability Suite that monitors both VMware and Hyper-V virtual environments. Veeam ONE will report the daily data change rate by server in your environment. Ideally you should run Veeam ONE over a period covering normal activity levels.

This example shows the daily data change rates for a group of servers. The total daily change rate is 214GB.

(Screenshot: Veeam ONE report of daily data change rates by server.)

Though not shown in this report, the total size of these servers as reported by vSphere is 2.5TB. As an example, this screenshot shows sizing data for a different group of servers in vSphere:

(Screenshot: sizing data for a group of servers in vSphere.)

Step Two – Apply a Retention Policy

We have written at length in this blog about retention policies. Organizations may vary in specific retention needs, but let's work from the following fairly typical or standard retention policy:

  • Local restore points: every 6 hours for 2 days, plus 10 dailies and one monthly
  • Remote restore points: 10 dailies, one monthly, and an unlimited monthly archive

Visit the Restore Point Simulator at http://rps.dewin.me/.

Given the data size, change rate and estimated compression rates, this very clever tool can complete the calculation we need. We’ll give it these inputs:

Style = Incremental
Used Size GB = 2500GB
Retention Points = 21
Change Rate = 8%
Data left after reduction = 60%
Interval = Daily
Time Growth Simulation = 10% per year

The simulator reports that we need 5.475TB:

(Screenshot: Restore Point Simulator results.)
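As a rough cross-check, here is a minimal sketch of the underlying arithmetic in Python. The variable names are ours, and it covers only the basic chain of one full backup plus incrementals; the simulator also models factors this sketch ignores, such as the 10% annual growth input, which is why its figure is higher:

```python
# Basic repository sizing arithmetic, using the simulator inputs above.
used_size_gb = 2500      # total VM size reported by vSphere
change_rate = 0.08       # daily data change rate (from Veeam ONE)
data_left = 0.60         # fraction remaining after compression/dedup
retention_points = 21    # restore points to keep

full_gb = used_size_gb * data_left                       # 1500 GB full backup
incr_gb = used_size_gb * change_rate * data_left         # 120 GB per incremental
total_gb = full_gb + (retention_points - 1) * incr_gb    # 1500 + 20 * 120

print(f"Estimated repository: {total_gb / 1000:.2f} TB")  # ~3.90 TB
```

The gap between this roughly 3.9TB floor and the simulator's 5.475TB reflects the growth simulation and other overhead the simulator accounts for.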

Step Three – Reasonableness Test

We always apply a reasonableness test. Depending on what we know about the data, we'll estimate a storage requirement of roughly four times the storage reported in vSphere; in this case, that's 10TB. For this example, we'd recommend a repository with 8 to 10TB of storage. This would allow room to grow and would accommodate any unexpected data growth or high change rates.

Here are some other articles we've written about retention policies and related ideas:

  • Developing a Data Retention Policy: What Data Do You Have to Backup
  • Data Backup: Developing an Effective Data Retention Policy
  • Checklist for Developing an Effective Data Retention Policy

In a future post, we’ll discuss specs for the repository hardware as well as specs for the server performing the backups.

Summarizing the Steps

  • Get your data change rate from Veeam ONE
  • Use the Restore Point Simulator to estimate storage required
  • Apply a reasonableness test as a final check

Let us know if you have questions. And we welcome your thoughts and real-world experiences.

WAN Acceleration for all Cloud Disaster Recovery Customers

Global Data Vault is adding "WAN Acceleration" for our customers with physical servers. We already provide this for virtual environments, and the results have been terrific; in some cases, data now moves 50 times faster. So now we're adding it for all of our Cloud Disaster Recovery customers.

WAN Acceleration for physical servers is important because it enables data transfers to our data centers at speeds ranging from 5 to 20 times faster than conventional transfers. Conventional transfers rarely take full advantage of available bandwidth because the methods used to check for errors and acknowledge clean data packet transmission are not optimal for moving large amounts of data. Our WAN acceleration tools solve this problem – and thus improve the speed at which data is moved into our secure data centers – all leading to greater quality and higher reliability for our customers’ disaster recovery capabilities.

(Diagram: the WAN acceleration setup for virtual environments.)

The keys to this improvement are two factors:

1) We keep track of data at a block level in a "global cache" and simply do not resend data that is unchanged since the last backup.

2) We use a communications protocol that is far more efficient for moving large amounts of data over long distances across the WAN.
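To make factor 1 concrete, here is a minimal, hypothetical Python sketch of block-level change detection against a global cache. The block size, hash choice, and cache structure are illustrative assumptions, not a description of the actual implementation:

```python
import hashlib

BLOCK_SIZE = 1024 * 1024      # 1 MB blocks (assumed)
global_cache = set()          # digests of blocks the target already holds

def blocks_to_send(data: bytes):
    """Yield only the blocks whose content has not been sent before."""
    for offset in range(0, len(data), BLOCK_SIZE):
        block = data[offset:offset + BLOCK_SIZE]
        digest = hashlib.sha256(block).digest()
        if digest not in global_cache:
            global_cache.add(digest)
            yield offset, block    # new or changed block: transmit it
        # blocks already in the cache are skipped entirely
```

On a typical backup run, where most blocks are unchanged since the previous pass, only a small fraction of the data ever crosses the WAN.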

We view quality in terms of RTOs and RPOs, which, unlike most measures of business quality, are very quantitative. WAN Acceleration allows us to restore to more recent points and with faster restore times, keeping our customers' data more current in our data center and locally.

Global Data Vault Service Enhancements

As a Global Data Vault customer, you’re already enjoying a truly unique offering that protects both physical and virtual servers. This is a special bonus for those making the gradual transition from physical to virtual servers, and we have two more services coming your way that we think you’ll be pretty excited about.

Enhancement 1: Extended Data Retention

Ever wish you could go back in time to when things were simpler, gas was cheaper, and your computer network functioned without the (fill in the blank) recent hitch that just corrupted it? Now you can.

While we don’t have a time machine that works for gas prices, we have created a time machine of sorts for your files and data. We’re introducing a unique extended data retention policy that lets us keep weekly restore points for all our disaster recovery customers – forever.

Weekly restore points can be a godsend when a file or database gets corrupted by a virus or a hardware problem, or when something gets deleted and goes unnoticed for weeks or even months. Now, instead of firing the person who deleted the crucial data, you can simply go back in time and restore it to your system.

You'll be able to go back in time as far as you wish, with monthly restore points available for the entire length of time you've been our customer. This applies to your virtual infrastructure as well as your virtual machines. We have developed an extremely efficient way to do this that doesn't require colossal storage infrastructure, a one-of-a-kind development that no one else offers.

Enhancement 2: PC and Notebook Protection

Protecting your main fort is good, but protecting your main fort and your outposts is even better. We’re going to do both, providing PC and notebook protection for customers who already enjoy our virtual server protection. This additional protection comes with no additional licensing fees; the only additional cost is any increase in storage capacity which we believe to be minimal.

That storage increase will be kept at bay because all of the storage will be deduplicated. That means if you have 100 PCs running Windows 8 and Office, the storage requirement for those systems is equivalent to one copy, as the sketch below illustrates. Restore capabilities for these devices range from single-file restores all the way to full bare-metal restores.
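As a back-of-the-envelope illustration of that claim, the Python sketch below uses assumed per-PC sizes; a real Windows 8 plus Office footprint will vary:

```python
# Assumed sizes for illustration only; real footprints will vary.
pcs = 100
shared_image_gb = 40     # identical Windows 8 + Office image per PC (assumed)
unique_gb_per_pc = 5     # user files and settings per PC (assumed)

without_dedup = pcs * (shared_image_gb + unique_gb_per_pc)   # 4500 GB
with_dedup = shared_image_gb + pcs * unique_gb_per_pc        # 540 GB

print(f"Raw: {without_dedup} GB, deduplicated: {with_dedup} GB, "
      f"roughly a {without_dedup / with_dedup:.0f}x reduction")
```

The identical system image is stored once regardless of how many PCs carry it, so only the unique per-PC data drives storage growth.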

We’re launching both enhancements in early 2015.

Part of our mission is to continue to grow, develop and discover new methods to make our services even better and our customers even happier. Cheers!

Data Protection Dashboard Provides Easy to Use Status Window


At Global Data Vault, we ensure the quality of your recovery environment through constant monitoring. Nothing is more important to your business operations than your data and the systems that support it. Not only do we monitor your readiness, we make it easy for you to check in on the status of your data protection as well. Today, we’d like to introduce our improved customer dashboard.

Built as a Global Data Vault web portal, this single screen gives our customers full visibility into the status of their local and remote recovery options. The beauty of it is that it's available, securely, 24/7/365, and can be viewed from any device with a browser. It works well on PCs, tablets, and mobile devices.

The keys to note are:

  • 100% of the environment is recoverable locally
  • 100% of the environment is recoverable remotely
  • The Restore Points ("RP") are all within the last 2 hours, both locally and remotely
  • The green "Yes" in the "Loc" and "Rem" columns indicates these restore points are within the agreed service levels, which are generally 4 to 8 hours

This same portal is available for physical and virtual environments as well as for much larger environments.

We provide this dashboard for all clients, from enterprise level to small office. Here’s an example of the portal screen of a company with 2 servers and 6 protected disks:

(Screenshot: customer portal for a company with 2 servers and 6 protected disks.)

The details of the portal are explained in this screenshot:

(Screenshot: annotated portal details.)

To access your own client web portal, visit globaldatavault.com, select "support" and then log in with your customer ID and password.

Not a Global Data Vault client yet? Request More Information on our Cloud Disaster Recovery Solutions Here

Even Tornados are No Match for a Disaster Recovery Solution like Global Data Vault

Why Rug Doctor Calls GDV the Real Disaster Recovery Solution

“… within 2 hours they had all four servers set up in a virtual environment.”

Tornado season is coming to a close for most of the country, but it hasn’t been without event. We never like to hear of a major tornado affecting any of our clients, but when it does, we’re happy that we can help them maintain their business operations with little to no interruption.

This video features one of our customers, Rug Doctor, a national leader in hot water extraction carpet-cleaning machines and supporting products. Rug Doctor has a major plant facility in the heart of Tornado Alley in Fenton, Missouri, plus 35 warehouse distribution points and service centers around the US. Additionally, their international operations include a manufacturing facility in the UK and warehouse distribution in Canada, Puerto Rico, Australia, and 20 other countries. All of those locations rely on connectivity to the main office to keep their operations running smoothly.

In 2013, a tornado went directly over Fenton, causing considerable damage to Rug Doctor's infrastructure there. With 30 servers in their environment handling over 3 million transactions a year, it is easy to understand the commercial devastation that could have resulted when the May 18–21, 2013 tornado outbreak destroyed the warehouse and their computer room. Luckily, Rug Doctor has been a Global Data Vault customer since 2010. Watch this one-minute video to hear Bill Ellis, Manager of IT Infrastructure for Rug Doctor, explain what happened that day and just why he calls Global Data Vault the 'real disaster recovery solution'.

Virtual Environments for Disaster Recovery – Managing Configuration

Creating a virtual environment for your server system is part of every disaster recovery plan. Off-the-shelf virtualization technology such as VMware, Hyper-V, and VirtualBox allows you to create a software representation of all the hardware and software associated with a server. Essentially, you're taking a physical server and translating all that hardware into software.

When we describe the virtualization part of the process, it sounds a bit like magic, but it is not. There is significant configuration to put in place in order to make it work.

Translating the hardware into software is necessary so that the entire "workload" (everything that server does, whether it's an email server, an application server, or whatever that particular server's function is) can be duplicated and moved to the cloud. This virtual machine must function just as its physical twin does, so if something happens in the real-world environment, there is an exact duplicate in a virtual environment.

This magical process becomes complex when the time comes to recover or fail over to the virtual machine you’ve created. Ensuring the correct hardware/driver configurations can pose a formidable challenge in the virtual environment.

These are the virtual devices that have to be configured correctly before your VM environment will function properly:

  • Memory
  • CPU
  • Disk interfaces
  • Network interfaces
  • Video drivers
  • Communications ports
  • Optical drive controllers
  • Printer ports
  • USB drivers

For the virtualization and failover to be successful, the virtual machine has to match the exact specifications of the finished machine before the conversion is performed: how much memory, what kind and how many processors, which network cards and interfaces, and so on.
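As a minimal illustration of that spec-matching step, here is a hypothetical Python sketch (not GDV's actual tooling, and the fields are illustrative) that flags any place where a virtual machine deviates from its physical twin:

```python
from dataclasses import dataclass, fields

# Hypothetical spec record; real checks would cover every device listed above.
@dataclass
class ServerSpec:
    memory_gb: int
    cpu_count: int
    disk_interface: str
    network_interfaces: int

def mismatches(physical: ServerSpec, virtual: ServerSpec) -> list[str]:
    """Return the names of fields where the two specs differ."""
    return [f.name for f in fields(ServerSpec)
            if getattr(physical, f.name) != getattr(virtual, f.name)]

physical = ServerSpec(memory_gb=32, cpu_count=8,
                      disk_interface="SAS", network_interfaces=2)
virtual = ServerSpec(memory_gb=32, cpu_count=8,
                     disk_interface="SCSI", network_interfaces=2)

print(mismatches(physical, virtual))   # ['disk_interface']
```

Every mismatch flagged this way is a configuration item that must be reconciled before failover can be trusted.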

Configuring these environments is a sophisticated project that can be time consuming even for a senior IT professional. For most 5- to 8-server environments, a full day should be allocated for this project alone, and that's if the virtualization goes smoothly.

Global Data Vault Builds Clients' Virtual Environments

An added benefit of Global Data Vault's disaster recovery solution is that we take care of all of this for you. Our competitors make this a client-run project, but GDV is different. We build the virtual environment for our clients before even installing the databases, and we continually monitor all the components that make up your network servers for any changes.

Building the virtual environment is quite expeditious (it typically requires only about an hour), and we monitor hardware and software revisions on an ongoing daily basis. By keeping your virtual environment up to date, we eliminate your downtime when a real disaster occurs.