Bandwidth – How Much is Enough?

Knowing and planning for an appropriate level of bandwidth is a key component of every DRaaS solution. In our most common DRaaS implementation, the data is moved from local repositories at the customer site over the internet or WAN to our data centers once per day, even though that data may contain multiple restore points.

Here’s a diagram showing how the data moves.

[Diagram: Veeam Cloud Connect – daily data flow from local repositories at the customer site to our data centers]

The key to an effective solution is knowing whether the remote data center is getting the data on a timely basis. We measure this daily, and we call this measurement “quality”: in other words, how current is the remote data? If it’s within the desired recovery point objective (RPO) as agreed in the service level agreement (SLA), then we regard it as being “in SLA”.

It’s essential that the bandwidth between the customer site and the remote data center is sufficient to move the data that’s changed every day quickly enough that it’s in the remote data center in time to support the agreed or planned RPO. See our discussion of RPO here. As a general rule for most sites, we find that 4 to 6 hours is “quickly enough”, and this is usually scheduled overnight.

Why not use 8 or 12 hours? In more complex implementations there are other events, such as multiple backup jobs, each of which uses resources and must finish. So 4 to 6 hours is a conservative window of time. Some sites may be able to use longer windows, and therefore less bandwidth, to move the data, or simply move more data per day in the longer window.

Getting to the Question

So, how much bandwidth is required to move the data in 4 to 6 hours? To answer this question, we need two pieces of data and a bit of math. First, we need to know the total storage in use. This is your disk storage in use across all servers. If you’re running VMware, one way to get this data is from your vSphere console, as covered here.

Second, we need to know the daily data change rate. This is easily measured by Veeam ONE, as discussed in a previous post, or it can be derived simply by looking at the actual size of daily backup files.

Now for the Math

If R is the required speed in Mbps (see note 1), D is the data changed each day in GB (see note 1) and T is the transfer window expressed in seconds (6 hours is 21,600 seconds), then our formula is: R = (D × 8 × 1,000) / T, since each GB is roughly 8,000 megabits.
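
As a minimal sketch of that arithmetic in Python (the function name and the 1 GB = 1,000 MB simplification are our own, and real-world protocol overhead is ignored):

```python
def required_mbps(changed_gb_per_day: float, window_hours: float = 6.0) -> float:
    """Bandwidth needed to move one day's changed data within the window.

    Assumes 1 GB = 1,000 MB and 8 bits per byte; protocol overhead,
    encryption and retries are not modeled.
    """
    megabits = changed_gb_per_day * 1000 * 8   # GB -> megabits
    seconds = window_hours * 3600              # hours -> seconds
    return megabits / seconds

# Example from the chart below: 5TB of storage with a 1% daily change rate
# (after compression and deduplication) is about 50GB of changed data per day.
print(round(required_mbps(50), 1))  # 18.5, i.e. roughly 19 Mbps
```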

To save you from number crunching, we’ve prepared this chart showing typical storage sizes and change rates and the required bandwidth result:

[Chart: required bandwidth for typical storage sizes and daily change rates]

So at a site with a 1% daily data change rate (after compression and deduplication), which we find is typical, and 5TB of storage, the required bandwidth is roughly 19 Mbps.

For a deeper dive, check out this great web-based calculator at WintelGuy.com.

Note 1: WintelGuy also defines all of the relevant units of measure you may encounter.

Summarizing this Idea

  1. Start with your storage in use
  2. Find your daily data change rate
  3. Calculate or look up the required bandwidth here

Final Question

What if my bandwidth is not enough? There are numerous strategies we employ to deal with slower lines, larger data sizes and higher change rates. The best strategy is developed on a case-by-case basis.

Let us know if you have questions. And we welcome your thoughts and real world experiences.

Sizing for Local Backup Repositories

Proper sizing for local backup repositories is a critical step in building a complete backup and disaster recovery as a service (DRaaS) solution. Below, we outline the most accurate way to determine your data change rate and how large the repository should be. While no method can anticipate every future event in your data, we have found that following these three steps leads to the best possible estimate for local backup repository sizing.

The main factors that impact your storage requirement are the size of your VMs, the frequency of your backups and your retention policy. You will need to estimate your compression and deduplication ratios, which usually result in a 40% to 60% reduction, and you will need to know your approximate daily data change rate. Veeam offers a great tool to address this.

Using this information, you can plan for the required disk space. But you should always allow a safe margin for future growth and for activities such as moving or restoring VMs.

Step One – Run Veeam ONE

Veeam ONE is a free reporting and planning tool included in the Veeam Availability Suite that monitors both VMware and Hyper-V virtual environments. Veeam ONE will report the daily data change rate by server in your environment. Ideally you should run Veeam ONE over a period covering normal activity levels.

This example shows the daily data change rates for a group of servers. The total daily change rate is 214GB.

[Screenshot: Veeam ONE report showing daily data change rates by server]

Though not shown in this report, the total size of these servers as reported by vSphere is 2.5TB. As an example, this screenshot shows sizing data for a different group of servers in vSphere:

[Screenshot: sizing data for a group of servers in vSphere]

Step Two – Apply a Retention Policy

We have written at length in this blog about retention policies. Organizations may vary in specific retention needs, but let’s work from the following fairly typical or standard retention policy: Local Restore Points: every 6 hours for 2 days, plus 10 dailies, and one monthly. Remote Restore Points: 10 dailies, one monthly, unlimited monthly archive.

Visit The Restore Point Simulator at http://rps.dewin.me/

Given the data size, change rate and estimated compression rates, this very clever tool can complete the calculation we need. We’ll give it these inputs:

Style = Incremental
Used Size GB = 2500GB
Retention Points = 21
Change Rate = 8%
Data left after reduction = 60%
Interval = Daily
Time Growth Simulation = 10% per year

The simulator reports below that we need 5.475TB:

[Screenshot: Restore Point Simulator output showing 5.475TB required]
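
As a rough cross-check in Python, here is a back-of-envelope version of the same incremental-chain arithmetic. It deliberately ignores the simulator's growth modeling and the workspace needed for merges, so it lands below the 5.475TB figure; the function name and structure are our own.

```python
def repo_size_gb(used_gb: float, retention_points: int,
                 change_rate: float, reduction: float) -> float:
    """Back-of-envelope size of a forward-incremental backup chain:
    one full backup plus (retention_points - 1) incrementals, each shrunk
    by compression and deduplication. Growth over time and merge workspace
    are ignored.
    """
    full = used_gb * reduction
    incremental = used_gb * change_rate * reduction
    return full + (retention_points - 1) * incremental

# Inputs from the example above: 2,500GB used, 21 restore points,
# 8% daily change rate, 60% of data left after reduction.
print(round(repo_size_gb(2500, 21, 0.08, 0.60)))  # about 3,900GB before growth and workspace
```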

Step Three – Reasonableness Test

We always apply a reasonableness test. Depending on what we know about the data, we’ll estimate a storage requirement of roughly four times the storage reported in vSphere; in this case, that’s 10TB. For this example, we’d recommend a repository with 8 to 10TB of storage. This would allow room to grow and would accommodate any unexpected data growth or high change rates.

Here are some other articles we’ve written about retention policies and related ideas:

  • Developing A Data Retention Policy – What Data Do You Have to Backup
  • Data Backup – Developing an Effective Data Retention Policy
  • Checklist for Developing an Effective Data Retention Policy

In a future post, we’ll discuss specs for the repository hardware as well as specs for the server performing the backups.

Summarizing the Steps

  • Get your data change rate from Veeam ONE
  • Use the Restore Point Simulator to estimate storage required
  • Apply a reasonableness test as a final check

Let us know if you have questions. And we welcome your thoughts and real world experiences.

What is Veeam Backup

Backup and recovery solutions are vital to your organization, but backup solutions that used to work effectively may no longer do the trick in today’s ever-changing computing landscape, especially if your firm relies heavily on virtualization. Veeam backup offers backup and disaster recovery solutions that make an ideal fit in the virtual environment.

Backup Solutions That No Longer Work

Tape backup has been a longstanding solution.

Tape restores take far longer to complete than a restore from disk, let alone the even faster alternative of using the backup as the primary copy, which isn’t even possible with tape.

With tape, finding the data always means finding the right tape, which is time-consuming and risky.

With tape there is also the risk of media obsolescence when the tape software or drive gets upgraded.

Finally, tape solutions simply cannot provide the guaranteed availability that every modern business requires.

Enter Veeam for the Virtual World

Veeam works in the virtual environment by backing up virtual machine images instead of files, storing the images in a secure, central location. If disaster were to strike, the VM images could be rebooted nearly instantly, rapidly restoring the machines with recovery time and recovery point objectives of less than 15 minutes for all data and applications.

How Veeam Works

Veeam’s software works through what is called a hypervisor, a layer of software that allows multiple VMs to run on a single physical machine. Veeam’s first task is to take a full backup of each virtual machine. It then uses caching and deduplication to store only new information in the VM images.
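
As a purely conceptual sketch of that idea (this illustrates block-level incremental backup in general, not Veeam's actual implementation; the block size and function names are invented for the example):

```python
import hashlib

BLOCK_SIZE = 1024 * 1024  # 1 MiB blocks, an arbitrary choice for this sketch

def block_hashes(image: bytes) -> list[str]:
    """Split a disk image into fixed-size blocks and fingerprint each one."""
    return [hashlib.sha256(image[i:i + BLOCK_SIZE]).hexdigest()
            for i in range(0, len(image), BLOCK_SIZE)]

def incremental_backup(image: bytes, previous_hashes: list[str]) -> dict[int, bytes]:
    """Return only the blocks that differ from the previous backup."""
    changed = {}
    for index, digest in enumerate(block_hashes(image)):
        if index >= len(previous_hashes) or digest != previous_hashes[index]:
            changed[index] = image[index * BLOCK_SIZE:(index + 1) * BLOCK_SIZE]
    return changed

# The first run has no previous hashes, so every block is "changed" (a full backup);
# later runs store only the blocks whose fingerprints differ.
```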

Veeam Benefits and Options

The Veeam Availability Suite’s main capabilities include:

  • Rapid recovery: Choose exactly what you want to recover and how you want to recover it.
  • Avoidance of data loss: Near-continuous data protection coupled with streamlined recovery methods.
  • Verified recoverability: Veeam stands by its ability to recover every application, file or virtual machine, with no exceptions.
  • High visibility: Receive alerts and enjoy continuous monitoring to become aware of issues before they can affect operations.

The Veeam Availability Suite combines two products into a single solution: Veeam Backup & Replication for restoring and replicating information and Veeam ONE for monitoring and reporting. A smaller version of the Veeam Availability Suite, known as Veeam Backup Essentials, is available for smaller businesses that may not need the suite’s full capabilities.

Headquartered in Switzerland with main offices in Ohio, France and Australia, Veeam was founded in 2006. It has since become the go-to backup and recovery solution for more than 183,000 customers across the globe.

West Coast Companies Prepare Cloud Backup and Disaster Recovery Solutions

Los Angeles is aiming to prepare for the next big earthquake, yet some companies may already be prepared. LA Mayor Eric Garcetti outlined a proposed preparedness plan that would give the city a layer of resiliency when the “Big One” hits, according to NBC, but not every company necessarily needs to craft a fresh resiliency plan if they already have solid network and resource protection in place.

The Proposed Plan

The earthquake plan outlined by Garcetti involves billions in infrastructure improvements, specifically targeting three key areas: buildings, water supply and telecommunications. Certain types of at-risk buildings erected prior to a specific date would require retrofitting to add strength.

A backup water system would be created, while the existing one would undergo an earthquake-resistant upgrade to pipes and materials. Measures would also be taken to protect against the San Andreas fault, pegged as the most likely source of the next big quake.

The mayor’s telecommunications proposal involves improving cellphone and Internet networks to protect against quake-related breakdown. His ideas also include designing a solar-powered wireless Internet network and strengthening cellphone towers.

The Protection

Some companies, those that have implemented cloud backup and disaster recovery solutions, already have the fortification they need in the event of the next big Los Angeles quake. Others may find themselves in the same situation as the New York Stock Exchange after Hurricane Sandy. The NYSE went dark due to the failure of several trading firms to test their recovery and backup systems.

Protection can come from two main strategies. The first is to ensure a data backup and real-time data recovery solution is in place, along with regular testing to ensure the solution works. The cloud-based disaster recovery solution offered by Global Data Vault is a prime example, particularly because GDV conducts quarterly testing of customer systems to ensure systems would be operable when they’re needed most.

The second strategy involves location. All companies, but especially those with resources along the West Coast, would be wise to maintain a cloud backup and disaster recovery solution in a risk-free environment. Redundancy is a must, and opting for redundancy in a data center located in a low-risk to risk-free area of the country can best preserve continuity of business operations.

While the mayor’s plans may be beneficial for protecting Los Angeles as a whole, companies need to ensure they have their own strategies to protect their individual businesses. A no-fail, cloud-based recovery solution is definitely one of them.

Global Data Vault Service Enhancements

As a Global Data Vault customer, you’re already enjoying a truly unique offering that protects both physical and virtual servers. This is a special bonus for those making the gradual transition from physical to virtual servers, and we have two more services coming your way that we think you’ll be pretty excited about.

Enhancement 1: Extended Data Retention

Ever wish you could go back in time to when things were simpler, gas was cheaper and your computer network functioned without the (fill in the blank) recent hitch that just corrupted it? Now you can.

While we don’t have a time machine that works for gas prices, we have created a time machine of sorts for your files and data. We’re introducing a unique extended data retention policy that lets us keep weekly restore points for all our disaster recovery customers – forever.

Weekly restore points can be a godsend when a file or database gets corrupted by a virus or hardware problem, or when something gets deleted and the loss goes unnoticed for weeks or even months. Now, instead of firing the person who deleted the crucial data, you can simply go back in time to restore it to your system.

You’ll be able to go back in time as far as you wish, with monthly restore points available for the entire length of time you’ve been our customer. It applies to your virtual infrastructure as well as your virtual machines. We have developed an extremely efficient way to do this that doesn’t require colossal storage infrastructure, a one-of-a-kind development that no one else offers.

Enhancement 2: PC and Notebook Protection

Protecting your main fort is good, but protecting your main fort and your outposts is even better. We’re going to do both, providing PC and notebook protection for customers who already enjoy our virtual server protection. This additional protection comes with no additional licensing fees; the only additional cost is any increase in storage capacity, which we believe will be minimal.

That storage increase will be kept at bay as all of the storage will be de-duplicated. That means if you have 100 PCs running Windows 8 and Office, the storage requirement for those systems is equivalent to one copy. Restore capabilities for these devices range from single-file restores all the way to full bare metal restores.
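
As a simplified illustration of how that kind of deduplication works (a toy sketch, not our actual storage engine; the class and names are invented for the example):

```python
import hashlib

class DedupStore:
    """Toy content-addressed store: each unique block is kept exactly once."""

    def __init__(self) -> None:
        self.blocks: dict[str, bytes] = {}   # digest -> block contents
        self.machines: dict[str, list] = {}  # machine name -> list of digests

    def backup(self, machine: str, blocks: list) -> None:
        digests = []
        for block in blocks:
            digest = hashlib.sha256(block).hexdigest()
            self.blocks.setdefault(digest, block)  # stored once, shared by every machine
            digests.append(digest)
        self.machines[machine] = digests

store = DedupStore()
windows_image = [b"identical Windows 8 and Office system block"] * 10
for n in range(100):
    store.backup(f"pc-{n}", windows_image)
print(len(store.blocks))  # 1 -- one stored copy serves all 100 PCs
```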

We’re launching both enhancements in early 2015.

Part of our mission is to continue to grow, develop and discover new methods to make our services even better and our customers even happier. Cheers!

Why Five 9s Means You’ve Made It

A Cloud Server Uptime Full House!

Some people may say they’ve made it once they nab a six-figure income or four-car garage. Others might measure success by the quality of their friendships, or the fact that their name showed up as a “Jeopardy!” question. While the definition of success can vary widely from person to person, there’s a constant gauge for success in the high-tech arena.

That gauge is the number of nines you can boast when it comes to your uptime. And we officially “made it” in September, with a remarkable five of those nines of uptime for our backup platform.

Yep, we hit 99.999 percent uptime in September.

What That Means

Five 9s is a huge deal when it comes to uptime, a level high enough that customers don’t even have a clue that any downtime happened at all. Actually, customers wouldn’t have been impacted even if there had been more downtime, but the five 9s is a great achievement.

Our overall unplanned downtime clocked in at a total of 32 seconds for the month per server across all of our servers.
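
For context, here is a quick sketch of the downtime budget implied by each uptime level over a 30-day month such as September (planned maintenance windows are conventionally excluded from these figures):

```python
def downtime_allowed(uptime_percent: float, period_hours: float) -> float:
    """Seconds of downtime permitted at a given uptime level over a period."""
    return period_hours * 3600 * (1 - uptime_percent / 100)

MONTH_HOURS = 30 * 24  # a 30-day month such as September

for nines in (99.0, 99.9, 99.99, 99.999):
    print(f"{nines}% uptime allows {downtime_allowed(nines, MONTH_HOURS):.0f} seconds of downtime per month")

# 99.0%   -> 25,920 seconds (about 7.2 hours)
# 99.9%   ->  2,592 seconds (about 43 minutes)
# 99.99%  ->    259 seconds (about 4.3 minutes)
# 99.999% ->     26 seconds
```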

Our planned outages spanned 1 hour and 45 minutes, scheduled during times of minimal use. Planned outages were 15 minutes each on seven different days across five different servers, including activities such as clearing space, moving users and a few large-customer emergency restores.

How We Did It

Our recipe for achieving the five 9s included:

  • Lots of redundancy
  • Top-notch hardware
  • High-quality storage area network (SAN)
  • Stellar performance of our primary communication provider Level 3 (Thanks, guys!)
  • Paying acute attention to infrastructure maintenance
  • Staying on top of problems and planned outages

Achieving the highest level of uptime is a feather in the cap of any cloud firm, and we’d like to share kudos with others that have achieved it. Now if we could only get our name in a “Jeopardy!” question, we’d really have it made!