In this episode of Global Data Vault’s “Back to Basics” webinar series, we look at the best free backup tools on the web. It’s a bit of a show and tell, as Steven New, Global Data Vault’s Senior Support Manager, demonstrates how each tool can be used to plan backups. Kelly Culwell, Service Delivery Manager, provides detail from a user’s point of view.
Free Backup Tools
Today we are talking about free backup tools. At Global Data Vault we are often asked, “How do I know how much bandwidth I need, or how many restore points I can keep? And I’ve got THIS amount of storage.” We’re going to run through some tools today that show you how to calculate that. We use these tools all the time, so we’re very familiar with each of them.
As a Veeam-certified architect, Steven utilizes these tools to enhance the solution, fill in the gaps, and give you different views. Sometimes it’s not just about having an idea of the data you’re trying to protect. We need to look at your CPU resources, network, disk latency, and things like that, and solve problems.
These tools will help answer questions about the “unknown,” and we’re going to show those to you today. We are going to start with a workload sizing tool.
Now, anytime you see these links that are Veeam.bp.com, these are not associated with Veeam. They were created by Veeam solutions architects and the Veeam community. We have a couple of those today, and we just wanted to clarify that they aren’t associated with Veeam.com.
Workload Sizing: 00:03:22
The workload sizing tool looks at your production data and gives you an idea of what type of data stores you need and what repositories you would need based on your requirements.
When you go to this site, it brings you to a dashboard telling you to add a workload. A workload is your backup job, so this is going to be an active workload. For the workload name, we’ll call it “main job.” The backup type is going to be VMware (a VMware agent option is also available), so we’ll choose VMware. We also have a site B, and any backup copies by design go to site B. Set your source terabytes; we’ll leave that at 10.
Enter your quantity of VMs and assume you want an eight-hour backup window. Best practice is 3-5% yearly growth, and we’ll leave it at five. Then set your scope for retention. We’ll leave it at three years for this demonstration, though you can drop it to one year, which would be 12 monthlies. Your change rate will be about 5%, and your data reduction is usually 50% because you can get a 2:1 compression ratio. We’re not going to use per-VM files, and we always use ReFS or XFS for repositories. For your backup job, we’ll give it 18 restore points, which is Global Data Vault’s standard retention for your main backup job. We always configure it for 18 restore points, and it runs between 6:00 AM and 9:59 PM.
For this example, we’re going to disable the remote copy and then click add workload. Then once you click add workload, click calculate. Now, the cool thing about this tool is it will size your Veeam server for you. The biggest issue we see when customers call in regarding performance or any bottlenecks (where the job is running slowly) is that the Veeam server is not sized to specifications. This will tell you exactly what resources you need. So for the 10 VMs that we added and 10 terabytes of data, you would require four CPU cores and 8 GB of RAM. That is just for your Veeam Backup & Replication server. So if you’re running an all-in-one server, an example is if you have Veeam’s SQL installed on the same machine, you would need an additional two cores and then four GB of RAM for the SQL server.
So right there, at a bare minimum, that is six cores and 12 GB of RAM (four cores and 8 GB for Backup & Replication, plus two cores and 4 GB for SQL). The tool also gives you your repository capacity based on the retention we entered, which is 18-19 terabytes. By default, the replication server is also a proxy. Your repository, where you store the data, requires two cores and 10 GB of RAM. So all told, we’re at roughly 22 GB of RAM and about 10 cores, just for the Veeam backup server, SQL server, repository, and proxy for the workload we selected.
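To make the arithmetic above explicit, here is a minimal sketch that totals the per-component figures the sizing tool reported for our 10-VM / 10 TB example. These are the numbers from this walkthrough, not a general Veeam sizing formula; the two extra proxy cores are an assumption based on the totals quoted above.

```python
# Resource totals from the sizing walkthrough above.
# Per-component figures come from the tool's output for this one example;
# the "Proxy" entry is an assumption to reconcile the quoted totals.
components = {
    "Backup & Replication server": {"cores": 4, "ram_gb": 8},
    "SQL server":                  {"cores": 2, "ram_gb": 4},
    "Repository":                  {"cores": 2, "ram_gb": 10},
    "Proxy":                       {"cores": 2, "ram_gb": 0},
}

total_cores = sum(c["cores"] for c in components.values())
total_ram = sum(c["ram_gb"] for c in components.values())

# Roughly 10 cores and 22 GB of RAM for an all-in-one deployment
print(f"All-in-one minimum: about {total_cores} cores and {total_ram} GB RAM")
```

Laying it out this way makes it easy to see why an undersized all-in-one Veeam server becomes the bottleneck: each role stacks its own CPU and memory requirement on the same box.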
Go to the Workload Sizing Tool
All these tools are highly customizable. Not every user or company is the same; they all have different amounts of storage, different retention policies, different rates of change, and so on. You can plug all of this in and get an excellent overview of, and insight into, your organization. Used in advance, these tools can prevent bad things from happening: if you undersize your backup environment, suddenly you can’t back up all of your data, you can’t keep what you need, and you could lose data. Hopefully, you’ll take some of this today and use it in your environment, and feel free to let us know what you think.
The next tool that we’re going to talk about, now that we understand the workload is how much bandwidth we need to transfer that data. And not necessarily just to the cloud, but maybe intrasite or intersite as well. The bandwidth calculator again is heavy math, which I don’t like, so I’m glad all these tools are out there.
Bandwidth Calculator: 00:10:50
The bandwidth calculator allows you to enter your copy interval, data reduction, compression, change rate, and data size. Let’s say you want to send five terabytes of data in an eight-hour window; that would roughly take a connection of 728 megabits per second, and it would transfer 2.5 terabytes. The reason is that you have a 50% compression ratio and a 100% change rate, and the compression cuts your transfer in half. If we drop the compression ratio to 1x, you’ll see it transfer the full five terabytes. Usually, with Veeam, you’ll see anywhere between 2x and 2.5x compression.
Keep in mind that Veeam deduplicates and compresses data, which reduces the data transferred to Global Data Vault. You can also change the window: over a 24-hour period, it requires 216 megabits per second to copy 2.2 terabytes of data to GDV. This is a great tool to discover how long it will take to transfer data to GDV.
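The underlying arithmetic is simple enough to sketch. This is a back-of-envelope version, assuming plain decimal units (1 TB = 10^12 bytes); the calculator itself reported a slightly higher figure (728 Mbit/s in the demo), likely because it accounts for binary units or protocol overhead.

```python
def required_mbps(source_tb, reduction=0.5, change_rate=1.0, window_hours=8):
    """Approximate upload speed (Mbit/s) needed to copy a backup in a window.
    reduction=0.5 models the 2:1 compression ratio discussed above."""
    transferred_tb = source_tb * change_rate * (1 - reduction)
    bits = transferred_tb * 1e12 * 8               # decimal terabytes -> bits
    mbps = bits / (window_hours * 3600) / 1e6      # bits over the window, in Mbit/s
    return mbps, transferred_tb

# 5 TB source, 2:1 compression, 8-hour window
mbps, tb = required_mbps(5)
print(f"{tb} TB transferred, roughly {mbps:.0f} Mbit/s of upload needed")
```

Note that the speed that matters here is upload, as discussed below; a 100/5 asymmetric connection gives you only 5 Mbit/s for this math.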
Go to the Bandwidth Calculator.
This tool is beneficial if you want to know whether your bandwidth is fast enough. Keep in mind that we are talking about upload speeds; your download speed has no bearing on this. You can’t do 100 megs down and five megs up and expect it to work quickly, so you have to consider that. This is a great tool to show you just how much bandwidth you need to transfer a certain amount of data within a specific period.
The next tool we’re going to talk about is the Restore Point Simulator. We’ve seen the Workload Simulator, we’ve seen the Bandwidth Calculator, and now we’re going to talk about restore points. This is more about data sizes, how many retention points are needed, what the data will look like when it lands. We have a couple of different ways to model this based on the type of backups you’re taking, whether forever incremental or incremental, etc., or a backup copy job or a local job. We always try to simulate ReFS—lots of cool options in this one. So I’m going to hand it back over to Steven. He’ll show us how the Restore Points Simulator works.
Restore Point Simulator: 00:13:48
The Restore Point Simulator is very easy to use. Basically, you choose a quick preset at the top. We always do forever forward incremental here at GDV. You have the option of incremental, reverse incremental, VM replica, or backup copy job, but we’re going to use our standard configuration, which is a forever forward incremental backup.
The size is 10 terabytes, so we’re going to go ahead and leave it at 10,000 gigabytes. Then, set the restore points. We’re going to go ahead and put this at 18, which is our standard, and we see about a 10% change rate.
At Global Data Vault, we run your backups every four hours between 6:00 AM and 9:59 PM. We’re going to leave the compression at 2.5 and the interval at daily. If you click simulate, this will give you the retention you would use for your main job. Keep in mind, this is one job: you would have a VBK, which is your full or base image, and then 17 VIBs.
So if you click on the actual gigabytes, it will show you the speed required for four hours, eight hours, etc., for your main job. It also displays your max file size, incremental size, and total size, which basically matches the number at the bottom. So we would roughly use about 10 terabytes, and then you would have 4200 gigs as a working space. Your average size is going to be 4000 gigs, and your average incremental size will be 4000.
The working space is for any merges, GFS restore points, or other synthetic operations. Running a fast clone with a forever forward chain merges data back into the base or the next incremental, so you would need 4,200 gigs of working space. Whenever we configure these jobs, we also configure local and backup copy jobs. Local copy jobs go to the LDV, the Veeam backup server that is onsite. Our standard retention for the local copy is ten restore points, and our backup copy jobs run on a continuous schedule between 10:00 PM and 6:00 AM. These restore points are daily restore points. By default, we would choose one monthly, and then you would simulate, and it will give you your retention for your local copy job.
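The chain sizing above can be approximated with simple arithmetic: one compressed full plus seventeen compressed incrementals, with working space for synthetic operations. This is a rough model only; the simulator itself accounts for more detail (growth, GFS points, ReFS fast clone), so its figures differ slightly from this sketch, and the "one full plus one incremental" working-space estimate is an assumption.

```python
def chain_size_gb(source_gb, restore_points, change_rate=0.10, compression=2.5):
    """Rough forever-forward-incremental chain size:
    one full backup plus (restore_points - 1) incrementals,
    all reduced by the compression ratio."""
    full = source_gb / compression                 # compressed base image (VBK)
    incr = source_gb * change_rate / compression   # each compressed increment (VIB)
    chain = full + (restore_points - 1) * incr     # total on-disk chain
    working = full + incr                          # assumption: room for one merge/synthetic op
    return full, incr, chain, working

# 10,000 GB source, 18 restore points, 10% change rate, 2.5x compression
full, incr, chain, working = chain_size_gb(10_000, 18)
print(f"full={full:.0f} GB, incremental={incr:.0f} GB, "
      f"chain={chain:.0f} GB, working space ~{working:.0f} GB")
```

With the walkthrough’s inputs this lands near the numbers quoted above: roughly a 10-terabyte chain with a bit over 4,000 GB of working space on top.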
Go to the Restore Point Simulator tool.
When we talk to folks about how much cloud space they’ll need, we use this tool. It’s also helpful for determining how much space you’ll need locally for those initial backups. You always have to leave working space, or overhead, for your backups to run correctly. Sometimes we see customers and end-users who buy just as much storage as they need, fill it up, and then wonder why their backups have issues. It’s usually because there’s not enough working space, so this tool is really helpful for that. Our next tool helps connect everything with networking information.
Veeam Ports Finder: 00:20:44
Veeam Backup & Replication and its components use many different ports, and we don’t always know which ones we need to open on firewalls, between servers, or between the cloud and your data center. This tool identifies the services you need, shows you which ports they use, and displays that in a report.
The Veeam Ports Tool is straightforward. It’s just a directory of ports that tells you which ports you need to open, based on what Veeam services and applications you will use.
We can change this to the vCenter server. Keep in mind that every time you run one of these, it adds the service and puts its ports in the report at the bottom; you can export the report later if you want. For vCenter, we need to open port 443. We add that to our list, and the tool keeps track of all the ports we need to open. Next we’ll choose the backup proxy service, which requires ports 9501-9509 or 10443, and add those to the list as well. You can keep going like this to build up the full list of ports you need to open.
Go to the Veeam Ports Finder tool.
A lot of organizations have separate networking groups or separate IT and cybersecurity groups, and being able to hand them a report and say, “here are the ports we need,” is pretty powerful. Even if I did it every day, I probably wouldn’t be able to remember all the ports, so it’s good to have a reference for that. The next tool we’re going to talk about is called RVTools. It gives you information about your VMware environment.
If you’ve seen Veeam ONE or vCenter reporting, it’s similar to that, but it’s a very lightweight tool. It’s very easy to use, and you can get the information delivered to you in several formats. It uses a lot of tabs to show you things like CPU utilization, data store utilization, your IOPS, that type of thing. All these tools are free. If you’ve ever tried to guesstimate what type of resources you need in your virtual environment, you know that it can be challenging, especially over time. And this is a great free tool to help with that. We use this specifically for virtual machine replication.
RVTools: 00:26:06
This is a tool that shows you the resources your VMs are using. I brought up one of our demo environments, where we perform testing on patches and different versions of Veeam. Here is an overview of the layout in VMware. It shows your config status (green or red), DNS name, connection state, and whether the guest or the VM is running or not. It also gives you a health heartbeat, which is yellow or green; anything red is bad. It tells you when the VM was powered on, the creation date, the version, CPU, and memory, which is very important, especially if you’re using our replication product or want to use it.
This is an excellent tool to see what you’re using and, if you want, just dump all this information into a spreadsheet.
Go to RVTools.
It is always helpful to have things available at a quick glance because, as administrators, we all know there’s a lot of information out there, and remembering where to go for all of it can be daunting, especially if you’re working with different hypervisors and different systems. RVTools can save you some time with that. The last tool we’re going to talk about is the RAID calculator. RAID stands for redundant array of independent disks, and the calculator lets you determine how much usable space you get from a given number and size of hard drives. We know that with RAID you pay some penalty in the form of parity for data protection, so that you can tolerate a hard drive failure or similar. RAID was created to prevent data loss, but we can also see performance gains with certain RAID types. We’ll be able to see the usable storage based on the number and size of your hard drives, and what read and write benefits you get, if any. This is RAID-calculator.com.
RAID Calculator: 00:31:05
I’m going to share my screen briefly, and then it will take you to RAID-calculator.com. So, how this works is you put in your number of disks. We’re going to go ahead and put in 24 disks, and we’ll use 10 terabytes as an example. We’ll set RAID 6, double parity, and click calculate. This gives your capacity, and if you notice that you have no write speed gain here, you can tolerate two drive failures. So what that means is you can lose two hard drives, and your RAID/data will still be intact.
It is important to determine your need for read or write speed bonuses versus the amount of disk space used for each RAID type. Using RAID 6 will allow for two hard drive failures but will subtract two disks from the usable space (24 disks in RAID 6 will give you the space of 22 disks).
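The capacity math for the common RAID levels is straightforward, and a small sketch makes the trade-off concrete. This is a rough model of parity and mirror overhead only; it ignores filesystem overhead, hot spares, and decimal-vs-binary drive marketing.

```python
def usable_tb(disks, disk_tb, raid):
    """Usable capacity for a few common RAID levels.
    Models parity/mirror overhead only, nothing else."""
    if raid == 5:
        data_disks = disks - 1    # one disk's worth of parity
    elif raid == 6:
        data_disks = disks - 2    # double parity, tolerates two failures
    elif raid == 10:
        data_disks = disks // 2   # mirrored pairs halve the capacity
    else:
        raise ValueError("unsupported RAID level")
    return data_disks * disk_tb

print(usable_tb(24, 10, 6))    # 220 TB: 24 disks minus two for parity
print(usable_tb(24, 10, 10))   # 120 TB, not 240, because half the disks mirror
```

This is exactly why 24 drives never buy you 24 drives’ worth of repository space, and why the RAID level you choose changes how much backup retention you can actually keep.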
Go to the RAID Calculator tool.
This is a great tool for understanding disks in your production environment. Because of RAID penalties, you can’t buy 24 hard drives of a specific size and expect to get 24 hard drives’ worth of space. If you buy 24 10-terabyte drives and put them in RAID 10, you will not have 240 terabytes of space, because half the drives are mirrors.
So that’s all the free backup tools that we have for now. We demoed six of them. None of these things will do backups on their own, but they help you to plan your backups, your disaster recovery environment.
Don’t miss the next episode in our “Back to the Basics” series. Sign up for our next webinar by visiting here.