Spooktacular Tricks for a Successful Veeam Implementation

Welcome and thanks for joining our heavily themed October webinar today. We have a spooktacular discussion about Veeam implementations. We will cover ways to avoid the graveyard, aka common mistakes to avoid. We’ll pull the sheet off the ghost of scary implementations and tell you where to get your garlic necklace of troubleshooting help. I’ve got bad jokes all day, but I’m going to leave most of the rest of them up to the presenters. So briefly, I am Trista Perot, Director of Marketing for Global Data Vault, a Dataprise company, and I have a few housekeeping items before I introduce our speakers. The webinar’s going to run close to the witching hour, but we’ll be sure to leave time for questions before the werewolf changes back to a man. While everyone is set to listen-only mode, we absolutely want to hear your screams, I mean questions. 

I’ll be monitoring chat along with a couple of my colleagues, and we’ll capture any questions you’d like addressed either immediately or at the end. The webinar is being recorded, so if you’re a ghost hunter who wants to watch it again, or you need to leave early, we’ll send you the link after the session; it will also be available anytime on our website and on YouTube. So, before things get too scary, let me stir the cauldron with what’s brewing. Much like Frankenstein’s monster was a beautiful creature made of different functioning parts, Global Data Vault is now a division of Dataprise, a managed IT service provider for mid-market businesses. We provide leading IT solutions to organizations across the United States (not Transylvania). To carry the theme: Dataprise offers bleeding-edge IT solutions and services that enable our clients to excel at their core business.

Our service suite includes managed cybersecurity, managed end-user services, a 24/7 help desk, managed infrastructure, and more. We want to add extreme value to our clients by becoming their end-to-end technology partner. And one of the ways we excel at this is through our BCDR division, which some of the folks in our audience may already know as Global Data Vault. While we’re part of a much bigger Dataprise family now, we continue our commitment to being the premier Veeam Platinum cloud service provider offering fully managed disaster recovery as a service, backup as a service, and Microsoft 365 backup solutions. We pride ourselves on being specialists when it comes to Veeam, and we demonstrate that through our deep expertise and talented staff, which includes Veeam Architects and a Veeam Vanguard. And with that, let me start introducing our speakers. Special guest and fan favorite, Tim Hudson, Senior Solutions Architect for Veeam Software and our “IT” guy, get it? Little double meaning there. He’s joined by Steven New, Operations Director for Global Data Vault, Veeam Vanguard, and our favorite mummy. And Kelly Culwell is Senior Manager, Service Transition for Dataprise, who can cast a spell for success on any Veeam environment. So let me turn this over to our legendary lobster as he wants to kick us off with a few remarks.

Tim Hudson (00:03:45): 

Hey everyone. Good afternoon. So glad to be here. My name is Tim Hudson. As a Senior Solution Architect, I help our partners like GDV and Dataprise build, manage, and deliver Veeam services. We wanted to talk about common implementation challenges today. Kelly, our first topic is around licensing. I know you were so thrilled that we were going to start our chat with this. Question for you, thinking about licensing challenges, have you ever run into a situation where you’ve been attempting to help an end customer, and you had to run through some basic or high-level discussions around license versions and license capabilities? 

Can the wrong Veeam license cause implementation issues?

Kelly Culwell (00:04:46): 

Licensing isn’t easy, right? Customers have specific needs; they need feature sets. They need to solve problems, and the product can do that. But the biggest challenge that I see is that customers may not have the appropriate license, or they have to worry about the counts of systems and things like that, and it gets confusing. I mean, just here on this slide, we have four or five different types of licensing, and each one has different versions and editions. And you can throw in limitations with the Essentials and NAS licensing and whatnot. So yes, we get it all the time where customers are worried about their licensing or they’re up for renewal, and it becomes just another management headache for them. A lot of it is between the Enterprise and Enterprise Plus editions.

What are the things that you can tell us about those specifically? The Availability Suite is Veeam Backup & Replication and Veeam ONE, which is a monitoring, reporting, and management feature set. What are the biggest things that you see with Enterprise and Enterprise Plus concerns? 

Tim Hudson (00:06:14): 

Use of scale-out repositories is probably the single thing that I’ve seen most as, I don’t want to say it, a “gotcha,” but maybe certain customers didn’t plan what their implementation was going to be like, and they run into a scenario where they need to leverage a scale-out repository, and maybe they didn’t realize there’s a limit on the number of extents in one particular edition. Or maybe they needed a feature like WAN acceleration to try to deliver data offsite to a service provider. Sometimes customers have to make a change. Steven, I was curious: for GDV and Dataprise, is there a particular license type that you standardize on for your existing managed service customers?

Steven New (00:07:17): 

By default, Tim, we go with a Standard license unless the customer needs additional features. Our biggest request is normally, as you mentioned, the scale-out backup repositories. Some customers do use scale-out. We also have a big request for NAS backups. File share backups began in v10, so we need to get the customer that license to back up their critical NAS data. 

Tim Hudson (00:07:59): 

Totally makes sense. It seems like a small thing depending on who our audience is, but this is a big deal. Now, I was also curious: as a managed service provider, how hard is it to change a managed customer’s license? Let’s say, for example, you engage with a customer, you deliver some licensing, maybe as part of your managed services, and something changes in the future. This happens to all sorts of folks. What’s your process like, or how difficult is that?

Steven New (00:08:37): 

Since we are a certified service provider, all that we really have to do is log into the portal and click a few buttons. The license gets updated from Standard to Enterprise or Enterprise Plus. And then, once we update the license, all we have to do is go into the customer’s Veeam console, click “update now,” and they have the new license.

Tim Hudson (00:08:56): 

Pretty quick adjustment there.  

Steven New (00:09:00): 

Five minutes. 

Tim Hudson (00:09:01): 

Incredible. I think that’s quite different from a traditional Veeam user. That might be more challenging. Awesome. 

Kelly Culwell (00:09:12): 

Real quick on that, one of the larger things too is that the Essentials pack is licensed similarly to VMware, where it’s limited to a certain number of hosts, a certain number of machines, a certain type of infrastructure. A lot of customers today will outgrow that, and it can be painful or costly to upgrade out of it. When you’re using a managed service provider license like ours, again, we just install our license, and we give you the capability to use our feature set without having to worry about those Essentials limitations. You can just continue to grow and scale.

Tim Hudson (00:09:48): 

Great. 

Why verifying your backups is critical

Kelly Culwell (00:09:52): 

All right. One of the big things in our “corny configurations” that we see is not verifying backups. It’s a common scenario. We’re going to talk about backup modes and full backups and things like that here. So, the “trick” is not verifying your backups, Steven. What are the issues that we see with the different backup modes, et cetera?

Steven New (00:10:19): 

Really no issues regarding the actual backup modes themselves. Verifying your backups is critical because you don’t want to get into a real DR event and notice that something is not functioning like you want. We do request that you verify your backups. You can do something as simple as a file-level restore to your desktop to verify the chain is good. If you want to get a little more granular, you can actually go in and do an instant recovery: just rename the server to something different and don’t connect the NICs, to verify the chain functions properly.
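
As an illustration of the kind of routine verification Steven describes, here is a minimal Python sketch that compares the hash of a file restored to a scratch location against the production copy. The paths are placeholders, and the restore itself would still be driven by Veeam (for example, a file-level restore to your desktop); this only shows the comparison step.

```python
import hashlib
from pathlib import Path

def sha256(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file and return its SHA-256 hex digest."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_restored_file(production: Path, restored: Path) -> bool:
    """Return True when the restored copy matches production bit for bit."""
    match = sha256(production) == sha256(restored)
    print(f"{restored.name}: {'OK' if match else 'MISMATCH - investigate the backup chain'}")
    return match

if __name__ == "__main__":
    # Placeholder paths: point these at a production file and the copy Veeam restored.
    verify_restored_file(
        Path(r"D:\shares\finance\ledger.xlsx"),
        Path(r"C:\restore-test\ledger.xlsx"),
    )
```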

Kelly Culwell (00:11:02): 

That’s really good. Restoring a file is one thing, but being able to recover an entire virtual machine is something completely different. That’s where people will lose data, or they’ll have big issues if they come across a disaster scenario and they haven’t tested that. They’re like, “Well, we’ve been able to recover files.” And then they go to power on that virtual machine, and they didn’t realize one of the OS files was corrupt or something in the chain was broken. They’re just unable to recover their data. So, our treat for this is the annual disaster recovery test. Make sure everything works, give it a test, and you’re good to go!

Tim Hudson (00:11:46): 

Quick question there, because Veeam has some abilities, like you’ve described, for administrative-type actions to go and test some backups, right? You talked about a file or a disk or one machine, but when I think about large groups of machines, I recognize that could be a challenge for certain customers to test. It also makes me think about connectivity. If users are testing in their own infrastructure, they’re not really planning for a site failure, because you need to think about connectivity somewhere else. Could you give me an example of how you solve some of those larger tests where you include connectivity testing? I’m asking from the angle of what is above and beyond just the Veeam process that you can share with the potential customers on the webinar today.

Steven New (00:12:52): 

By default, during our annual testing, we do spin the virtual machines up into an instant recovery state, and our standard service does allow for a virtual VPN and a virtual router that handles the networking for the customer. During the initial DR test, we send the customer a spreadsheet where they put in their networking information, and once we import that into our build system, with the click of a button and all the magic behind the scenes, it automatically builds the customer’s networking and puts them in their own VLAN for testing. The machines are segmented from all of our other customers in their own VLAN. Then it automatically spits out a VPN account for them, and they can connect up to 10 users for a test. That allows them to go in and perform any testing that they do in their environment.

(00:13:55): 

We request that they do it on the DR side. We want them to go in and kick the tires. We want to make sure that your SQL applications work. We want to make sure that you can connect to your web portals. We want to make sure all this stuff functions prior to you having an actual event and having to work through any issues in that event. We request that any customer perform these tests during their annual DR test to verify functionality. 

Tim Hudson (00:14:28): 

I think that’s a really great answer, because to me that’s how you make this a comprehensive solution: leveraging some Veeam tools, but really doing the things our end customers should be doing. Considering you have that offsite platform, it’s a really great package. Awesome.

Why offsite backups and secondary backups are key in disaster recovery best practices

Kelly Culwell (00:15:27): 

The other configuration challenge we see is that customers are not getting backups offsite or keeping a secondary backup. That’s why we exist. That’s why Veeam provides this service, so customers can do that and protect data offsite. And that’s also where the 3-2-1 rule comes into play. We’ve done a lot of different blog posts and webinars about the 3-2-1 rule. You can read about them on GlobalDataVault.com, but it’s basically three copies of your data, on two different types of media, with one of those copies offsite. That one offsite copy protects against overall site failures due to damage to the facility: fires, earthquakes, hurricanes, things like that. The overall architecture of the solution will protect against different failures within the infrastructure, but that one offsite copy is really the only way to protect against everything, including malware and ransomware. Tim, how many people out there today still don’t quite have all their claws (reference to Tim’s lobster costume) when it comes to the 3-2-1 rule?

Tim Hudson (00:16:44): 

I was actually going to ask you the same thing, but I’ll give you my answer, and it’s far too many. Veeam works very hard to produce a comprehensive set of options in our software, whomever you want to send it to, wherever you want to send it to. We like to give our customers choice, and we think we do a good job of that, whether it’s their own infrastructure, a service provider’s infrastructure, or even a hyperscale infrastructure. But I know, having worked with service providers for the last six years of my career and having worked at a service provider for the 10 years before that, that there are still a lot of folks not doing this, or not doing this the right way. Maybe it’s not holistic, maybe they’re only sending part of their data, and if they’re not doing a comprehensive annual DR test, they might not even know that that part is not sufficient. So, I wanted to hear from your side. Maybe give me an example, Steven, of a DR test where somebody realized they were missing some critical infrastructure? Or Kelly, I’m just curious if customers come to you and you find out, even though it’s 2022, even though there are all these options in the console, that people still aren’t doing it? Do you guys see that? I do, across the globe.

Kelly Culwell (00:18:15): 

We do, for sure. And the most common thing is just the misconception that everything is okay the way it is, or that taking backups offsite means the back seat of their car, or some of those other types of things. We have had some challenges with infrastructure, like you mentioned, that Steven can talk about. Steven, what do you think about missing “the one,” or, taking it a step further, the one extra copy that we keep to add another layer of protection and an even better gap against malware?

Steven New (00:19:02): 

I’m going to address both of your questions. The first one: Tim, you asked about customers realizing during a DR event that they missed critical infrastructure components. Yes, we have seen that. Thankfully it was actually during a DR test, and they realized, “Hey, wait a minute. I don’t have my Citrix environment in the cloud.” That was a big miss, so we worked with the customer. They created the backup jobs, got their Citrix environment up to us during the test, and we were able to get it online immediately during that test without having to reschedule. And then Kelly, to your question: customers do get hit by ransomware, and we have a technology on the back end called Enhanced Data Protection, which we internally call EDP, that keeps an offline copy the customer cannot access. With these malware and ransomware events, normally what they do is remove the data from your repository. If they can’t see the data, they can’t remove the data. Whenever we perform tests, we perform them from that EDP data. It verifies the functionality of the data, and the customer gets their DR test as well, so it’s a win-win situation. And that EDP data has saved probably more than a hundred companies we’ve supported that would be out of business if it did not exist.

Tim Hudson (00:20:52): 

I mean, I think that speaks volumes. Veeam has adjusted our 3-2-1 rule a little bit, or at least we’ve proposed additional thinking. We’ve started to talk about a broader zip code that includes things like protected backups or offline backups as really being part of that. And I think it speaks volumes that you have integrated this into your solution, and your customers can consume it. This touches on our next little trick here that I certainly see regularly today: not enough time and effort spent on planning a Veeam or data protection implementation. That lack of planning leads either to failed or challenging DR tests, or to sizing issues that cause some teeth gnashing around repositories on the source end or repositories on the target end. I see our treat here, but I was hoping you guys could maybe speak to how you help folks with some of these planning or sizing challenges.

Sizing your source data is critical to your backups

Steven New (00:22:10): 

Sizing is actually critical, Tim. If we don’t know the proper sizing of the source data, it’s going to be hard for us to plan or size the Veeam server that we put on the customer’s site. If a customer comes in with six terabytes’ worth of data and we find out that it’s really 12 terabytes’ worth of data, the server that we put onsite could be under-provisioned for that data size. On the cloud side, I’m not really worried about it because we can adapt really easily. But the local sizing is mainly where we see issues, and for customers not familiar with it, we actually do have a tool called Veeam ONE. It helps us size your device properly to make sure that you have a sufficient amount of resources and space based on the change rate.

(00:23:06): 

We can do a configuration assessment of your VMware or Hyper-V environment, and then we can also run a report that gives us the change rate of your virtual machines: which machines have the highest change rate and which have the lowest. Then we can use that to adequately size your LDVs, or local data vaults, which are the Veeam servers you have onsite, to make sure that you have enough space locally to keep those two copies of data and then get the one offsite as well.
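
For readers who want to see the arithmetic behind this kind of sizing, the sketch below estimates local repository capacity from the figures Steven mentions: source size, daily change rate, and retention. It is a deliberately simplified model (compression and deduplication are ignored), not the Veeam ONE sizing method itself, and the numbers are examples only.

```python
def estimate_repo_tb(source_tb: float,
                     daily_change_rate: float,
                     retention_days: int,
                     growth_headroom: float = 0.2) -> float:
    """Rough repository estimate: one full backup plus one increment per
    retained day, with extra headroom for growth. Compression and dedupe
    are ignored to keep the model simple."""
    full = source_tb
    incrementals = source_tb * daily_change_rate * retention_days
    return (full + incrementals) * (1 + growth_headroom)

if __name__ == "__main__":
    # Example: 12 TB of source data (not the 6 TB originally reported),
    # 5% daily change rate, 30 days of local restore points.
    needed = estimate_repo_tb(source_tb=12, daily_change_rate=0.05, retention_days=30)
    print(f"Plan for roughly {needed:.1f} TB of local repository capacity")
```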

Kelly Culwell (00:23:44): 

One of the challenges that we have from a disaster recovery standpoint, because we do bring customers’ virtual machines and servers up in our cloud, is the age-old adage, “It’s virtual, just throw resources at it.” Look, I can give this virtual machine 64 CPUs and 128 gigs of RAM. Yes, you can, but no, don’t do that. Veeam ONE is pretty good about going in and saying, “Here’s what this virtual machine is provisioned at today, here’s what it’s really using, and here’s what it needs.” Maybe you want to think about scaling that back a little bit before you try to bring up a bunch of servers in another environment. Things don’t always mesh up, and you end up having more performance issues. A lot of times, it can simply be because the machines are over-provisioned. They have too many resources, because you are still within the constraints of the actual physical devices that host them, and other workloads need CPU time and memory too. So Veeam ONE will also look at your configuration for overutilized and underutilized virtual machines. It’s another good way to use the tool.
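
To make that right-sizing point concrete, here is an illustrative Python sketch that flags virtual machines whose provisioned resources far exceed what they actually use. The inventory is hard-coded for the example; in practice the utilization figures would come from a monitoring tool such as Veeam ONE, and the 25% threshold is an arbitrary assumption.

```python
from dataclasses import dataclass

@dataclass
class VM:
    name: str
    vcpus: int
    ram_gb: int
    avg_cpu_used: float   # fraction of provisioned CPU actually used
    avg_ram_used: float   # fraction of provisioned RAM actually used

def flag_overprovisioned(vms, threshold=0.25):
    """Report VMs using less than `threshold` of their provisioned CPU or RAM."""
    for vm in vms:
        if vm.avg_cpu_used < threshold or vm.avg_ram_used < threshold:
            print(f"{vm.name}: {vm.vcpus} vCPU / {vm.ram_gb} GB provisioned, "
                  f"only {vm.avg_cpu_used:.0%} CPU and {vm.avg_ram_used:.0%} RAM used "
                  "-- consider scaling back before a DR failover")

if __name__ == "__main__":
    flag_overprovisioned([
        VM("sql-prod-01", vcpus=64, ram_gb=128, avg_cpu_used=0.08, avg_ram_used=0.30),
        VM("web-01", vcpus=4, ram_gb=16, avg_cpu_used=0.55, avg_ram_used=0.60),
    ])
```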

(00:25:00): 

We’ve talked about configurations, and now let’s look at some “monster misinformation.” I don’t know about you, Steven, but that thing looks a little bit like Tim does right now.   

Tim Hudson (00:25:13): 

It does, actually. <Laugh> 

Common misconceptions of backup and disaster recovery

Kelly Culwell (00:25:15): 

<Laugh>. So having worked with Veeam for a long time, I know that Veeam did a lot of things before other providers in the backup and disaster recovery space. They were pioneers with a lot of different technology. And a lot of times when that happens, you see FUD, which is fear, uncertainty, and doubt. It’s basically misinformation that is spread by competitors or by people in the industry: “No, that doesn’t work,” or “you don’t need that,” or “why would anybody ever need to restore a backup as a VM?” or “why would anyone want to power on a backup?” And now those things are second nature to everything that we do in the disaster recovery space. Let’s talk about a few of them. We’ve already discussed multiple copies of your backups. Do we need to hammer that one?

Tim Hudson (00:26:19): 

Let me sink my claws into it just a little bit more, because sometimes people think multiple copies just means what you have on the source side, or your production site, with an equal copy offsite. And we appreciate that. We have that functionality built into our scale-out repositories, right? We have this backup copy capability to send data to our service providers, and it can be the same copy as the data on-site or a different backup chain. What I’d ask people to think about is that sometimes, locally, we might want to keep just a small amount of data. To Steven’s point earlier, thinking about your local data vault or LDV, your local repository, you might size that to keep only 30 days’ worth of backups. If you have historical requirements, let’s make sure that data is offsite: long-term monthly backups, yearly backups, or a long set of weekly backups.

(00:27:34): 

A lot of folks don’t realize that they can have that data onsite today with local GFS policies, and an equivalent copy offsite with traditional backup copy jobs. And I think it’s just really important to work with your internal organization or a partner like GDV to understand what’s possible and why, because it bothers me, to be honest, to hear things like, “It’s expensive to store that data offsite that our company makes us store.” If you have a corporate policy to keep X amount of data, the reality is that you need to keep those additional backup chains for historical purposes, and Veeam keeps them as separate backup chains. It’s important. And so the idea that we don’t need multiple copies, or that we don’t need multiple chains, is wild. It’s wild! So I think it’s an opportunity to work with a partner, almost to consult, and understand: “What do we need? Why do we have this policy to keep this data?” And then finally, “Where should we send it?” To a safe and secure place; hopefully it’s also a place where we can do a test on that data, local to the data itself.
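
As a rough illustration of the split Tim describes (a short daily chain on the local repository, with long-term GFS points kept offsite), the small sketch below just counts restore points under a simple GFS-style policy. The retention numbers are invented for the example, not a recommendation.

```python
def gfs_point_count(local_days: int, weekly: int, monthly: int, yearly: int) -> dict:
    """Count restore points kept locally versus offsite under a simple
    GFS-style policy: short daily chain on-site, long-term points offsite."""
    return {
        "local_daily_points": local_days,
        "offsite_gfs_points": weekly + monthly + yearly,
    }

if __name__ == "__main__":
    # Example policy: 30 daily points on the local repository,
    # 8 weeklies + 12 monthlies + 7 yearlies kept offsite for compliance.
    print(gfs_point_count(local_days=30, weekly=8, monthly=12, yearly=7))
    # -> {'local_daily_points': 30, 'offsite_gfs_points': 27}
```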

Kelly Culwell (00:29:04): 

I’m glad that you mentioned, “The data’s already secure,” or “Why do we need to send it somewhere else,” right? Because with the hyperscalers, the Azures, the public cloud-type infrastructures, the misconception is that they’re already protecting my data. “It’s the cloud. What are you talking about? My data is secure, and it’s being backed up.” No, it’s not. The only thing that a lot of these hyperscalers or cloud companies or service providers of that type really offer is the infrastructure, the platform, the physical security, and the power to the servers that are providing these services. There are shared responsibility models and a lot of fine print that say, “Okay, here’s what we do.” But you’re responsible for literally everything else, including the backups and the protection of that data, even ransomware protection. So, Steven, when we think about the public cloud, the Azures and the AWSs, how are we able to help customers with that scenario?

Steven New (00:30:18): 

First of all, we’re able to get a backup in place locally for them. And then we can move that data to our cloud, because you always want a copy of your data outside of the cloud provider. Just in case there is (knock on wood) an issue, you have a secondary copy outside that platform that you can access, and you can bring it up in your DR environment and go back to work as normal. Veeam is amazing because after you go back to work as normal in your DR environment, we can actually inject that data back into Azure for you. We can do a restore back to Azure using an Azure proxy. Once the event is over, you need to get your data back into the cloud, and we can restore that data back for you. We also enable EDP on all the data that we protect from the cloud providers. So, if something malicious were to happen, we have a copy of your data outside your repository, just like our on-prem customers who have servers locally.

Kelly Culwell (00:31:35): 

I think one other important point on that is retention again, and the cost associated with some of it. It may be worthwhile to keep 14 days, 30 days, whatever, of backups local to the public cloud space. But then you want to send everything after that, your 12 monthlies or your yearlies or the other data, outside of that to a different provider, not only to satisfy the 3-2-1 rule and add protection, but to maintain control of your data, which ultimately is the goal of most organizations. You want to maintain control, you want to understand where the data is, how you access it, and who you call if you need to access it, because unless it’s written in your contract, the cloud providers are only going to maintain that data for a certain amount of time.

(00:32:29): 

And you may not know where that data is located. It could be shipped somewhere across the world in a backup state unless your contract specifies otherwise, and for compliance reasons you may have to keep your data in a certain part of the country, things like that. So there are a lot of different scenarios to consider there. The important part is that the public cloud is not a cure-all, so don’t get lulled into thinking it is.

Tim Hudson (00:33:01): 

This makes me think about our next bullet point: “My backups themselves are protected.” Protecting your data is pretty straightforward: build the job, watch for green, yellow, or red, and your data has been protected. But is your backup data itself protected? This is a massive part of the conversation in the industry today. Do I have immutable backups? Are they secure? Meaning, for those who have access or the capability to access that data, are they the appropriate folks at the appropriate times? Sometimes I think about a hyperscaler or public object storage, anybody’s object storage, anywhere. We know this stuff is typically accessed by something like an access key or a secret key. Who has access to that information? It’s not a traditional username and password.

(00:34:07): 

Are those keys recycled? How do you audit access to that data? I wonder if end users, traditional end customers, are prepared. Are they armed with the right information to understand what kinds of checks and balances they should have for where their data is stored? Locally, we traditionally think about things like network access: can you hit this file share, can you log into that Windows or Linux machine? But when we think about offsite and public storage, it’s a different ballgame. It’s a different beast, or monster, or something like that. This reminds me, Steven, of the conversation you were just having around your EDP technology. This is offline storage, meaning your customers cannot access this data. I think there’s real value in that: it’s a way for you to provide your customers backup security that they may not understand, may not implement, or may not have the technical skills, time, or effort to implement.

Steven New (00:35:37): 

Part of our best practice when we configure the backup job is that we enable encryption locally and remotely. That is another way we at Dataprise keep our customers’ data secure: we encrypt it locally and also encrypt it going to the cloud. If customers would like, they can provide us with the encryption key; if not, they can keep it in their favorite password vault, and we will request it in the case of an emergency. But we always keep our customers’ backups secure by encrypting the data locally and also the data that comes to the cloud. Also keep in mind that data going from the customer’s network to our cloud network is automatically encrypted in flight as well, so we encrypt both in flight and at rest.
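
Steven is describing Veeam’s built-in job-level encryption rather than custom code, but for readers who want to see the at-rest idea in miniature, here is a generic Python sketch using the third-party cryptography package (pip install cryptography). It is illustrative only and has nothing to do with Veeam’s actual key handling; the key management shown (generate a key and stash it in a vault) is simply the assumption made for the example.

```python
from cryptography.fernet import Fernet

# Generate a key once and store it somewhere safe (a password vault);
# whoever holds this key can decrypt the data, so guard it accordingly.
key = Fernet.generate_key()
cipher = Fernet(key)

plaintext = b"backup block contents"
ciphertext = cipher.encrypt(plaintext)          # encrypted at rest
assert cipher.decrypt(ciphertext) == plaintext  # recoverable only with the key

print(f"Key (store in your vault): {key.decode()}")
print(f"Ciphertext length: {len(ciphertext)} bytes")
```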

What are some basic security best practices for backups?

Tim Hudson (00:36:36): 

Awesome. I’m going to jump to security best practices, Steven. You’re making me think about this question, and I need to ask: do you help your customers consider security best practices? I’m not thinking about just a Veeam installation or keeping logs in a different directory or on a different drive. I’m thinking more holistically: what kinds of things do you talk to your customers about regarding security best practices?

Steven New (00:37:11): 

Anytime we implement a Veeam solution, we request that the customer not use the administrative account. Do not use administrator@vsphere.local. We always create service accounts with granular permissions: only giving certain accounts access to vCenter over port 443, and running the Veeam services under a Veeam service account. We break it down to an account level and send them the Veeam best practices article regarding security account permissions. Don’t give the account too much permission. That is probably the number one rule that we like to relay to our customers. Please don’t use the root account, please don’t use the vSphere administrator account. Let’s create service accounts and get those implemented. That way you only have the permissions needed for the product to function.

Kelly Culwell (00:38:24): 

Not joining the backup server to the domain is another one. Some of the attacks that we’ve seen have simply been because customers had their backup server on the domain and somebody grabbed those domain admin credentials (the malware did, in fact). So: not using administrator-level credentials to go browse the internet, keeping admin accounts separate from your everyday-use accounts, and using MFA and zero trust wherever you can. All of those different things. On the devices that we ship out, we always have SIEM services, antivirus, next-gen EDR (endpoint detection and response), and those types of things, so that if we do see some of those attacks, we’re able to react quickly and get the best information we can about them.

Tim Hudson (00:39:19): 

Another question keeps popping into my head. Steven, suppose you walked into a prospective customer that maybe wasn’t following best practices, or was using an existing provider that did things differently. Change is hard. I know that myself, <laugh> Veeam knows that, and surely you have experience working with folks who are trying to mature their implementations and do things the right way while coming from a different provider. I’m not saying they were doing something wrong, but doing something different can be a challenge. Is that really true? And how do you help people not fail when they’re trying to make these changes?

Steven New (00:40:20): 

Change is hard, I will admit that, but sometimes we have to do it for the better. We always check the customer’s implementation, even if they come from another provider, to make sure that they’re not only within Veeam’s best practices but within Dataprise’s best practices as well. If that requires a reimplementation locally to get the customer back within best practices, our implementation team will go through that with the customer, submit the changes we’re requesting, and then work with the customer to get those changes implemented so that they’re not only within Veeam best practices but also following security best practices. We always recommend that customers put a SIEM and a good antivirus program on their systems, and we request that they have a 24-hour SOC monitoring their systems to watch for brute-force attacks. Dataprise does offer security as a service, so we are able to provide that for our customers, and like Kelly mentioned, all of the servers that go out as LDVs, local data vaults, which are the Veeam repositories and run Veeam, automatically get configured with SIEM software and algorithm-based antivirus software, and they are automatically monitored by our SOC.

Tim Hudson (00:42:00): 

So you’re combining real security with the backup service. When you say SIEM, I believe that’s S-I-E-M, security information and event management.

Steven New (00:42:16): 

Correct. A backup service alone these days isn’t enough. You have to have this monitoring on these servers to stop these brute-force attacks.

Kelly Culwell (00:42:34): 

We talked about “I don’t need to test my backups,” so next up is testing. There are multiple levels of testing, multiple things you can do, and Veeam has some of those built in. So, let’s talk about health checks and what Secure Restore is. Health checks are automatic, right? They happen every job, or every so often?

Steven New (00:43:08): 

At the beginning and the end of the job, a mini health check is performed. But we go the extra mile for our customers, and we enable the health check in the advanced settings for every single job: local backups, backup copies, and backup copies to the cloud. That way, it gets a good health check at the end of the month to check for corrupted chains. And keep in mind that whenever you run these health checks, Veeam has a feature where, if it does detect corruption, it will try to auto-heal the chain. That’s something that we enable when we configure the jobs, but it is not enabled by default. It’s something that we turn on.

Tim Hudson (00:43:54): 

I think that’s a really important point. It makes me think, as the Veeam guy, about those last couple of words, Steven: it is not enabled by default. That’s okay. Folks should use it. And I love to see that your best practice puts a standardization on it, because we’ve made our decision in the product, and that’s okay, but for you to have made that decision in your product to follow those best practices is, I think, a differentiator. It’s an important CRC check to verify that the blocks in the file are the ones we expect and that there’s no known corruption we can see in the actual backup file. Awesome.
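
The CRC idea Tim mentions can be shown generically: record a checksum for each block when the backup is written, then recompute and compare later. The Python sketch below uses the standard library’s zlib.crc32 purely to illustrate the concept; it is not Veeam’s health-check implementation, and the file name is a placeholder.

```python
import zlib

BLOCK_SIZE = 1024 * 1024  # 1 MiB blocks for the example

def block_checksums(path: str) -> list[int]:
    """Compute a CRC32 per block so later reads can be verified against them."""
    checksums = []
    with open(path, "rb") as handle:
        while block := handle.read(BLOCK_SIZE):
            checksums.append(zlib.crc32(block))
    return checksums

def verify_blocks(path: str, expected: list[int]) -> list[int]:
    """Return the indexes of blocks whose CRC no longer matches."""
    return [i for i, (now, then) in enumerate(zip(block_checksums(path), expected))
            if now != then]

if __name__ == "__main__":
    baseline = block_checksums("backup_file.vbk")          # recorded at backup time
    corrupted = verify_blocks("backup_file.vbk", baseline)  # re-checked at health-check time
    print("corrupted blocks:", corrupted or "none")
```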

Kelly Culwell (00:44:43): 

Yep. Can you talk us through Secure Restore quickly? We’ve got a couple more slides, and I know we have some questions. I don’t want to run out of time here. We’ve already talked about the yearly DR tests a little bit, and file level restores as testing functions. But what about Secure Restore? 

Tim Hudson (00:45:03): 

I’ll speak to that. Secure Restore is our Veeam feature where, during a recovery process (I don’t care whether it’s a full machine restore, an individual disk restore, or an instant VM recovery), Veeam can tie into an antivirus application. We’ll run an AV scan at that moment of restore, we’ll turn on the machine, and we’ll verify whether that AV detects any malicious application or behavior on that system, and we can define what we want the result to be. If the machine comes back exposed or with an issue, we could shut it down, we could delete it, or we could just disconnect the network, because maybe you still want to come in on the console. So, it’s basically an antivirus scan during the process of restore, with the hope, Kelly, that it tells you this machine is fine, let’s just bring it up. Or, if not, that you don’t get exposed again to something that could be detrimental to your environment.
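
Tim is describing Veeam’s built-in Secure Restore feature, not something you would script yourself. As a generic illustration of the decision flow he outlines (scan first, then power on, isolate, or abort), here is a small Python sketch; run_av_scan is a placeholder for whichever scanner you would actually call.

```python
from enum import Enum

class ScanResult(Enum):
    CLEAN = "clean"
    INFECTED = "infected"

def run_av_scan(mounted_disk_path: str) -> ScanResult:
    """Placeholder: call your antivirus engine against the mounted backup disk."""
    return ScanResult.CLEAN

def secure_restore(vm_name: str, mounted_disk_path: str,
                   on_infected: str = "disconnect_network") -> str:
    """Scan a restore point before bringing the VM fully online, mirroring the
    choices described above: power on, disconnect the NICs, or abort entirely."""
    if run_av_scan(mounted_disk_path) is ScanResult.CLEAN:
        return f"{vm_name}: clean, powering on with network connected"
    if on_infected == "abort":
        return f"{vm_name}: infected, restore aborted"
    return f"{vm_name}: infected, powered on with NICs disconnected for inspection"

if __name__ == "__main__":
    print(secure_restore("file-server-01", "/mnt/restore/file-server-01"))
```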

Kelly Culwell (00:46:26): 

That’s a really good feature, and I know that we’ve had some customers go in and run some of their own tools. They did their own version of this, right? They come in and load a script or something that can detect malware and do it their own way. Still, it was important that they were able to do that before they moved that data back into their production environment. I think we’ve covered a lot of good info. Now let’s talk about some “ghoulish gaps.” If we don’t have some of this information, where can we go to get it right now? A lot of this stuff is free, and there are some paid classes and paid certifications and things like that, right? But Steven, talk us through this. You’ve mentioned the Veeam best practices site (veeambp.com). What are your favorites from this list?

Steven New (00:47:20): 

It’s going to be the Veeam certifications. We need to make sure that all of our technicians, implementation engineers, and support guys stay current with their Veeam training so that we give our customers the best implementation possible. We have two VMCEs, Veeam Certified Engineers; two VMCAs, Veeam Certified Architects; and a Veeam Vanguard on staff. Then also, I love the forums. Do you have a strange issue that you’ve never seen before? The forums are the best place to go. You can type in your question, and there are people in the forums, all the way up to senior product management and senior vice presidents at Veeam, who go in and answer questions for customers. When you can get a direct response from the product manager, the PM assigned to the product…

(00:48:21) 

If there’s an issue that needs to be escalated to the development team, they can escalate it for you with the click of a button. So the forums are probably the second big one. The third is the free Veeam on-demand training. Who doesn’t like the word free? You can learn more about the products by taking the free on-demand trainings. And then the best practices are basically the implementation guide. If you want to make sure that you’re implementing a solution correctly, you can go to veeambp.com and look through the best practices. What I like about that site is it’s searchable: if you want to look up something specific to a Veeam best practice, you can type it in at the top and it will take you directly to the word you’re searching for.

Kelly Culwell (00:49:16): 

That’s good. And Jason, thanks for the shout-out on helpcenter.veeam.com. It’s definitely a good site. Steven mentioned a couple of the certifications that our folks have. One of the things that you get with the DRaaS service from Dataprise is access to a VMCE and a VMCA, the certified folks who are competent in the solutions that we deliver. The platinum-level requirements that we have are pretty stringent. Tim, I don’t know if you want to mention this or not, but it’s not just a “here you go, you sold five widgets, you get a platinum-level certification.” Talk us through that real quick.

Tim Hudson (00:50:03): 

I think what’s important for the audience to hear is that the Veeam Service Provider Program has been around for about a decade, and it’s matured. I want to make an analogy here to these VMCE and VMCA certifications. These are individual certifications, held by people like Steven on the phone and Josh Clemens, another GDV employee. These folks have professional-level individual certifications as engineers and architects. And that’s awesome, but they’re individuals. Looking on the screen here in the bottom right, we see these boxes labeled “competencies.” These are like certifications for an organization. For the audience in the webinar, or those listening later, a quick Google search will tell you that these are badged, tested, and verified. Think of them as offerings or services from Global Data Vault and Dataprise. We test service providers.

(00:51:15): 

We ask them challenging questions about what they’re doing as service providers, and it builds up into what we think are proven partners and service providers that deliver great services, not just good services. These are above and beyond. And so, Kelly, the summary of my answer is that it’s a comprehensive program that is viewed at many levels: individual certifications and overall organization or service certifications. I think it speaks to your level of depth and maturity to have achieved these. Question from the chat: “How many folks have three certifications, for Cloud Connect backup or offsite backup, for delivering managed services for Office 365 backup, and for DRaaS?” Very few. I don’t need a full hand to count them.

Kelly Culwell (00:52:32): 

Speaking of full hands or empty hands, we have some questions. Don’t be a zombie. Ask us a question! We saw one earlier in the presentation; it was sort of a comment about older Windows operating systems. One of the common things is that virtualization allows companies and organizations to run systems far beyond the useful life of the hardware they were originally installed on. You could P2V an old server that’s still running Server 2003, and it magically works in virtualization. Veeam will magically back it up and protect it, and we can do Cloud Connect and backup copy jobs to get that data offsite. One thing that doesn’t magically happen, though, is powering that sucker back on in a new environment. There are bits and blocks and things that change when you do that, which those older operating systems just can’t tolerate. So make sure that you’re thinking about that when you’re planning your disaster recovery strategy and your upgrade strategy onto supported platforms. Just because it works in some areas doesn’t mean it’s going to work, or is supported to work, in other areas. It was kind of a question about what’s supported, or what do you do with those outdated operating systems, and the answer is, well, we do what we can.

Trista Perot (00:54:04): 

I love what Brian said, “Don’t be spooked to ask questions.” <Laugh> I love a good theme. I do have a couple of questions that were sent to me. One is, “We briefly talked about the 3-2-1 rule, but I’m hearing about a 3-2-1-1-0 rule. What’s the difference there?”

Tim Hudson (00:54:25): 

I’ll jump into that. I was touching on it a little earlier: the problem has matured, but so has our software, and so have the capabilities of malicious actors. So we’ve continued to develop and mature this 3-2-1 rule in our own way. We now call it “the zip code,” meaning the backup zip code. We’d like to have three copies of our data, on two different media, with one of them still being offsite. That maintains those three pieces. The additional one and zero are that we’d like one of those copies to be immutable, air-gapped, or offline, out of the hands of a malicious actor. And then, finally, for the zero, we’d like to have these backups tested, and we’d like to see them have zero errors. So this is not just about having your data; let’s use that data in a way that gives our organizations and our leadership confidence that in the time of a disaster, we’re prepared.
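
As a quick illustration of that 3-2-1-1-0 “zip code,” the Python sketch below checks a described set of backup copies against each of the five conditions. The data structure is invented for the example; a real audit would pull these facts from your backup reporting.

```python
from dataclasses import dataclass

@dataclass
class BackupCopy:
    media: str          # e.g. "disk", "object-storage", "tape"
    offsite: bool
    immutable: bool     # immutable, air-gapped, or otherwise offline
    last_test_errors: int

def meets_3_2_1_1_0(copies: list[BackupCopy]) -> dict[str, bool]:
    """Evaluate each element of the 3-2-1-1-0 rule against the described copies."""
    return {
        "3 copies": len(copies) >= 3,
        "2 media types": len({c.media for c in copies}) >= 2,
        "1 copy offsite": any(c.offsite for c in copies),
        "1 copy immutable/offline": any(c.immutable for c in copies),
        "0 errors in last test": all(c.last_test_errors == 0 for c in copies),
    }

if __name__ == "__main__":
    copies = [
        BackupCopy("disk", offsite=False, immutable=False, last_test_errors=0),
        BackupCopy("disk", offsite=True, immutable=False, last_test_errors=0),
        BackupCopy("object-storage", offsite=True, immutable=True, last_test_errors=0),
    ]
    for rule, ok in meets_3_2_1_1_0(copies).items():
        print(f"{rule}: {'PASS' if ok else 'FAIL'}")
```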

Trista Perot (00:55:54): 

Awesome. And I love the comments right now. <Laugh>. I have two more questions about licensing. The first one is, “Does my ESXi have to be licensed by Veeam?” 

Kelly Culwell (00:56:12): 

Your ESXi host has to be licensed by VMware. There is a free version of ESXi available for anybody to use in lab environments, that type of thing. Sometimes those make their way into a production environment, and the customer is basically running an unlicensed version of VMware. The way backups are done depends on the VMware licensing: your Veeam side may be licensed, but if the VMware side isn’t, it doesn’t allow you to play. The car may be running, but you don’t have a key to the door; that’s basically what happens when you try to back up an unlicensed version of ESXi. It doesn’t work.

Trista Perot (00:56:57): 

Gotcha. And to follow up on that, also along the licensing lines: “What issues can happen if the Veeam license file expires? Is there a grace period before the renewal goes through?” Tim, you’re smiling.

Tim Hudson (00:57:11): 

I’m smiling because, could you imagine if we didn’t have a grace period? Somebody makes a slight mistake with licensing and what, we’re going to prevent you from doing backups? Veeam is not that type of organization. Of course we have a grace period for licensing; it’s 60 days. That helps people get their administrative stuff right and lets their backups keep going. Maybe more importantly, it lets restores continue to happen without issue. And then finally, just on that point, Trista, I’ll say that even if somebody leaves Veeam and has a long history of backups, maybe stored at GDV, maybe stored somewhere else, we can always restore those backups. We never want to be a company that prevents an organization from restoring a backup because of something like a license. We don’t want that to be the case, whether it’s through the grace period or through what we call the permanently free “extract utility.” It’s very beneficial, in our minds, to our customers and partners.

Trista Perot (00:58:44): 

Jason’s got a nice comment here. They give you 10% over your VUL licensing as well. So, if you go beyond your current count, you still have time to purchase new licenses, or you can burst your VM count above while you migrate to a new operating system. Very flexible. 
