The 3-2-1-1 air gap rule

The Importance of Air Gap Backups in a Disaster Recovery Solution

It's not a matter of IF your business will succumb to hackers, a natural disaster, employee theft or other mismanagement of data. It's a matter of WHEN. Naturally, it has become common practice to keep safe backups of anything business essential, but how companies keep those backups varies considerably. Disasters are inevitable, and a disaster recovery plan is essential to business continuity. What is missing from many of those recovery plans, however, is a fundamental understanding of air gap backups. They provide a final means of defense that can make a significant difference when recovering from a data disaster.

What Is an Air Gap?

An air gap, also called an “air wall” or “air gapping,” is a security measure that protects data from intrusion. The concept is simple: any device that isn’t connected to a network cannot be attacked remotely. The very name is derived from the principle. If the circuit is broken — or air exists between items in a network — then only a physical attack can threaten the data. In terms of disaster recovery, the idea is to place backups behind air gaps. This protects them from malicious software, direct cyberattacks and other corrupting threats.

Air gaps are typically thought of as a final layer of protection for data integrity. More accessible backups are used more often, but if everything else fails, the air-gapped backups should provide a preserved copy capable of restoring the entire system.

The 3-2-1-1 rule

You've probably heard of the 3-2-1 rule for backing up your data. It goes like this: keep at least 3 copies of your data (for example, local hardware, cloud and a backup cloud), store those copies on at least 2 different media (tape, disk or cloud), and keep at least 1 copy off-site. This is a great start to a DR plan, but what if ransomware compromises administrative passwords or domain information that allows that backup copy to be corrupted? Adding another "1" insulates the data from further damage. The backup rule is now 3-2-1-1: the extra "1" accounts for an air-gapped copy of your data.
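To make the rule concrete, here is a minimal sketch in Python that checks a backup plan against 3-2-1-1. The `BackupCopy` structure and its field names are purely illustrative, not part of any particular backup product:

```python
from dataclasses import dataclass

@dataclass
class BackupCopy:
    """One copy of the data set (fields are illustrative)."""
    media: str        # e.g. "disk", "tape", "cloud"
    offsite: bool     # stored away from the primary site?
    air_gapped: bool  # offline / disconnected from any network?

def satisfies_3_2_1_1(copies: list[BackupCopy]) -> bool:
    """3+ copies, on 2+ media types, 1+ offsite, 1+ air-gapped."""
    return (
        len(copies) >= 3
        and len({c.media for c in copies}) >= 2
        and any(c.offsite for c in copies)
        and any(c.air_gapped for c in copies)
    )

# A plan with a local disk copy, an offsite cloud copy and an offline tape copy
plan = [
    BackupCopy("disk", offsite=False, air_gapped=False),
    BackupCopy("cloud", offsite=True, air_gapped=False),
    BackupCopy("tape", offsite=True, air_gapped=True),
]
print(satisfies_3_2_1_1(plan))  # True
```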

Are Cyber Attacks Really a Risk?

Yes. Cyber attacks are a reality. Large companies regularly suffer data breaches of some type, and small companies are certainly not immune to a hacker's interest. Every year, new names are added to the long list of organizations whose data has been compromised. Any collection of employee, customer or user data is potentially worth attacking, and the frequency of attacks is on a meteoric rise.

According to a poll by CSO, the rate and variety of attacks is growing every year, and cybercrime is already the largest financial threat to most businesses. Estimates suggest that by 2021, the total cost of cyberattacks will hit $6 trillion. Clearly, hacking has become big business, and that additional "1" is critical in preserving a clean set of data from hackers' meddlesome ways.

Challenges of Air Gapping

While air gapping can provide an ultimate line of defense, it comes with its own challenges. At the top of the list is labor. When devices are completely disconnected from a network, they have to be physically accessed, which limits automation and requires staff hours. Automated solutions do exist, but any device that is automatically connected to and disconnected from a network could potentially become compromised. There really is no way around this trade-off.

The other great challenge of air gapping is ensuring security. The walled devices are safe when they are disconnected, but at some point they have to communicate with other devices in order to update the backup. Hidden malicious software can be transferred during those updates. We minimize this risk by providing enterprise-level security measures that detect any unusual data movement within the network. By utilizing BitLyft on your networked account, we are able to monitor, detect and neutralize threats in real-time. BitLyft also provides automated incident responses to detect and neutralize future threats based on information gained from previous attacks, further offering a higher level of data protection.

Ultimately, air gapping is part of a holistic approach to network security. IT professionals have been following the golden rule of triplicate backups for decades, and air gapping remains a key component to maintaining a fresh data set.

Don’t Let the Reaper Get Your Data

According to a recent article in the Wall Street Journal, concerns are escalating over one of North Korea's three major hacking organizations because of both its adeptness and its sheer brazenness. APT37, aka "Reaper," is a hacking group well known for attacking South Korea, but it has since moved on to companies in Japan, Vietnam and the Middle East. What is especially noteworthy about its recent slew of attacks is the heightened level of sophistication, and how little effort the group has made to disguise its bad deeds.

Cybersecurity company FireEye, Inc. monitors Reaper's attacks, and in a report issued earlier this month it revealed that Reaper is utilizing a toolset that includes access to zero-day vulnerabilities and wiper malware. Reaper has shown a preference for hacking companies in the automotive, aerospace, chemical and health care industries. The group also recently attacked South Korea after discovering a vulnerability in Adobe Flash, which allowed it to install malware on the computers of anyone who opened the corrupt Adobe Flash files.

FireEye squarely points the finger at the North Korean government as the true face of Reaper, citing malware development artifacts and targeting that supports state interests. FireEye claims it can easily trace these attacks back to the Pyongyang IP addresses that Reaper has been using.

Reaper is just one of a growing collection of hacking groups linked to North Korean leader Kim Jong Un's regime, including "Lazarus," which the US blamed for the Sony Pictures Entertainment data theft in 2014. Bloomberg Technology posits that North Korea has been widening its cyber-operations to gather cash and intelligence to offset the penalties of international sanctions. Those sanctions have been on the rise, yet North Korea seems unconcerned and continues to ramp up attacks on foreign countries.

Whether your company is a Reaper target or potentially attractive to another cyber-criminal, attacks are on the rise. Being vigilant within your own company is mission-critical to preventing data loss. The best defense against a cyber-attack is to have a comprehensive, tested disaster recovery plan in place that includes an air-gapped backup. You may still be vulnerable to cyber threats, but the day-to-day impact is significantly minimized.

NEW Backup for Microsoft Office 365

Office 365 Backup

Whether you have completely migrated to Office 365, or have a hybrid Exchange and Office 365 deployment, your business objectives remain the same. You must remain in control of your data and you need Office 365 backup and recovery at your fingertips.

One of the most vulnerable situations for an IT Admin is when their only option is to send a support ticket and wait. Don’t let this be you.

Backup for Microsoft Office 365 mitigates the risk of losing access to your Exchange Online email data and ensures Availability to your users.

With Office 365, it’s your data

Microsoft Office 365 enables you to work anywhere, anytime, without the need to maintain your own email infrastructure. It also provides a great way to minimize your on-premises footprint and free up IT resources. Even though Microsoft takes on much of the management responsibility, this doesn’t replace the need to have a local backup of your email data.

With Office 365, it's your data — you control it — and it is your responsibility to protect it. Utilizing Backup for Microsoft Office 365 allows you to:

  • Empower your IT staff to take control of your organization’s Office 365 data
  • Reduce the time and effort needed to find and restore email data
  • Protect against data loss scenarios that are not covered by Microsoft
  • Facilitate the migration of email data between Office 365 and on-premises Exchange

Backup Office 365 email

You need to securely back up Office 365 email data to your own environment for a variety of reasons (e.g., to follow the 3-2-1 rule of backup, to facilitate eDiscovery, and to meet internal policies and compliance requirements). The most important reason is the peace of mind that comes from knowing you'll be able to restore your users' data when needed!

With Backup for Microsoft Office 365, you can retrieve Office 365 Exchange Online mailbox items (email, calendar and contacts*) from a cloud-based instance of Office 365 and uniquely back up this mailbox data into the same format that Microsoft Exchange uses natively — an archive database based on Extensible Storage Engine (ESE), also known as the Jet Blue database.


Restore Office 365 email, calendars, and contacts

Never settle for less than fast, efficient recovery of Office 365 mailbox items with best-of-breed granularity.

Veeam Explorer™ for Microsoft Exchange allows for quick search and recovery of individual mailbox items residing in either archived Office 365 content or on-premises Exchange backups. Mailbox items can be restored directly to an Office 365 mailbox, an on-premises Exchange mailbox, saved as a file, emailed as an attachment or exported as a PST.


eDiscovery of Office 365 email archives

Without a local copy of your data, retrieving emails for regulatory or compliance reasons can be costly and time consuming, and can ultimately present a major disruption to normal business operations.

But not with Veeam! You can leverage the familiar, advanced search capabilities and the flexible recovery and export options of Veeam Explorer for Microsoft Exchange to perform eDiscovery on Office 365 email archives — just as easily as you would today with your on-premises Exchange server backup.


To start a free trial of Office 365 Backup, contact sales@globaldatavault.com


Cry Me a River…Really?

Yep. That's one headline I saw this weekend about the WannaCry attack. And I guess we can understand that sentiment, maybe. Our view at Global Data Vault is that our job is to be ready to help any of our customers hit by this outrageous attack. Our customers use our services to recover from ransomware attacks quite regularly; this one is far from over, and I suspect we'll help our customers perform more than a handful of recoveries. We may all know this by now, but here is some background on the subject.

Ransomware is malware that encrypts and sometimes later deletes files from computers, smartphones, and other intelligent devices – now even including TVs. Ransomware is operated by organized crime gangs, many of whom are based in Russia. The proceeds of these attacks are being used to fund terrorism, human trafficking, drug operations and other nefarious activities.

The first known ransomware attack occurred at a World Health Organization AIDS conference in 1989. At the time, the intent was to extort small amounts of money. Another early implementation posed as antivirus software, which victims were encouraged to purchase in order to eradicate malware that had been planted by the same code.

Today, with attacks coming from so many sources, the advent of untraceable virtual currencies like Bitcoin, and the existence of sophisticated encryption algorithms, ransomware has become a billion-dollar industry.

There is even a market that supplies tools for building ransomware and tech support for implementing attacks. The encryption is now often 256-bit, RSA-grade, and too sophisticated for even large technical organizations to break. Citrix reports that many large companies are keeping Bitcoin available as a last resort.

Even more frightening are cases where remote access trojans have been used to monitor a potential victim, determine the scope of the organization and assess its ability to pay a given ransom.

History

CryptoLocker was the first widespread attack, appearing in 2013. It was supported by a large network of malware bots (together called a botnet), which was used to distribute the actual attack. CryptoLocker extorted over $3 million before being shut down by the Department of Justice, which took control of the botnet and issued a warrant and a bounty for Russian hacker Evgeniy Bogachev for his involvement.

New threats followed: CryptoWall is believed to have extorted over $350 million; Locky operated in 30 languages; Petya encrypts entire hard drives. As bad as these are, Cerber is the most prevalent, accounting for 90% of Windows ransomware.

Most attacks arrive through email attachments. Word, Excel and PDF files containing dangerous macros are sent as bait, usually disguised as invoices and the like. If the user opens the file and allows the macro to run, the attack will generally succeed. Your inbox has become your most vulnerable point.

Avoidance and Prevention

  1. Patch everything, as often as possible, across every application.
  2. Do not allow local admin rights on user desktops.
  3. Desktop antivirus is helpful but not enough, because attackers continually recompile their code to escape detection. Secure email gateways also help, but are limited for the same reason.
  4. BACKUP – it is the only real protection!
  5. Follow the 3-2-1 rule: always have 3 backups, on 2 media types, with 1 offsite. More on the 3-2-1 rule below.

Backup Strategy

As a service provider working in this area, we see attacks on a weekly basis and have performed hundreds of recoveries. The following points are lessons learned from our own experience and from the well-organized thoughts on this subject of Rick Vanover, Director of Technical Product Marketing at Veeam Software.

  1. Use different credentials for backup jobs! An attacker with credentials to access your system might also attack your backups.
  2. At some point, commit data to offline media such as tape. If it's offline, it cannot be attacked.
  3. Use Veeam Cloud Connect (we do). It uses a different method of authentication and a different backup API.
  4. Store backups in a different file system.
  5. Take SAN snapshots of your local backup repository.
  6. Expand and master the 3-2-1 rule – use the 3-2-1-1 rule: have 3 copies of your data, on 2 types of media, with at least 1 offsite and at least 1 offline.
  7. Test – aim for 0 errors after recovery is tested! Veeam's SureBackup verification is one great way to do this.

While this is a good start, there are many other technical strategies we implement for our customers. GDV employs as many as possible for each of them. We're always happy to discuss how you can leverage these ideas.

We hope this is helpful. Good luck and stay ready.

Bandwidth – How Much is Enough?

How Much Bandwidth is Enough?

Knowing and planning for an appropriate level of bandwidth is a key component of every DRaaS solution. In our most common DRaaS implementation, the data is moved from local repositories at the customer site over the internet or WAN to our data centers once per day, even though that data may contain multiple restore points.

Here’s a diagram showing how the data moves.

[Diagram: Veeam Cloud Connect data flow from the customer site to our data centers]

The key to an effective solution is knowing whether the remote data center is getting the data on a timely basis. We measure this daily, and we call this measurement "quality": in other words, how current is the remote data? If it's within the desired recovery point objective (RPO) agreed in the service level agreement (SLA), then we regard it as being "in SLA".
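As a rough illustration of that daily quality check, here is a minimal Python sketch. It assumes all you track is the timestamp of the newest restore point at the remote data center and the RPO agreed in the SLA; it is not our actual monitoring tooling:

```python
from datetime import datetime, timedelta, timezone

def in_sla(latest_restore_point: datetime, rpo_hours: float) -> bool:
    """True if the newest remote restore point is within the agreed RPO."""
    age = datetime.now(timezone.utc) - latest_restore_point
    return age <= timedelta(hours=rpo_hours)

# Example: the last restore point landed 20 hours ago, against a 24-hour RPO
last_point = datetime.now(timezone.utc) - timedelta(hours=20)
print(in_sla(last_point, rpo_hours=24))  # True -> "in SLA"
```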

It’s essential that the bandwidth between the customer site and the remote data center is sufficient to move the data that’s changed every day quickly enough that it’s in the remote data center in time to support the agreed or planned RPO. See our discussion of RPO here. As a general rule for most sites, we find that 4 to 6 hours is “quickly enough”, and this is usually scheduled overnight.

Why not use 8 or 12 hours? In more complex implementations there are other events, like multiple backup jobs, each of which uses resources and must finish. So 4 to 6 hours is a conservative window of time. Some sites may be able to use longer windows – and therefore less bandwidth – to move the data, or simply move more data per day in the longer window.

Getting to the Question

So, how much bandwidth is required to move the data in 4 to 6 hours? To answer this question, we need two pieces of data and a bit of math. First, we need to know the total storage in use: your disk storage in use across all servers. If you're running VMware, one way to get this figure is from your vSphere console, as covered here.

Second, we need to know the daily data change rate. This is easily measured by Veeam ONE, as discussed in a previous post, or it can be derived simply by looking at the actual size of daily backup files.
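If you don't have Veeam ONE handy, here is a tiny sketch of that second approach, deriving an approximate daily change rate from the sizes of recent daily incremental backup files. The numbers below are hypothetical:

```python
# Sizes of the last week's daily incremental backup files, in GB (hypothetical)
daily_incrementals_gb = [48, 52, 55, 47, 50, 49, 51]

used_storage_gb = 5_000  # total disk storage in use across all servers

avg_daily_change_gb = sum(daily_incrementals_gb) / len(daily_incrementals_gb)
change_rate = avg_daily_change_gb / used_storage_gb
print(f"~{avg_daily_change_gb:.0f} GB/day ({change_rate:.1%} daily change rate)")
# -> ~50 GB/day (1.0% daily change rate)
```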

Now for the Math

If R is the required speed in Mbps, D is the data changed each day in GB, and T is the transfer window expressed in seconds (6 hours = 21,600 seconds), then our formula, converting gigabytes to megabits (1 GB = 8,000 Mb), is: R = (D × 8,000) / T (see note 1).

To save you from number crunching, we’ve prepared this chart showing typical storage sizes and change rates and the required bandwidth result:

[Chart: required bandwidth for typical storage sizes and daily change rates]

So, at a site with 5TB of storage and a 1% daily data change rate (after compression and deduplication), which we find is typical, the changed data is 50GB per day and the required bandwidth is about 19 Mbps.
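If you would rather compute the number than read it off the chart, here is a short Python sketch of the formula above, assuming decimal units (1 GB = 8,000 megabits):

```python
def required_mbps(changed_gb_per_day: float, window_hours: float = 6.0) -> float:
    """Bandwidth (Mbps) needed to move one day's changed data in the window.

    Assumes decimal units: 1 GB = 8,000 megabits.
    """
    megabits = changed_gb_per_day * 8_000   # D converted from GB to Mb
    seconds = window_hours * 3_600          # T, the transfer window in seconds
    return megabits / seconds               # R = (D x 8,000) / T

# The example from the text: 5TB of storage at a 1% daily change rate = 50 GB/day
print(round(required_mbps(50)))  # -> 19 (Mbps, over a 6-hour window)
```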

For a deeper dive, check out this great web based calculator at WintelGuy.com.

Note 1: WintelGuy also defines all of the relevant units of measure you may encounter.

Summarizing this Idea

  1. Start with your storage in use
  2. Find your daily data change rate
  3. Calculate or look up the required bandwidth here

Final Question

What if my bandwidth is not enough? There are numerous strategies we employ to deal with slower lines, larger data sizes and higher change rates. The best strategy is developed on a case-by-case basis.

Let us know if you have questions. And we welcome your thoughts and real world experiences.

Sizing for Local Backup Repositories

Proper sizing for local backup repositories is a critical step in building a complete backup and disaster recovery as a service (DRaaS) solution. Below, we list the most accurate way of determining your data change rate and how large the repository should be. While no method can anticipate every future event in your data, we have found that following these three steps leads to the best possible estimate for local backup repository sizing.

The main factors that impact your storage requirement are the size of your VMs, the frequency of your backups and your retention policy. You will need to estimate your compression and deduplication ratios, which usually result in a 40% to 60% reduction, and you will need to know your approximate daily data change rate; Veeam offers a great tool to address this.

Using this information, you can plan for the required disk space. But, you should always allow a safe margin for future growth, and for moving VMs or restoring VMs, etc.

Step One – Run Veeam ONE


Veeam ONE is a free reporting and planning tool, included in the Veeam Availability Suite, that monitors both VMware and Hyper-V virtual environments. Veeam ONE will report the daily data change rate by server in your environment. Ideally, you should run it over a period covering normal activity levels.

This example shows the daily data change rates for a group of servers. The total daily change rate is 214GB.

[Report: daily data change rates by server, from Veeam ONE]

Though not shown in this report, the total size of these servers as reported by vSphere is 2.5TB. As an example, this screenshot shows sizing data for a different group of servers in vSphere:

[Screenshot: sizing data for a group of servers in vSphere]

Step Two – Apply a Retention Policy


We have written at length in this blog about retention policies. Organizations vary in their specific retention needs, but let's work from the following fairly typical retention policy. Local restore points: every 6 hours for 2 days, plus 10 dailies and one monthly. Remote restore points: 10 dailies, one monthly and an unlimited monthly archive.

Visit The Restore Point Simulator at http://rps.dewin.me/

Given the data size, change rate and estimated compression rates, this very clever tool can complete the calculation we need. We’ll give it these inputs:

Style = Incremental
Used Size GB = 2500GB
Retention Points = 21
Change Rate = 8%
Data left after reduction = 60%
Interval = Daily
Time Growth Simulation = 10% per year

The simulator reports below that we need 5.475TB:

[Screenshot: Restore Point Simulator result showing 5.475TB required]
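As a sanity check on the simulator, here is a back-of-envelope Python sketch of the same first-order arithmetic: one compressed full backup plus twenty compressed incrementals. It deliberately ignores growth over time and working space for merges, which is why it lands near 3.9TB rather than the simulator's 5.475TB:

```python
def repo_size_gb(used_gb: float, change_rate: float, reduction: float,
                 retention_points: int) -> float:
    """First-order estimate: one compressed full plus
    (retention_points - 1) compressed incrementals.

    reduction = fraction of data left after compression and deduplication.
    """
    full = used_gb * reduction
    incremental = used_gb * change_rate * reduction
    return full + (retention_points - 1) * incremental

# The inputs fed to the Restore Point Simulator above
base = repo_size_gb(used_gb=2500, change_rate=0.08,
                    reduction=0.60, retention_points=21)
print(f"{base / 1000:.1f} TB")  # -> 3.9 TB before growth and working space
```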

Step Three – Reasonableness Test


We always apply a reasonableness test. Depending on what we know about the data, we'll estimate a storage requirement of roughly four times the storage reported in vSphere; in this case, that's 10TB. For this example, we'd recommend a repository with 8 to 10TB of storage. This allows room to grow and accommodates any unexpected data growth or high change rates.

Here are some other articles we’ve written about retention policies and related ideas:

  • Developing a Data Retention Policy: What Data Do You Have to Back Up?
  • Data Backup: Developing an Effective Data Retention Policy
  • Checklist for Developing an Effective Data Retention Policy

In a future post, we’ll discuss specs for the repository hardware as well as specs for the server performing the backups.

Summarizing the Steps

  • Get your data change rate from Veeam ONE
  • Use the Restore Point Simulator to estimate storage required
  • Apply a reasonableness test as a final check

Let us know if you have questions. And we welcome your thoughts and real world experiences.