A chassis is a pretty big deal around here, but not the kind that keeps the body, suspension and wheels of your car attached together. In our world, a chassis is the enclosure that handles all the non-computing tasks required to support multiple servers: it provides power, cooling, connectivity and manageability to each blade server it holds.
“Blades” are redundant, self-contained servers that fit into a chassis alongside other blades. Each chassis holds 8 to 16 blades – so that’s 16 to 32 processors and up to 96 cores per chassis. Each blade supports up to 48GB of RAM, or up to 768GB per chassis.
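The per-chassis numbers above work out with simple multiplication. Here’s a quick sketch (the two-sockets-per-blade assumption is inferred from the processor counts quoted above, not stated outright):

```python
def chassis_capacity(blades, sockets_per_blade=2, ram_gb_per_blade=48):
    """Return (processors, total RAM in GB) for a populated chassis."""
    return blades * sockets_per_blade, blades * ram_gb_per_blade

# A fully loaded 16-blade chassis:
procs, ram_gb = chassis_capacity(16)
print(procs, ram_gb)  # 32 processors, 768 GB of RAM
```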
Our latest data center upgrade involves multiple redundant chassis. A typical HP blade chassis, like ours, holds 16 blades.
Blades are pretty cool servers. Their major selling point is that they afford nearly 100% uptime. They can tackle any task you’d like them to:
- Database and application hosting
- Virtual server hosting platforms
- File sharing
- Remote desktops and workstations
- Web page serving and caching
- Streaming audio and video content and more.
If your system needs more power, you just add another blade server to your chassis. They provide reliability through resilience and quality.
Servers can fail; we all know that. But with VMware, you can cluster the 8 to 16 servers in a single chassis. That way, when a server fails, the impact is almost negligible, because the workload moves from, say, all 8 servers to the remaining 7. None of the servers runs at full capacity, so if one fails, it’s no problem.
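That workload shift is easy to see with arithmetic. A sketch (the utilization numbers are illustrative, not measured):

```python
def per_server_load(total_cluster_load_pct, surviving_servers):
    """Spread a cluster-wide workload evenly across surviving servers."""
    return total_cluster_load_pct / surviving_servers

# Say 8 servers each run at 70% capacity: 560 "points" of work total.
cluster_load = 560
print(per_server_load(cluster_load, 8))  # 70.0 – all healthy
print(per_server_load(cluster_load, 7))  # 80.0 – one server down, still headroom
```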
The chassis itself is highly resilient. It has 4 power supplies (rather than the usual two of a stand-alone server). These power supplies are basically mini-transformers with their own fans. If one fails, as they sometimes do, it can be replaced within 24 hours without impact. And that’s a good thing.
All disk arrays (storage systems that link multiple hard drives into one large logical drive) have at least two power supplies but can run on just one. Having two power supplies provides protection from two kinds of problems:
1) If one power supply stops working, the other takes over and handles the power for both.
2) Separate power paths: one side of the device is fed from one path, the other side from a second, independent path. If your data center is designed correctly, those paths plug into distinctly different power feeds. So if your electric company has a power failure, a transformer blows up, or a major wire gets cut, the data center stays lit. Even if one side of the data center’s power goes down, the power to your servers stays on, and you don’t even go to diesel. (Some data centers – not ours – do not have two power-station feeds; there you would rely on multiple concurrent power paths within your own space.)
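The value of two independent feeds can be quantified: if each feed is up with probability a and their failures are independent, at least one is up with probability 1 − (1 − a)². A sketch with illustrative availability figures (not our actual utility numbers):

```python
def dual_feed_availability(a, b=None):
    """Probability that at least one of two independent power feeds is up."""
    b = a if b is None else b
    return 1 - (1 - a) * (1 - b)

# One 99.9%-available feed vs. two independent ones:
print(round(dual_feed_availability(0.999), 6))  # 0.999999 – "six nines"
```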
There’s a saying, “Murphy’s Law: What can go wrong, will. Bell’s Law: Murphy was an optimist.”
Yeah, we all feel that way on certain days when the planets are aligned for great misfortune, but it’s also an apropos saying for all things regarding your computer systems, and even your data center’s maintainability. Things are going to go wrong. Sooner or later, they will.
Change happens. New equipment, increased power requirements, cooling demands, changes in safety and security regulations, consolidations, expansions. All of these change events can trigger a failure. They demand that you have flexible maintainability, because with each change event, there’s a potential for misfortune.
The good news is that you can mitigate your risk for a Bell’s Law type of SNAFU by taking precautions. All experts recommend a few steps you can take to avoid downtime, as well as choosing a data center partner that fits your budget and needs.
Number one on your to-do list is to avoid densely packing racks with energy hogs. Next, trade space for density, since energy costs 4 – sometimes 5 – times as much as space. (Aim for about 4 kilowatts per rack.) But after you’ve done your part to ensure the best continuity for your own servers, what do you know about the data center you choose to put them in? How do you reduce your chances of performance interruptions? Choosing a data center with a level of uptime consistent with your needs is a start.
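The 4 kW-per-rack guideline translates directly into floor planning. A quick sketch (the 400 kW hall size is a made-up example):

```python
def racks_supported(hall_power_kw, kw_per_rack=4):
    """How many racks a power budget supports at a target density."""
    return hall_power_kw // kw_per_rack

# A hypothetical 400 kW hall at the recommended 4 kW per rack:
print(racks_supported(400))  # 100 racks
# Pack racks at double the density and you halve the racks you can power:
print(racks_supported(400, kw_per_rack=8))  # 50 racks
```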
The aptly named Uptime Institute (a non-profit organization) is an unbiased, third-party data center research, education, and consulting organization focused on improving data center performance and efficiency through collaboration and innovation. Its members are corporations with heavy data center utilization. These corporate members share best practices for high-performance data centers, and through their discussions they’ve identified four “tiers” of fault tolerance (the Bell’s Law thing again), where Tier 1 is the lowest and Tier 4 the highest, or best, in regard to data center uptime. Below are their definitions of Tiers 1 – 4, their basic design criteria, and their typical outage time on an annual basis:
| Tier | Design criteria | Typical annual outage |
|------|-----------------|-----------------------|
| 1 | One path for power and cooling. No redundant components (e.g. no spare air-conditioning units). | ~29 hours (99.671% availability) |
| 2 | One path for power and cooling, but with redundant components for both. | ~22 hours (99.741% availability) |
| 3 | Multiple power and cooling distribution paths, but only one active path. If the active path fails, the data center can switch over to the redundant path. | ~1.6 hours (99.982% availability) |
| 4 | Redundant paths plus “fault tolerance,” meaning that if one path fails, the other automatically takes over – covering everything from the electrical power distribution system to the uninterruptible power supply (UPS), backup diesel generation, etc. | ~24 minutes (essentially 99.995% availability) |
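Those availability percentages convert to annual downtime with simple arithmetic: downtime = (1 − availability) × minutes per year. A sketch:

```python
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def annual_downtime_minutes(availability):
    """Expected minutes of downtime per year at a given availability."""
    return (1 - availability) * MINUTES_PER_YEAR

# A Tier 4 data center at 99.995% availability:
print(round(annual_downtime_minutes(0.99995)))  # ~26 minutes per year
```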
Bell’s Law happens. Do your part to ensure you aren’t set up for colossal failure, and choose a data center partner that fits your needs and budget. Knowing your parameters makes it much easier to plan for disasters and to build contingency plans for recovery.
The only thing hungrier than Americans at Thanksgiving dinner is our data centers, with their appetite for power.
The power cravings are due to our ever-increasing Internet usage. Yes, all those senseless YouTube videos your friends make of their cats are adding to the overall power consumption. In fact, it would take 7 million DVDs for each hour the internet has existed to capture all of the existing internet data, and that’s expected to quadruple by 2016. So you can imagine: the amount of energy required to store and power our data (the kitty videos and the relevant data alike) is astronomical.
US data centers use more energy than the auto industry, yet are notoriously inefficient. Data centers often have power densities 100 times greater than typical office buildings, so efficiency is critical. Data center power is required not only for the IT equipment but also for heating, ventilation and air conditioning (HVAC). A metric has been developed to measure data center energy efficiency: Power Usage Effectiveness, or PUE, which is Total Facility Power divided by IT Equipment Power.
The average US data center has a PUE of 2, meaning that only half of the electricity consumed goes to the IT equipment.
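Since PUE is just a ratio, it can be computed from two meter readings. A sketch (the kilowatt figures are made up for illustration):

```python
def pue(total_facility_kw, it_equipment_kw):
    """Power Usage Effectiveness: total facility power over IT power.
    1.0 is the theoretical ideal; lower is better."""
    return total_facility_kw / it_equipment_kw

# A facility drawing 1000 kW overall while its IT gear draws 500 kW:
print(pue(1000, 500))  # 2.0 – half the power goes to overhead (HVAC, losses)
```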
That data center inefficiency comes from many sources. Most online backup providers run at maximum capacity around the clock, regardless of demand, and a single data center can use more power than a medium-sized town. If you’ve heard the term “ping, pipe, power,” it refers to colocation facilities or data centers that supply rack or floor space where customers can “ping” their computers remotely, access electrical “power” for operations and continuity, and use a connection – the “pipe,” or bandwidth – to and from the Internet.
As customers jam more and more storage and processing power into smaller spaces, more and more power is required from the data center.
Power, and the continuity of power, is a hot issue in the data center world. In fact, in some parts of Europe, data center space is sold by power consumption (megawatts) rather than actual rack or floor space. While not common in the US – yet – the trend is real and headed stateside. Additionally, many data centers are investing in “green” improvements over the next 4 years (and reducing their reliance on not-so-eco-friendly diesel generators). This trend is projected to actually reduce data center electricity consumption by 33%.
On the other hand, historic trends project growing consumption. It will be interesting to see how this progresses, but it will be an important measurement for IT and businesses in general in the coming few years.
Guest blogger: Trista Perot
The blistering 107-degree heat hit me like a sauna as I stepped outside – and triggered a sick reminder to go back and grab a coat, because I would need it.
Right – need it in blistering summer Texas heat?! But today I was tagging along with Global Data Vault to tour an impressive new Dallas-area data center, and it would surely be freezing. Or so I had heard.
If you’ve never toured a data center, it’s fascinatingly dull.
Dull because visually, you see row after row of the same colored, locked towers stacked with non-logoed servers and blinking lights. After a few minutes, the lights become mesmerizing and the landscape is repetitive and predictable — but the sheer magnitude of all the things you CAN’T see is mind-boggling. To be surrounded by the volume of data and technology passing through those walls at nearly the speed of light is a rush for nerds like me.
Data Centers are highly secure environments. We had to walk through a “man-trap” (what I wouldn’t have done to get a hold of one of those in my younger years!) which is basically a series of security check-points. No one enters a data center without their government-issued ID. The center makes copies and — like it or not, you are forever in their files. Once you pass the security personnel, there’s a holding area with two doors which can’t both be open at the same time (insert image of spy flick with frantic hero attempting escape).
After emerging from the “man-trap” (Yeah, now I just like saying it) at CoreXchange, you then see an immaculate kitchen, a large conference room and a staging area for my techie friends to pull out their equipment, get comfortable on their Wi-Fi and tinker with their server parts.
As we were commencing our tour, in the open hallway, something interesting happened.
As if on cue –
All the lights went out.
Not exactly a rolling blackout – just the utility company struggling on a day of record demand.
But there are no windows in a data center.
It was really dark.
And I was a little nervous.
But it only lasted for a few seconds, and I expected that there would be a small army of technicians clamoring to the scene. I wanted to see the excitement! Sadly, only one person came, we’ll say “with intent” to the data center floor to see what was up.
That’s because in a data center, they live for this kind of thing. It’s like – no big deal.
If the power goes out, battery backups take over for a few seconds and keep everything running; then, in literally a split second, the data center’s automatic transfer switch (ATS) signals the generators to start. That would be one of THREE generators. So there was ZERO interruption on the data center floor. ZERO. NADA. ZILCH.
The power came back and believe it or not, flashed AGAIN (what some electric companies will do to get your attention…) but ground zero kept humming along unfazed by each little trick the power grid tossed their way. And no one panicked because everything happened just as it should. So seamlessly that we accused them of staging it.
Next we passed through the metal doors to see a sparklingly clean cage farm of blinking lights. There were “hot aisles” and “cold aisles” – but because of the raised floor, even the cold aisles didn’t justify the need for that coat, which is frankly much better suited for a spring shower. It was like going to an oxygen bar: lots of air circulating, white noise to relax you, and the contemporary sleekness of an urban loft. All we needed was a little Kenny G in the background.
It was a professional organizer’s utopia. All the servers faced the same direction (hence the hot and cold aisles – the server fans all blow their exhaust heat the same way), and there wasn’t a single piece of clutter or trash. Everything was in its perfect place. The raised floors kept the air moving and eliminated the need for the super-cold environments of other such centers. Even the crash carts that allow IT people to visit sans laptop had their own little garage to keep them quarantined. All the wires were neatly strung above the server lockers on ladder-type racks attached to the ceiling. There wasn’t a massive jumble of cable like you can find behind each piece of technology in my home.
What’s especially attractive about this data center is that the fiber running to it is redundant (in case of a real disaster) and connects to the building via two different routes or, as they’re known in tech speak, diverse paths. In other words, your data is not only backed up, it’s backed up on steroids.
Which is good because these companies that are leasing everything from a partial server to their own chain-link cage with a desk inside are depending on 100% uptime. It’s vital to their operations.
And then it hit me. I was literally walking through the CLOUD. This is what technology today is.
It got me thinking: as a business owner, wouldn’t I want to know that the company backing up all my data is housed in a data center of this caliber? Uh, that’d be a he** yes! Every back-up company should do what Global Data Vault has done with me and take customers on a tour of where their information is going to be protected.
And while I’ll never know who else is actually housed here – that’s a best kept secret for any data center – I certainly have a new-found appreciation for the guts of it all and how it relates to little ol’ me.
On Trista: Trista Perot is a social business consultant in Dallas. She is on a one-woman mission to learn how to protect her clients’ data in the safest way possible. From desktop to fiber, to online data back-up and back, she wants to learn how to safeguard against data loss and ultimately what disaster recovery is all about. In this series, she’ll be examining the role a data center plays in harboring one’s mission-critical information. Stay tuned for next week’s post, where Trista learns what the heck SAS 70 is and why it’s important to protecting her data.
If you’re online, you’re connected to a data center somehow. All sizes of business today rely heavily on data centers for business continuity. By backing up their data to a remote location, companies mitigate their potential for loss due to a hack or other disastrous event.
But what is it really, a “data center?” What’s it look like and what’s inside?
When you walk into a data center – that is, after you’ve signed in and presented your …
To accommodate continuing growth, Global Data Vault today announces the addition of the H5 Data Center in Dallas to its data center network. H5 offers several distinct advantages including the highest levels of security and resilience as well as superior Gigabit connectivity to our Infomart data center. Combined with our sister company, Cloud Data, GDV’s network now includes four data centers as shown here: