Data Center Power Play

By John Parkinson

I haven't worried about reliable power for quite some time. But last week I started to, if not worry, at least think about how to cope with the levels of power density we are starting to see in the data center.

A fully populated rack--four full blade chassis, for example, running virtualized environments at around 80 percent utilization (which is what we are achieving these days)--consumes 24 kVA of power continuously. That's more power than the average family home draws, concentrated into 10 square feet.

At an efficiency ratio of about 1.4 (it takes 1.4 units of power to cool 1 unit of power), each rack represents nearly 60 kVA of committed power. And with compute, storage and connectivity to take into account, we have a lot of racks.
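
Here is that arithmetic as a minimal Python sketch. The per-chassis split is an assumption used for illustration; only the 24 kVA rack total and the 1.4 cooling ratio are given above.

    # Back-of-the-envelope check of the rack power figures above.
    # The per-chassis number is an assumption for illustration; the column
    # only gives the 24 kVA rack total and the 1.4 cooling ratio.

    CHASSIS_PER_RACK = 4
    KVA_PER_CHASSIS = 6.0       # assumed: 24 kVA rack total / 4 chassis
    COOLING_RATIO = 1.4         # 1.4 units of power to cool 1 unit of IT power

    it_load_kva = CHASSIS_PER_RACK * KVA_PER_CHASSIS   # 24 kVA of IT load
    committed_kva = it_load_kva * (1 + COOLING_RATIO)  # IT load plus its cooling

    print(f"IT load per rack:         {it_load_kva:.1f} kVA")
    print(f"Committed power per rack: {committed_kva:.1f} kVA")  # ~57.6, i.e. "nearly 60"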

Data center designs tend to keep the racks close together. That saves on real estate as well as fiber and copper connection costs, but it makes the power and cooling capacity problem worse. Most data centers can safely deliver only 200 watts per square foot with the current generation of power-distribution infrastructure, yet we need north of 240, with some headroom above that to handle occasional utilization spikes.

Something has to give.

What's giving right now is density within the racks. By limiting racks to no more than 50 percent full, we can keep power to around 150 to 160 watts per square foot. That will hold us for a while (maybe a year), but then we will have to start spacing the racks farther apart to ease the cooling problem--perhaps even letting the temperature rise and running the data center hotter than we are used to.
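
A rough sketch of how the fill-fraction trade-off might work out at floor level follows. The floor area attributed to each rack and the fixed "house" load (UPS and distribution losses, CRAC fans, lighting) are assumptions chosen so the output lands near the figures quoted above; they are illustrative, not measurements from any particular facility.

    # Illustrative only: how rack fill fraction translates into floor-level
    # power density. FLOOR_SQFT_PER_RACK and HOUSE_LOAD_W_PER_SQFT are
    # assumptions, not figures from the column.

    RACK_IT_KW = 24.0           # IT load of a fully populated rack (from the column)
    COOLING_RATIO = 1.4         # cooling power per unit of IT power (from the column)
    FLOOR_SQFT_PER_RACK = 240   # assumed gross floor area attributed to each rack
    HOUSE_LOAD_W_PER_SQFT = 40  # assumed fixed load that does not scale with rack fill

    def watts_per_sqft(fill_fraction: float) -> float:
        """Committed watts per square foot at a given rack fill level."""
        rack_watts = RACK_IT_KW * 1000 * fill_fraction * (1 + COOLING_RATIO)
        return rack_watts / FLOOR_SQFT_PER_RACK + HOUSE_LOAD_W_PER_SQFT

    for fill in (1.0, 0.5):
        print(f"{fill:.0%} full: {watts_per_sqft(fill):.0f} W/sq ft")
    # With these assumptions: 100% full -> ~280 W/sq ft, 50% full -> ~160 W/sq ft,
    # which is roughly the shape of the numbers quoted above.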

The technology actually doesn't mind being relatively warm. It's the sudden changes and localized hot spots that kill electronics. Juggling all these factors might well be what leads us to containerized solutions sooner rather than later.

Making these critical environmental design decisions is becoming much more of a science--and much more of a full-time job for data center operations. It takes new skills and tools to understand, monitor and manage the power and thermal environments, keep them stable and react correctly when incidents occur.

It also takes some political skill to explain to the business why there is so much empty space in the racks and the data center. After all, we have spent several decades making the technology smaller and packing more and more of it into less and less space.

Absent a move to sealed container environments, we seem to have hit a tipping point in this process--and just as microprocessors had to give up getting faster and go to more real estate to get more powerful, so our data center designs may have to evolve to be more accommodating to the laws of thermodynamics.

We should also, probably, stop offering data center tours to outsiders. Cameras are cheap, and the video is a lot easier to manage than physical access, security and safety concerns.

And you won't have to explain why the place feels like a big empty sauna.

John Parkinson, the former CTO of TransUnion LLC, has been a technology executive and consultant for over 30 years, advising many of the world's leading companies on the issues associated with the effective use of IT.
