Everyone is running out of real estate. How do we make the most of what we have available? In a data center environment, this means consolidating space and overall square footage. As companies and data centers alike consolidate more and more equipment and applications into smaller and smaller spaces, the need for power density increases.

Data center operators are looking to make the most of their investments. For them this means fitting as many clients as possible into their facilities, increasing revenue per rack by increasing the available power density, increasing cabinet sizes over the standard 42U 19" rack (a U stands for rack Unit, which is 1.75 inches in height), and of course moving into managed services and Cloud computing.

Back in 2005, data centers were built to specifications of 50-100 Watts per square foot. Today a standard data center can support 100-175 Watts per square foot, with more modern data centers built to 225 Watts per square foot. Some very high-end data centers are even building custom environments of 400 Watts per square foot or higher for applications such as data mining clusters/supercomputers or energy industry seismic processing.

Today's most common power configuration per cabinet is 208 Volt, 20 Amp circuits in a Primary/Redundant configuration. Two circuits are installed, one on each side of the rack: the Primary, or "A" circuit, which all the equipment in the rack runs off of, and the Redundant, or "B" circuit, which is there simply in case of a failure on the Primary power supply. With this configuration you are looking at a maximum power draw of about 4.2kW per circuit. Compare this to 2005, when the most common configuration was 120 Volt, 20 Amp circuits in the same A Primary / B Redundant arrangement; those circuits have a maximum power draw of 2.4kW.

Today's standard power configuration can supply the standard load requirement for approximately 80% of rack enclosures. So why are we worried about increasing power density? According to Schneider Electric, that same configuration will only support up to 40% of future enclosures. There are a few reasons for this:
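
To make the circuit math concrete, here is a minimal sketch of how those per-circuit figures are derived: capacity is simply volts times amps. The 80% continuous-load derating shown at the end is my addition (standard NEC practice for continuous loads), not a figure from the numbers above.

```python
# Minimal sketch: per-circuit power capacity, as described above.
# Assumes single-phase circuits; the 80% continuous-load derating
# (per NEC practice) is an added illustration, not from the article.

def circuit_kw(volts: float, amps: float, derate: float = 1.0) -> float:
    """Return circuit capacity in kW (volts x amps, optionally derated)."""
    return volts * amps * derate / 1000.0

# 2005-era standard: 120V, 20A
print(circuit_kw(120, 20))        # 2.4 kW max draw
# Today's standard: 208V, 20A
print(circuit_kw(208, 20))        # 4.16 kW, i.e. "about 4.2kW"
# With an 80% continuous-load derating applied:
print(circuit_kw(208, 20, 0.8))   # 3.328 kW usable for continuous loads
```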

1. As I stated earlier, everyone is trying to conserve real estate, for any number of reasons.

For a company colocating inside of a data center, it is about saving money. The less space you take up, the less total square footage you pay for each month; you also reduce the number of racks you need, which saves on up-front fees. Data centers want to save space to extend the longevity of the facility and increase bottom-line revenue per square foot. Both data centers and companies are saving square footage by two primary means: larger racks and virtualization.

  • Larger racks could simply be taller: 9-foot, 58U racks that keep the standard 19" wide slots, such as what Microsoft is doing. There are also wider "Open Rack" designs, 21 inches wide, being pioneered by companies such as Facebook. The EU is well ahead of the US on taller racks; many EU-based data centers and companies have used 48U racks as a standard since 2000-2001. (See the sketch after this list for the rack-height math.)
  • Virtualization is being used more and more often, with companies setting up their own private Clouds using VMware and Citrix technologies. You can now run as many virtual server instances as your particular hardware will support. This condenses the hardware considerably, which again saves space; but when you place several of these hosts in a cabinet, it requires higher power density.
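
As a quick illustration of what those rack sizes mean in practice, here is a minimal sketch of the rack-unit math (1U = 1.75 inches of mounting height; note this is rail space only, since overall cabinet height varies by frame, which is why a 58U cabinet can run to roughly 9 feet):

```python
# Minimal sketch: rack-unit arithmetic (1U = 1.75 inches).
# Figures are mounting-rail height only; frame overhead varies
# by manufacturer and is not included here.

U_INCHES = 1.75

def rack_height(units: int) -> str:
    inches = units * U_INCHES
    return f"{units}U = {inches:.1f} in ({inches / 12:.1f} ft) of mounting space"

for u in (42, 48, 58):
    print(rack_height(u))
# 42U = 73.5 in (6.1 ft) of mounting space
# 48U = 84.0 in (7.0 ft) of mounting space
# 58U = 101.5 in (8.5 ft) of mounting space
```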

2. High-end applications are another primary reason for the ever-increasing demands on power density.

Virtualization could also fit here, but I am thinking of other resource-hungry applications; I would point to the ever-increasing need for more efficient data mining. Data mining, in any of its thousands of forms (Google, Amazon, database searches, or anything else), requires high processing power, which translates into lots of energy used. There will always be increasing demand for faster and more efficient data mining, because the amount of available data (even this blog) is always growing. Take Google as an example. We use it every day without thinking about what is behind it, but Google is essentially a data mining company. It searches the Internet for whatever you type in, combing through a dizzying amount of information spread across countless servers and locations in numerous geographies to give you the most relevant results for whatever you just typed into that Search bar. The fact that Google does this billions of times a day, against the staggering amount of information on the Web, and delivers those results to you in .25 seconds is pretty amazing. All of that work is done by an incredibly complex algorithm running on custom hardware, and all of that processing from millions of requests requires very high power density. Google is known for its impressive data centers, particularly its Container data centers, which are built for power up to 780 Watts per square foot (as of 2009).
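
To put those watts-per-square-foot figures in perspective, here is a minimal sketch converting facility design density into a supportable per-cabinet load. The 30 square feet of gross floor space per cabinet is an illustrative assumption on my part (it accounts for aisles and support space, not just the cabinet footprint), not a figure from the article:

```python
# Minimal sketch: facility power density vs. per-cabinet load.
# Assumes ~30 sq ft of gross floor space per cabinet (aisles,
# cooling, and support space included) -- an illustrative figure.

SQFT_PER_CABINET = 30

def kw_per_cabinet(watts_per_sqft: float) -> float:
    """Supportable cabinet load given a facility's design density."""
    return watts_per_sqft * SQFT_PER_CABINET / 1000.0

for density in (100, 225, 400, 780):
    print(f"{density} W/sq ft -> ~{kw_per_cabinet(density):.1f} kW per cabinet")
# 100 W/sq ft -> ~3.0 kW per cabinet
# 225 W/sq ft -> ~6.8 kW per cabinet
# 400 W/sq ft -> ~12.0 kW per cabinet
# 780 W/sq ft -> ~23.4 kW per cabinet
```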

We could go through tons of examples, but this highlights the direction of the industry as a whole. Some of the over-achievers are companies like ViaWest, CyrusOne, Telx and QTS. The industry is moving in the right direction, and companies should be taking advantage of the change. If you are with a company that has been managing the same data center environment since 2000 with little change beyond band-aid fixes, I highly suggest taking a hard look at a data center refresh; otherwise it's just money out the window. Nearly every IT department is limited by a restrictive budget. Let GCN help you make the most of your data center refresh and of a strong industry trend.
