Introduced in 2007 by the information technology industry group The Green Grid, PUE (Power Usage Effectiveness) has become an important topic in the colocation industry and the data center community as a whole. The simple PUE formula, total facility power divided by total IT equipment power, can be easily understood by techie, wrench-turner and bean-counter alike. That, however, is where the simplicity in analyzing and addressing PUE ends.
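
As a quick illustration of that arithmetic (the figures below are purely hypothetical), a facility drawing 1,500 kW from the utility to support 1,000 kW of IT load is running at a PUE of 1.5:

# Hypothetical PUE calculation; all numbers are made up for illustration.
total_facility_power_kw = 1500.0   # everything the utility meter sees: IT, cooling, UPS losses, lighting
it_equipment_power_kw = 1000.0     # power actually consumed by servers, storage and network gear

pue = total_facility_power_kw / it_equipment_power_kw
print(f"PUE = {pue:.2f}")          # 1.50, i.e. 0.5 W of overhead for every watt of IT load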

What are the numbers looking like?

We regularly hear big news about the Facebooks and Googles of the world achieving sub-1.1 PUEs at some of their sites, often aided by custom servers and the luxury of having large amounts of capital and scale on their side. Digital Realty Trust, the global colocation juggernaut, has recently stated that it is achieving sub-1.2 when using free air cooling at one of its newer facilities in Sydney.

These are staggeringly low numbers, but what do they mean in the grand scheme of things? What do they mean to the end-user community, which struggles mightily to achieve an average that most estimate to be no better than 2.0?

Getting to know the applications

As with almost anything strategic related to IT, you first need to understand the applications being used, their importance to the users and what happens to the organization when they become unavailable. This is true whether you are an organization selling data center colocation services on the open market or an end-user organization deploying into either a colocation or in-house facility.

Take, for instance, a colocation company designing an asset intended to serve organizations that have high-criticality, low-data-volume applications such as financial transactions. Average kW usage per rack may be in the 3-5 kW range, and the cost of downtime for the typical user is high and immediate. These users are willing to pay a premium to keep any interruption to service from taking place. The financial structure of deals with them is going to be driven primarily by confidence in the facility infrastructure and the Service Level Agreement (SLA) that backs it. Since the cost of utility power is relatively insignificant compared to the cost of downtime, PUE may not even cross the mind of the end user.

In a very different scenario, let’s think about organizations that deploy high-performance hardware to run applications producing extremely dense images for oil exploration. Average kW draw per rack may be in the 15-20 kW range, with the cost of downtime being relatively light in the grand scheme of things. Compounding this is the fact that the cost of the data center operation has a significant impact on their bottom line. PUE in this case is paramount and a major factor in any choice these end users make.
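
A rough sketch shows why PUE dominates that decision (the rack count, per-rack draw, PUE values and $0.10/kWh utility rate below are assumptions for illustration, not figures from either scenario):

# Hypothetical comparison of annual energy cost at two PUE levels.
racks = 100
it_load_kw_per_rack = 18.0      # assumed high-density rack draw
price_per_kwh = 0.10            # assumed utility rate, USD
hours_per_year = 8760

def annual_energy_cost(pue):
    facility_kw = racks * it_load_kw_per_rack * pue
    return facility_kw * hours_per_year * price_per_kwh

savings = annual_energy_cost(2.0) - annual_energy_cost(1.3)
print(f"Annual savings from PUE 2.0 to 1.3: ${savings:,.0f}")   # roughly $1.1M per year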

In either case a lower PUE is preferred, but what efforts should be made to achieve this? These efforts obviously will not be uniform.

Capital Expense vs. Operating Expense Planning

Once it is determined how big a factor PUE is for the applications and hardware that live at the data center, the next exercise from a facility-needs standpoint is to begin collaborating on a plan to address the non-IT load power drains. Cooling and power infrastructure redundancies, as well as empty space held for growth, are going to be the big thieves to focus on.

Putting together a capital vs. operating expense plan is a bit of a prediction game unless you know exactly what IT hardware is rolling in and when. That is typically not the case in any private or colocation site built to serve 10+ years of growth. A good master plan will address present and near-term needs yet leave flexibility around the harder-to-predict mid to long term.

 

What are the best approaches to getting that PUE down?

One thing is clear: there is no singular, magical approach to lowering PUE. Many different methods are being deployed successfully, often in combination. Which methods are used depends on the climate of the geographic location and, as we have demonstrated, the type of applications and hardware being deployed. Hot aisle containment, cold aisle containment, the use of blanking panels inside server racks, “free cooling” when ambient temperature permits, and simply managing the hardware environment to allow for higher temperatures (often up to 80 degrees Fahrenheit) are all methods that can be both effective and cost-effective.

The initial setup and capitalization of the data center are going to dictate much of the PUE-lowering tactics going forward. Going to chilled water versus standard DX units, for example, may take a larger capital outlay but in many scenarios could provide long-term operating savings that outweigh it. Buying the most efficient Computer Room Air Handling units may also take a bit of extra capital but return it in operating savings.
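
A simple payback sketch makes that trade-off concrete (the capital premium and annual savings below are hypothetical placeholders, not vendor figures):

# Hypothetical simple-payback calculation for a more efficient cooling plant.
extra_capital = 400_000.0          # assumed premium for chilled water / high-efficiency units
annual_opex_savings = 120_000.0    # assumed yearly utility savings from the lower PUE

simple_payback_years = extra_capital / annual_opex_savings
print(f"Simple payback: {simple_payback_years:.1f} years")   # ~3.3 years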

Going forward?

Knowing what the applications mean to the business financially, and having a sound capital and operating budget around delivering them, are the foundation pieces. Spending a bit of capital to set things up more efficiently for the long run will, if planned properly, bring returns almost every time.

Whether an end user operates an in-house or a colocated environment, these planning discussions and collaboration are critical to optimizing what the facility does for the organization. Of course, we haven’t delved into what is occurring at the next layer of the picture, looking at PUE from a data delivery efficiency standpoint, but that is for another story.
