My data center design experience is old enough to drive this year. For 16 years I have been lucky enough to work almost exclusively on mission critical projects, and by now I have a pretty good track record and syllabus of lessons learned. But I still get a shiver when I recall how lost I was when I started.

On my first data center project, I nodded along for an entire meeting even as I wondered what a “crack” was, only to find out soon after that it was a CRAC (computer room air conditioner) unit — something of particular interest to the HVAC designer, one would think. For reasons unknown to me, the letter “N” was thrown around a lot. And “uptime” sounded like an especially cheery goal for a mechanical system, although I wasn’t exactly sure what it entailed.

Data center HVAC design was (and still can be) intimidating. It’s not that the laws of physics are any different … fish gotta swim, heat’s gonna rise … but there’s a big difference between maintaining environmental conditions conducive to computing versus keeping folks comfortable. Throw in the vocabulary, acronyms, and the client type (read “nerd”), and you have the makings for a few “deer in the headlights” moments.

But the engineer of today need not be as clueless as I was in 2000.

In this, the Information Age, computing facilities are ubiquitous. In turn, more engineers and firms have been exposed to technology-driven projects. More exposure means more information has been gathered and disseminated. So it should be calming to know that much of the heavy lifting has already been done and, more importantly, that the requisite roadmaps and resources are out there if you just take the time to look.

Harkening back to the Space Age, it’s the difference between the Mercury and Apollo programs. Getting off the launch pad is already figured out; we’re ready to get to the moon.


Information and Knowledge

Albert Einstein is credited with saying that information is not knowledge. And when you appreciate that knowledge is defined as a practical understanding of a subject, you can see Albert’s point. My 16 years contain mistakes, epiphanies, and a few um-duh moments that just can’t be captured in a white paper. That said, good information still gives you a rolling start down the mission critical path of knowledge.

One huge source of information (informed by the knowledge of others) has been compiled by the Center of Expertise for Energy Efficiency in Data Centers (CoE) at the Lawrence Berkeley National Laboratory (LBNL). The CoE is sponsored by the U.S. Department of Energy and is focused on technical support, tools, and technologies intended to optimize and reduce energy use in data centers.

Of particular interest to any engineer approaching a data center project is the CoE’s Tools page, found on its website. The tools presented can be used sequentially to move from a basic understanding of how energy is used in a data center to identifying opportunities to decrease its use, and further, to the implementation of best practices.

The eight tools presented, along with links to third-party applications, each merit a full discussion. However, in this article we will focus on the Data Center Profiler (DC Pro), the PUE Estimator, and the Data Center Master List of Efficiency Actions. These three applications can be used by the novice and journeyman designer to facilitate an intelligent approach to any project.


Knowing What You Don’t Know

When trying to guide my sons in the ways of dating, politics, and religion, I often remind them that they only know what they know. The backside of that chestnut is that they, and the collective we, don’t know what we don’t know. And frustratingly, if we don’t know what we don’t know, then what questions do we ask so that we can know? Ya know?

While I haven’t found an equivalent app for understanding my wife, DC Pro can help the designer ask the questions they wouldn’t know to ask otherwise. As described on the site, DC Pro is an early-stage profiling tool designed to diagnose how energy is being used and to determine ways to save energy.

DC Pro is a self-guided, menu-driven tool that is free to use and confidential; no data is made available to other users or third parties, and you can save multiple projects under your user account. There are up to 82 questions, depending on the system types and equipment used. The critical questions fall under the Energy Use Systems section and cover:


• Energy management

• IT equipment

• Environmental conditions

• Air management

• Cooling

• IT equipment power chain

• Lighting


Note that to get the most from the tool, you need to work with your electrical engineering counterpart and the IT folks. But that shouldn’t be a surprise since good design is integrated design and you should all be talking anyway.


Estimator, not a Calculator

Before we go any further, be reminded that the old garbage-in, garbage-out rule applies. It can’t be overstated: the output will only be as valid as the data input. Because so much information is resident within DC Pro, it may be tempting to accept defaults or pick the first item in a pull-down menu when you don’t know the answer. Don’t.

For example, you will be asked if there is a UPS, and if so, what technology is employed, plus the size, voltage, and load factor. These are not typically questions that the HVAC designer can easily answer. However, the response will affect the estimated PUE calculation and recommended actions. So take the time to reach out to those who do know.

DC Pro takes user inputs and uses internal lookup tables to estimate the data center’s energy distribution and PUE. The lookup tables were developed using the EnergyPlus program and cover ASHRAE climate zone, cooling system type, and UPS efficiency.

The energy use breakouts are defined only in terms of percentages, and the model assumes a completely homogeneous data center. Further, the tool holds some inputs as constants. In particular:


• Electrical distribution losses (excluding UPS) are assumed to be 2% of the total IT load.

• The lighting load is assumed to be 1% of the total IT load.

• The IT load is assumed to be constant 24/7.


You are not asked about the building geometry, fuel type, or a specific IT load or density. This is because the tool is meant to be used to compare system types and potential improvements relative to a generic data center baseline. Actual energy and dollar savings have to be determined later during detailed design, when you begin your energy modeling and life cycle cost analysis.
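To make the arithmetic concrete, a percentage-based estimate of this kind boils down to summing the overhead loads as fractions of the IT load. The sketch below is my own toy illustration, not DC Pro’s actual lookup-table logic; the cooling and UPS-loss fractions are hypothetical placeholders, while the 2% distribution and 1% lighting figures are the constants noted above.

```python
# Toy sketch of a fraction-based PUE estimate. The 2% electrical distribution
# loss and 1% lighting load are the constants the tool assumes; the cooling
# and UPS-loss fractions are hypothetical placeholders, NOT values pulled
# from DC Pro's internal lookup tables.

def estimate_pue(cooling_frac, ups_loss_frac,
                 dist_loss_frac=0.02, lighting_frac=0.01):
    """Return PUE given each overhead expressed as a fraction of IT load.

    PUE = total facility energy / IT energy, so with everything scaled
    to an IT load of 1.0, the total is just 1 plus the overhead fractions.
    """
    it_load = 1.0
    total = it_load * (1 + cooling_frac + ups_loss_frac
                       + dist_loss_frac + lighting_frac)
    return total / it_load

# Example: a hypothetical legacy plant with 37% cooling overhead
# and 10% UPS loss lands at a PUE of 1.5.
print(round(estimate_pue(cooling_frac=0.37, ups_loss_frac=0.10), 2))
```

Because everything is relative to the IT load, no absolute kW figures are needed — which is exactly why the tool can skip questions about building geometry and IT density.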

The output is pretty straightforward: you get an estimated PUE and a list of recommended actions based on good practice in general and your inputs in particular.


The Ins and Outs

As an example, let’s look at that data center I cut my teeth on back in the early aughts. Located in St. Louis, it was a pretty typical legacy design with distributed CRAC units. There was no aisle separation, and the space temperature setpoint, measured at the CRAC return, was 72˚F at 50% RH.

There were active humidification and dehumidification controls integral to the CRACs. The chilled water system operated at a nominal 42˚F and included a water-cooled chiller with a non-integrated parallel waterside economizer.

Plugging that data into DC Pro, we get an estimated PUE of 1.5 (Figure 1). We are also informed that a Potential PUE of 1.2 is possible based on best practices. A list of possible tasks we could undertake to make improvements is generated as well.

We can also use the PUE Estimator outside of DC Pro as a standalone tool. The inputs are limited to climate data, HVAC system questions, and UPS information. The beauty of the tool is that it allows you to game options in real time. The DC Pro assumptions and constants remain, but you can still see the opportunities on a relative basis.

Plugging in the same info as before for the legacy design, we again see an estimated PUE of 1.5 (Figure 2).

By the way, the actual PUE of that data center was probably — and regrettably — higher than 1.5. Part of the reason we don’t see the higher value is that the tables are based on today’s equipment and associated efficiencies. Also, DC Pro doesn’t even let us pick some of the colder, outdated temperatures for chilled water and supply air. And lastly, I would bet we actually came in somewhere between 1.8 and 2.0 when you coupled the design with an overly conservative operation.

So if we approached that same data center today, we would certainly take a different tack. For starters, we would push toward the ASHRAE-recommended conditions, and we would incorporate some type of hot-aisle containment or separation. This would allow us to raise the leaving air temperature to 75˚F and the return temperature to about 105˚F. Following the guidance of ASHRAE, we would allow the humidity ranges to be expanded and, in turn, eliminate active dehumidification and humidification.

Warmer supply air allows warmer chilled water, so we can go to a 55˚F chilled water temperature and an ASHRAE 90.1-dictated integrated series economizer. Referring to Figure 3, we can see that by updating the HVAC design to the 2016 standard, we improve our PUE to 1.2, an improvement of 20%.
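That 20% figure is just the relative change between the two estimated PUEs, which is easy enough to check:

```python
# Relative improvement between the legacy and updated PUE estimates
# (1.5 and 1.2, per the DC Pro and PUE Estimator runs described above).
legacy_pue = 1.5
updated_pue = 1.2
improvement = (legacy_pue - updated_pue) / legacy_pue
print(f"{improvement:.0%}")  # prints "20%"
```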

Note that this is the same as the 1.2 potential PUE estimated by DC Pro, so that’s a nice validation.


The Roadmap

Without first cost and actual operating costs, we have an incomplete picture. But we do have a signpost that points us in a direction that shows us the relative value (or lack thereof) of specific strategies. And if the DC Pro Tools provide a signpost, then the Master List of Energy Efficiency Actions may be seen as a roadmap.

The Master List is a living document of best practices and recommendations that have been compiled by the CoE and shown to increase energy efficiency. The document is aimed not specifically at designers but at the frontline owner and qualified assessors. As the document states, it can be used as a standalone reference for in-house improvements or to inform an energy assessment report being prepared by an outside party. Maybe that’s you.

The Master List is divided into eight sections that represent data center subsystems and other areas that deserve attention:


• Global (general issues)

• Energy monitoring and controls

• IT equipment

• Environmental conditions

• Cooling air and air management

• Cooling plant

• IT power distribution chain

• Lighting


Each section begins with high-level actions — common energy savings measures with the highest potential impact. Following the low-hanging fruit are detailed actions that provide technical advice and more opportunities to reduce energy use (some of which you may not have thought of otherwise).

For example, under Environmental Conditions, the no-brainer strategy of following the ASHRAE guidelines is listed first. Now before anybody tut-tuts listing the obvious, recognize that many data centers are still operating inside a tight environmental band to this day. In turn, reminding and encouraging users to accept the broadened guidelines is a first step we must always take.

But stepping beyond the conventional wisdom, the less-discussed strategy of addressing electrostatic discharge (ESD) with physical means instead of thermodynamic ones is also presented. Specifically, ESD can be kept in check with conductive flooring materials, good cable grounding methods, and grounded wrist straps for technicians to use while working on equipment.

This is a frequently missed opportunity, as users struggle to address and optimize humidification and dehumidification systems that too often fight each other and waste energy. Eliminating the shock hazard in the first place can simplify the mechanical design and operation and save energy, hence its inclusion in the Master List.

Following a similar path throughout all of the subsystems, you can use the Master List to find your exit ramps to deeper study. Then you can research the topics most applicable to your particular situation to determine the best path forward.

The point isn’t that the Master List provides all the answers. The takeaway is that it will help guide the process.


In Conclusion

The tools and approach discussed herein are ideal for the early stages of a project or an assessment of an existing facility. If they are the beginning, then a good energy model and life cycle cost analysis are the end. And while there is no substitute for the wisdom that comes with experience, good information goes a long way.

What got me through that first project was access to smart folks and a willingness to dig deeper and learn more. That’s what still gets me through a project. It’s just now I’m more often sharing my knowledge instead of imbibing from the font of others, and I have a larger pile of info to dig into.

When the next opportunity arises to either assess or design a data center, don’t reinvent the wheel just because you’re smart enough that you could if you had to. Visit the CoE site, get your bearings, and prep for a deeper dive and ultimately a better result.

Start smart so you can finish strong. ES