Figure 1. IT cabinet with integrated cooling. (Photo courtesy of Emerson Network Power/Knurr.)


Early in my career, the office I worked in was beginning to go high-tech.  There had been a computer in the office, probably for the last five years or so, but it was the property of the resident genius who used it for his unfathomable machinations (Pay no attention to the little man behind the curtain …).

But now, there were multiple PCs in the building, and we were starting to convert production to CAD. On top of that, Windows 3.0 had just been released, and all of a sudden, one didn’t have to speak DOS to run an application. One fine day, a new machine arrived, and as I ran my “loads” program for the first time on the new-fangled contraption, I marveled at the speed of the computations. At that moment, I boldly predicted that we had officially reached the zenith of computing technology, and there would be no need for faster machines henceforth.

The year was 1990, and the amazing machine was a 486 PC with a blinding processing speed of 25 MHz. As a point of reference, Intel®’s latest Xeon™ processor runs at 3.6 GHz, roughly 150 times faster than my predicted peak.

Thus began and ended my career in IT prognostication.

But I have to tell you, I saw something coming a few years ago and right about now I’m feeling pretty sharp, and just a little bit ahead of my time: Data centers are getting wet.  

State Of The Art, Circa 2004

Less than five years ago, an excellent article was presented on the subject of data center cooling1. The folks who wrote this article were experts, and in addition, they were and continue to be guiding members of the very prolific ASHRAE Technical Committee 9.9 (Mission Critical Facilities, Technology Spaces and Electronic Equipment), which has authored no fewer than four books on the subject of data centers in the last three years.

Looking back, what is remarkable to me about the article and the design guide is that there was no mention of liquid cooling. I’m not saying that the authorities weren’t aware of, or working on, liquid solutions, but my point is that as recently as three years ago, conventional wisdom said data centers were cooled with air.

Now, to their credit, the experts have addressed liquid cooling in subsequent guides and articles2-4. However, the conversation continues to be couched in futuristic and uncertain terms. Everyone agrees that IT loads are increasing exponentially. And everyone acknowledges that air-based solutions are rapidly approaching their logical limit. But very few in the industry seem willing to commit to the new liquid-based paradigm unless someone else does first.

Nick Aneshansky, vice president of Sun, seemed to provide voice to this less-than-progressive brain trust when he stated at a recent industry-sponsored panel discussion, “If fluids are available, it’s a really good strategy … but I think liquids are not ready for primetime. There needs to be more standardization … when that happens, we’ll design the equipment to plug right into such utilities.”5

So the disturbing new conventional wisdom seems to be that liquid solutions make sense and are coming … someday. In the meantime, I guess data centers are still cooled with air.

What's A Designer To Do?

I don’t know about the rest of you, but I have clients that are committing to data centers today for loads and equipment that won’t be defined until the day their IT department gains access to the facility. That’s how fast things are changing. In a very scary sense, it’s a high-tech crapshoot, and the stakes couldn’t be higher.

Unlike the vice presidents at Sun or IBM, who have the relative luxury of waiting for the utilities to be standardized before modifying their products, IT facility managers have to produce their product (the building and its infrastructure) before these same utilities are determined. And unlike Dell and HP, whose products’ average life span is three to five years at most, our facility’s useful life should span decades.

Will the future be chip-based cooling? Will it be modular cabinets with integral cooling units? Is it water or refrigerants? Should we commit to a traditional air-based solution and pray that the folks projecting the load density numbers are being just too darn aggressive?

Problems Lead To Solutions

Three years ago, when I approached the manufacturers of data center cooling equipment, all I was getting were air-based solutions: computer room air conditioners (CRACs) on the floor to handle base loads, and ceiling-hung spot coolers at high-density locations. Or, CRAC units in line with the IT cabinets, with physical enclosures to isolate the heat load from the rest of the space.

One client, who manufactures the very heat-generating equipment we are discussing, had committed to the concept of creating large mechanical rooms directly below the high-density data centers and utilizing them as massive supply plenums, with large central AHUs discharging directly into the mechanical room/plenum space.

Some designers were proponents of a concept where the IT cabinet itself would be ducted to the ceiling plenum, and the supply air would be pumped directly into the cabinet from the floor via a cabinet mounted fan, thus avoiding the whole hot-aisle/cold-aisle approach.

What struck me about all of these supposed solutions was their clumsiness. An analogy came to mind: Trying to treat distinct server loads with large air systems was like trying to drive a finishing nail with a sledge hammer. Sure, you’ll accomplish your goal, but egad, what a lot of wasted energy and unfocused brute force.

As my old boss taught me, if you understand the problems, the solutions become obvious. And the problems I was seeing led me on the search for the data center cooling system Holy Grail, with the following tenets.
  • Scalable – The solution must be appropriate for day one when the loads may be small, but able to grow with minimal disruption.
  • Reliable – The system must be as simple as possible (but no simpler), dependable, and preferably built on existing technologies and practices.
  • Non-proprietary – The design should not lock one into a particular manufacturer or technology.
  • Energy efficient – Any solution in this modern era must be environmentally responsible and fiscally sustainable.
 

Figure 2. An example of chip-based cooling. (Photo courtesy of Emerson Network Power/Cooligy.)

Air And Water Are Not Mutually Exclusive

For the longest time, I thought that the solution had to be an all-water answer. But the more I thought about it, the more I realized there was just too much solid design experience and know-how out there to simply abandon traditional CRAC units altogether.

The guides and articles I have referenced, and the volumes of data out there supporting the conventional hot-aisle/cold-aisle approach, should not be rejected. In fact, they should be embraced as the solid foundation upon which the design is built. Simply put, handling a base load of say 75 W/sq ft with CRACs makes sense. And it meets the four objectives quite nicely.

Now don’t get hung up on 75 W/sq ft as a base CRAC load. In fact, I would prefer it if you didn’t even think in terms of W/sq ft in the first place. Instead, think of load in terms of kW/IT cabinet. This begins to focus your attention on where the load actually is (in the racks) instead of trying to average it out across the floor.

(As a quick aside, for an idea of how the industry is starting to focus on the load within the cabinet instead of around the cabinet, take a look at HP’s Dynamic Smart Cooling (DSC) technology at www.hp.com/go/powerandcooling. DSC uses thermal management software to continuously adjust air conditioning settings based on real-time measurements from a network of sensors deployed on the IT racks. We now rejoin the article already in progress …).

You have to determine with your client what the safe base load is in your particular application. It will be a function of space geometry, load concentration and location, risk tolerance, anticipated growth (both how much and how fast), and plenum fluid (air) dynamics. Whatever you determine your base load to be, give yourself some mathematical elbow room, and then lay out your CRACs based on well-established design practice.
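Once you have settled on a base load and your elbow room, the CRAC count falls out of simple arithmetic. Here is a minimal sketch (the load, margin, unit capacity, and redundancy level are all assumptions for illustration, not recommendations):

import math

# Sizing the CRAC lineup for a chosen base load (all inputs hypothetical).
base_load_kw = 750        # safe base load agreed upon with the client
safety_margin = 1.15      # the "mathematical elbow room"
crac_capacity_kw = 100    # sensible capacity of the selected CRAC unit
redundant_units = 1       # e.g., N+1

design_load_kw = base_load_kw * safety_margin
crac_count = math.ceil(design_load_kw / crac_capacity_kw) + redundant_units
print(f"Design base load: {design_load_kw:.0f} kW -> {crac_count} CRAC units (N+{redundant_units})")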

Figure 3. An example of a “dizzy” chilled water piping strategy providing no single point of failure and multiple paths.

Follow The Leaders

But what about that load on top of the base load? How should you handle that? Well, if I’m completely honest, I don’t know the final answer. Now, don’t close the Engineered Systems browser in disgust. Note the caveat “final.” I have a clue on what the medium will be (water); I just don’t know how it will be ultimately applied. And frankly, no one else does, either. There are many ways the medium of water could be applied in practice. Cabinet-based cooling, i.e., cabinets with built-in coils, is one possibility (Figure 1). These cabinets essentially isolate the load within the cabinet itself and are modular, so you can handle a load as small as 3 kW or as large as 30 kW within the cabinets themselves.

Another option is heat exchangers incorporated into the servers themselves right at the CPU. All of the server manufacturers are exploring this option, and there is a history there, since mainframe computers used to be water-cooled in the good old days.

One canary in the coalmine on this issue of using water is the recent acquisition activities of Emerson Network Power, whose product line includes Liebert, a perennial leader in precision HVAC. In the last year, they have acquired Knurr, which focuses on cabinet-based solutions, and Cooligy, which focuses on chip-based cooling solutions (Figure 2). If Emerson can put their eggs in multiple water-based solution baskets, so can you.

The fact is that there are many “things” water could be attached to, but in most designs, you don’t have to have a crystal ball to plan accordingly (although I must admit I consult my Magic 8-Ball® when things get dicey). The common denominator in all cases is a flexible chilled water infrastructure that should be able to support the eventual heat exchanger technology, preferably in a plug-and-play manner.

Come On In ... The Water's Fine

If we return to those four tenets of an ideal infrastructure (scalable, reliable, non-proprietary, and energy efficient) and let them inform our design, a relatively straightforward data center piping rubric emerges.

Regarding scalability, the chilled water plant mains and distribution headers should be sized generously for future load increases, not merely right-sized for day one. Even though this represents a higher first cost, it will save energy in the near term when loads are smaller, and arguably over the course of most of the data center’s life. Larger pipe means less pressure drop, so pumps can be smaller.
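To put a rough number on that pressure drop argument, here is a minimal sketch (the pipe sizes are hypothetical; it assumes turbulent flow with a roughly constant friction factor, so Darcy-Weisbach friction loss scales as 1/D^5 at a fixed flow):

# Friction loss vs. pipe diameter at constant flow (Darcy-Weisbach, constant friction factor).
# Hypothetical sizes -- the point is the fifth-power sensitivity, not the particular values.
d_small_in = 6.0   # diameter sized "just big enough" for day one
d_large_in = 8.0   # next size up, chosen with future growth in mind

ratio = (d_small_in / d_large_in) ** 5
print(f"Upsizing from {d_small_in:.0f}-in. to {d_large_in:.0f}-in. pipe cuts friction loss "
      f"to roughly {ratio:.0%} of its original value at the same flow rate.")
# ~24% -- less head means smaller pumps and less pumping energy, year after year.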

While the plant and primary mains have to support the average load on the raised floor, at the data center, the distribution network must be sized to adequately support concentrated loads. Since these hot spot locations may change over time, the network must be flexible. A simple example (but not a rule of thumb): If the average load is 100 W/sq ft, the local distribution should be able to handle as much as 175 to 200 W/sq ft locally. The actual “local load factor” you apply must be determined, in the course of your design, based on your particular conditions.
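For a feel of what that local headroom means at the branch piping, here is a small sketch using the familiar GPM = Btu/hr / (500 × ΔT) relationship (the zone area, densities, and ΔT below are placeholders, not recommendations):

# Chilled water flow for a local zone at the floor average vs. the local peak.
# Standard relationship: GPM = Btu/hr / (500 * delta-T in deg F).
zone_area_sqft = 2_000          # hypothetical local distribution zone
avg_density_w_per_sqft = 100    # floor average from the example above
peak_density_w_per_sqft = 200   # local load factor of 2.0 applied
delta_t_f = 12                  # assumed chilled water delta-T

def required_gpm(w_per_sqft):
    load_btuh = w_per_sqft * zone_area_sqft * 3.412   # watts to Btu/hr
    return load_btuh / (500 * delta_t_f)

print(f"Flow at the floor average: {required_gpm(avg_density_w_per_sqft):.0f} GPM")
print(f"Flow at the local peak:    {required_gpm(peak_density_w_per_sqft):.0f} GPM")
# Doubling the local density doubles the flow the local header must deliver --
# which is why the distribution network, not just the plant, determines flexibility.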

And the last component regarding scalability and flexibility is the provision of isolation valves, taps for future connections, and the locations of both. Specifically, one should be able to connect to future equipment, or accommodate the relocation of existing equipment without disrupting the operation of the data center.

Based on your client’s risk tolerance, establishing the proximity of piping taps to the possible hot zones may be as aggressive as running a main down every aisle. Or you may establish an acceptable distance (50 ft for example), which can be used to create radial zones emanating from each tap, in turn providing full coverage with the understanding that some interruption may have to be accommodated.

Valves figure into the issue of reliability as well, in that sufficient valving needs to be provided to allow partial loop isolation while allowing continued operation. And on the subject of loops, in a data center, we are talking about a true looped piping system where there should be no single point of failure. As an old colleague of mine once said, this can make for some dizzy water, but it’s a necessity.

Now the extent of your looping depends once again on the needs of your client and project (gee, that keeps coming up). Figure 3 is an example of a very loopy dual central plant design I proposed on one project. In Chapter 3 of the latest ASHRAE Liquid Cooling Guidelines (bibliography), no fewer than six configuration concepts are presented, from simplest to most complex, with their associated advantages and disadvantages listed.

On the subject of being non-proprietary, a chilled water system is just that. Like I said, any “thing” can plug into your chilled water system, and those technologies may be proprietary if the client prefers, but your infrastructure is oblivious to that specialty. A valve is a valve and a tap is a tap.

And last, a variable flow chilled water system, when properly controlled, has the greatest potential for energy savings. For starters, on a volumetric basis, water can carry roughly 3,500 times more heat than air. Add to that the plethora of knowledge available to the designer regarding optimized chiller and tower controls, waterside economizers, waterside heat recovery, etc., and you have the best opportunity possible.
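If you want to see where a figure like 3,500 comes from, comparing volumetric heat capacities is one common way to frame it. A quick sketch with textbook property values at roughly room conditions (a simplification that ignores delta-T, fan and pump power, and distribution losses):

# Heat carried per unit volume per degree: water vs. air (approximate room-condition values).
water_vol_heat_capacity = 998 * 4186   # density (kg/m^3) * specific heat (J/kg-K)
air_vol_heat_capacity = 1.2 * 1006     # density (kg/m^3) * specific heat (J/kg-K)

ratio = water_vol_heat_capacity / air_vol_heat_capacity
print(f"Water carries roughly {ratio:,.0f} times more heat per unit volume "
      "per degree of temperature change than air.")
# On the order of the 3,500:1 figure quoted above.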

In Conclusion

I’m hopeful you can forgive my tongue-in-cheek claim that I was ahead of my time on this. I am neither a data center soothsayer nor an expert in the truest sense of the word. Like most of you, I’m an applications guy. My hope is that you see this article as it was intended, as a metaphorical barometer and signpost.

Just like mercury rising in a barometer indicates a positive change in the weather, I’m here to tell you that liquid cooling in data centers is coming, and in actuality it is already here. All you have to do is look at the major manufacturers’ websites, and you can see we are at a juncture. And when you come to a fork in the road, you look for direction.

I trust this article provides direction. I’m not capable of giving you the final answers, but I can tell you where to look for them. For starters, get your hands on the four books in the ASHRAE Datacom Series referenced below. And, with liquid cooling in mind, devour every data center article that has come out in the last 36 months and every new one that comes out. Because just like the high-tech equipment we cool, the technologies available to us, and how we apply them, are evolving rapidly.

Yes, like a trip to a water park with my boys, data center design is, beyond a doubt, wet ’n wild.

Have fun. ES

BIBLIOGRAPHY

American Society of Heating, Refrigerating and Air-Conditioning Engineers, Thermal Guidelines for Data Processing Environments, Atlanta: 2004.

American Society of Heating, Refrigerating and Air-Conditioning Engineers, Datacom Equipment Power Trends and Cooling Applications, Atlanta: 2005.

American Society of Heating, Refrigerating and Air-Conditioning Engineers, Design Considerations for Datacom Equipment Centers, Atlanta: 2005.

American Society of Heating, Refrigerating and Air-Conditioning Engineers, Liquid Cooling Guidelines for Datacom Equipment Centers, Atlanta: 2006.