Figure 1. Babbage’s Difference Engine, the mechanical calculator he dreamed of driving by steam. (Image courtesy of the Computer History Museum.)


In the summer of 1821, Charles Babbage, an inventor and mathematician, was poring over a set of astronomical tables that had been calculated by hand. Finding error after error, he finally said, “I wish to God these calculations had been executed by steam.” In the Victorian Age, that seemed like a logical solution. He actually went on to design a mechanical computer called the Difference Engine, which is now displayed at the Computer History Museum in Mountain View, California1 (Figure 1).

I’m amazed, after 10 years of designing and writing about data centers, at the pace at which they are evolving. Driven by energy costs and sustainability concerns, design parameters have widened, and the cold, white-box data center of yore is almost as dated as Babbage’s contraption.

If someone had told you as recently as two years ago that a United States government agency data center would incorporate a 100% airside economizer, you would likely have patted them on the head and said, “Keep dreaming.” And if they had told you that the major players in the data center game (Yahoo!, Microsoft, Google, et al.) would be building data centers in the continental U.S. with no refrigeration of any type installed - none, nada, not even a window unit - once again you would have scoffed at their naiveté.

But gosh darn it if that isn’t what we’re seeing.

Figure 2. The Yahoo! “Computing Coop” data center concept. (Image courtesy of Yahoo! and datacenterknowledge.com.)

How in the world did we get here so fast? And if we have gotten here at such a pace, where the heck are we going to be in another two years? And five years may as well be an eternity.

I would suggest that there have been two seismic shifts in the recent past. First, we have come to understand that what really matters in a data center are the environmental conditions at and in the IT equipment, not the temperature in the space, per se. And second, there has been a major paradigm shift for those who occupy and operate data centers; they are embracing new definitions of what a data center can look like.

RACKS, NOT ROOMS

Like Babbage trying to solve mechanically a problem that would ultimately be solved electronically, HVAC building engineers have tried to tackle data center cooling based on their past experience. Unlike process engineers, HVAC folks typically look at the problem of heat generation at the room level, not at the source. For example, in a typical office or school cooling design, we have traditionally dealt with the heat that passes through the envelope and/or is generated by the occupants, equipment, and lights only after it has been transferred to the air. Once that initial transfer has occurred, we set about removing the heat, most often through air-handling systems.

So big surprise, then, that data centers come along and HVAC guys treat them the same way: as a space cooling challenge.

The process guys always had different options. Step into a brewery or chemical plant, and chances are you will be physically uncomfortable. But who cares? It’s the product and equipment that matter most, not the poor sap working in the space. Sure, we can’t ask a guy to go into a room at 100°F without some kind of protection or protocol, but we don’t have to cool the room to 75° either.

Figure 3. Microsoft’s “Gen-4” modular data center concept. (Image courtesy of Microsoft and blogs.technet.com.)

But a funny thing happened on the way to the technical forum. Those of us in the data center trenches started to recognize that what was going on in the rack was a process. For gosh sakes, even ASHRAE 90.1 recognizes it as such. So it wasn’t a space-cooling problem at all. In turn, we started to solve a new problem.

EXPAND THE LIMITS

So now that we have averted our gaze from the room to the equipment, what starts to change? First, we took the baby step of hot aisles and cold aisles - tacit acceptance of having an uncomfortable space within a space. But mixing between the aisles wreaks havoc and leads to inefficiency, so we introduced hot aisle and cold aisle containment strategies.

Once you create true separation between the hot and cold spaces, you no longer have to account for mixing. In turn, if you only need 75° at the front of the rack, then you can deliver 75° at the floor. And if you can deliver warmer air in the cold aisle, you can supply warmer air at the computer room air conditioner (CRAC). This, in turn, provides cascading economizer opportunities all the way back to the plant.

But hey, the entering conditions at the rack widened further in 2008, when ASHRAE increased the “allowable” ranges to 59°/90° dry bulb and 20% to 80% RH with a maximum dewpoint of 63°. Ding, ding, ding - check your bin data, folks. All of a sudden, the number of hours you can meet those conditions with outdoor air is quickly approaching 8,760. Throw in evaporative cooling, and you are probably there almost anywhere in the world.
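To make that bin-data exercise concrete, here is a minimal sketch, in Python, of the kind of check a designer might run against a weather year. The hourly records shown are made up for illustration; real numbers would come from local bin or TMY data.

```python
# Minimal sketch: count the hours of a weather year whose outdoor air already
# falls inside the 2008 "allowable" envelope cited above (59-90 F dry bulb,
# 20%-80% RH, 63 F maximum dewpoint). The records below are hypothetical;
# real data would come from local bin or TMY files.

def within_allowable(dry_bulb_f, rh_pct, dewpoint_f):
    """True if this hour's outdoor air meets the allowable envelope as-is."""
    return (59.0 <= dry_bulb_f <= 90.0 and
            20.0 <= rh_pct <= 80.0 and
            dewpoint_f <= 63.0)

def economizer_hours(hourly_records):
    """hourly_records: iterable of (dry_bulb_f, rh_pct, dewpoint_f) tuples,
    ideally all 8,760 of them. Returns the hours usable on 100% outdoor air."""
    return sum(1 for db, rh, dp in hourly_records if within_allowable(db, rh, dp))

# Three made-up hours: one usable as-is, one too hot, one too cold.
# (Cold hours can usually be tempered by mixing in return air, so a simple
# check like this actually understates the opportunity.)
sample = [(72.0, 45.0, 50.0), (95.0, 30.0, 58.0), (40.0, 70.0, 31.0)]
print(economizer_hours(sample), "of", len(sample), "hours meet the envelope as-is")
```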

Before we knew it, we weren’t looking at a cooling problem at all. It had evolved into a heat rejection problem. Now chilled water Delta-T and waterside economizers aren’t the savings drivers (the chilled water is gone). Now it’s the acceptable temperature at and across the equipment. The greater the Delta, the less air you move, but how high can you go? I don’t know. Ask the equipment manufacturers, who are continually working to raise their acceptable high end to meet the demands of their largest and most influential customers (again, the usual suspects: Microsoft, Google, et al.).
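To put a rough number on “the greater the Delta, the less air you move,” here is a quick sketch using the standard sensible heat rule of thumb for air near standard conditions, q ≈ 1.08 × CFM × ΔT (IP units). The 20 kW rack is a hypothetical load chosen only for illustration.

```python
# Quick sketch: airflow needed to carry a hypothetical 20 kW rack load at
# several air-side Delta-Ts, using the standard-air sensible heat rule of
# thumb q [Btu/h] = 1.08 x CFM x Delta-T [deg F].

BTU_PER_KW = 3412  # 1 kW is roughly 3,412 Btu/h

def required_cfm(load_kw, delta_t_f):
    """Airflow (CFM) required to remove load_kw at the given Delta-T."""
    return load_kw * BTU_PER_KW / (1.08 * delta_t_f)

for dt in (20, 30, 40):
    print(f"20 kW rack at {dt} F Delta-T: ~{required_cfm(20, dt):,.0f} CFM")

# Prints roughly 3,159 CFM, 2,106 CFM, and 1,580 CFM: double the Delta-T
# across the equipment and you move half the air.
```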

Wow, that was fast. Time sure does fly when you’re redefining an industry segment.

THIS IS NOT YOUR FATHER'S DATA CENTER

Some of you may recall when Oldsmobile tried to redefine itself with the tag line, “This is not your father’s Oldsmobile.” Obviously it didn’t work, because Oldsmobiles no longer ply the byways. Unfortunately, all GM did was change the sales pitch. The car still looked and drove like something more at home in your dad’s garage.

Figure 4. Google’s “pods.” (Image courtesy of Google and theregister.co.uk.)

And similarly, until recently, data centers continued to look and feel the way they always have. It seems that data center operators could halfway accept the widened ranges being proffered by ASHRAE, but they couldn’t quite grasp the physical changes that full acceptance would require.

For example, you usually can’t see from one end to the other in a data center that has separation between the aisles. Taking advantage of airside economizers requires central AHUs, so there are no longer CRACs distributed around the room. And if you have accepted the wider environmental conditions, the space likely feels more like a warehouse than the “precision-cooled” environment we all grew up with.

But you know what? Data center guys are pretty sharp, by and large. And they know a thing or two about energy costs and business drivers. So as the new parameters and the energy-saving opportunities they created became more evident, the material changes required to fully exploit them started to be embraced. That’s why we are starting to see data centers that look like chicken coops (Yahoo!),2 gigantic Lego® sets (Microsoft),3 and shipping depots sans tractor trailers (Google)4 (Figures 2, 3, and 4).

Caution: Don’t replace one absolute with another. A progressive data center design doesn’t by definition have to look - or not look - a certain way. Merriam-Webster defines progressive as “making use of or interested in new ideas, findings or opportunities.” And if we apply this mindset, we will find ways to incorporate new findings, ideas, and opportunities. In turn, data centers (how they look and operate) will continue to advance as well.

WHERE DO WE GO FROM HERE?

The laws of physics do not change, but the problems we are asked to solve do change. So while today the problem often leads us to air-based heat rejection solutions, there is no telling what exactly the next round will bring. When you consider the existing economizer-centric solutions, there are natural limits that will ultimately drive innovation.

For example, as computing speed and the heat released from a single chip increase, the fiscal and physical balancing act will continue: either increase floor space to distribute the heat load, or decrease room size and, in turn, amplify the point loads. If the past is prologue, then higher densities will likely win out. If that’s the case, and Delta-Ts across the equipment are constrained by personnel or equipment limits, then the challenge becomes: how in the world do you move that much air?

With this challenge in mind, it is worth noting that we are still at least one heat exchange process away from thermal ground zero. Specifically, we are still using air to remove heat that has already been transferred from the equipment to the surrounding environment. What if we take one step closer to the source and liquid cool directly at the chip level?

If liquid cooling is the answer, what are the implications for air quality? If the air in the data center is no longer passing over the IT equipment’s internal components, then what conditions does it have to meet? Does it have to be filtered? How many air changes in the space are even required? If we need liquid cooling in order to be most efficient, what happens to all of those data centers with massive AHUs but no water distribution infrastructure in place? And what fluid do we use, at what Delta-Ts, and at what supply temperatures?
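As a back-of-the-envelope illustration of why those questions are worth asking, compare air and water carrying the same hypothetical 20 kW rack load at the same Delta-T, using the common IP-unit rules of thumb (q ≈ 1.08 × CFM × ΔT for air, q ≈ 500 × GPM × ΔT for water):

```python
# Back-of-the-envelope comparison: air versus water moving the same
# hypothetical 20 kW rack load at the same 20 F Delta-T, using the common
# IP rules of thumb:
#   air:   q [Btu/h] = 1.08 x CFM x Delta-T
#   water: q [Btu/h] = 500  x GPM x Delta-T

LOAD_BTUH = 20 * 3412   # hypothetical 20 kW rack, about 68,240 Btu/h
DELTA_T_F = 20

air_cfm = LOAD_BTUH / (1.08 * DELTA_T_F)
water_gpm = LOAD_BTUH / (500 * DELTA_T_F)

print(f"Air:   ~{air_cfm:,.0f} CFM")   # roughly 3,159 CFM
print(f"Water: ~{water_gpm:.1f} GPM")  # roughly 6.8 GPM
```

Same load, same Delta: a few gallons per minute of water versus thousands of cubic feet per minute of air. That is the pull toward the chip.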

These are more than hypothetical questions. These are potential problems and possibilities as manufacturers continue to seek solutions down to the chip level. You say there is no way data center operators will ever accept liquid of any kind connected directly to equipment. All right, but did you ever think they would allow 100% outdoor air? These are puzzles to ponder, my friend.

IN CONCLUSION

Not that you asked, but whenever I write an article I always do a great deal of study. I reach out to others and pore over the available information, because I don’t want to say anything stupid. And with the pace of innovation, I don’t want to give advice that is useless before the article hits the street.

In turn, when writing about data centers, I find that quite often I trend to the philosophical rather than the prescriptive. And you know what, I’m okay with that. And I hope you are too. Because while I believe there is a place for case studies and technical articles, I think a snapshot and a heads up is appropriate now and again. And why? Because I have never seen a rate of change as significant as the one we are experiencing today.

I dropped a quote in my last article, and because I believe it is as relevant now as it was six months ago, I will drop it again:

“In design engineering there are two resources available: The laws of physics and the products of the market. The designer of excellence works with the former and the designer of mediocrity works with the latter.”5

So as you approach the next data center design, I encourage you to understand your client’s history, needs, and comfort level. Survey the current state of the art within the industry. And then solve the challenge without a preconceived solution.

As designs evolve, there is no way of knowing what that solution might ultimately entail, but I think we can all agree that despite Babbage’s entreaty to a Higher Power, it probably won’t be steam-driven. ES

Works Cited

1. http://www.computerhistory.org/babbage/

2. http://www.datacenterknowledge.com/archives/category/yahoo/

3. http://loosebolts.wordpress.com/2008/12/02/our-vision-for-generation-4-modular-data-centers-one-way-of-getting-it-just-right/

4. http://www.theregister.co.uk/2009/04/10/google_data_center_video/

5. Coad, William. “The Engineering Design Process,” in Energy Engineering and Management for Building Systems. New York: Van Nostrand Reinhold Company, 1982.