In which the author isn’t trying to stir up trouble, really. As a society, we may not like to know exactly how the demands of modern technology get supported. But as designers, we have a duty to home in on the physics of the data center situation, avoiding prefab filler and delivering answers that cut the mustard.
As I write this from my laptop at O’Hare International in Chicago, I’m thinking about meat.
As I survey the waiting area, I see a scene that takes place in all the nation’s airports. The guy across from me is working on his laptop. The gal next to him pounds away on her BlackBerry’s ridiculously small keyboard. The kid next to her yaks away on his cell phone. And the guy I don’t see is the poor schlub in the parking lot who will miss the plane because he depended too much on his rental car’s GPS.
And I’m musing on meat?
My Kind of Town ...

I live in St. Louis, and it’s a great town, but Chicago is a real city. Back in the 1800s, St. Louis bet it all on steamboats while Chicago put its marker on railroads, and I don’t have to tell you who won that bet.
Tied to Chicago’s railroad legacy are the Union Stock Yards and the reinvention of meat packing.
Up until the mid-1800s, meat processing was the province of the local butcher, and what he butchered depended largely on the time of year and his proximity to the game and to his customers. But the large centralized stockyards of Chicago - and the “R” in ASHRAE - led to technological breakthroughs that dramatically altered the industry.
In 1872, packers began using ice-cooled units to preserve meat. With this technology, meatpacking was no longer limited to cold weather months and could continue year-round. In 1882, the first refrigerated railroad car showed up, thus making it possible to ship processed meat to faraway markets. And decades before Henry Ford churned out a Tin Lizzy, meat packers had pioneered and perfected assembly line production.
So here we are in 2009. You can buy fresh meat at any grocery store and have a bratwurst at Wrigley and never think twice. In fact, if meat couldn’t be found at these places, then you would notice. In spite of PETA’s best efforts, we eat chickens, pigs, and cows at an ever-increasing rate. And processing facilities have become so efficient that it takes less than two minutes for the cow at the front door to become the packaged steaks out back.
Big Macs to Macs

By now you see where I’m going, right? That iPod in your hand is the proverbial hamburger of the 21st century. We have come to depend on technology to such an extent that we often don’t appreciate it until it is interrupted. We are consumers of the red meat that is technology, and just like the bovine variety, we are mostly (and in many cases consciously) oblivious to the incredible apparatus and infrastructure required to satiate our appetites.
The conventional wisdom says if the average American saw the machinations in a packing plant, they would likely reconsider that hot dog. And similarly, most in the U.S. probably don’t know, nor do they want to know, the impact their Google fix has on the planet. It’s more likely they would rather just boot up blindly every morning, answer their e-mail, twitter their life’s banal details to the universe, and then talk on their cell phone while driving home.
They harangue the auto industry for making SUVs and righteously buy a hybrid. They swap their incandescent bulbs for CFLs and pat themselves on the back. They mount overpriced solar panels on roofs and plant obtrusive windmills in fields and crow about how sustainable they have become. But at the same time, we are using and escalating our use of technology, which in turn demands more and more power and more and more infrastructure.
And just to peg the irony meter, how many “green” websites are out there? How many online calculators are there for carbon footprint calculations, mpg comparisons, waterless urinal payback analyses, and the like? How many watts do we burn at data centers just so we can figure out how many watts we might save if we applied some sustainable strategy?
Moore, Page, and Madden

In 2007 it was estimated that approximately 1.5% of the total energy consumed in the U.S. could be attributed to data centers, and the raw power required was expected to double by 2011.1 These dramatic figures can partly be attributed to Moore’s Law (named for Intel co-founder Gordon Moore), which holds that computing power doubles every two years or so (Figure 1). At the same time, the less precise Page’s Law (named for Google co-founder Larry Page) contends that software gets twice as slow about every 18 months due to complexity.
So about every year and a half, our computing speed doubles, but we consume that capacity with more sophisticated programs and applications, and in turn no net efficiencies are realized. In the meantime, consumers are exposed to more and more applications, which in turn drives demand for even more applications.
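Taking the two rules of thumb literally, a back-of-the-envelope sketch shows why the hardware gains never reach the user. (The function name and the decision to model both laws as simple exponentials are my own assumptions for illustration, not anything from the cited sources.)

```python
# Net perceived speed when hardware doubles every 24 months (Moore's Law)
# while software halves in speed every 18 months (Page's Law).
def perceived_speed(months):
    hardware_gain = 2 ** (months / 24)    # faster chips
    software_drag = 2 ** (-months / 18)   # slower, more complex code
    return hardware_gain * software_drag

for years in (0, 3, 6):
    print(years, round(perceived_speed(years * 12), 2))
# Taken at face value, the two laws compound to a slight net *slowdown*:
# 1.0 at year 0, about 0.71 at year 3, 0.5 at year 6.
```

Which is the point: the watts keep climbing, but the experienced speed, at best, treads water.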
This phenomenon can be seen in the far too familiar realm of video games. As an example, in the early ’80s in the vestibule of a Woolworth in St. Ann, MO, a lonely dork (me) played the now classic arcade game Space Invaders on a console the size of a refrigerator. Today my three boys (dorks as well) play incredibly lifelike games (like Madden Football) in our basement on a gaming system about the size of a small briefcase, which puts out as much heat as an inefficient toaster.
So what’s my point? Am I saying technology will consume us all and in so doing advocating anarchy? Have I painted a dire picture of techno-dependency as a set up to some Mad Max scenario? No. My intention is to convey that we are increasingly dependent on technology and the requisite energy it consumes, and this calls for a great deal of awareness, foresight, and innovation on the part of HVAC professionals.
What Now?

Like the humble butcher of yesteryear who couldn’t have visualized a modern packing plant, we are limited by our vantage point: I don’t believe that any of us can conceive of what the data processing enterprise of tomorrow will look like. So the question becomes: How do we design today in anticipation of tomorrow?
For starters, I think we are at a crossroads when it comes to data center design. The data center of yesterday, with its sole dependence on computer room air conditioning (CRAC) units and underfloor air distribution (UFAD), seems remarkably awkward and inefficient. On the other hand, some of the modern air-based designs being posited, which incorporate hot or cold air containment, feel rigid and inflexible with their roots firmly planted in the regimented hot aisle/cold aisle layout. Water-based cooling is coming, but I know of no one who has (or should have) committed 100%.
On top of that, manufacturers of servers, mainframes, racks, and cabinets are not standardized on any configuration or cooling medium. Because of this, it wouldn’t be a surprise to see water-based mainframes, open racks, and enclosed chimney cabinets in the same facility. Unlike the big guys (Intel, HP, Google, Microsoft, et al.) who can build around a particular brand or concept, most data center owners can’t and frankly shouldn’t lock into anything proprietary.
So here we are at that fork in the road. The past is prologue, the present is in flux, and the future is unknown. What now?
Avoid the Shelf

A very wise man once told me,
“In design engineering there are two resources available: The laws of physics and the products of the market. The designer of excellence works with the former and the designer of mediocrity works with the latter.”2
What that means to us as data center designers is that we have to throw away the marketing hype of the equipment manufacturers and shun off-the-shelf solutions.
Note, I’m not denigrating the many firms dedicated to our industry. They provide valuable tools, research, insights, and products and are an integral part of what we do. But as designers, we are system synthesizers, and we have the ability (and arguably the obligation) to assemble the pieces and parts necessary to meet the requirements that the physics demand.
Unfortunately, many system designers begin with the knowledge of the products available, and when faced with a design quandary, they assemble a solution using those established components like a kid with a Tinkertoy set. But the problem with trying to accomplish a design with a fixed equipment rubric is that it inevitably introduces more complexity. An example of this is the legacy CRAC and UFAD concept.
Most of us would never design a comfort conditioning system using an open supply plenum extending across a broad floor plate. The idea of dumping air into a plenum and then banking on diffusers strategically placed over workstations to provide adequate environmental control in an open office environment is counterintuitive, if not downright nuts. But that is basically what we do in the legacy data center.
Starting with this paradigm, we work to solve the inevitable problems it creates. First we try to establish order with hot aisles and cold aisles. Then to avoid mixing, we introduce means of separation and isolation. Because we cannot figure out underfloor air distribution intuitively, and it’s too complicated to calculate manually, an entire industry is built around computational fluid dynamic (CFD) modeling.
Just think of all of the band-aid products out there, designed in good faith and sold honestly, but band-aids nonetheless. Have we overcomplicated our designs when the underlying premise may be fatally flawed … especially as we approach higher watt densities?
Now, I’m not trashing CRACs and UFAD. In some situations, they are the right solution. And I recognize that all systems cannot be custom and that we must use the technologies and equipment available to us. But I would suggest that in your design calculus you think of all of these “givens” like CRACs, UFAD, and hot aisle/cold aisle as outcomes instead of inputs.
Problemetrics

As long as I’m on a roll, I will drop another nugget from my mentor,
“If you can write an equation for a problem, you will have the solution.”

Every month in at least one of the four primary industry journals (ES, ASHRAE Journal, HPAC, and Consulting-Specifying Engineer), there is an article on data center design. And almost every one has a green spin. One of the best, by some of the best, was in a recent issue of HPAC.3 In the article, the high-performance building experts at Lawrence Berkeley National Laboratory (LBNL) discussed a number of key metrics for quantifying efficiency in data centers.
One of the problem-solving metrics that I found useful was the return temperature index (RTI), which is the ratio of the airside Delta-T at the AHU or CRAC to the Delta-T across the IT equipment:
RTI = ((T2 – T1) / (T4 – T3)) × 100
T1: Supply air temperature
T2: Return air temperature
T3: Rack inlet mean temperature
T4: Rack outlet mean temperature
An RTI less than 100% indicates that the temperature rise across the AHU is lower than the rise across the IT equipment, which means some supply air must be bypassing the racks, while a value greater than 100% indicates the recirculation of hot air (Figure 2).4
This ratio may seem too simple, especially since I have already told you what the values indicate. But think of the solutions that fall out of understanding what the equation tells you.
The equation tells us that we want to minimize, and ideally eliminate, bypass and recirculation at the racks. Assume you have never seen a data center before but you understand the equation. You walk into a room full of distributed IT racks. Intuitively, do you really think you would choose to put CRACs around the perimeter of the room, provide uncontrolled supply air in front of the cabinets, and then return the hot air back over the racks to the CRACs with no separation? And yet, that’s a textbook legacy design (Figure 3)!5
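The RTI arithmetic is simple enough to sketch in a few lines. (The function names and the ±5% “balanced” tolerance band below are my own illustrative assumptions, not figures from the LBNL article.)

```python
def rti(t_supply, t_return, t_rack_in, t_rack_out):
    """Return Temperature Index (%): AHU delta-T over IT-equipment delta-T."""
    return (t_return - t_supply) / (t_rack_out - t_rack_in) * 100.0

def diagnose(rti_pct, tolerance=5.0):
    """Rough reading of an RTI value; the 5% tolerance band is an assumption."""
    if rti_pct < 100.0 - tolerance:
        return "bypass: supply air is short-circuiting past the racks"
    if rti_pct > 100.0 + tolerance:
        return "recirculation: hot rack exhaust is re-entering the rack inlets"
    return "balanced: AHU airflow roughly matches IT equipment airflow"

# Example: 55 deg F supply returning at 70, racks seeing 60 in / 80 out
value = rti(55.0, 70.0, 60.0, 80.0)   # (70-55)/(80-60) x 100 = 75%
print(round(value), diagnose(value))  # 75 -> bypass
```

Run it against a few trend-log snapshots and the equation starts diagnosing the room for you, which is exactly the point of boiling a problem down to its equational essence.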
How could that be? The early data center designers weren’t idiots. How did such a counterintuitive approach become the norm? Well, it isn’t necessarily because they didn’t understand the physics. They probably did. But they were working in a raised floor environment, which was a product of the IT infrastructure, not of the HVAC infrastructure. So voila, necessity births invention, and we find we can cool relatively low watt densities using a supply plenum approach - albeit inefficiently, but no one cared about energy … until now.
Conclusion

Sometimes when I’m feeling a bit ornery, I choose to irritate my wife. She is apt to look at me sternly and ask in disbelief, “Why do you want to poke the bear?” As I wrote this, I worried that I might come off like I was poking the bear. But annoying admonishment is not my intent.
Like ham steaks, hot dogs, and hamburgers, technology is everywhere and taken for granted. Demand for new and better gadgets and applications increases exponentially, and the infrastructure required to support it merely keeps pace. The current state of the art for data centers is anything but static, but designers still have to design today with only a glimpse of tomorrow.
The key to success, then, is to avoid designing around existing products and rote strategies and instead understand the physics so that you can identify and apply the appropriate tools. And to understand a problem, you must first boil it down to its equational essence.
As we look forward, we may not know the answers, but we should understand the challenge. It’s the same test we always face, just on a grander scale: To meet the environmental demand using the least amount of energy. But in the end, we have to recognize that in the arena of data center design, design evolution without innovation is merely change. And change alone just isn’t good enough. ES
Cited Works

1. EPA. Report to Congress on Server and Data Center Energy Efficiency, Public Law 109-431, August 2, 2007.
2. Coad, William. “The Engineering Design Process,” in Energy Engineering and Management for Building Systems, New York: Van Nostrand Reinhold Company, 1982.
3. Mathew, Greenberg, Ganguly, Sartor, and Tschudi. “How Does Your Data Center Measure Up?” HPAC Magazine, May 2009: 16-21.
4. Image courtesy of LBNL. http://hightech.lbl.gov/benchmarking-guides/data.html.
5. Image courtesy of HP. Technology Brief TC040202TB, “Optimizing Data Centers for High-Density Computing.” February, 2004.