When shopping for a new supercomputer array, officials at the University of Illinois at Urbana-Champaign made reliable cooling system scalability their number one concern. The solution also incorporated an environmentally friendly refrigerant that is pumped as a liquid, converts to a gas within the heat exchangers, and then returns to the pumping station, where it is re-condensed to liquid.


The University of Illinois at Urbana-Champaign conducts highly technical research requiring massive computing performance to support fields ranging from molecular biology to electromagnetic theory. The university decided to update its aging supercomputer array and data center to provide world-class computing support for its world-class research. The data center was built in the 1960s and required a complete overhaul.

After evaluating various options, the university’s interdepartmental Computational Science and Engineering (CSE) program selected a design incorporating 640 Apple® Xserve G5 rack-mounted dual-processor servers. The cluster was projected to generate a cooling load of more than 550,000 Btuh, requiring 45 tons of new cooling capacity in the data center.
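
As a rough, back-of-the-envelope check (using the standard definition of 12,000 Btuh per ton of refrigeration, not a figure from the project documentation), the stated load and capacity line up:

    # Rough sanity check of the projected cooling requirement (illustrative only).
    cooling_load_btuh = 550_000          # projected cluster heat load, Btuh
    BTUH_PER_TON = 12_000                # standard ton of refrigeration
    tons_required = cooling_load_btuh / BTUH_PER_TON
    print(f"{tons_required:.1f} tons")   # ~45.8 tons, in line with the 45-ton figure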

William A. Dick, CSE executive director, said an entirely new cooling system was necessary from the outset. “We maxed out the building on power. And in order for us to double the number of processors on the floor, we were going to have to bring in an entire transformer and an entire new line into the building. We did not have a large budget, so it was important that this solution be economical as well as energy-efficient.”

Starting From Scratch

University design engineer Tom Durbin and construction supervisor Tom Graham oversaw a complete overhaul of the 2,000-sq-ft data center, stripping the facility to the bare walls and ceiling. The old 12-in. raised floor was replaced, creating a new 16-in. subfloor plenum.

For help with the cooling system design, the university enlisted Emerson Network Power and its Liebert XD cooling system. The XD family provides a flexible, scalable, and waterless solution that delivers sensible cooling at heat densities higher than 500 W/sq ft, using an environmentally friendly refrigerant that is pumped as a liquid, converts to a gas within the heat exchangers, and then returns to the pumping station, where it is re-condensed to liquid.
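
For perspective, a simple calculation (illustrative only, assuming the projected load were spread evenly across the 2,000-sq-ft room, which it is not) shows why a rack-level rating above 500 W/sq ft matters: the room average is modest, but the heat is concentrated at the racks.

    # Illustrative average heat density for the room, assuming even distribution.
    load_watts = 550_000 * 0.2931        # 1 Btuh is roughly 0.2931 W, so ~161 kW total
    room_area_sqft = 2_000
    print(f"{load_watts / room_area_sqft:.0f} W/sq ft")  # ~81 W/sq ft room average;
                                                         # density at the racks is far higher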

After evaluating airflow requirements for the room, Emerson and university engineers configured the servers in 22 racks, including two racks for the high-speed Myrinet switching fabric. Each of the 20 server racks held 32 to 35 Xserves, and the racks were arranged in two rows positioned back-to-back to create a hot aisle between them.
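
A quick arithmetic check (assuming only the 20 non-switch racks hold compute nodes) squares with the stated per-rack counts:

    # Servers per compute rack, assuming the two Myrinet racks hold no Xserves.
    total_servers = 640
    compute_racks = 22 - 2
    print(total_servers / compute_racks)  # 32.0, consistent with the 32-35 per-rack figure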

Ensuring reliable cooling system scalability was the top concern for Durbin and Dick, who recognized that relying solely on underfloor cooling would be insufficient given the university’s escalating computing demands.

Going Underfloor - And More

Durbin collaborated with Liebert representative Jeff Bilsland of Sepco Inc. to create an adaptive cooling architecture using underfloor cooling in combination with the Liebert system. Underfloor airflow computer modeling verified the university’s design specification for the traditional and high-density supplemental equipment.

The adaptive architecture included three separate cooling systems to provide an N+1 level of cooling redundancy.

First, base-level underfloor cooling and humidity control are provided through the campus-wide chilled water and building air-handling systems. Cold air rises through perforated tiles in front of the racks and is drawn into them. Hot air from the racks is exhausted into the hot aisle and circulated back to the A/C units, where it is chilled again and recirculated through the raised floor to the perforated tiles.

Second, Liebert XDV rack-mounted cooling modules deliver supplemental cooling to the front of the racks. The XDV modules are connected to the building chilled water system through a Liebert XDP pumping unit, which isolates the building chilled water circuit from the pumped refrigerant circuit. The XDP circulates the refrigerant to the XDV modules at a temperature always above the actual dew point to prevent condensation.

Third, 10- and 20-ton Liebert Deluxe System/3 precision cooling units provide precise, reliable control of room temperature, humidity, and airflow as needed. They also serve as backup cooling for the base building chilled water systems.
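
The sketch below illustrates the kind of dew-point-tracking rule the XDP follows, using the Magnus approximation for dew point; it is illustrative only, not Liebert’s actual control logic, and the 4°F safety margin is an assumed value.

    # Illustrative sketch of holding a refrigerant setpoint above the room dew point.
    import math

    def dew_point_f(temp_f, rh_percent):
        """Approximate dew point (deg F) via the Magnus formula."""
        t_c = (temp_f - 32.0) * 5.0 / 9.0
        a, b = 17.62, 243.12
        gamma = math.log(rh_percent / 100.0) + a * t_c / (b + t_c)
        return (b * gamma / (a - gamma)) * 9.0 / 5.0 + 32.0

    def refrigerant_setpoint_f(temp_f, rh_percent, margin_f=4.0):
        # Keep the circulating refrigerant a safety margin above the dew point.
        return dew_point_f(temp_f, rh_percent) + margin_f

    print(round(refrigerant_setpoint_f(72.0, 45.0), 1))  # ~53.4 deg F for a 72 deg F, 45% RH room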

80°F Hot Aisles, 60°F Cold Aisles

Currently, the temperature in the cold aisle averages 60°F, while hot aisle temperatures average 80°F. “Maintaining continuous, reliable operation is vital because the supercomputer is often used for complex calculations that cannot be interrupted,” Dick said. “If the computing equipment fails, the calculations in process would have to be restarted, causing researchers to lose valuable time.”
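
Using the standard sensible-heat relationship for air (Q = 1.08 × CFM × ΔT, assuming standard air at sea level), the 20°F rise from cold aisle to hot aisle gives a rough idea of the airflow involved; this is an illustrative estimate, not a project figure.

    # Rough airflow implied by the projected load and the 20 deg F aisle-to-aisle rise.
    load_btuh = 550_000
    delta_t_f = 80 - 60
    cfm = load_btuh / (1.08 * delta_t_f)
    print(f"{cfm:,.0f} CFM")             # roughly 25,500 CFM of total airflow across the racks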

In February 2005, the University of Illinois officially commissioned its Turing Cluster, named after Alan Turing, the famed British mathematician credited with founding the field of computer science. A few months later, the cluster was clocked at processing speeds of more than five teraflops, supplying more than enough computing capacity for critical research conducted through the CSE.