The new rooftop data center at this 26-story medical center represents a dramatic upgrade from the previous system, featuring three 70-ton chillers plus a modular, scalable UPS, rack, and cooling architecture.


The University of Texas Health Science Center (UTHSC), located in downtown Houston, brings together multiple branches of UT’s health programs, including dental, medical, and nursing schools. Approximately 2,600 graduate students are enrolled at UTHSC.

The building that housed the existing UTHSC data center was constructed in 1950, and the 5,000-sq-ft data center itself was built in the mid-1970s with no generator capacity. Over the years, floor space was added piecemeal, which fostered inefficiency: the raised floor became choked with mainframe and communications cabling, and offices were eventually set up in the middle of the data center.

These conditions exacerbated workflow, airflow, and electrical scalability problems. Fire suppression relied on an antiquated 1970s halon system, which posed challenges from both clean-environment and technology perspectives.


Northern exposure

Richard L. Miller, CIO at UTHSC, felt it was time to upgrade the existing data center, as persistent power and cooling problems were negatively impacting overall operations. Kevin Granhold, UTHSC’s director of data center operations and support services, and facilities project manager Jeff Carbone were directed to spearhead the new data center design-build (D-B) project.

After exploring a number of options, the IT team decided to build a new data center on the parking garage roof of a university-owned building in downtown Houston, which provided an opportunity to build the data center without any existing preconditions. “A rooftop in Houston, Texas is extremely hot,” explained Granhold.

“Placing the super-cooled data center on the north side of a 26-story tower was a good idea because that particular location is primarily in the shade most of the day for most of the year. The size of our proposed data center fit quite well in that space and we had multiple ways to get in and out of the facility.”

An engineering firm was hired to assess requirements and to provide a preliminary design. This assessment consisted of a series of workshops with several building architects and APC, the provider of the rack-based integrated UPS, cooling, and power distribution solution.

The new chiller system had to be configured for roof placement and correctly sized. A debate ensued over whether to deploy one, two, or three chillers. A single large chiller represented a single point of failure, and running it at 25% or 50% of capacity would be inefficient. Three smaller chillers offered N + 1 availability: two would run at any one time, with the third available if either of the others failed. The team specified three 70-ton Trane chillers for maximum flexibility.
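
As a rough illustration of the N + 1 arithmetic (the load figures below are assumptions made for the sketch, not numbers from the UTHSC design), the trade-off looks something like this:

```python
# Illustrative N+1 chiller sizing check (figures are assumptions, not UTHSC's design values).
# With three 70-ton units, any two carry the load while the third stands by.

CHILLER_CAPACITY_TONS = 70      # capacity of each Trane unit
INSTALLED_UNITS = 3             # units specified by the team
REDUNDANT_UNITS = 1             # the "+1" spare in the N+1 scheme

def n_plus_one_capacity(installed: int, unit_tons: float, spare: int = 1) -> float:
    """Cooling capacity still available with `spare` units out of service."""
    return (installed - spare) * unit_tons

usable_tons = n_plus_one_capacity(INSTALLED_UNITS, CHILLER_CAPACITY_TONS, REDUNDANT_UNITS)
print(f"Usable capacity with one chiller down: {usable_tons} tons")  # 140 tons

# Compare to a single large unit of the same usable capacity: a 140-ton chiller
# carrying a 35- or 70-ton partial load runs at only 25% or 50% of capacity and
# is a single point of failure, which is what the team wanted to avoid.
single_unit_tons = 140
for load in (35, 70):
    print(f"Single {single_unit_tons}-ton unit at {load} tons load: "
          f"{load / single_unit_tons:.0%} of capacity")
```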


Gaining floor space

Because a modular, scalable UPS, rack, and cooling architecture (the APC InfraStruXure™) had been selected to support the new data center, the team did not have to purchase excess capacity up front to meet long-term needs. This freed project budget so that extra floor space could be built into the design, allowing additional racks with in-row cooling and more power capacity to be added later.

Rack placement decisions were based on power density calculations, growth paths, and cooling considerations. The old data center’s racks were outdated: computer equipment was getting deeper, and the old racks could not support the new hardware.
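
The article does not publish the project’s actual density figures, but a simple sketch of the kind of power-density calculation involved, using hypothetical per-rack loads, might look like this:

```python
# Hypothetical per-rack power-density check of the kind referenced above.
# None of these figures come from the UTHSC project; they only illustrate
# how rack counts, kW per rack, and cooling load relate.

RACK_LOAD_KW = 8.0          # assumed average IT load per rack
RACK_COUNT = 40             # assumed initial rack deployment
KW_PER_TON = 3.517          # 1 ton of refrigeration removes 3.517 kW of heat

it_load_kw = RACK_LOAD_KW * RACK_COUNT
cooling_tons = it_load_kw / KW_PER_TON

print(f"IT load: {it_load_kw:.0f} kW")
print(f"Cooling required: {cooling_tons:.0f} tons")
# Leaving headroom against the 140 usable tons (two of the three 70-ton chillers)
# is what lets the room absorb growth and pockets of higher-density racks.
```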

The self-enclosed, zoned architecture of InfraStruXure facilitated the deployment of best practices within the new data center. The IT operations group wanted a hot-aisle containment and cold-aisle arrangement that could accommodate both server consolidation and high-density servers. The hot-aisle containment system gave Granhold a way to better manage unpredictable server densities.

He said that using InfraStruXure greatly simplified power distribution in the new data center, since he only needed to supply 480V to the in-row UPSs. The design allowed a single breaker in the electrical room to feed hundreds of servers instead of requiring multiple breaker panels, which significantly reduced electrical wiring costs.
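
As a back-of-the-envelope sketch, with assumed breaker rating, power factor, and per-server draw (none of them from the UTHSC project), the arithmetic behind a single 480V feed serving hundreds of servers works out roughly as follows:

```python
# Rough look at why one 480V feed to an in-row UPS can serve hundreds of servers.
# The breaker size, power factor, and per-server draw below are assumptions for
# illustration, not UTHSC's actual figures.

import math

VOLTAGE = 480            # three-phase feed voltage (V)
BREAKER_AMPS = 400       # assumed breaker rating (A)
DERATING = 0.8           # continuous-load derating per common NEC practice
POWER_FACTOR = 0.95      # assumed UPS input power factor
SERVER_WATTS = 350       # assumed average draw per server (W)

# Three-phase power: P = sqrt(3) * V * I * PF
feed_kw = math.sqrt(3) * VOLTAGE * (BREAKER_AMPS * DERATING) * POWER_FACTOR / 1000
servers_supported = int(feed_kw * 1000 // SERVER_WATTS)

print(f"Usable feed capacity: {feed_kw:.0f} kW")
print(f"Servers supported at {SERVER_WATTS} W each: ~{servers_supported}")
```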

“I can unplug a whip or put in a PDU and I have the right voltage,” explained Granhold. “The facilities manager understood the issues of drift and hot spots and how I was containing that. The more understanding he had, the more positive he became. His department bought into it before we signed the contract with the building contractor.”