In terms of cooling system design, the data center space is one of the most rapidly evolving segments of the HVAC market. New engineered solutions to improve data center cooling efficiency and operation come to market every quarter, and many are quickly adopted into new data center construction because of the benefits they provide. But the fast pace of data center construction also means many of these solutions have come and gone.

However, one solution that has endured is the simple idea of containing the cold and/or hot air paths in the space, separating them to increase site efficiency while providing a more predictable operating environment. Often, this solution is deployed before companies gain a full understanding of the ramifications these changes may have on the operating space. This article explores a recently discovered issue that can occur when using hot and cold aisle containment: server leakage induced by positive differential pressure.


PAST DATA CENTER CONSTRUCTION

Before jumping into a discussion of this issue, it is important to understand the design of data centers prior to the mid-2000s. Racks were laid out in the data center space in two ways: either in cold/hot aisle arrangements or, in earlier designs, with the cold side of every rack facing the same direction. Both designs resulted in differing amounts of hot and cold air mixing. In many cases, the general cooling approach cooled the entire room, as opposed to a more efficient approach that cooled only the equipment. These designs delivered cold air in front of the IT equipment, where it mixed with room-temperature air or with hot air exhausting from the rack directly facing the one being cooled.

It did not take long for density levels to increase and equipment discharge temperatures to reach a point where the limitations of this design became self-evident. The initial response was simply to turn down the supply air temperature in an effort to reduce the worst-case inlet temperature at the equipment after mixing occurred. Most HVAC engineers would immediately recognize the inefficiency of this approach, but with operational reliability typically driving decisions, the choice was understandable.

By the time of the dot-com bust of the early 2000s, it was clear that better approaches to data center design were needed. At this point, aisle containment entered the market as a viable solution to correct inefficiencies and operational issues in data center spaces.


DESIGNING WITH AISLE CONTAINMENT

The first approaches using either hot or cold aisle containment typically occurred in retrofit environments. Data center managers, working in existing spaces where temperature and efficiency issues were the norm, began to apply easy retrofit solutions, such as flexible partitions and curtains or solid doors that separated the airflow paths. These systems provided the basics of containment but typically allowed a considerable amount of cold and hot air mixing when airflow in the containment system was not balanced.

In situations where excess air was provided to the system, meaning more than the IT hardware required, the containment system typically leaked this cold air into the hot aisle due to the flexible nature of most of these systems. When too little air was available, the opposite occurred: hot air was pulled through the system to make up the required volume.

Experience with these systems led many data center operators to build better hard containment systems with tight seams to ensure that air did not leak through the containment structure. Typical request for information (RFI) and request for proposal (RFP) specifications for containment systems called for leakage rates as a percentage of the design flow rate, or a volume of leakage at a particular static pressure maintained in the aisle, for either hot or cold aisle systems.

With the introduction of these design and implementation improvements, the market soon understood the importance of doing a better job of restricting airflow paths. These systems typically took the form of hard containment, ranging from solid polycarbonate panels and doors to stud-and-drywall construction that used conventional commercial doors to access the aisle in purpose-built spaces.

These tight containment systems provided the first evidence of positive differential pressure in containment systems. When too much airflow was supplied to the system, instead of air leaking through the containment structure, pressure built up in the aisle. In the case of cold aisle containment, the aisle acted as an extension of the plenum providing the cold air. Where hot aisle containment was used, a similar scenario could occur, with the return plenum at a negative differential pressure relative to the outside space.


RAMIFICATIONS OF POSITIVE DIFFERENTIAL PRESSURE

In an idealized containment system, positive differential pressure (with respect to the server’s inlet and exhaust) would be nonexistent. Servers would consume airflow at the rate their fans were designed to move air, based on the current load demand and heat production of the servers, and airflow consumption would not change with aisle pressure. This fixed-flow model is the one typically presented in the majority of computational fluid dynamics (CFD) programs used for data center cooling design, and its results can show exceptionally high levels of static pressure in the cold aisle containment system. In fact, the author has observed these effects in a number of data centers with tight containment systems.

The author researched the effect of positive differential pressure on servers using the following testing method: rack-mounted server models were acquired on the open market for testing. Each server was first bench tested to determine its flow rate when operated in a neutral pressure environment. (Tate’s white paper on the subject provides more detail on the design of this testing rig.) The inlet, or cold side, of each server was then exposed to higher differential pressures, from 0.02 up to 0.20 in. of H2O. The results for the 11 servers tested were averaged, and the tabulated data is presented in Table 1.

On average, a server exposed to 0.05 in. of H2O positive differential static pressure will leak approximately 9.5 cfm per U when in operation. The average rack values in Table 1 were based on a 42U rack completely populated with IT hardware. Higher positive differential pressures have been observed in existing data centers; the author has personally measured pressures in excess of 0.18 in. of H2O in an active supercomputing site that uses cold aisle containment.
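To put the scale of this leakage in perspective, the per-U figure can be multiplied out to a full rack. The short sketch below is a back-of-the-envelope calculation, not part of the author’s test program; its only inputs are the 9.5 cfm per U average and the 42U rack assumed for Table 1.

```python
# Back-of-the-envelope scaling of the averaged test data: multiply the
# per-U leakage rate out to a fully populated rack.

LEAKAGE_CFM_PER_U = 9.5  # average leakage at 0.05 in. H2O (per Table 1)
RACK_HEIGHT_U = 42       # fully populated rack, as assumed for Table 1

rack_leakage_cfm = LEAKAGE_CFM_PER_U * RACK_HEIGHT_U
print(f"Estimated leakage per rack: {rack_leakage_cfm:.0f} cfm")
# -> Estimated leakage per rack: 399 cfm
```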

Additional analyses have been conducted on many CFD models of potential containment systems that, on paper, are designed correctly for the final load condition the site will experience when fully deployed. As with most sites, however, the initial load is a fraction of the design load, resulting in high levels of pressure in the cold aisle under partial load conditions.

Similar effects are also noted with the highly variable load profiles of modern servers. The airflow consumption of a modern server is often based on current heat generation at the component level: as utilization increases, cfm levels increase. Even in a fully deployed aisle containment system, the full cfm requirement of the servers would likely be experienced only during peak operating conditions. During idle times, a high level of positive differential pressure would develop, resulting in airflow leakage through the individual servers in excess of their required flow rate. This wastes fan energy at the AHU supplying the air and decreases the overall Delta T through the equipment, hampering the efficiency of the cooling system.
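The Delta T penalty follows directly from the standard sensible heat relationship for air at standard conditions, Q = 1.08 x cfm x Delta T (Q in Btu/hr, Delta T in degrees F). The sketch below uses illustrative numbers, not measured site data, to show how leakage dilutes the temperature rise across a rack of fixed heat load.

```python
# Illustrative Delta T dilution from server leakage, using the sensible
# heat equation for standard air: Q [Btu/hr] = 1.08 * cfm * dT [deg F].
# Rack load and airflow figures below are examples, not measured data.

SENSIBLE_FACTOR = 1.08            # Btu/hr per (cfm * deg F), standard air

rack_load_btuh = 10_000 * 3.412   # 10 kW rack converted to Btu/hr
required_cfm = 1_300              # airflow the servers actually need (example)
leakage_cfm = 399                 # rack leakage at 0.05 in. H2O (from above)

dt_design = rack_load_btuh / (SENSIBLE_FACTOR * required_cfm)
dt_leaky = rack_load_btuh / (SENSIBLE_FACTOR * (required_cfm + leakage_cfm))

print(f"Delta T without leakage: {dt_design:.1f} F")  # ~24.3 F
print(f"Delta T with leakage:    {dt_leaky:.1f} F")   # ~18.6 F
```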


CORRECTING THE SITUATION

Controlling the buildup of positive differential pressure in a containment system is clearly the key to eliminating server leakage. During the design phase, the engineer responsible for the airflow design of the data center must design the site for the end user’s projected load profile. However, the design must also address management of system performance to compensate for the reality of staged IT hardware deployment and the variability of future IT hardware during actual use.

A simple way to add this function to a data center designed around aisle containment is through the use of automatic variable air volume (VAV) dampers, with their control systems tied to pressure control mechanisms.

For years, VAV systems have been available that control the airflow delivered from a raised floor system based on temperature; using pressure control is simply an extension of these units. From a design perspective, a single control unit is capable of monitoring the differential pressure throughout the cold or hot aisle. Pressure readings can be actively taken at multiple points throughout the aisle and compared to the space outside the containment system. This unit would provide a control signal to the VAV system to vary either the amount of air delivered from the floor or the amount of air pulled back into the drop ceiling. The system would precisely meet the demand of the server equipment while providing no excess cfm to the contained aisle, eliminating the wasted energy that excess airflow represents.
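As a minimal sketch of what such a pressure-based loop might look like, the code below assumes a simple proportional-integral (PI) strategy; the sensor and damper functions (read_dp_inches_h2o, set_damper_position) are hypothetical placeholders for whatever points the site’s BMS actually exposes, and the article does not prescribe a particular control algorithm.

```python
# Minimal sketch of pressure-based VAV damper control for a contained
# aisle, assuming a velocity-form PI loop. The sensor and actuator
# functions are hypothetical placeholders for the site's BMS points.

import time

DP_SETPOINT = 0.0        # target aisle-to-room differential, in. H2O
KP, KI = 8.0, 0.5        # illustrative PI gains; tune per site
FAILSAFE_POSITION = 1.0  # fully open on any fault (see fail-safe discussion)

def read_dp_inches_h2o() -> list[float]:
    """Placeholder: differential pressure at multiple aisle points."""
    raise NotImplementedError

def set_damper_position(fraction: float) -> None:
    """Placeholder: command the VAV damper, 0.0 (closed) to 1.0 (open)."""
    raise NotImplementedError

def control_loop(interval_s: float = 5.0) -> None:
    position = FAILSAFE_POSITION
    prev_error = 0.0
    while True:
        try:
            readings = read_dp_inches_h2o()
            dp = sum(readings) / len(readings)  # average across the aisle
            error = dp - DP_SETPOINT            # positive = over-pressurized
            # Velocity-form PI: over-pressure nudges the damper toward closed.
            delta = -(KP * (error - prev_error) + KI * error * interval_s)
            position = min(1.0, max(0.0, position + delta))
            prev_error = error
            set_damper_position(position)
        except Exception:
            position = FAILSAFE_POSITION
            set_damper_position(position)       # fail open to protect the IT load
        time.sleep(interval_s)
```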

This type of system can be designed with fail-safes to ensure that the VAV system always returns to the fully open position in the case of a power or control failure, so that equipment is not compromised. The system should also be tied back into the building management system (BMS) and through to the data center infrastructure management (DCIM) tool to provide feedback on its operation. Linking to the DCIM tool can also provide the end user with a predictable method of determining the remaining capacity in each contained aisle. Historical data on maximum VAV openings tells the user how much cfm delivery remains for future IT hardware additions. This planning function can parallel electrical capacity planning, enabling knowledgeable decisions when new hardware is deployed.
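As a rough illustration of that planning function, the sketch below estimates remaining airflow headroom from logged peak damper positions. The linear relationship assumed between damper position and delivered cfm is a simplification (real damper flow characteristics are nonlinear), and every value shown is hypothetical.

```python
# Hedged sketch of capacity planning from VAV damper history: estimate
# the airflow headroom left in a contained aisle. All values are
# hypothetical, and cfm is assumed to scale linearly with damper opening.

DESIGN_CFM = 20_000  # maximum deliverable airflow for the aisle (example)

# Peak damper position (0.0 to 1.0) logged by the DCIM tool, e.g., daily.
peak_damper_history = [0.42, 0.45, 0.44, 0.47, 0.46]

worst_case_open = max(peak_damper_history)
remaining_cfm = DESIGN_CFM * (1.0 - worst_case_open)

print(f"Worst-case damper opening: {worst_case_open:.0%}")
print(f"Approximate headroom for future IT additions: {remaining_cfm:,.0f} cfm")
```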


CONCLUSION

Containment systems will continue to be deployed as a viable method of improving data center operational and energy efficiency. Planning ahead for the realities of staged IT hardware deployment and the variability of new and future IT hardware, through the use of pressure-controlled VAV systems, will allow the engineer involved in future data center projects to provide an efficient data center cooling system. ES