Information technology (IT) has been growing exponentially as people’s day-to-day lives grow more dependent on data processing. This data is processed by supercomputers, stored in data centers, and transferred upon request, and the volume of data being processed grows daily. According to recent reports, data center energy consumption has quadrupled in less than a decade.

In the days of the room-sized, giant computers, a data center might have had only one supercomputer. As technology evolved, computer equipment decreased in size and price. Correspondingly, data centers started networking multiple computers together to meet the increasing demand for processing power. Large numbers of clustered servers can be housed in rooms called data halls or IT rooms. The entire building, including all the data halls and related areas, is called a data center.

Today’s data centers have many rooms with thousands of powerful servers working in parallel or in series 24/7. Per unit of floor area, data centers consume massive amounts of energy compared to conventional buildings. In fact, data centers consume up to 2% of the electricity used in the U.S., and reports project that by 2025 they will become the largest global energy user, at 4.5% of worldwide consumption.

> FIGURE 1.

It should come as no surprise that energy efficiency is a crucial element in the performance of data centers, and there is ample room for improvement. According to a 2011 U.S. Department of Energy (DOE) report, energy consumption can differ by as much as 80% between inefficient and efficient data centers.

 

How Power Is Distributed in Data Centers

Power coming from the utility grid might range from 2 kV to more than 30 kilovolts (kV). To provide 480 volts of power to the building, transformers are installed to step the voltage down to the desired level. The power then passes through an automatic transfer switch within the switchgear (SWGR). This switch senses when quality power from the utility is lost, as during brownouts, short-term power fluctuations, or blackouts, and signals the backup generators to turn on. When these events occur, the backup generators kick on and deliver electricity to the whole facility.

This transfer is not instantaneous, as it takes time for the generators to turn on and be ready to supply power. To bridge the gap between the loss of utility power and the moment the generators reach a stable level of power output, a standby power source called an uninterruptible power supply (UPS) is installed. A UPS supplies power from either a battery or a flywheel system, storing a specified amount of energy to cover the short interval between the power outage and generator startup. A flywheel UPS, if used in series, can supply enough power for up to a few minutes of operation; a battery UPS can carry the load for 15 minutes or more, depending on its capacity. Once the utility power is restored and stable, the automatic transfer switch signals the switch back to the utility feed and turns off the backup generators. The utility feed then recharges the UPS to normal.
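As a rough illustration of how this bridging works, the short Python sketch below estimates UPS ride-through time from stored energy and load. The capacity and load figures are hypothetical assumptions, not values from any particular system.

```python
# Hypothetical sketch: estimate UPS ride-through time from stored energy.
# All capacity and load values below are illustrative assumptions.

def ride_through_minutes(stored_energy_kwh: float, load_kw: float) -> float:
    """Minutes a UPS can carry the load: energy (kWh) / power (kW) * 60."""
    return stored_energy_kwh / load_kw * 60.0

# A battery UPS with 125 kWh of usable storage feeding a 500 kW IT load
battery_minutes = ride_through_minutes(stored_energy_kwh=125, load_kw=500)
print(f"Battery UPS ride-through: {battery_minutes:.0f} minutes")  # ~15 minutes

# A flywheel string with 2.5 kWh usable at the same load bridges only seconds
flywheel_seconds = ride_through_minutes(2.5, 500) * 60
print(f"Flywheel ride-through: {flywheel_seconds:.0f} seconds")  # ~18 seconds
```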

Power from the UPS goes directly to the power distribution units (PDUs). PDUs transform power from 480 volts down to either 400 or 208 volts, depending on the system requirements. Older data centers used 208-volt power, but most new data centers use 400 volts. The PDUs distribute the power to server racks and other IT equipment.

 

Data Center Metrics and Benchmarking

Energy efficiency metrics and benchmarks can be used to track and monitor the performance of new or existing data centers. They also can be used to identify potential energy conservation measures to minimize the energy consumption in data centers. Benchmarking values provided for PUE comparison are based on a data center benchmarking study carried out by Lawrence Berkeley National Laboratory (LBNL).

 

Power Usage Effectiveness (PUE) and Data Center Infrastructure Efficiency (DCiE):

PUE is the ratio of the total power a data center facility consumes to the power consumed by the IT equipment alone:

PUE = Total Facility Power / IT Equipment Power

Total facility power includes the IT load, HVAC, plug loads, and miscellaneous loads. A typical data center has a PUE of 2.0; however, several recent super-efficient data centers have achieved PUEs lower than 1.10. Data centers located in hot and humid regions usually have higher PUEs, whereas data centers in cold and dry regions can reach very low PUEs. The main reason PUEs in these regions are low is the use of water-side and/or air-side economizers, which enable the data center to run on free cooling and avoid running the HVAC equipment when the outside temperature is low.

DCiE is defined as the ratio of the total power drawn by all IT equipment to the total power required to run the data center facility; in other words, DCiE is the inverse of the PUE:

DCiE = IT Equipment Power / Total Facility Power = 1 / PUE
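Both metrics are simple ratios of metered power. A minimal sketch, assuming illustrative load figures chosen only for demonstration:

```python
# Hypothetical sketch: compute PUE and DCiE from metered power draws.
# The kW figures below are illustrative assumptions.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """PUE = total facility power / IT equipment power."""
    return total_facility_kw / it_equipment_kw

def dcie(total_facility_kw: float, it_equipment_kw: float) -> float:
    """DCiE = IT equipment power / total facility power = 1 / PUE."""
    return it_equipment_kw / total_facility_kw

it_kw = 1000.0        # power delivered to servers, storage, and network gear
facility_kw = 1600.0  # IT load plus HVAC, UPS/PDU losses, lighting, misc.

print(f"PUE  = {pue(facility_kw, it_kw):.2f}")   # 1.60
print(f"DCiE = {dcie(facility_kw, it_kw):.2%}")  # 62.50%
```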

 

Standards

There are several standards and codes to follow when it comes to data center energy consumption. The following are currently used for data centers:

  • ANSI/BICSI 002-2014 Data Center Design and Implementation Best Practices;
  • Uptime Institute;
  • ASHRAE 90.4-2016;
  • The Green Grid; and
  • Pacific Gas & Electric (PG&E).

> FIGURE 2.

ANSI/BICSI 002-2014 Data Center Design and Implementation Best Practices: This standard is a data center design and operation guide that covers planning, construction, commissioning, protection, management and maintenance, cabling infrastructure, pathways, and spaces of data centers.

Uptime Institute: This advisory organization provides guidelines for improving the performance, efficiency, and reliability of critical data center infrastructure. Its standard defines four tiers for data centers based on expected uptime and how much downtime is permissible.

ASHRAE 90.4-2016: This standard contains guidance for the design, construction, operation, and maintenance of data centers. It also addresses the use of both on-site and off-site renewable energy rather than relying solely on the utility grid. Additionally, ASHRAE 90.4-2016 provides thermal guidelines for HVAC equipment and temperature set point ranges to ensure low energy consumption as well as a quality environment for servers.

Standard 90.4 is a performance-based design standard built around two design components: the mechanical load component (MLC) and the electrical loss component (ELC). The absence of PUE in 90.4 keeps the primary focus on energy consumption by the whole facility rather than on a single efficiency ratio. First, the MLC and ELC are calculated; these values are then compared to the maximum values in tables organized by climate zone. Compliance with Standard 90.4 is achieved when the calculated values do not exceed the values contained in the tables of the standard. An alternative compliance path allows trade-offs between the MLC and ELC.
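To make the compliance logic concrete, the sketch below compares calculated MLC and ELC values against per-climate-zone maxima. The table values and zone labels are placeholders for illustration, not figures from Standard 90.4 itself.

```python
# Hypothetical sketch of the Standard 90.4 compliance check: calculated
# MLC/ELC must not exceed the maximum values tabulated per climate zone.
# The maxima below are PLACEHOLDERS, not actual 90.4 table values.

MAX_BY_CLIMATE_ZONE = {
    "3C": {"mlc": 0.30, "elc": 0.12},
    "5A": {"mlc": 0.35, "elc": 0.12},
}

def complies(zone: str, calculated_mlc: float, calculated_elc: float) -> bool:
    """True when both calculated components are within the tabulated maxima."""
    limits = MAX_BY_CLIMATE_ZONE[zone]
    return calculated_mlc <= limits["mlc"] and calculated_elc <= limits["elc"]

print(complies("3C", calculated_mlc=0.28, calculated_elc=0.10))  # True
print(complies("5A", calculated_mlc=0.40, calculated_elc=0.10))  # False: MLC too high
```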

The Green Grid: The Green Grid (TGG) is a global consortium of companies, government agencies, and educational institutions dedicated to advancing energy efficiency in data centers and business computing ecosystems. TGG uses PUE, data center energy productivity (DCeP), energy reuse effectiveness (ERE), and other metrics. The organization proposed the use of a new metric that addresses data center-specific carbon emissions, which are emerging as extremely important factors in the design, location, and operation of these facilities today and in the future.

Pacific Gas & Electric (PG&E): The original version of the data center baselines was published in 2009 and adopted across PG&E programs beginning in the 2010 program year. At that time, the Title 24 energy code did not provide a comprehensive mechanical system baseline for data centers and computer rooms. Title 24 is the building energy efficiency standard for residential and nonresidential buildings codified by the California Energy Commission. Today, many SOP and incentive programs use the 2016 PG&E data center energy efficiency standards as their baselines.

> FIGURE 3.

 

Data Center Energy Conservation Measures (ECMs):

Data center efficiency measures generally fall into the following categories:

  • Power Infrastructure
    • Install high-efficiency UPSs; and
    • Install high-efficiency PDUs.
  • Cooling
    • Enable air-side/water-side economizers to benefit from free cooling;
    • Install variable frequency drives (VFDs) on constant-speed or two-speed fans (see the sketch after this list); and
    • Adjust temperature and humidity set points.
  • Airflow Management
    • Hot aisle/cold aisle;
    • Air circulation containment; and
    • Utilize floor grommets or perforated tiles.
  • IT technology efficiency
    • Server virtualization;
    • Decommissioning/consolidation of unused servers or lightly loaded servers; and
    • Data storage management strategies.
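The VFD measure referenced in the cooling list pays off because fan power varies roughly with the cube of speed, so a modest reduction in airflow yields a large reduction in energy. A minimal sketch, assuming an illustrative rated fan power:

```python
# Hypothetical sketch: the fan affinity (cube) law behind the VFD measure.
# Fan power scales roughly with the cube of speed, so running a fan at
# reduced speed on a VFD saves far more energy than the airflow reduction.

def fan_power_kw(rated_kw: float, speed_fraction: float) -> float:
    """Approximate fan power at a reduced speed (cube-law approximation)."""
    return rated_kw * speed_fraction ** 3

rated = 30.0  # illustrative rated fan power, kW
for speed in (1.0, 0.8, 0.6):
    print(f"{speed:.0%} speed -> {fan_power_kw(rated, speed):.1f} kW")
# 100% -> 30.0 kW, 80% -> 15.4 kW, 60% -> 6.5 kW
```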

 

Other Measures: Closet to Colocation

The in-house data center (closet) or server room is an on-site IT facility fully supplied and managed by the IT department. It’s usually located in a space within the company’s or organization’s offices or campus. All cooling, power, backup, and security requirements are met by the company or organization itself. A colocation facility is generally defined as a shared data center space in which a business or an organization rents space for servers and other computing hardware. Generally speaking, the colocation provider supplies the building, HVAC, power, internet bandwidth, and physical security, while customers supply and maintain their own hardware. Although there are different leasing arrangements and SLAs, space is often leased by the rack, cabinet, cage, suite, or room.

Moving a small closet data center to a colocation facility has great benefits for the customer. Colocation facilities have lower PUEs, so the energy used by non-IT equipment is less than in a comparable closet facility; the sketch after the following list illustrates the difference. The operating cost of a colocation facility is also typically much lower than that of a closet facility. Other benefits include:

  • Advanced infrastructure, greater bandwidth capacity, and reduced latency;
  • Access to managed services that keep pace with increasing business needs;
  • Increased uptime and reliability;
  • 24-hour monitoring (usually live feed); and
  • Reduced capital expenditures.
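A minimal sketch of the PUE comparison mentioned above, assuming hypothetical PUE values and a small IT load, shows how the non-IT (overhead) energy shrinks when the same equipment moves from a closet to a colocation facility:

```python
# Hypothetical sketch: annual non-IT (overhead) energy for the same IT load
# hosted in a closet versus a colocation facility. PUE values are assumptions.

HOURS_PER_YEAR = 8760

def annual_overhead_kwh(it_load_kw: float, pue: float) -> float:
    """Non-IT energy per year: (PUE - 1) * IT load * hours."""
    return (pue - 1.0) * it_load_kw * HOURS_PER_YEAR

it_kw = 20.0                                   # small closet-scale IT load
closet = annual_overhead_kwh(it_kw, pue=2.0)   # assumed closet PUE
colo = annual_overhead_kwh(it_kw, pue=1.4)     # assumed colocation PUE

print(f"Closet overhead:     {closet:,.0f} kWh/yr")  # 175,200
print(f"Colocation overhead: {colo:,.0f} kWh/yr")    # 70,080
```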

Colocation providers usually offer mirrored data centers for disaster recovery purposes, so a local power outage or disaster will have minimal impact on operations.

> FIGURE 4.

There are some utility programs that offer incentives for closet data centers to move to colocation facilities. Depending on the city where the closet data center is located, there might be opportunities to offset the cost by benefiting from rebate programs offered by the utilities.

 

Immersion Cooling

Immersion cooling in data centers means directly immersing servers and IT hardware in a non-conductive engineered liquid. Heat generated by the electronic components is directly and efficiently transferred to the fluid, reducing the need for active cooling components, such as chillers, cooling towers, and fans that are common in air cooling.

Typically, the non-conductive, dielectric liquid is sealed inside the system. This fluid interfaces with a more traditional liquid (such as water) via a heat exchanger that carries unwanted heat away from the dielectric fluid. Because of the overall high efficiency of immersion cooling systems, the approach is becoming widespread in applications that require large amounts of heat to be removed from IT equipment. The use of a dielectric coolant means immersion cooling technology can be used in almost every data center to help increase efficiency. Although immersion cooling is a cutting-edge heat-rejection mechanism with numerous advantages, it is still not widely used; more effort is needed to familiarize the data center industry with this technology and its benefits.
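As a rough illustration of the water-loop interface described above, the sketch below sizes the water flow needed to carry a given heat load away from the heat exchanger using Q = m_dot * cp * dT. The heat load and temperature rise are illustrative assumptions:

```python
# Hypothetical sketch: water flow required to remove the IT heat load
# rejected by the dielectric fluid through the heat exchanger.
# Q = m_dot * cp * dT, with illustrative load and temperature values.

CP_WATER = 4.186  # specific heat of water, kJ/(kg*K)

def water_flow_kg_s(heat_load_kw: float, delta_t_k: float) -> float:
    """Mass flow (kg/s) needed to absorb heat_load_kw with a delta_t_k rise."""
    return heat_load_kw / (CP_WATER * delta_t_k)

# 100 kW of IT heat, water warming 10 K across the heat exchanger
flow = water_flow_kg_s(heat_load_kw=100.0, delta_t_k=10.0)
print(f"Required water flow: {flow:.2f} kg/s (~{flow * 60:.0f} L/min)")
# ~2.39 kg/s, roughly 143 L/min
```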
