The proliferation of data centers represents a large and rapidly growing electric load, one that doubled between 2000 and 2006. Multiply that load by the soaring price of power, and operating costs are rising by 16% per year. As for climate impact, the combined carbon footprint of data centers is greater than that of Argentina, and equal to about half that of the entire airline industry. Cutting high energy costs often starts with a benchmark, and until April 2008 none existed for such “switch hotels.”
But now there’s CADE: Corporate Average Data Efficiency. By setting a bar that shows how much may be saved through energy efficiency, engineers and chief information officers (CIOs) will soon be looking for ways to limbo beneath it.
What's In The CADE?

CADE was developed by McKinsey & Company, a management consulting firm, and the Uptime Institute, an IT trade organization. CADE combines information about IT asset efficiency with that of facility energy use to develop a single number expressed as a percentage. The higher the CADE, the more efficient the data center. Designed to mimic the Corporate Average Fuel Economy (CAFE) standard for cars, CADE will hold CIOs and server providers accountable for the costs of inefficiency in much the same way CAFE standards press Detroit to make cars with better gas mileage.
CADE accounts for average CPU utilization, total IT load, facility capacity, and the total energy consumed by a data center. It’s a facility-wide number that does not take into account the energy efficiency of individual servers, storage, or networking equipment. Find the full CADE formula on slide #35 of a free downloadable presentation at http://uptimeinstitute.org/content/view/168/57.
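As a rough illustration, CADE is commonly summarized as the product of a facility-side efficiency (energy efficiency times utilization of built capacity) and an IT asset-side efficiency (energy efficiency times average utilization). The sketch below assumes that structure; consult the Uptime Institute slides linked above for the authoritative formula, and note that all input figures here are illustrative:

```python
def cade(facility_energy_efficiency, facility_utilization,
         it_energy_efficiency, it_utilization):
    """Corporate Average Data Efficiency, returned as a fraction (0.0-1.0).

    All four inputs are fractions. This assumes CADE is the product of
    facility efficiency and IT asset efficiency -- a sketch, not the
    official definition.
    """
    facility_efficiency = facility_energy_efficiency * facility_utilization
    asset_efficiency = it_energy_efficiency * it_utilization
    return facility_efficiency * asset_efficiency

# Figures in the spirit of the study: 56% facility utilization, 6% average
# server (CPU) utilization, with energy-conversion efficiencies assumed to
# be 100% for simplicity.
print(f"CADE = {cade(1.0, 0.56, 1.0, 0.06):.1%}")  # prints "CADE = 3.4%"
```

Even with generous assumptions about energy conversion, low utilization on both sides drives the combined score into the low single digits, which is why the study emphasizes utilization over component upgrades.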
Raising CADE

To raise one’s CADE, McKinsey and Uptime suggest appointing company-wide energy czars to oversee data center efficiency. According to William Forrest, associate principal for IT at McKinsey, most organizations could double the efficiency of their data centers by 2012 “if they just tried.” The study that led to creating CADE points out that past programs focused on improving component efficiencies (e.g., Energy Star), but little attention was given to system-wide efficiency.
The McKinsey/Uptime study found that average server utilization was only 6% and facility utilization was about 56%, leading to widespread energy waste. Plans for accommodating demand growth (in both data and power) were often poorly grounded, and CIOs were not being held accountable because there was no metric against which to measure their performance.
Sizing Matters

The very rapid growth of the Internet and the expanded use of computers in fast-growing nations such as China have led to overbuilding and often inappropriate sizing of plant components at data centers, such as cooling systems, transformers, and uninterruptible power supplies (UPS). The study found that less than one-third of data centers were even 50% utilized, with an overall average utilization of about 55%. The study recommends, for example, designing for a lower power density (e.g., 125 W/sq ft instead of the more common 200 W/sq ft).
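A quick back-of-the-envelope calculation shows what that design density means for how much power, cooling, and UPS capacity must be provisioned (the 10,000 sq ft floor area is a hypothetical figure, not from the study):

```python
floor_area_sqft = 10_000  # hypothetical raised-floor area

# Compare the common 200 W/sq ft design density with the recommended 125.
for density_w_per_sqft in (200, 125):
    total_kw = density_w_per_sqft * floor_area_sqft / 1000
    print(f"{density_w_per_sqft} W/sq ft -> provision for {total_kw:,.0f} kW")
```

For this floor area, the lower design density trims 750 kW from the load that cooling plant, transformers, and UPS must be sized to carry, capacity that would otherwise sit idle at typical utilization levels.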
A Ten-Step Program

Some of the ten energy efficiency steps recommended in the study are obvious:
- Remove dead servers (i.e., those running but not performing); the study found that up to 30% of servers fell into this category, wasting both space and energy.
- Enable existing power management features.
- Eliminate dueling cooling systems serving the same spaces.
- Implement free cooling, such as waterside economizers, where available.
- Upgrade old inefficient equipment, including backup power systems (i.e., UPS) that pull significant power regardless of the data load at any particular time.
- Raise temperatures in aisles separating servers from the usual people-pleasing 74° F to people-toasting 90° (servers won’t notice the difference).
- Reduce chilled water temperatures.
- Employ virtualization (i.e., making a single server act like several).
- Selectively shut off core components when utilization is low.
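The first step above, removing dead servers, can be approximated with nothing more than utilization logs: flag any machine whose peak CPU utilization over the sampling window never rises above a floor. The server names, sample data, and 2% threshold below are illustrative assumptions, not figures from the study:

```python
# Flag servers that are powered on but apparently doing no useful work.
DEAD_THRESHOLD = 0.02  # peak CPU fraction below which a server looks dead

utilization_samples = {      # hypothetical hourly CPU-utilization fractions
    "web-01":    [0.31, 0.45, 0.28],
    "batch-07":  [0.00, 0.01, 0.00],
    "legacy-03": [0.01, 0.00, 0.01],
}

dead = [name for name, samples in utilization_samples.items()
        if max(samples) < DEAD_THRESHOLD]
print(dead)  # prints "['batch-07', 'legacy-03']"
```

In practice a real audit would look at longer windows and at network and disk activity as well, but even this crude filter surfaces candidates for the 30% of servers the study found running without performing.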