Recently, there has been a lot of discussion about the validity of PUE as a metric of energy efficiency. In fact, Herb Zien wrote an article, published in the Nov/Dec issue of Mission Critical, that describes many of the limitations of the metric. Some of these shortcomings, I believe, stem from a misunderstanding of the definition of the metric or its misuse. Others flow inherently from assumptions baked into the metric to allow its use to be standardized. These limitations include accounting for on-site generation, different levels of reliability, how to charge economizers, and site considerations. Most of these have found wide debate in the industry, at least among facilities people.

Less anticipated, I think, were changes that could profoundly affect the denominator. I’d include virtualization, cloud, and changes in server design. I distinctly remember Bruce Myatt, now of KlingStubbins, pointing out that improving server efficiency would change a company’s PUE so that its operations would seem less efficient. Don’t get me wrong: I’m still a fan of PUE. Still, isn’t it time we began moving toward a metric that measures overall IT efficiency? Changes underway in the processing environment and server manufacturing world threaten to destabilize the validity of the PUE denominator for purposes of apples-to-apples comparisons across time and across facilities.
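To make the denominator effect concrete, here is a minimal sketch of the PUE arithmetic. The numbers are illustrative assumptions of mine, not figures from any real facility: a fixed facility overhead divided by a shrinking IT load makes the ratio grow, even though total energy use falls.

```python
def pue(facility_overhead_kw, it_load_kw):
    """Power Usage Effectiveness: total facility power over IT power."""
    return (facility_overhead_kw + it_load_kw) / it_load_kw

# Same data center, same cooling/UPS/lighting overhead (illustrative numbers)
overhead = 500.0  # kW of facility overhead

before = pue(overhead, it_load_kw=1000.0)  # 1.50
# More efficient servers draw 20% less power; overhead stays put.
after = pue(overhead, it_load_kw=800.0)    # 1.625 -- "worse" despite saving energy

print(f"PUE before: {before:.2f}, after: {after:.3f}")
```

The facility did nothing wrong here; the denominator simply shrank out from under it, which is exactly the comparison problem across time and facilities.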

Server manufacturers are also under pressure to reduce the energy consumption of their products. For instance, what do changes in PUE mean when server manufacturers eliminate fans from servers? Mathematically, server fan energy goes to zero on the IT side of the equation, and the work of moving air perhaps shifts to the facility side. Similarly, some IT folks are struggling to rationalize how they allocate memory, processing resources, I/O, and storage, which can result in increased energy efficiency. Efforts on the software side to write more efficient code can also produce energy savings but a higher PUE.
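As a back-of-the-envelope sketch (again with illustrative numbers of my own): suppose server fans draw 50 kW of a 1,000 kW IT load, and when the fans are eliminated the facility's air handlers absorb that same work. Total energy is unchanged, yet PUE rises:

```python
def pue(facility_kw, it_kw):
    # PUE = total facility energy / IT equipment energy
    return (facility_kw + it_kw) / it_kw

it_kw, facility_kw, fan_kw = 1000.0, 500.0, 50.0

before = pue(facility_kw, it_kw)                   # 1.50
# Fans removed from servers: the fan energy leaves the IT side, and
# (an assumption for this sketch) the facility cooling plant picks it up.
after = pue(facility_kw + fan_kw, it_kw - fan_kw)  # ~1.58

print(f"{before:.2f} -> {after:.2f}")
```

Nothing about the site's total draw changed; the energy just crossed from the denominator to the numerator.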

I ask these questions today because of two developments. Earlier this week, the Green Grid, the initial author of the PUE metric, announced two new metrics, carbon usage effectiveness (CUE) and water usage effectiveness (WUE). “CUE will help managers determine the amount of greenhouse gas emissions generated in delivering work from the IT gear in a data center facility. Similarly, WUE will help managers determine the amount of water used by the facility, and the amount used to deliver work from IT operations,” said the Green Grid. I’m still examining the information available to me about these metrics, but I think the water metric, at least, is long overdue. Still, I view the announcement of CUE and WUE as movement away from using (or misusing) PUE as a one-size-fits-all metric.

Still, all these metrics seem to share the same vulnerability relating to the effect of improving the IT side of the equation. And the IT industry will also drive change, sometimes even disruptive change.
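To see why the vulnerability carries over, here is a hedged sketch of the new metrics as I understand their definitions (CUE in kg CO2eq per kWh of IT energy, WUE in liters per kWh of IT energy). The emission factor and loads are illustrative assumptions of mine: emissions scale with total energy, but the denominator is IT energy only, so cutting IT energy against a fixed overhead pushes CUE up just as it does PUE.

```python
def cue(total_co2_kg, it_energy_kwh):
    # Carbon Usage Effectiveness: total emissions over IT energy
    return total_co2_kg / it_energy_kwh

def wue(water_liters, it_energy_kwh):
    # Water Usage Effectiveness: water use over IT energy
    return water_liters / it_energy_kwh

grid_factor = 0.4     # kg CO2eq per kWh from the grid, illustrative
overhead_kwh = 500.0  # fixed facility overhead

before = cue(grid_factor * (overhead_kwh + 1000.0), 1000.0)  # 0.60
# IT efficiency gains shrink the denominator faster than the numerator.
after = cue(grid_factor * (overhead_kwh + 800.0), 800.0)     # 0.65

print(f"CUE before: {before:.2f}, after: {after:.2f}")
```

WUE behaves the same way whenever water use is dominated by facility cooling rather than IT load.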

Long before the Green Grid introduced PUE, I interviewed David S. DeLorenzo, then of Intel, and asked him to imagine a world in which new materials would significantly reduce the heat produced by chips. He suggested that advances in chip design and manufacture could result in lower-power designs. I think we talked about nanotechnology techniques having an effect over time. In hindsight, his answers suggested that the IT denominator would eventually be a serious flaw in a metric meant to measure facility performance.

Just as DeLorenzo predicted, processor manufacturers are also developing low-energy products, which, of course, negatively affect PUE. Just yesterday, Karl Freund of Calxeda told me that low-power chips developed for the cell-phone market would be finding application in data centers. We didn’t talk in great depth about their physical structure, but Karl indicated that battery-life concerns had led ARM chip manufacturers to develop chips that switch off when not in use. In fact, the algorithms allow some chips to remain off even during a call, when the phone is in use. The chips, of course, have limitations, but 64-bit versions on the horizon may even enable ARM chips to be deployed on logic boards alongside floating-point chips on GPUs.

PUE would penalize companies that find a way to utilize low-power chips, but that simply means the Green Grid and others must continue to develop new metrics to use alongside PUE and properly credit organizations that reduce energy use by improving IT efficiency.