Ever come across something that’s incredibly important yet hard to define? That’s the status of hyperscale data centers. They are crucial to the running of today’s world economy — outages over the past few years have caused chaos by interrupting business continuity, stranding businesses, affecting end users, and costing major enterprises hundreds of millions of dollars in lost revenue.

Our lives are affected by hyperscale data centers in many ways, most of which go unnoticed. Simple activities such as utilizing online search engines, keeping in contact with friends via social media, uploading digital media to an online repository, downloading music, viewing the latest viral videos, and checking emails typically require interacting with hyperscale data centers located around the world.

Yet despite their prominence, industry opinions vary on what qualifies a data center as “hyperscale.”

Entities involved with research and analytics use different benchmarks to classify data centers as hyperscale. The scalability of the IT infrastructure, systems, and network architecture is an important factor. Other factors, such as business model (IaaS, PaaS, SaaS, e-commerce, etc.), revenue generated, power capacity, area, and scale of deployment, are commonly considered when determining whether a data center can be classified as hyperscale.

Based on an analysis by Synergy Research Group, there are about 400 hyperscale data centers in the world, with approximately half of them located in the United States. The total number is expected to exceed 500 by 2020. The growth of these data centers is being fueled by cloud computing, virtual reality (VR), artificial intelligence (AI), the Internet of Things (IoT), and social media and networking platforms, among other factors.

While each data center is different and can have unique characteristics, hyperscale data centers that are owned and operated by the providers typically possess most of the following attributes from an infrastructure perspective:

  1. Massive area and scale of deployment. Typically, multiple data center buildings are identical in layout and are deployed in phases in a campus setting.

  2. Significant power capacity, frequently in the hundreds of megawatts (MW) for the campus.

  3. Capability of supporting hundreds to thousands of IT cabinets while maintaining operational effectiveness.

  4. Fast deployment (speed-to-market). Data centers are expected to be operational within 12 months of breaking ground.

  5. Thermal conditions well outside the ASHRAE TC 9.9 recommended envelope. Acceptable inlet air conditions at the IT equipment frequently exceed 90°F DB and 80% rh, enabled primarily through the use of aisle containment strategies and customized IT equipment.

  6. Utilizing direct or indirect evaporative cooling technologies with minimal or no mechanical cooling.

  7. Industry-leading power usage effectiveness (PUE). Peak PUE (kW based) is typically less than 1.25, and annualized PUE (kWh based) is typically less than 1.10 (see the sample calculation after this list).

  8. Industry-leading data center availability. Typically 99.999% availability (“5x9” or “five nines”) or better.

  9. Industry-leading cost metrics, such as capital expense per MW (CAPEX/MW) and operating expense per MW (OPEX/MW).

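To put the last few metrics in concrete terms, here is a minimal sketch of the underlying arithmetic in Python. The input values (IT load, facility overhead, peak demand) are illustrative assumptions, not measurements from any particular facility.

    # Minimal sketch of the PUE and availability arithmetic described above.
    # All input values are illustrative assumptions, not data from a real facility.

    HOURS_PER_YEAR = 8760

    # Annualized PUE (kWh based): total facility energy / IT equipment energy over a year.
    it_energy_kwh = 70_000 * HOURS_PER_YEAR        # assume a 70,000 kW IT load running year-round
    facility_energy_kwh = it_energy_kwh * 1.08     # assume cooling, distribution losses, etc. add ~8%
    annualized_pue = facility_energy_kwh / it_energy_kwh
    print(f"Annualized PUE: {annualized_pue:.2f}")                  # ~1.08, within the <1.10 target

    # Peak PUE (kW based): coincident facility demand / IT demand on the design day.
    peak_it_kw = 70_000
    peak_facility_kw = 84_000                      # assumed worst-case coincident demand
    print(f"Peak PUE: {peak_facility_kw / peak_it_kw:.2f}")         # 1.20, within the <1.25 target

    # "Five nines" availability: allowable downtime at 99.999% uptime.
    availability = 0.99999
    downtime_min_per_year = (1 - availability) * HOURS_PER_YEAR * 60
    print(f"Allowable downtime: {downtime_min_per_year:.1f} minutes per year")   # ~5.3 minutes

The kWh-based figure matters because it captures seasonal swings in cooling energy that a single design-day snapshot cannot, which is why both the peak and annualized values are typically reported.
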
ESD is a consultant to hyperscale data center providers worldwide, including a number of Fortune 500 companies, and has designed several hyperscale data centers. Here’s an example of one project we worked on. It is located in the Midwest and can support a 70-MW IT load. Once the planned IT load is deployed, it will have an annual electrical consumption of approximately 660 million kWh, equivalent to the average electrical consumption of about 60,000 households. Clearly, its impact on the region is vast.
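
As a rough sanity check on those figures, the arithmetic can be sketched in a few lines of Python. The annualized PUE and the average household consumption used below are assumptions for illustration, not values from the project’s actual energy model.

    # Back-of-the-envelope check of the consumption figures quoted above.
    # The PUE and household-consumption values are assumptions for illustration.
    it_load_mw = 70
    assumed_annualized_pue = 1.08
    hours_per_year = 8760

    annual_kwh = it_load_mw * 1000 * hours_per_year * assumed_annualized_pue
    print(f"Annual consumption: {annual_kwh / 1e6:.0f} million kWh")          # ~662 million kWh

    avg_household_kwh_per_year = 11_000        # assumed average U.S. household usage
    print(f"Equivalent households: {annual_kwh / avg_household_kwh_per_year:,.0f}")   # ~60,000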

Hyperscale data centers are unique. While it’s true that the underlying engineering principles of traditional data centers are applicable, hyperscale data centers have nuances that need to be considered to ensure successful projects. These include but are not limited to the following.


1. AHJ Approval

Hyperscale data centers have massive requirements for electricity and, in most instances, water. It is important to quantify the requirements and discuss them with the authority having jurisdiction (AHJ) and utility providers early during the site selection phase. Upgrades to the local infrastructure, such as water, sanitary, electrical (generation, transformation, transmission, distribution, etc.), and more are frequently required to support the massive requirements of hyperscale data centers. These can take several months, which can impact the project schedule. In most instances, the upgrades are performed by the AHJ and utility providers based on their standard project delivery processes. There is limited potential to expedite upgrades and keep pace with the aggressive schedules typical of hyperscale data centers.

In addition, certain regions require a detailed review by governing bodies (such as a state environmental protection agency, department of ecology, etc.) to study the impact of the proposed hyperscale data center on the local environment. The scope and extent of the review can vary, and it typically involves analyzing sources of pollution, such as generator flue exhaust (pollutants include NOx, PM, CO, and NMHC), and ensuring that the immediate environment is not negatively affected. Based on their review, the local authorities can impose requirements more stringent than the applicable codes, and in extreme cases, they can force a reduction in the scale of the proposed deployment or limit the runtime of equipment such as generators. This can have major consequences for the business model of hyperscale data centers.


2. Futureproofing

Hyperscale data centers are master-planned for the ultimate deployment, and their construction is phased over a period of months or years. Data center requirements can undergo drastic changes during that period because of ongoing technological evolution. Therefore, it is important to ensure that the master plan is flexible and capable of accommodating reasonable design changes in the future buildouts.

Given the rapid pace of data center evolution, predicting and quantifying changes can be nearly impossible. However, collaboration with the stakeholders and a concerted effort are essential to provide some level of futureproofing and to ensure that future phases are not stymied by poor master planning. For example, a recent hyperscale project our team worked on involved new data center buildings on an existing campus. The campus was master-planned in 2015 by another firm, and it could not support the recently revised requirements, necessitating upgrades to the water and waste utilities. This problem could have been avoided by conservatively sizing the underground utilities during the master-planning phase.


3. Client Requirements

Operators of hyperscale data centers have high expectations pertaining to engineering, operations, safety, and security. While the engineering requirements are typically identified in the project charter for use by the design consultants, requirements related to other aspects, such as operations, safety, and security, are frequently overlooked because they can be subjective and are often not communicated during the design process. For example, the author worked on a project where the client conveyed during construction that mechanical components requiring periodic access must be reachable from a step ladder no higher than 10 ft, based on input from the operations team. This required late modifications to the hydronic piping layout to reduce the elevation of valve actuators, sensors, transmitters, etc.

It is important to start the conversation early with the stakeholders to identify the data center requirements, especially since late modifications to ensure compliance can be prohibitive from a cost, quality, or schedule perspective. Trade-offs are frequently required since a design that is optimized for engineering might be lacking on other fronts.


4. Diverse Manufacturers

Due to the scale of deployment, hyperscale data centers require a significant quantity of infrastructure equipment. These equipment requirements, coupled with the aggressive construction schedule, can occasionally strain equipment manufacturers and their supply chains. Where possible, ensure there are multiple manufacturers who can supply major equipment such as AHUs, CRACs, fans, generators, and UPSs. Multiple manufacturers are essential for competitive bidding and allow for diversification to help meet project demands.

It is easier to source relatively standard equipment, such as generators and UPSs, from multiple manufacturers; however, doing so can be a challenge for custom equipment. The challenge can be overcome by specifying nonproprietary technology constructed from off-the-shelf components wherever possible.


5. Prefabrication

If the project is using integrated project delivery (IPD) and the general contractor and major subcontractors are on board early in the design phase, work with the contractors to identify assemblies and components that can be prefabricated.

For example, prefabricated modular equipment that is built, tested, and commissioned offsite and shipped to the project site for final integration can often be cheaper, faster, and more construction-friendly than stick-built solutions. Simple assemblies such as pipe-valve spools can be prefabricated by contractors in their shops. Specialized vendors can tackle complex assemblies such as complete mechanical equipment enclosures. The strategy of prefabrication is especially useful when the data centers are located in areas where there is a shortage of skilled labor.


6. Phasing

As mentioned, hyperscale data centers are typically built in phases, and new infrastructure can have dependencies on the existing infrastructure. It is important to be cognizant of future phases to ensure that the operation, redundancy, and resiliency of the existing live data center are not diminished during the construction or commissioning of subsequent phases. These risks should be reviewed with the stakeholders during the master-planning phase, and contingency plans, fallback options, and other appropriate measures should be incorporated accordingly.

One day, there will be a clear industry definition for hyperscale data centers — one that will likely incorporate many of the attributes mentioned earlier. Until then, we’ll just have to settle for this fact: hyperscale data centers are a crucial component of the success of many industries and affect billions of people around the world. Their importance will only grow as the 21st century moves on. Understanding the nuances and challenges of designing and building hyperscale data centers is critical to ensure successful projects.
