Intel Data Center Design
Daniel Costello
IT@Intel
Global Facility Services DC Engineering
This presentation is for informational purposes only. INTEL MAKES NO WARRANTIES, EXPRESS OR IMPLIED, IN THIS SUMMARY.
BunnyPeople, Celeron, Celeron Inside, Centrino, Centrino logo, Core Inside, Dialogic, FlashFile, i960, InstantIP, Intel, Intel logo, Intel386,
Intel486, Intel740, IntelDX2, IntelDX4, IntelSX2, Intel Core, Intel Inside, Intel Inside logo, Intel. Leap ahead., Intel. Leap ahead. logo, Intel
NetBurst, Intel NetMerge, Intel NetStructure, Intel SingleDriver, Intel SpeedStep, Intel StrataFlash, Intel Viiv, Intel vPro, Intel XScale, IPLink,
Itanium, Itanium Inside, MCS, MMX, Oplus, OverDrive, PDCharm, Pentium, Pentium Inside, skoool, Sound Mark, The Journey Inside, VTune,
Xeon, and Xeon Inside are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries.
*Other names and brands may be claimed as the property of others.
Copyright © 2007, Intel Corporation. All rights reserved.
Last Updated: Aug 28, 2006
Objective
Corporate Acquisitions: 70 since 1996, each with a data center
1. This growth is specific to the Intel silicon design engineering environment and does not include overall corporate IT demand (e.g., ~25,000 servers in 2005 to support silicon design).
Data Center Assets
Age of Data Centers: 62% of DCs are more than 10 years old
Plans to Build a New Data Center: more than 1/3 forecast new DC construction
% of Applications in each Tier: applications drive the need for DC capacity (not hardware)
1. Source: Data Center Operations Council research. Tier 4 applications have more demanding service levels.
Data Center Consolidation: What is Intel’s Strategy?
• “Right sizing” model1 to rebalance the number, locations, and use of the data centers
• Our cost analysis points to consolidating into large global and regional hubs
  – Subject to bandwidth constraints in Eastern Europe and Asia
1. This model is for illustration purposes only and does not represent actual Intel data center locations.
Why Density versus Space
• Savings
  – Running the same number of servers requires the same megawatts of power whether they are dense or spread out
    • A spread-out configuration requires longer conductor runs
  – Requires the same or more tons of cooling for a given number of servers
    • Spreading out increases the space to be cooled
    • Drives up chiller plant size
    • Additional fan capacity would be required
  – Spreading out cabinets adds cost in building square feet and raised metal floor (RMF) space
  – High density decreases the ratio of heating, ventilation, and air conditioning (HVAC) power to uninterruptible power supply (UPS) output by increasing the efficiency of the cooling systems
  – A high-density DC is up to 25% more energy efficient than a low-density DC (see the sketch after this list)
• Additional Scope
  – 42° F thermal storage for uninterruptible cooling system (UCS)
  – Fully automated data center and integrated control systems
  – Sound attenuation – NC60
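A minimal Python sketch of the density-versus-space reasoning above, using entirely hypothetical coefficients (the per-square-foot cooling and fan burdens and the conductor-loss fraction are illustrative assumptions, not Intel data): the IT load is held constant while the footprint grows, and only the facility overhead changes.

# Hypothetical sketch: same IT load housed densely vs. spread out.
# Conditioned area, fan energy, and conductor losses all grow with footprint.

def facility_overhead_kw(it_load_kw, floor_area_sqft,
                         cooling_kw_per_sqft=0.01,    # hypothetical chiller/envelope burden
                         fan_kw_per_sqft=0.005,       # hypothetical air-handling burden
                         dist_loss_per_100ft=0.005):  # hypothetical conductor-loss fraction
    """Rough facility (non-IT) power for a given IT load and footprint."""
    run_length_ft = floor_area_sqft ** 0.5             # conductor runs lengthen as the room grows
    distribution_loss = it_load_kw * dist_loss_per_100ft * (run_length_ft / 100)
    cooling = floor_area_sqft * cooling_kw_per_sqft
    fans = floor_area_sqft * fan_kw_per_sqft
    return distribution_loss + cooling + fans

IT_LOAD_KW = 1000  # same servers, same IT megawatts in either layout

dense = facility_overhead_kw(IT_LOAD_KW, floor_area_sqft=4_000)
spread = facility_overhead_kw(IT_LOAD_KW, floor_area_sqft=16_000)
print(f"dense overhead:  {dense:.0f} kW")   # smaller footprint, less overhead
print(f"spread overhead: {spread:.0f} kW")  # same IT load, more facility power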
Large Data Center Construction Economies of Scale (Modular Approach1)
Industry average for a Tier II/III data center: USD 11,000 – 20,000 per kilowatt
After the fifth module, returns diminish
NOTE: The first two modules are based on actual costs to build a high performance data center at Intel. Modules 3 through 7 are projections. All
timeframes, dates, and projections are subject to change.
1. Intel® IT currently defines a given module as 6,000 square feet of data center floor space.
2. Source: Intel white paper June 2006 “Increasing Data Center Density While Driving Down Power and Cooling Costs”
www.intel.com/business/bss/infrastructure/enterprise/power_thermal.pdf
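To put the per-kilowatt range in context, a purely illustrative calculation (the 5,000 kW critical load is a hypothetical figure, not from the slide):

\[
5{,}000\ \text{kW} \times \text{USD } 11{,}000\text{--}20{,}000/\text{kW} \approx \text{USD } 55\text{--}100\ \text{million}.
\]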
Data Center Processor Efficiency Increases
Before: 1,000 sq. ft., 128 kW, 512 servers, 25 server racks – 3.7 teraflops
After: 30 sq. ft., 21 kW, 53 blades, 1 server rack – 3.7 teraflops
1. The above testing results are based on the throughput performance of Intel design engineering applications relative to each new
processor and platform technology generation.
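To make the gain concrete, a short Python calculation using only the figures from this slide (the same ~3.7 teraflops of design-engineering throughput in both configurations):

# Ratios derived from the before/after figures on this slide.
old = {"sqft": 1000, "kw": 128, "systems": 512, "racks": 25, "teraflops": 3.7}
new = {"sqft": 30,   "kw": 21,  "systems": 53,  "racks": 1,  "teraflops": 3.7}

print(f"space reduction: {old['sqft'] / new['sqft']:.0f}x")   # ~33x less floor space
print(f"power reduction: {old['kw'] / new['kw']:.1f}x")       # ~6x less power
print(f"perf per kW: {old['teraflops'] * 1000 / old['kw']:.0f} vs "
      f"{new['teraflops'] * 1000 / new['kw']:.0f} gigaflops/kW")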
Accelerated Server Refresh
1. Source: Intel based on SPECint_rate_base2000* and thermal design power. Relative to 2H’05 single-core Intel® Xeon®
processor (“Irwindale”).
2. Based on internal Intel testing Q2 2006 using equivalent systems in a rack configuration versus a blade configuration.
Key Metrics - ACAE
IT Equipment Power is defined as the effective power used by the equipment that manages, processes, stores, or routes data within the raised floor space.
Facility Power is defined as all other power to the data center required to light, cool, manage, secure, and power (losses in the electrical distribution system) the data center.
The industry average is estimated at 2, or a DCE of 50%.
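Reading the industry average of 2 as the ratio of total data center power to IT equipment power (the two quantities defined above), the implied data center efficiency (DCE) follows directly:

\[
\text{DCE} = \frac{P_{\text{IT}}}{P_{\text{IT}} + P_{\text{facility}}},
\qquad
\frac{P_{\text{IT}} + P_{\text{facility}}}{P_{\text{IT}}} = 2
\;\Longrightarrow\;
\text{DCE} = \tfrac{1}{2} = 50\%.
\]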
Intel Data Center Cooling Development
WPSF=watts per square foot; kW=kilowatt; W=watt; ACAE=air conditioning airflow efficiency; W/CFM=watts per cubic feet per minute;
CFD=computational fluid dynamics; RMF=raised metal floor
Chimney Cabinet
Two-Story Vertical Flowthrough High-Performance Data Center
(Diagram labels: non-ducted hot air return space above ceiling, plenum areas, cooling coils, electrical area)
Hot Aisle Panel Closure System
Cooling air leaking through the floor is used for cooling and bypass to temper the system delta-T to 43° F
1,250 watts per square foot (WPSF), 30 kilowatts (kW) per cabinet, air conditioning airflow efficiency (ACAE) of 13.7 watts per cubic feet per minute (W/CFM)
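As a quick sanity check on the ACAE figure (assuming ACAE expresses the IT load served per unit of supply airflow), the airflow implied for a 30 kW cabinet is roughly:

\[
\frac{30{,}000\ \text{W}}{13.7\ \text{W/CFM}} \approx 2{,}200\ \text{CFM of supply air per cabinet}.
\]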
Decoupled Wet Side Economizer System
Intel Data Center Energy Efficiency
\[
\text{HVAC performance index (\%)} = \frac{kW_{\text{HVAC}}}{kW_{\text{UPS output}}}
\]
kW=kilowatt; UPS=uninterruptible power supply
1 “Data Centers and Energy Use - Let’s Look at the Data.” ACEEE 2003 Paper #162. William Tschudi and Tengfang Xu, Lawrence Berkeley National Laboratory;
Priya Sreedharan, Rumsey Engineers, Inc.; David Coup, NYSERDA; Paul Roggensack, California Energy Commission.
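A purely illustrative example of the index (the 300 kW and 1,000 kW figures are hypothetical, not Intel data): a facility whose HVAC plant draws 300 kW to support 1,000 kW of UPS output has

\[
\text{HVAC performance index} = \frac{300\ \text{kW}}{1{,}000\ \text{kW}} = 30\%,
\]

where a lower index indicates less cooling and air-handling power spent per kilowatt of IT load.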
Industry Moving to 45nm: Benefits