By David Kozischek
As enterprises increasingly decide to outsource some or all of their infrastructure to IT service providers, the result is not at all surprising: fewer data centers overall and hyper-sized facilities in their place.
Structured cabling is defined as building or campus telecommunication cabling infrastructure that consists of a number of standardized smaller elements (hence, "structured") called subsystems. For it to be effective, structured cabling is organized in such a way that individual fibers are easy to locate; moves, adds and changes (MACs) are easily managed; and there is ample airflow around cabling.
Perhaps no environment requires effective structured cabling more than the data center. With no tolerance for downtime or rework, data center owners and operators are among the main consumers of training resources devoted to structured cabling. The reason is clear: Even as fewer traditional data centers are being built in favor of outsourcing to the cloud (i.e., some type of IT service provider), there are still physical structures enabling the cloud, and these structures need to be cabled.
Fortunately, what constitutes effective structured cabling is not open to interpretation; rather, it is clearly explained in ANSI/TIA-942-B, Telecommunications Infrastructure Standard for Data Centers. This article will explore the standard and break it down.

Consider the different types of data centers in operation today:
In-house data centers: Also known as enterprise data centers, these facilities are privately owned by large companies. The company designs, builds and operates its own facility, and can also provide a service for profit such as cloud services or music streaming.

Wholesale data centers: Owned by IT service providers, also known as cloud providers, these data centers are in the business of selling space. Instead of building their own facilities, enterprises buy space and deploy their data center infrastructure within the wholesale facility.

Colocation data centers: These facilities are like wholesale data centers, but enterprises rent just a rack, cabinet or cage. The IT service provider runs the infrastructure.

Dedicated and managed hosting data centers: IT service providers operate and rent server capacity in these facilities, but each enterprise customer controls its own dedicated server.

Shared hosting data centers: In these facilities, enterprise customers buy space on an IT service provider's servers. These servers are shared among enterprise customers.
Today, a significant shift is
underway in how these different
types of data centers invest in their
FIGURE 1: Infrastructure spending ($bn), 2010-2014: telecom, cloud and enterprise.

FIGURE 2: Growth of cloud data centers relative to enterprise and premises facilities, 2008-2020.
infrastructure. LightCounting and Forbes report that cloud/IT service provider spending is up while enterprise IT spending is down (Figure 1). Further evidence of this shift is reflected in Dell'Oro's report of server investments, the lion's share of which are shipping for installation in cloud-type facilities (Figure 2). As enterprises increasingly decide to outsource some or all of their infrastructure to IT service providers, the result is not at all surprising: fewer data centers overall and hyper-sized facilities in their place (Figure 3 on page 11).

FIGURE 3: Shift from enterprise data centers to IT service provider growth.

The structured cabling requirements of the resulting hyperscale, multitenant data centers may differ from what has been installed in the past in smaller single-tenant, enterprise-owned facilities, but ANSI/TIA-942-B provides
guidance. It always recommends a star architecture, with different areas for cross-connecting and interconnecting cable. The standard defines five different cross-connect/interconnect areas, consisting of:
• Main distribution areas (MDA).
• Intermediate distribution areas (IDA).
• Horizontal distribution areas (HDA).
• Zone distribution areas (ZDA).
• Equipment distribution areas (EDA).
These areas represent the full network from racks and cabinets to the main area where routers, switches and other components are located. TIA-942 also provides guidance on redundancy definitions by ranking them into four tiers, called ratings. Rated-1 is the lowest tier with the least redundancy. Rated-4 provides the most redundancy in a data center's structured cabling and is typically deployed in large IT service provider data centers. The other basics covered by this standard include zone architectures and guidelines for energy efficiency (Table 1).

Architecture | Recommends a star topology architecture
Cross-Connect vs. Interconnect | MDA, IDA, HDA, ZDA, EDA
Redundancy Definitions | Rated-1 through Rated-4
Zone Architecture | Reduced topologies and consolidated points
Energy Efficiency | Examples of routing cables and airflow containment

TABLE 1: Topics covered by ANSI/TIA-942-B, Telecommunications Infrastructure Standard for Data Centers.
When it comes to structured cabling, the standard addresses backbone and horizontal cabling as shown in Figure 4. Each of the distribution areas, or squares, is an area where there is a patch panel and fiber coming in.

How much fiber is needed in each of those areas is a function of network speeds, network architectures, oversubscription and switch configuration. Following are a few examples that illustrate how those considerations affect a data center's fiber count.
Figure 5 on page 12 shows how network speed influences fiber count as a data center moves from 10 to 100 Gigabit Ethernet (GbE). On the left is the physical architecture, with four racks or cabinets, each with a switch on top and a switch at the end of the row. In the center is the logical architecture in TIA-942's recommended star configuration for cabling, and on the right is the network speed. 10 GbE can be supported by only two fibers; 40 GbE can operate over two or eight fibers; and 100 GbE requires two, eight or even 20 fibers, depending on the transceiver. The conclusion is clear: Network speeds affect fiber count. Check road maps (IEEE for Ethernet and, on the storage side, ANSI for Fibre Channel) for detailed information on per-port fiber counts.
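Those per-port counts turn directly into arithmetic. The following Python sketch is not from the standard; the speed-and-optic table simply encodes the fiber counts named in this article, and any other transceiver would need to be checked against the IEEE or ANSI road maps:

```python
# Fibers per port for the transceiver options described above:
# duplex optics use 2 fibers; parallel QSFP optics use 8 over an MPO;
# 100GBASE-SR10-style optics use 20.
FIBERS_PER_PORT = {
    ("10GbE", "duplex"): 2,
    ("40GbE", "duplex"): 2,
    ("40GbE", "parallel"): 8,
    ("100GbE", "duplex"): 2,
    ("100GbE", "parallel"): 8,
    ("100GbE", "parallel-20"): 20,
}

def fibers_needed(ports: int, speed: str, optic: str) -> int:
    """Total fibers a distribution area must terminate for these ports."""
    return ports * FIBERS_PER_PORT[(speed, optic)]

# Four top-of-rack switches, one uplink each, at increasing speeds:
print(fibers_needed(4, "10GbE", "duplex"))        # 8 fibers
print(fibers_needed(4, "40GbE", "parallel"))      # 32 fibers
print(fibers_needed(4, "100GbE", "parallel-20"))  # 80 fibers
```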
FIGURE 4: Backbone and horizontal cabling among the distribution areas: primary and secondary entrance rooms (carrier equipment and demarcation), the MDA (routers, backbone LAN/SAN switches), IDAs (LAN/SAN switches), HDAs (LAN/SAN/KVM switches), ZDAs and EDAs (racks/cabinets), plus horizontal cabling to work areas in offices and operations center support.
Each architecture's speed will be constant at 40 GbE, with eight fibers connecting each switch (Figure 6 on page 12). Point-to-point architecture is the simplest, both logically, because it is a star, and physically, cabled as a star with eight fibers to each cabinet. A full mesh architecture connects each switch to every other switch, totaling 32 fibers for the same five switches. That logical mesh is "cabled" physically at the cross-connect, and it takes 32 fibers to do that. The final architecture in this example is the spine and leaf, in which every spine switch (Switches 1 and 2) has to connect to every leaf switch (Switches 3-5). In the same physical configuration with the same five switches, the spine-and-leaf logical architecture requires 16 fibers. Depending on the data center's architecture, therefore, an operator may need eight, 16 or 32 fibers for every cabinet. Conclusion: Architecture redundancy increases fiber count.
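The per-cabinet arithmetic behind that conclusion is easy to sketch. A minimal Python rendering, assuming the five-switch row and eight-fiber 40 GbE links of Figure 6:

```python
FIBERS_PER_LINK = 8  # one 40 GbE link over an eight-fiber MPO (Figure 6)

def star_per_cabinet() -> int:
    # One link from each cabinet switch to the end-of-row switch.
    return 1 * FIBERS_PER_LINK

def full_mesh_per_cabinet(total_switches: int) -> int:
    # Each switch links to every other switch.
    return (total_switches - 1) * FIBERS_PER_LINK

def spine_leaf_per_cabinet(spine_switches: int) -> int:
    # Each leaf switch links to every spine switch.
    return spine_switches * FIBERS_PER_LINK

print(star_per_cabinet())         # 8 fibers per cabinet
print(spine_leaf_per_cabinet(2))  # 16 fibers per cabinet
print(full_mesh_per_cabinet(5))   # 32 fibers per cabinet
```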
Next, consider how oversubscription impacts fiber count. Oversubscription is the ratio of circuits coming in compared to those going out of a switch.
FIGURE 5: Network speed influences fiber count. Check IEEE road maps for Ethernet and ANSI road maps for Fibre Channel.

FIGURE 6: Network architecture affects fiber count.

FIGURE 7: Network oversubscription influences fiber count.

FIGURE 8: Network switch configuration affects fiber count.
In the example shown in Figure 7 on page 13, the star architecture is used physically and logically with a constant network speed of 10 GbE. The variable shown is the oversubscription rate. At the top, the example shows a 4:1 oversubscription with 24 10 GbE circuits coming in and six going out; in the middle, 24 10 GbE circuits come in and 12 go out for a 2:1 rate; at the bottom, the rate is 1:1, with 24 10 GbE circuits both entering and exiting each switch. Depending on the oversubscription rate, with all other variables remaining constant, the required per-switch fiber count can be 12, 24 or 48 fibers. Conclusion: The lower the oversubscription ratio, the higher the fiber count. Ultimately, the oversubscription rate is a function of network ingress/egress traffic needs, meaning the fiber count is driven by this requirement as well.
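A short sketch of that relationship, assuming duplex 10 GbE circuits (two fibers each) as in Figure 7:

```python
import math

def uplink_fibers(downlink_circuits: int, oversubscription: float) -> int:
    """Uplink fibers per switch for duplex (two-fiber) 10 GbE circuits."""
    uplink_circuits = math.ceil(downlink_circuits / oversubscription)
    return uplink_circuits * 2

for ratio in (4, 2, 1):
    print(f"{ratio}:1 oversubscription ->",
          uplink_fibers(24, ratio), "fibers per switch")
# 4:1 -> 12, 2:1 -> 24, 1:1 -> 48
```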
Finally, consider how the network's switch configuration drives fiber count (Figure 8 on page 13). Using constant architectures and running 10 GbE to all of the servers on the racks, what happens on the right side when the switch is reconfigured? At the top, all of the circuits going down are 10 GbE; two of the 40 GbE ports are quad small-form-factor pluggable (QSFP) optical transceivers (i.e., eight-fiber multi-fiber push-on [MPO] connections); they break out into four 10 GbE each to total 16 more ports, yielding two 40 GbE ports going up, or 2 x 8 = 16 fibers. In the middle of the figure, the same switch is seen with all four of the 40 GbE ports going back up to the core, equating to 4 x 8 = 32 fibers. The final scenario shows an equal distribution of 10 GbE going down as going up: 40 GbE ports break out into 10 GbE for 16 x 10 GbE ports, and adding more 10 GbE to make it even totals 64 fibers. Conclusion: Just deciding how to configure the switch changes the fiber count in these scenarios from 16 to 32 or 64 fibers.

Note that this switching configuration only addresses the Ethernet side of these servers. The fiber count would continue to climb if the servers also had a Fibre Channel network and/or ports for InfiniBand high-speed computing.
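Each scenario reduces to counting uplink ports and multiplying by fibers per port, just as the earlier examples did for speed and architecture. A sketch of that bookkeeping (the port labels encode this article's QSFP and duplex options, not any vendor API):

```python
FIBERS = {"40GbE-QSFP": 8, "10GbE-duplex": 2}  # fibers per uplink port

def uplink_fiber_count(uplink_ports: dict[str, int]) -> int:
    """Sum fibers across every port used as an uplink."""
    return sum(FIBERS[kind] * count for kind, count in uplink_ports.items())

print(uplink_fiber_count({"40GbE-QSFP": 2}))     # top scenario:    16 fibers
print(uplink_fiber_count({"40GbE-QSFP": 4}))     # middle scenario: 32 fibers
print(uplink_fiber_count({"10GbE-duplex": 32}))  # final scenario:  64 fibers
```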
After seeing how the four variables can independently increase the number of fibers needed in data centers, imagine the impact that mixed variables can have in driving fiber counts up even higher. Changing the network's operating speed affects the fiber count, but what happens when the speed and the architecture are changed? Or the speed and the oversubscription rate? Fiber counts that were already relatively high go up even more.
What remains is the question of how to cable this type of data center. Typically, today's increasingly large data centers extend to separate locations much like an enterprise campus, as shown in Figure 9. Indoor cable is typically used within each building, connected by indoor/outdoor cable and transitional optical splice enclosures (Table 2).
FIGURE 9: A large IT service provider data center campus, with the main distribution area serving servers/compute (equipment distribution areas) across buildings.

Meet-me room | Demarcation, cross-connect
Main distribution area | Racks/cabinets, cross-connect
Indoor cabling | Plenum rated
Indoor/outdoor cabling | Plenum/riser, armored cable
Optical splice enclosure (OSE) | Transition from indoor to outdoor cables

TABLE 2: Facility cabling elements.
There are three deployment methods to consider:

• Preterminated cable: Typically deployed for indoor plenum-rated cabling, these trunks are factory-terminated on both ends with eight- or 12-fiber MPO connectors. They are ideal for MDA-to-HDA or EDA installations involving raceway or raised floor where the entire fiber count is being deployed in one run at a single location at each end of the link (Figure 10).

• Pigtailed cable: These semi-pre-connectorized assemblies are factory-terminated on one end with MPO connectors for easy high-fiber-count deployment while remaining unterminated on the other end to fit through small conduit or allow for on-site length changes. Often used in building-to-building installations, pigtailed cable is ideal for situations when conduit is too small for pulling grips or the cable pathway cannot be determined before ordering (Figure 11).

• Bulk cable: This deployment option requires field connectorization on both ends, typically with MPO splice-on connectors (Figure 12). Bulk cable is best for deployments requiring center-pull installation or extremely high fiber counts (such as 1,728 fibers and up).

FIGURE 10: Preterminated cable.

FIGURE 11: Pigtailed cable.

FIGURE 12: Bulk cable, with field-terminated MTP® connectors.
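Those selection criteria can be collapsed into a small decision sketch. The thresholds and function name below are illustrative only, distilled from the descriptions in the list above rather than from TIA-942:

```python
def deployment_method(fiber_count: int, conduit_fits_grip: bool,
                      pathway_known: bool, center_pull: bool) -> str:
    """Illustrative chooser for the three deployment methods above."""
    if center_pull or fiber_count >= 1728:
        return "bulk cable: field connectorization on both ends"
    if not conduit_fits_grip or not pathway_known:
        return "pigtailed cable: MPO connectors on one end only"
    return "preterminated cable: MPO connectors on both ends"

# A 576-fiber indoor run with a known pathway and roomy conduit:
print(deployment_method(576, conduit_fits_grip=True,
                        pathway_known=True, center_pull=False))
```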
Table 3 on page 18 provides an overview of the three deployment methods and their corresponding fiber counts.

Putting all of this information into practice, the following example illustrates how a four-way spine is cabled and the resulting fiber count. Based on manufacturer recommendations, there are 48 leaf switches with 32 ports down to servers and 32 ports going back up into the fabric. In this example, two Cisco 3064 switches are used at the top of each of the 24 cabinets to create an A fabric and a B fabric. Figure 13 on page 18 shows how these recommendations translate to a logical architecture.

As shown in Figure 14 on page 19, Cisco's spine-and-leaf architecture guidance provides for a four-way spine with 48 leaf switches. Starting at the rack and working backward, 32 ports go out of every leaf switch, which translates to 64 fibers required per switch. With two switches on each rack, 128 fibers are needed to support this architecture for every cabinet.
CABLE METHOD | ENVIRONMENT | CONNECTOR | COUNTS | TRUNK TYPE | FIBER TYPE
Preterminated / Pigtail | Premises | MPO connector to fiber | 144, 192, 216, 288, 432, 576 | Non-armored | Singlemode
Preterminated / Pigtail | Indoor/Outdoor | MPO connector to fiber | 144, 192, 216, 288, 432, 576 | Armored, Non-armored | Singlemode
Bulk | Indoor/Outdoor | MPO splice-on connector | 1,728 and up | Armored, Non-armored | Singlemode

TABLE 3: Deployment methods and their cabling connectors.
This design called for 10 GbE and a 4:1 oversubscription as previously discussed, and we will proceed with this example using fiber counts that are divisible by 12.

There are several options for cabling this scenario. Some of them are not good options, like the one depicted in Figure 15 that uses jumpers, over 3,000 of them. A better option would be consolidating jumpers into 48 72-fiber cables (Figure 16 on page 20). Better yet is the third option: using high-fiber-count trunks, 576 fibers in each one, reducing the number from 3,000 jumpers to six 576-fiber trunks (Figure 17 on page 20).
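The consolidation math can be checked in a few lines. A sketch assuming the example's numbers (48 leaf switches, 64 fibers per switch, trunk sizes divisible by 12):

```python
import math

LEAF_SWITCHES = 48
FIBERS_PER_SWITCH = 64  # 32 uplink ports x 2 fibers each
total_fibers = LEAF_SWITCHES * FIBERS_PER_SWITCH  # 3,072 fibers

# Option 2: one trunk per leaf switch, rounded up to a multiple of 12.
trunk_size = 12 * math.ceil(FIBERS_PER_SWITCH / 12)
print(LEAF_SWITCHES, "trunks of", trunk_size, "fibers")  # 48 trunks of 72

# Option 3: aggregate the whole fabric into 576-fiber trunks.
print(math.ceil(total_fibers / 576), "trunks of 576 fibers")  # 6 trunks
```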
To understand why some of these options are better than others, consider their relative ease of use (which translates to labor spent), both in the initial installation and during the MACs that are inevitable in a data center (Table 5 on page 21).
FIGURE 13: The four-way spine-and-leaf example, with Cisco Nexus 7000 spine switches and Cisco Nexus 3064 leaf switches serving 32 servers per cabinet.

Furthering the case for high-fiber-count trunks is their impact on valuable data center real estate: the pathway for cabling. TIA-569 provides calculations to understand what percentage of tray/conduit/raceway is taken up by cabling, along with a recommendation that the individual maximum fill ratio not exceed 25 percent. Though not intuitive, it is a fact that a 50 percent fill ratio actually uses up an entire pathway, because the spaces between cables are part of the equation. With this in mind (Figure 18 on page 21), the first option using more than 3,000 jumpers is no option at all. However, the second cabling option (48 72-fiber trunks) does work in a 101.6 x 152.4 millimeter (mm) (4 x 6 inch [in]) tray but not quite as well in a 101.6 x 101.6 mm (4 x 4 in) tray. Both tray sizes can easily accommodate the six 576-fiber trunks option.
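The TIA-569-style fill check itself is a simple area calculation. In the sketch below, the 20 mm trunk outer diameter is an assumed illustrative value, not a figure from the article or the standard:

```python
import math

def fill_ratio_percent(cable_diameters_mm: list[float],
                       tray_w_mm: float, tray_h_mm: float) -> float:
    """Percent of tray cross-section occupied by cable cross-sections."""
    cable_area = sum(math.pi * (d / 2) ** 2 for d in cable_diameters_mm)
    return 100 * cable_area / (tray_w_mm * tray_h_mm)

# Six 576-fiber trunks at an assumed ~20 mm outer diameter each:
trunks = [20.0] * 6
print(f"{fill_ratio_percent(trunks, 101.6, 152.4):.1f}%")  # 4 x 6 in tray
print(f"{fill_ratio_percent(trunks, 101.6, 101.6):.1f}%")  # 4 x 4 in tray
```

Under those assumptions, both trays land comfortably below the 25 percent guideline, consistent with the conclusion above.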
High-fiber-count trunks can be the best fit (when it comes to fill ratios) in today's data centers. The days of trusty 12- and 24-fiber trunks to each rack are no more. Today, data centers are increasingly growing in scale and in the fiber counts required to support higher speeds, greater oversubscription rates, redundant architectures, and creative switch configurations.
FIGURE 14: Spine-and-leaf architecture, with fiber required per switch (or MTP trunk cable per switch).

FIGURE 15: Cabling option 1: individual jumpers.

FIGURE 16: Cabling option 2: 48 72-fiber trunks.

FIGURE 17: Cabling option 3: six 576-fiber trunks.

FIGURE 18: Tray fill ratios for the cabling options.

TASK | OPTION 1 (3,072 JUMPERS) | OPTION 2 (48 TRUNKS) | OPTION 3 (SIX TRUNKS)
Test and clean | 6,144 2-fiber duplex LC connectors | 576 12-fiber MTP® connectors | 576 12-fiber MTP connectors
Document and label | 3,072 jumpers and 6,144 connectors | 48 trunks, 576 connectors | Six trunks, 576 connectors
Pull and install | 3,072 jumpers (both ends) | 48 trunks (both ends) | Six trunks (both ends)
Purchase | 3,072 jumpers | 48 (72-fiber trunks) | Six (576-fiber trunks)
Total install | 3,072 units, >6,000 connectors | 48 units, 576 connectors | Six units, 576 connectors
Move, add or change | One jumper at a time, point-to-point configuration | Create cross-connect, use short jumper | Create cross-connect, use short jumper

TABLE 5: Deployment differences among the three cabling options.
It is clear that large facilities are the new normal; enterprise IT customers will continue to shift away from small, single-tenant facilities toward outsourcing all or part of their data center infrastructure. Fortunately, there are proven structured cabling methods and global manufacturers with many years of experience solving data center challenges, along with the assurance of TIA-942 continuing to provide guidance.

AUTHOR BIOGRAPHY: David Kozischek is Applications and Market Manager at Corning Optical Communications, where he is responsible for market position and profitability in data centers and LANs. He is active in the Society of Cable Telecommunications Engineers and the International Society for Optical Engineering, and has taught various courses, including Internetwork Design, LAN/WAN Fundamentals, Broadband Communication Network Design and Introduction to Network Products.