400GIG ETHERNET
Dhiman Deb Chowdhury
http://www.dhimanchowdhury.com
6/23/2018
It is insanely fast, even a maverick will agree! And it is fatter than all the preceding transport technologies we know, given that a 1 RU switch can now pack a whopping 12.8 Tbps of bandwidth. Get the picture? Welcome to the world of terabit transport!
Targeted at massive aggregation in data center and service provider networks, 400 Gigabit Ethernet (400GbE) was approved as the IEEE 802.3bs standard on December 6, 2017. The 400GbE standard marks a milestone on the road to 1 Tbps per-port speeds. With chip vendors keeping up their game, 400GbE per-port speed is now available in both 1 RU and chassis form factors. For a standard 32-port 400GbE 1 RU switch such as Delta's Agema® AGC032, the total bandwidth is a whopping 12.8 Tbps. At the chassis level, a combination of chips could offer service providers up to a 1 Pbps (petabit per second) system. To put that in perspective, 1 Pbps is nearly a thousand times the bandwidth of a 1 Tbps system. That is barrier-breaking territory: roughly 5,000 two-hour-long HDTV videos in one second (Sverdlik, 2015).
Secret ingredients of the whopping speed
Sounds mind-boggling! It should not be: advances in silicon have made it possible for a combination of chips to support petabit systems. In a 400GbE system, the fundamental building block of data interchange runs at 50 gigabits per second (50 Gbps) and is replicated across 8 lanes to give a port speed of 400GbE [50 Gbps x 8]. This basic block of data interchange is known as a SerDes (Serializer/Deserializer). In 100GbE, 4 lanes of 25 Gbps SerDes are implemented to achieve 100GbE speed. Depending upon the SerDes implementation, the speed of each SerDes may vary, e.g. a 25G SerDes may run at up to 26.56G. The diagram below depicts the SerDes speed and lane count for each type of link speed. For example, 10GbE uses a single-lane 10G SerDes, 40GbE uses 4 lanes of 10G SerDes, and so on. When 100G SerDes are developed, eight lanes of such SerDes will be able to achieve an 800GbE link speed.
Figure 1. A diagrammatic representation of the SerDes lanes and speeds that achieve the port speed of an Ethernet switch.
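As a minimal sketch of the lane arithmetic described above (the rates and lane counts follow the generations discussed in the text; the table is illustrative, not exhaustive):

```python
# Port speed = SerDes lane rate x number of lanes (illustrative values)
ethernet_generations = {
    # link speed: (SerDes rate in Gbps, lanes)
    "10GbE":  (10, 1),
    "40GbE":  (10, 4),
    "100GbE": (25, 4),
    "400GbE": (50, 8),
    "800GbE": (100, 8),   # once 100G SerDes become available
}

for link, (serdes_gbps, lanes) in ethernet_generations.items():
    print(f"{link}: {lanes} x {serdes_gbps}G SerDes = {serdes_gbps * lanes}G")
```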
The IEEE Standard
As stated earlier, IEEE 802.3bs specifies the requirements for 400 Gigabit Ethernet implementations. The following diagram depicts a typical vendor-specific implementation of the 400GbE layered architecture of IEEE 802.3bs. A semiconductor vendor may choose to implement the IEEE 802.3bs architecture in two chips or in a single chip [please refer to the diagram below]. Single-chip implementations are the most popular, and silicon that offers such a solution together with additional software features is known as an SoC (System on Chip). The sublayers specified in the IEEE 802.3bs architecture are drawn from the original 802.3 standard and its subsequent revisions for 1/10/100GbE. For those who are not familiar with the sublayers of the IEEE 802.3 architecture, a brief summary is provided below:
• MAC (Medium Access Control): A sublayer for framing, addressing and error detection.
• RS (Reconciliation Sublayer): Provides the interface between the MAC and the Ethernet PHY.
• PCS (Physical Coding Sublayer): Handles line coding (64B/66B), lane distribution and EEE functions.
• PMA (Physical Medium Attachment): Provides serialization and clock and data recovery.
• PMD (Physical Medium Dependent): The physical interface driver.
• MDI (Medium Dependent Interface): Defines the physical and electrical/optical interface between the physical layer implementation and the physical medium.
The CDMII sublayer is not implemented; it is reserved for future use and does not require a physical instantiation (D’Ambrosia, 2015).
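For orientation, these sublayers stack in a fixed order from the MAC client down to the medium; the sketch below simply restates the list above in that order (the one-line role strings are mine, not quotations from the standard):

```python
# IEEE 802.3bs sublayer stack, top (MAC client side) to bottom (medium side)
SUBLAYER_STACK = [
    ("MAC",   "framing, addressing, error detection"),
    ("RS",    "reconciliation between MAC and PHY"),
    ("CDMII", "logical 400G media-independent interface (not physically instantiated)"),
    ("PCS",   "line coding, lane distribution, EEE"),
    ("PMA",   "serialization, clock and data recovery"),
    ("PMD",   "physical interface driver"),
    ("MDI",   "interface to the physical medium"),
]

for name, role in SUBLAYER_STACK:
    print(f"{name:6s} {role}")
```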
Figure 2. A comparative view of the IEEE 802.3bs layered architecture of 400GbE and a vendor-specific implementation of 400GbE.
What’s in a box?
A typical SoC packs enough horsepower to support more than 12 Tbps of bandwidth in a 32-port 1 RU switch. For example, Delta's Agema® AGC032 supports up to 12.8 Tbps as a 32 x 400GbE switch (please refer to the figure below).
Figure 3. Agema® AGC032 12.8 Tbps 32-port 400GbE whitebox switch.
Such a design generally needs densely packed front-panel ports and uses the QSFP-DD form factor of optical transceiver for link-level transport. QSFP-DD expands the QSFP/QSFP28 pluggable module (commonly used in 40GbE and 100GbE respectively) from four electrical lanes to eight. The QSFP-DD MSA group defines the specification for QSFP-DD; if you need further details, please download the specification from http://www.qsfp-dd.com/specification/ (QSFP-DD MSA, 2017). The QSFP-DD specification (QSFP-DD MSA, 2017) defines power classes of up to 14 Watts for a single QSFP-DD module; however, depending upon the cage and module design, power consumption may vary (please refer to tables 5 and 6 of the QSFP-DD specification for further details). Similar to QSFP-DD, other MSA (Multi-Source Agreement) groups are also working on optical module specifications for 400G; a list of them is given below. Please note that an MSA is not a standards group but rather an interest group of optical transceiver vendors that typically specifies the form factor and electrical interface for optical modules.
MSA groups that are working on 400G optical modules:
• CFP (C Form-factor Pluggable): http://www.cfp-msa.org/
• OSFP (Octal Small Form Factor Pluggable): http://osfpmsa.org/
• COBO (Consortium for On-Board Optics): http://onboardoptics.org/
The Interconnects
400GbE supports four types of transceivers: 400G-DR4, 400G-SR16, 400G-FR8 and 400G-LR8. The following table lists the particulars of each type of interconnect.
Table 1. 400G Interconnect types, distance limit and signaling
requirements.
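The table itself is not reproduced here; as a rough summary drawn from the IEEE 802.3bs PMD definitions (worth checking against the standard and transceiver datasheets), the four interconnects break down roughly as follows:

```python
# Rough summary of the 400GbE interconnects named above (per IEEE 802.3bs;
# illustrative only -- verify reach and lane details against the standard)
interconnects = {
    # name:        (medium,            reach,   optical signaling)
    "400G-SR16": ("parallel MMF",      "100 m", "16 x 25G NRZ"),
    "400G-DR4":  ("parallel SMF",      "500 m", "4 x 100G PAM4"),
    "400G-FR8":  ("duplex SMF (WDM)",  "2 km",  "8 x 50G PAM4"),
    "400G-LR8":  ("duplex SMF (WDM)",  "10 km", "8 x 50G PAM4"),
}

for name, (medium, reach, signaling) in interconnects.items():
    print(f"{name:10s} {medium:18s} reach {reach:6s} {signaling}")
```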
Typical Deployment
400G Ethernet switches can be deployed in various configurations: as the super spine in a five-stage folded Clos in data centers, in Data Center Interconnect (DCI), in telecom networks for aggregation, and at IXCs for transport peering. The following diagrams show some typical deployments.
Figure 4. Typical deployment of a 400G whitebox switch (e.g. Agema® AGC032) as the super spine in a data center Clos architecture.
Figure 5. Typical deployment of 400G for edge aggregation in telecom networks.
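As a back-of-the-envelope illustration of the super-spine arithmetic behind a deployment like Figure 4 (the switch and leaf counts below are hypothetical, not taken from the figure):

```python
# Capacity sketch for a 400G leaf/spine fabric (hypothetical counts)
spine_ports = 32                 # e.g. a 32 x 400GbE 1 RU box used as a spine
num_spines = 4
uplinks_per_leaf = num_spines    # one 400G uplink from each leaf to each spine

max_leaves = spine_ports                     # each spine port serves one leaf
uplink_tbps_per_leaf = uplinks_per_leaf * 400 / 1000

print(f"Max leaves: {max_leaves}")
print(f"Uplink capacity per leaf: {uplink_tbps_per_leaf} Tbps")
print(f"Total leaf-to-spine capacity: {max_leaves * uplink_tbps_per_leaf} Tbps")
```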
For distances over 10 km, a 400G deployment may need to use transponders and ROADMs for better transport distribution over dark fiber. Hopefully, the distance limitation will be eased in the near future by new optical transceivers reaching up to 100 km. However, transponders and ROADMs will remain necessary for distances beyond 100 km, and even at shorter distances when a single fiber must be shared.
Conclusion
Welcome to the world of terabit transport. 400GbE is a great step towards terabit-per-port transport capability, and given that whitebox switches offer such possibilities at a better price point, data centers and service providers can now build infrastructure for future payloads. Hence, addressing bandwidth demands will not be an issue. With the significant CAPEX and OPEX reductions offered by whitebox switches, service providers and hyperscale data centers can now focus on more service offerings for what the future holds.
Reference:
1. Sverdlik, Y. 2015. Custom Google Data Center Network Pushes 1 Petabit Per Second. Data Center Knowledge. Available online at http://www.datacenterknowledge.com/archives/2015/06/18/custom-google-data-center-network-pushes-1-petabit-per-second
2. D’Ambrosia, 2015. IEEE P802.3bs Baseline Summary: Post July 2015 Plenary Summary. Available online at http://www.ieee802.org/3/bs/baseline_3bs_0715.pdf
3. QSFP-DD MSA, 2017. QSFP-DD Hardware Specification for QSFP Double Density 8X Pluggable Transceiver. Available at http://www.qsfp-dd.com/wp-content/uploads/2017/09/QSFP-DD-Hardware-rev3p0.pdf