
Product Brief

Intel® Infrastructure Processing Unit
Adapter E2100-CCQDA2
Enables rapid innovation for the modern data center

Key Features
• Intel® IPU SoC E2100 with 200Gb Ethernet bandwidth
• Supports 2x 100GbE, or 4x 25GbE, or 1x 200GbE interfaces
• Full-height, three-quarter length, PCIe form factor
• 16-lane PCIe 4.0
• 2x QSFP56 ports
• Single-width, passive heatsink
• USB and 1GbE out-of-band RJ45 management
• Total 48GB on-board LPDDR4x memory
The Intel® Infrastructure Processing Unit (Intel® IPU) Adapter E2100-CCQDA2 delivers infrastructure
acceleration, virtual storage enablement and enhanced security features in the data center. The adapter features
a rich packet-processing pipeline, 200Gb Ethernet bandwidth, and includes NVMe, compression and crypto
accelerators. The Arm Neoverse N1 compute complex allows customer-provided software to implement features
ranging from complex packet-processing pipelines to storage transport, device management, and telemetry. By
utilizing a combination of acceleration hardware and software running in the compute complex, this IPU adapter
enables rapid innovation necessary for the modern data center.

Agile Platform for Flexible Deployments

The E2100-CCQDA2 offers three main advantages to data center infrastructure managers and the workloads running in Cloud, Enterprise, and Telco data centers.

Separation and isolation of infrastructure workloads. Whether serving tenants in a cloud environment or application workloads in an edge or enterprise environment, IPUs optimize host CPU applications by removing the overhead of traditional host-based network and storage infrastructure applications.

Offload virtualized networks to the IPU, where the accelerators can process tasks more efficiently. In an IaaS, host CPUs can then be used for more workload-intensive tasks and greater revenue.

Replace previously necessary local disk storage with detached virtualized storage. This architecture enables flexible allocation of disk storage, lowering overall costs.

Network Data Processing

The E2100-CCQDA2 network subsystem supports 200Gb/s of throughput. The programmable packet processor delivers leadership support for switch offload, firewall, and telemetry functions while sustaining up to 200Mpps in real-world implementations. The NVMe offload engine exposes high-performance NVMe devices to the host processor, enabling infrastructure providers to use the IPU to implement their storage protocol of choice (e.g., hardware-accelerated NVMe over Fabrics or a custom software backend compute system). Additionally, the E2100 adapter provides inline IPsec, which can secure every packet sent across the network.
Improved Packet Processing Efficiency

The flexible packet processor enables data-plane use cases such as network virtualization, microservices, physical networking, and telemetry, as well as several legacy and advanced use cases for Cloud, Enterprise, and Telco.

The packet processing engine consists of the packet processor and traffic shaper, which support up to 200Mpps. Additionally, the packet processor supports a P4 Programmable Pipeline with Inline IPsec, Hardware Connection Tracking, and Stateful ACLs, providing flexibility for defining and customizing the behavior of network data planes.
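To make the match-action model concrete, here is a minimal software sketch, in Python rather than P4, of the kind of table lookup one stage of such a pipeline performs. Everything in it (the flow-key fields, the table contents, the action names) is illustrative and not Intel IPU SDK code; on the E2100 the equivalent logic runs in the hardware pipeline at up to 200Mpps.

    from typing import Callable, NamedTuple, Optional

    class FlowKey(NamedTuple):
        src_ip: str
        dst_ip: str
        dst_port: int

    def allow(pkt: dict) -> Optional[dict]:
        return pkt                          # forward the packet unchanged

    def drop(pkt: dict) -> Optional[dict]:
        return None                         # stateful-ACL style deny

    # Exact-match table: in hardware, one pipeline stage's match-action table.
    acl_table: dict[FlowKey, Callable[[dict], Optional[dict]]] = {
        FlowKey("10.0.0.1", "10.0.0.2", 443): allow,
    }

    def match_action(pkt: dict) -> Optional[dict]:
        key = FlowKey(pkt["src_ip"], pkt["dst_ip"], pkt["dst_port"])
        action = acl_table.get(key, drop)   # table miss -> default action
        return action(pkt)

    print(match_action({"src_ip": "10.0.0.1", "dst_ip": "10.0.0.2", "dst_port": 443}))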
Dedicated Compute for Infrastructure Processing

The adapter compute complex is equipped with 16 Arm Neoverse N1 cores. These cores run at up to 2.5GHz and are backed by a large 32MB system-level cache. Three channels of LPDDR4x memory are supported for high-bandwidth usage. Together, these features give this IPU the bandwidth and horsepower to take on large infrastructure workloads.

The compute complex is tightly coupled with the network subsystem, allowing accelerators to access the system-level cache as a last-level cache and providing high-bandwidth, low-latency connections. This architecture enables a combination of hardware and software packet processing, allowing for custom configurations. The Lookaside Crypto and Compression Engine is derived from Intel® QuickAssist Technology. Storage applications benefit from the compression engine while securely transmitting the data.

Use Cases

The Intel® Infrastructure Processing Unit Software Development Kit (Intel® IPU SDK) is a software stack that runs on the compute complex and the attached host. Developers can use the Intel® IPU SDK on the SoC to create targeted customer solutions.

• Tenant Hosting: Virtualized network and storage functionality; abstraction interface for tenants to access cloud services; hosts customer control plane; provides custom device support.
• Accelerators as a Service: Enables network-to-device memory data path; provides service abstraction for access to devices while implementing functions like QoS.
• Appliance: Performs packet processing either on a per-packet or per-flow basis; can soft terminate packets.
• Smart Switch: Multiple IPUs perform packet processing for select top-of-rack (ToR) packets.
• Kubernetes Acceleration: Kubernetes platform solution for Enterprise customers using container-based SW development models; offloads container networking and, optionally, container storage to the IPU.

Programmable Port Configuration


The port speed and the number of ports for this adapter can be configured on demand, reducing network adapter validation effort and simplifying deployment. Available port configurations: single-port 200GbE, dual-port 100GbE (2 lanes of 50Gb PAM4 or 4 lanes of 25Gb NRZ per port), and 4x25GbE breakout.
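As a quick illustration of how these modes decompose onto the adapter's SerDes lanes, here is a small Python sketch. The mode names and lane counts are taken from the figures above; treat it as an informal consistency check, not a configuration interface.

    # Port modes and their per-port lane composition (from the brief's figures).
    PORT_MODES = {
        "1x200GbE":         {"ports": 1, "lanes_per_port": 4, "gb_per_lane": 50},  # PAM4
        "2x100GbE (PAM4)":  {"ports": 2, "lanes_per_port": 2, "gb_per_lane": 50},
        "2x100GbE (NRZ)":   {"ports": 2, "lanes_per_port": 4, "gb_per_lane": 25},
        "4x25GbE breakout": {"ports": 4, "lanes_per_port": 1, "gb_per_lane": 25},  # NRZ
    }

    for mode, cfg in PORT_MODES.items():
        per_port = cfg["lanes_per_port"] * cfg["gb_per_lane"]
        total = per_port * cfg["ports"]
        print(f"{mode}: {per_port}GbE per port, {total}Gb aggregate")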

Intel IPU Adapter E2100-CCQDA2 is designed with Intel IPU SoC E2100 and includes these features:

Ethernet
• 2x QSFP56 ports
• Can support 2x 100GbE, or 4x 25GbE with breakout cable, or 1x 200GbE interfaces
• Network Controller Sideband Interface (NC-SI) on-board header

Compute Complex
• Up to 16 Arm Neoverse N1 cores at up to 2.5GHz with 64 KB L1 cache and 512 KB L2 cache per core
• Coherent Mesh Network Interconnect with 32MB System Level Cache (SLC)
• 3 channels of 16GB LPDDR4x memory totaling 48GB

PCIe
• PCIe 4.0 x16, SMBus
• Supports PCIe CEM 4.0 (electrical) and PCIe CEM 5.1 (mechanical) specifications
• Up to 150W 12V auxiliary power input

Storage Features
• NVMe: NVMe Initiator Offload
• Customized storage protocols with AES-XTS and CRC offloads on the Compute Complex
• Nonvolatile memory express (NVMe) storage device, total of 64GB

Management Interface
• 1000BASE-T front panel RJ45 port for E2100 manageability
• NC-SI supported through onboard header and cable
• NC-SI is supported in any state where 3.3V Aux and +12V CEM are present
• USB front panel debug port
• 1x RJ45 connector to the data center management network

NVMe Performance
• Up to 200Gbps line rate bi-directional throughput
• Hardware paths support up to 6M 4KB R/W IOPS simultaneously
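Those two NVMe figures are mutually consistent: at 4KB per operation (assuming 4KB means 4096 bytes), 6M IOPS moves roughly the data volume a 200Gbps link can carry. A quick illustrative Python check:

    iops, block_bytes = 6_000_000, 4 * 1024
    gbps = iops * block_bytes * 8 / 1e9
    print(f"{gbps:.1f} Gb/s")  # ~196.6 Gb/s, close to the 200Gbps line rate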
Packet Processing Engine
• P4 Programmable Pipeline with Inline IPsec, Hardware Connection Tracking and Stateful ACLs
• Up to 1M LPM routes, up to 16M exact-match entries, 1M meters/policers/shapers, TCAM and range tables
• Programmable Parsing, Multi-stage Match-Action, Mirroring, Multicast, Modification and Recirculation

Security and Crypto
• Inline IPsec engine supports PSP AES-GCM 128/256
• Lookaside Cryptography and Compression Engine (LCE)
• Support for chained operations
• 200Gb bulk crypto per direction, including TLS offload
• Internal/External RoT, Secure Boot, Secure Debug, TRNG via management complex
• Meets Security Standard SP800-193
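For readers unfamiliar with the cipher named above, the following minimal Python sketch shows AES-GCM-256 authenticated encryption in software, using the third-party cryptography package; the E2100's inline engine performs the same cipher in hardware at line rate. The key handling and payloads here are purely illustrative.

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=256)  # AES-GCM-256
    aesgcm = AESGCM(key)
    nonce = os.urandom(12)                     # 96-bit nonce, standard for GCM
    payload = b"example packet payload"
    header = b"example header"                 # authenticated but not encrypted
    ciphertext = aesgcm.encrypt(nonce, payload, header)
    assert aesgcm.decrypt(nonce, ciphertext, header) == payload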

Adapter Features
Data Rate Supported      200/100/25GbE
Bus Type/Bus Width       PCIe 4.0 x16
Controller               Intel IPU SoC E2100
Dimensions               256mm x 111mm; full-height, three-quarter length, single-width card, compliant with CEM 5.1 mechanical specification

Certifications and Compliance
Hardware Certifications  cURus, CE, FCC, ICES, CB, UKCA, VCCI, ACMA, KCC, BSMI and Morocco
RoHS Compliance          EU RoHS, BSMI RoHS, EU WEEE, EU REACH, China RoHS

Supported Physical Layer Interfaces

1x200GbE
• DACs: 200GBASE-CR4 DAC cables (Port 0 only)
• Optics and AOCs: N/A

2x100GbE
• DACs: 100GBASE-CR4 DAC cables
• Optics and AOCs: Up to Class 6 (3.5W) SR4 extended temp AOCs; up to Class 6 (3.5W) SR4 extended temp optics transceivers

4x25GbE
• DACs: 25GBASE-CR DAC cables (Port 0 only)
• Optics and AOCs: Up to Class 6 (3.5W) SR AOC breakouts

Technical Specifications
Storage Humidity Maximum: 85% relative humidity at 25 °C
Storage Temperature -40 °C to 70 °C (-40 °F to 158 °F)
Operating Temperature 0 °C to 45 °C (32 °F to 113 °F)

Product Order Code


Configuration    Product Code
Dual Port        E2100CCQDA2RJG1

Customer Support

For customer support options in North America visit:
intel.com/content/www/us/en/support/contact-support.html

Product Information

For information about Intel® Infrastructure Processing Units, visit: intel.com/ipu

No license (express or implied, by estoppel or otherwise) to any intellectual property rights is granted by this document. Intel disclaims all express and implied warranties, including without
limitation, the implied warranties of merchantability, fitness for a particular purpose, and non-infringement, as well as any warranty arising from course of performance, course of dealing,
or usage in trade.
This document contains information on products, services and/or processes in development. All information provided here is subject to change without notice. Contact your Intel
representative to obtain the latest forecast, schedule, specifications and roadmaps.
The products and services described may contain defects or errors which may cause deviations from published specifications.
© Intel Corporation. Intel, the Intel logo, Xeon, and other Intel marks are trademarks of Intel Corporation or its subsidiaries.
Other names and brands may be claimed as the property of others.

0324/ED/Axiom 816692-001US
