Cisco SD-Access: Training
Understanding LAN Network Architecture
End-to-end VLANs allow a host to exist anywhere on the campus network, while maintaining Layer-2
connectivity to its resources.
However, this flat design poses numerous challenges for scalability and performance:
· STP domains are very large, which may result in instability or convergence issues.
· Broadcasts proliferate throughout the entire campus network.
· Maintaining end-to-end VLANs adds administrative overhead.
· Troubleshooting issues can be difficult.
As network technology improved, centralization of resources became the dominant trend. Modern networks
adhere to the 20/80 design:
· 20 percent of traffic should remain on the local network.
· 80 percent of traffic should be routed to a remote network.
Instead of placing workgroup resources in every local network, most organizations centralize resources into
a datacenter environment. Layer-3 switching allows users to access these resources with minimal latency.
The 20/80 design encourages a local VLAN approach, where VLANs stay localized to a single switch or switch block.
Understanding LAN Network Architecture
The Cisco Hierarchical Network Model
To aid in designing scalable networks, Cisco developed a hierarchical
network model, which consists of three layers:
· Access layer
· Distribution layer
· Core layer
Understanding LAN Network Architecture
Cisco Hierarchical Model – Access Layer
The access layer is where users and hosts connect into the network. Switches at the access layer typically
have the following characteristics:
· High port density
· Low cost per port
· Scalable, redundant uplinks to higher layers
· Host-level functions such as VLANs, traffic filtering, and QoS
In an 80/20 design, resources are placed as close as possible to the users that require them. Thus, most traffic
will never need to leave the access layer.
In a 20/80 design, traffic must be forwarded through higher layers to reach centralized resources.
Understanding LAN Network Architecture
Cisco Hierarchical Model – Distribution Layer
The distribution layer is responsible for aggregating access layer switches, and connecting the access layer to the
core layer. Switches at the distribution layer typically have the following characteristics:
· Layer-3 or multilayer forwarding
· Traffic filtering and QoS
· Scalable, redundant links to the core and access layers
Historically, the distribution layer was the Layer-3 boundary in a hierarchical network design:
· The connection between access and distribution layers was Layer-2.
· The distribution switches were configured with VLAN SVIs.
· Hosts in the access layer used the SVIs as their default gateway.
This remains a common design today.
However, pushing Layer-3 to the access-layer has become increasingly prevalent. VLAN SVIs are configured on
the access layer switch, which hosts will use as their default gateway.
A routed connection is then used between access and distribution layers, further minimizing STP convergence
issues and limiting broadcast traffic.
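As an illustration of this routed-access design, here is a minimal sketch, assuming IOS-XE and illustrative VLAN, addressing, and routing-protocol choices (any IGP works):

! Access switch: the host SVI is the default gateway
interface Vlan10
 ip address 10.1.10.1 255.255.255.0
!
! Routed (Layer-3) uplink toward the distribution layer
interface TenGigabitEthernet1/0/48
 no switchport
 ip address 10.0.0.1 255.255.255.252
!
router ospf 1
 network 10.0.0.0 0.0.0.3 area 0
 network 10.1.10.0 0.0.0.255 area 0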
Understanding LAN Network Architecture
Cisco Hierarchical Model – Core Layer
The core layer is responsible for connecting all distribution layer switches. The core is often referred to as the network backbone, as it forwards traffic to and from every end of the network.
Proper core layer design is focused on speed and efficiency. In a 20/80 design, most traffic will traverse the core layer.
Thus, core switches are often the highest-capacity switches in the campus environment.
Smaller campus environments may not require a clearly defined core layer separated from the distribution layer.
Often, the functions of the core and distribution layers are combined into a single layer. This is referred to as a
collapsed core design.
Understanding LAN Network Architecture
Cisco Hierarchical Model – Practical Application
A hierarchical approach to network design enforces scalability and manageability. Within this
framework, the network can be compartmentalized into modular blocks, based on function.
A typical design includes the following common block types:
· User block – containing end users
· Server block – containing the resources accessed by users
· Edge block – containing the routers and firewalls that connect users to the WAN or
Internet
Blocks connect to one another through the core layer, which is often referred to as the core block. Connections from one layer to another should always be redundant.
A large campus environment may contain multiple user, server, or edge blocks. Limiting bottlenecks and broadcast traffic is a key consideration when determining the size of a block.
Understanding different Models of Switches and their role in Campus LAN network
Cisco Catalyst 9200 Series
Cisco Catalyst 9200 Series switches improve network performance and simplify IT operations. As part of the award-winning Catalyst 9000 family, the platform provides best-of-breed capabilities not offered by other switches in its class. Enjoy innovations in advanced telemetry, automation, and security, while achieving twice the performance of the previous generation.
Features:
▪ Optional stacking, Layers 2 and 3, up to 160 Gbps
▪ Up to 48 ports full Perpetual PoE+ and multigigabit
▪ 1G/10G/25G/40G uplinks
▪ Entry level for intent-based networking
Understanding different Models of Switches and their role in Campus LAN network
Cisco Catalyst 9300 Series
The Catalyst 9300 Series breaks new ground, with up to 1 Tbps of capacity in a stackable switching platform. And for
security, IoT, and the cloud, these switches form the foundation of Cisco Software-Defined Access, our leading
enterprise architecture.
Features:
▪ Stackable, Layers 2 and 3, up to 1 Tbps
▪ PoE, PoE+, UPOE, UPOE+, Cisco StackPower
▪ 25G/10G fiber, 1G/2.5G/5G/10G multigigabit; multigigabit/25G/40G/100G uplinks
Understanding different Models of Switches and their role in Campus LAN network
Cisco Catalyst 9400 Series
The Catalyst 9400 Series is the next generation of modular access switches built for security, flexibility, IoT, and smart
buildings. They deliver high availability, support up to 9 Tbps, and provide the latest in 90-watt UPOE+, forming the
foundation for the return to the trusted workplace.
Features:
▪ Modular, Layers 2 and 3, up to 9 Tbps
▪ Cisco Multigigabit Technology, SFP/SFP+
▪ PoE, PoE+, UPOE, UPOE+
▪ Designed for Cisco DNA and Cisco SD-Access
Understanding different Models of Switches and their role in Campus LAN network
Cisco Catalyst 1000 Series
Cisco Catalyst 1000 Series switches provide enterprise-grade network access sized for small businesses. With a wide
range of Power over Ethernet (PoE) and port combinations, these easy-to-manage switches provide the performance
a modern small office needs.
Features:
▪ Fanless design
▪ Data, PoE, or PoE+, 2 SFP uplinks
▪ Extended temperature range
▪ Managed with web UI or CLI
Understanding different Models of Switches and their role in Campus LAN network
Cisco Catalyst PON (Passive Optical Network) Series
With enterprise-grade features such as power and uplink redundancy, Power over Ethernet (PoE+), and simple, low-cost operations, the Cisco Catalyst PON Series gives you what you need today in a simple, safe, and cost-effective GPON solution.
Features:
▪ 1G data, POTS, CATV, Wi-Fi, and PoE+
▪ Redundant fans and power supply
▪ Managed with CLI or free Cisco Catalyst PON Manager
Understanding different Models of Switches and their role in Campus LAN network
Meraki Series Switches
Understanding Software-Defined Networking (SDN) and Cisco's Approach to SDN
Quick Overview
Software Defined Networking
SDN Definition (ONF): The physical separation of the network control plane from the forwarding plane, and
where a control plane controls several devices.
[Diagram: SDN quick overview. Applications (e.g. OpenStack, Puppet/Chef, NSO) talk to an SDN controller (e.g. ODL, OSC, APIC, Contrail) over a REST northbound API; the controller programs devices over southbound APIs such as OpenFlow, NETCONF, and OpFlex, or builds overlay protocols (e.g. VXLAN) on top of traditional device control planes managed via CLI, SNMP, NetFlow, and vendor-specific APIs (e.g. Nexus API, PCEP, I2RS). In SDN, not all processing happens inside the device.]
Device Programmability Options – No Single Answer!
Options span C/Java, Python, NETCONF, REST, OpenFlow, ACI Fabric, OpenStack, Puppet, protocols, and more. By layer:
· Management – NETCONF, RESTful APIs, Puppet, …
· Orchestration – OpenStack Neutron
· Network Services – "protocols" such as BGP, PCEP, …
· Control – OpFlex
· Forwarding – OpenFlow
Data models and encodings: YANG, JSON.
Cisco Architectural Vision
SDN/NFV and Orchestration enable change:
· Service Orchestration – automation, provisioning, and interworking of physical and virtual resources
· NFV – network functions and software running on any open standards-based hardware
· SDN – control and data plane separation, centralized control, abstraction and programmability
· Traditional – distributed control plane components, physical entities
Understanding Cisco's Approach of SDN for LAN and WAN
Cisco's approach of SDN for LAN and WAN is divided into:
· SD-WAN – the SDN approach for the WAN
· SD-Access – the SDN approach for the LAN
Note: This training covers SD-Access in detail.
SD-Access Architecture Overview
Fabric Fundamentals
Architecture | Key Components | Fabric Constructs
Cisco's Intent-Based Networking
[Diagram: Policy, Automation, and Analytics spanning the campus fabric edge, SD-WAN branch, wireless, and SaaS, with Learning, Control, and Security throughout.]
Cisco Software Defined Access
The Foundation for Cisco's Intent-Based Network
Cisco DNA Center provides Policy, Automation, and Assurance on top of the fabric:
· Identity-Based Policy and Segmentation – policy definition decoupled from VLAN and IP address
· Automated Network Fabric – a single fabric for wired and wireless with full automation
· Insights and Telemetry – analytics and insights into user and application experience
· User Mobility – policy follows the user
[Diagram: the SD-Access fabric and SD-Access Extension connecting an IoT network and an employee network to the outside network.]
What is Cisco SD-Access?
Campus Fabric + Cisco DNA Center (Automation and Assurance)
▪ Cisco SD-Access pairs the Campus Fabric with Cisco DNA Center, which consolidates the earlier APIC-EM 1.X, NCP, NDP, and PI tooling and integrates with ISE.
▪ The GUI approach provides automation and assurance of all Fabric configuration, management, and group-based policy.
A Fabric is an Overlay
An Overlay network is a logical topology used to virtually connect
devices, built over an arbitrary physical Underlay topology.
An Overlay network often uses alternate forwarding attributes to provide
additional services, not provided by the Underlay.
SD-Access
Fabric Terminology
[Diagram: an overlay fabric with encapsulation between fabric nodes, and hosts (end-points) attached at the edge.]
Cisco SD-Access
Fabric Roles & Terminology
▪ Network Automation – simple GUI and APIs for intent-based automation of wired and wireless fabric devices (Cisco DNA Center)
▪ Network Assurance – data collectors analyze Endpoint to Application flows and monitor fabric network status
▪ Identity Services – NAC & ID services (e.g. ISE) for dynamic Endpoint to Group mapping and policy definition
▪ Control-Plane Nodes – map system that manages Endpoint to Device relationships
▪ Fabric Border Nodes – a fabric device (e.g. Core) that connects external L3 network(s) to the SD-Access fabric
[Diagram: fabric topology showing Fabric Border Nodes (B), Control-Plane Nodes (C), Intermediate Nodes (underlay), and Fabric Wireless Controllers.]
Cisco SD-Access Architecture
Control-Plane Nodes – A Closer Look
The Control-Plane Node runs a Host Tracking Database that maps Endpoint IDs to location information:
• The Host Database supports multiple Endpoint ID lookup types (IPv4, IPv6 or MAC)
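A minimal sketch of the LISP map-server/map-resolver configuration behind this host tracking database, assuming IOS-XE and reusing the site name, instance ID, and prefix that appear later in this training (the authentication key is a placeholder; Cisco DNA Center automates all of this):

router lisp
 site site_sjc
  authentication-key <key>
  eid-prefix instance-id 4099 10.2.1.0/24 accept-more-specifics
 exit
 ipv4 map-server
 ipv4 map-resolver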
SD-Access Platforms
Fabric Control Plane
For more details: cs.co/sda-compatibility-matrix
Introduction to SD-Access Campus Fabric
Architecture | Key Components | Fabric Constructs
SD-Access Fabric
Edge Nodes – A Closer Look
The Edge Node provides first-hop services for users and devices connected to the Fabric:
• Registers specific Endpoint ID info (e.g. /32 or /128) with the Control-Plane Node(s)
SD-Access Fabric
Border Nodes
The Border Node is the entry and exit point for data traffic going into and out of the Fabric.
For more details: cs.co/sda-compatibility-matrix
SD-Access Platforms
Fabric Edge Node
SD-Access Fabric
Host Pools – A Closer Look
• The Fabric uses Dynamic EID mapping to advertise each Host Pool (per Instance ID)
• Fabric Dynamic EID allows host-specific (/32, /128 or MAC) advertisement and mobility
[Diagram: host pools spread across the fabric edges, with individual hosts (.4, .8, .11, .12, .13, .17, .19, .23, .25) belonging to their pools.]
SD-Access Fabric
Anycast Gateway – A Closer Look
• The same Switch Virtual Interface (SVI) is present on EVERY Edge, with the SAME virtual IP and MAC
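A minimal sketch of what that anycast gateway SVI looks like on each fabric edge, reusing the VRF, VLAN, and dynamic-EID names from the lab later in this training (the virtual IP and MAC are illustrative; Cisco DNA Center generates this configuration):

interface Vlan1021
 description Anycast gateway for host pool 10.2.1.0/24
 mac-address 0000.0c9f.f45c
 vrf forwarding Campus
 ip address 10.2.1.1 255.255.255.0
 lisp mobility 10_2_1_0-Campus
 no shutdown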
SD-Access Fabric
Campus Fabric - Key Components
LISP - Locator / ID Separation Protocol
Location and Identity Separation
Traditional behavior – Location + ID are "combined":
· A device's IPv4 or IPv6 address represents both its identity and its location.
· When the device moves, it gets a new IPv4 or IPv6 address for its new identity and location.
Overlay behavior – Location and ID are separated:
· The endpoint keeps its address (its identity, or EID) when it moves; only its routing locator (RLOC) changes.
· A mapping database tracks which EID prefix sits behind which RLOC, for example:

  Prefix (EID)      RLOC
  192.158.28.101    171.68.226.120
  189.16.17.89      171.68.226.120
  22.78.190.64      171.68.226.121
  172.16.19.90      171.68.226.120
  192.58.28.128     171.68.228.121

· Example database mapping entries: 172.16.101.11/16 -> 192.168.1.11 (Employee SGT) and 172.16.101.12/16 -> 192.168.1.13 (Contractor SGT), both in the Corporate VN.
· The External Border node acts as a PxTR (LISP Proxy Tunnel Router) and provides the default gateway when no mapping exists.
SD-Access Fabric
Key Components - LISP
BEFORE – IP address = Location + Identity: routing protocols with a local L3 gateway push the full set of topology routes plus endpoint routes to every node, meaning big tables and more CPU.
AFTER – Separate Identity from Location: with LISP and an Anycast L3 gateway, each node holds only topology routes plus a small local database and map-cache; endpoint routes are consolidated into the LISP mapping database. Small tables and less CPU.

  Prefix (EID)     RLOC
  189.16.17.89     171.68.226.120
  22.78.190.64     171.68.226.121
  172.16.19.90     171.68.226.120
  192.58.28.128    171.68.228.121
Fabric Operation (Control-Plane)
• The Control-Plane node stores the EID to RLOC mappings registered by the Edges:

  EID           RLOC
  a.a.a.0/24    w.x.y.1
  b.b.b.0/24    x.y.w.2
  c.c.c.0/24    z.q.r.5
  d.d.0.0/16    z.q.r.5
Fabric Operation
Fabric Internal Forwarding (Edge to Edge)
1. Source S (10.1.0.1) resolves the destination via DNS (D.abc.com A 10.2.2.2) and sends a packet toward 10.2.2.2.
2. The ingress Fabric Edge receives the packet and asks the Control-Plane for the location of 10.2.2.2.
3. The Control-Plane returns the mapping entry – EID-prefix: 10.2.2.2/32, Locator-set: 2.1.2.1, priority: 1, weight: 100 (path preference is controlled by the destination site).
4. The ingress Edge encapsulates the traffic to the destination RLOC.
5. The egress Fabric Edge decapsulates the traffic and delivers it to D (10.2.2.2).
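After step 3, the ingress Edge caches the answer so later packets forward without another lookup. A sketch of inspecting that cache, assuming the instance ID and VRF used later in this training (output abbreviated and illustrative):

FE1#show ip lisp instance-id 4099 map-cache
LISP IPv4 Mapping Cache for EID-table vrf Campus (IID 4099), 1 entries
10.2.2.2/32, uptime: 00:01:10, expires: 23:58:49, via map-reply, complete
  Locator  Uptime    State  Pri/Wgt
  2.1.2.1  00:01:10  up     1/100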
Fabric Operation
Forwarding from Outside (Border to Edge)
1. An outside source S (192.3.0.1) resolves the destination via DNS (D.abc.com A 10.2.2.2) and sends a packet toward 10.2.2.2.
2. The Fabric Border (4.4.4.4) receives the packet and asks the Control-Plane for the location of 10.2.2.2.
3. The Control-Plane returns the mapping entry – EID-Prefix: 10.2.2.2/32, Locator-Set: 2.1.2.1, priority: 1, weight: 100.
4. The Border encapsulates the traffic to the destination RLOC.
5. The egress Fabric Edge decapsulates the traffic and delivers it to D (10.2.2.2).
Fabric Operation
Host Mobility – Dynamic EID Migration
When host 10.2.1.10 moves from Campus Bldg 1 to Campus Bldg 2:
1. The host attaches to the new Fabric Edge, which detects it and installs a local host route (10.2.1.10/32 – Local) alongside its subnet route (10.2.1.0/24 – Local).
2. The new Edge sends a Map-Register for the host EID to the Fabric Control Plane (Mapping System).
3. The Mapping System updates the host entry from 10.2.1.10/32 – 12.1.1.1 to 10.2.1.10/32 – 12.2.2.1, leaving aggregate entries (e.g. 10.10.0.0/16 – 12.0.0.1) unchanged.
4. On the original Edge, the host route is replaced by an overlay entry (10.2.1.10/32 – LISP0), so traffic arriving at the old location follows the fabric to the host's new Edge.
SD-Access Fabric - VXLAN
VXLAN Data Plane
The VXLAN header contains a VNID (VXLAN Network Identifier) field; at 24 bits, it allows up to 16 million virtual networks (VRFs).
[Diagram: Cisco DNA Center and ISE above the fabric; VXLAN-encapsulated traffic carried within the Corporate VN.]
SD-Access Fabric
Cisco TrustSec Policy Plane
A Scalable Group Tag (SGT) is a logical construct defined/identified based on the user and/or device context.
[Diagram: Cisco DNA Center (Automation, Analytics, Policy) and ISE above the fabric, with an IoT VN and a Corporate VN.]
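A minimal sketch of SGT classification and enforcement on a fabric edge, using a static IP-to-SGT binding purely for illustration (in SD-Access, ISE assigns SGTs dynamically and Cisco DNA Center pushes the group-based policy; the tag value is assumed):

! Statically bind a host IP to SGT 16 (illustrative tag value)
cts role-based sgt-map 10.2.1.99 sgt 16
! Enable SGACL enforcement globally and on the host VLAN
cts role-based enforcement
cts role-based enforcement vlan-list 1021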
SD-Access Fabric – Virtual Networks
How VNs work in SD-Access
Introduction to Cisco DNA Center Policy App and Cisco ISE
AAA/ISE Integration
AAA Server - ISE Integration
Objectives and Key Points
• Single pane of management for all AAA/policy administration between network devices and ISE
• Automates RADIUS/TACACS configuration for network devices
• Supports only one ISE cluster
• Enables secure services between DNA-C and ISE:
o pxGrid service to pull info out of ISE (uni-directional):
▪ Obtain TrustSec metadata such as SGTs, IP-SGT mappings, and TrustSec policy
o ERS APIs (bi-directional communication):
▪ Fetch the deployment model from ISE, such as PAN and PSN info
▪ Add devices to ISE as network devices
▪ Create SGTs, IP-SGT mappings, and TrustSec policy on ISE
AAA Server - ISE Integration
Pre-Requisites
Policy Preview
AAA Server - (Non-ISE) Integration
Key Points:
• Non-ISE server definition:
• ISE running 2.2 or below
• ACS or any third-party RADIUS server
• Only automates RADIUS/TACACS configuration for network devices
• Network devices must be added to the AAA server manually
• Can have multiple AAA servers
ISE – Cisco DNA Center Operation
[Diagram: the admin operates through DNA-Center, which syncs configuration to network devices and exchanges context about users and things with ISE.]
SD-Access Policy
Two Level Hierarchy - Macro Level
Virtual Networks (VNs) divide known and unknown networks into separate forwarding domains (e.g. VN "A", VN "B", VN "C") within the SD-Access fabric.
First-level segmentation ensures zero communication between forwarding domains, and provides the ability to consolidate multiple networks into one management plane.
SD-Access Policy
Two Level Hierarchy - Micro Level
Scalable Groups (SGs) subdivide known and unknown networks into groups (e.g. SG1 through SG9) within the fabric.
Second-level segmentation ensures role-based access control between two groups within a Virtual Network, and provides the ability to segment the network into either lines of business or functional blocks.
Cisco SD-Access Fabric – Endpoint Registration
Host Registration
[Diagram: SDA fabric with a Border node (B) and Fabric WLC; the Control Plane state is populated as hosts register.]
Cisco SD-Access Fabric Architecture
Host Registration
3. The control plane node, upon receiving the map-register, populates its database tables for the host.
4. The control plane then takes the information from the IP and MAC tables and populates the ARP table.
Validating L3 EID
IP address of the Host

Control-Plane#show lisp site
LISP Site Registration Information
* = Some locators are down or unreachable
# = Some registrations are sourced by reliable transport
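A sample of the registration table that follows this legend, mirroring the entries shown in the troubleshooting section later in this training (timers are illustrative):

Site Name     Last      Up   Who Last      Inst  EID Prefix
              Register       Registered    ID
site_sjc      never     no   --            4099  10.2.1.0/24
              3d23h     yes# 10.2.120.1    4099  10.2.1.99/32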
Validating L2 EID
MAC address of the Host
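A minimal sketch of the equivalent check for the MAC address registration on the control plane, assuming a Layer-2 instance ID of 8188 (L2 instance IDs vary per deployment) and reusing the host MAC from this training:

Control-Plane#show lisp instance-id 8188 ethernet server
LISP Site Registration Information
Site Name     Last      Up   Who Last      Inst  EID Prefix
              Register       Registered    ID
site_sjc      3d23h     yes  10.2.120.1    8188  0013.a91f.b2b0/48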
Introduction to Campus Fabric External Connectivity in SD-Access (Transit)
Cisco SD-Access: IP as Transit / Peer Network
IP Transit / Peer Network
Network Plane Analysis Perspectives
Communicating to Peer Network – IP
Control/Data/Policy Plane

  Plane          Inside Fabric   Border Handoff   External/Peer Domain
  CONTROL-PLANE  LISP            eBGP             BGP/IGP
  DATA-PLANE     VXLAN           VRF-LITE         IP/MPLS/VXLAN
  POLICY-PLANE   SGT in VXLAN    SGT tagging*     IP ACL/SGT

• *Manual, and every hop needs to support SGT propagation
Inter-Connecting Fabrics/Sites
IP-Based WAN
MANAGEMENT & POLICY: Cisco DNA-Center manages both fabric sites; SGTs are carried between sites in SXP, per VRF.
CONTROL-PLANE: LISP within each fabric site; BGP with VRF-lite at the site Borders; MP-BGP or other protocols across the WAN.
DATA-PLANE: VXLAN header (24-bit VNID + 16-bit SGT) within each site; 802.1Q (12-bit VLAN ID) at the Border handoff; MPLS labels (20-bit VPN ID) across the WAN.
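A minimal sketch of the SXP peering that carries those SGT bindings across the IP WAN, with assumed peer addresses and password (the per-VRF connection design varies by deployment):

cts sxp enable
cts sxp default password cisco123
! This side speaks (sends) IP-to-SGT bindings to the remote listener
cts sxp connection peer 192.168.50.2 source 192.168.50.1 password default mode local speaker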
Inter-Connecting Fabrics/Sites
DMVPN
CONTROL-PLANE: LISP within each site, carried over DMVPN/GRE between sites.
DATA/POLICY-PLANE: VXLAN+SGT within each site; IP with SGT inline tagging across the DMVPN tunnels over the IP network.
Cisco SD-Access: SD-Access as Transit
Cisco SD-Access Multi-Site
Consistent Segmentation and Policy across sites
Advantages:
· End-to-end segmentation and policy
· Smaller or isolated failure domains
· Horizontally scaled networks
· Single view of the entire network
· Local breakout at each site for Direct Internet Access (DIA) and other services
· Elimination of the fusion router at every site*
[Diagram: HQ, Metro sites, and Campus 1-3 interconnected by the Cisco SD-Access Transit, with shared Cloud and Data Center services.]
Cisco SD-Access Multi-Site
Key Considerations:
· High-bandwidth connection between sites (Ethernet full port speed with no sub-rate services)
Cisco SD-Access Multi-Site – SD-Access Transit
CONTROL-PLANE: LISP end-to-end – site control-plane nodes (C) at each fabric site plus transit control-plane nodes (TC) in the SD-Access Transit, all managed by Cisco DNA-Center.
DATA+POLICY-PLANE: VXLAN+SGT end-to-end across the sites and the transit.
Cisco SD-Access Transit Control Plane for Global Scale
The West site control plane holds West site prefixes only, and the East site control plane holds East site prefixes only; the transit control plane holds East + West prefixes. Site borders (BR-W, BR-E) connect each site to the SD-Access Transit.
Cisco SD-Access Multi-Site
Transit Control Plane Deployment Location
[Diagram: site control-plane nodes (C) remain at the West and East sites; dedicated transit control-plane nodes (TC) are deployed in the SD-Access Transit between the site borders (BR-W, BR-E).]
Cisco SD-Access Multisite
Fabric Border Support Matrix

  Platform   SD-Access Transit   IP-Based Transit
  C6K        No                  Yes
  N7K        No                  Yes
Cisco SD-Access for Distributed Campus
Cisco SD-Access Transit
Key Decision Points:
• Tends to be like a Metro area with multiple buildings or sites
• Requires direct Internet access at multiple sites (multiple exits)
• Requires local resiliency and smaller fault domains
• 2 Transit control-plane nodes
[Diagram: remote building sites (B1, B2 … BN) and an HQ campus joined over the MAN by the Cisco SD-Access Transit; the DC hosts Cisco DNA Center (5-7 node NCP + NDP) and DDI services (DHCP, DNS, IPAM).]
Understanding Handoff in Cisco SD-Access for External Connectivity
SD-Access Fabric
How VNs work in SD-Access
SD-Access Fabric – L3 Handoff, Fusion Router and Route Leaking
The Border Node hands off each VN over 802.1Q subinterfaces/SVIs (e.g. SVI A/SVI B toward G0/0/0.A and G0/0/0.B) to a fusion router. The Border runs MP-BGP with per-VRF address families toward the fusion router, which leaks routes between the user VRFs and the shared-services/global VRF via route-targets:

ip vrf USERS
 rd 1:4099
 route-target export 1:4099
 route-target import 1:4099
!
ip vrf GLOBAL
 rd 1:4097
 route-target export 1:4097
 route-target import 1:4097
 route-target export 1:4099
[Diagram: Edge Node – ISIS underlay – Border Node – BGP subinterface handoff – Fusion Router – switch hosting Shared Services in VRF A; the Control Plane node sits inside the fabric.]
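A minimal sketch of the fusion-router side of this per-VRF handoff, with assumed VLAN, addressing, and AS numbers (Cisco DNA Center generates the matching configuration on the Border):

interface GigabitEthernet0/0/0.3001
 description Handoff for VRF USERS
 encapsulation dot1Q 3001
 ip vrf forwarding USERS
 ip address 192.168.101.2 255.255.255.252
!
router bgp 65002
 address-family ipv4 vrf USERS
  neighbor 192.168.101.1 remote-as 65001
  neighbor 192.168.101.1 activate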
Cisco's Intent-Based Networking
[Diagram: Cisco DNA Center ("The Network. Intuitive.") provides Policy, Automation, and Analytics. Intent flows down to, and Context flows up from, the network infrastructure – switching, routers, wireless, and security. Powered by intent. Informed by context. Constantly learning.]
Cisco DNA-Center - PnP DeepDive
❑ PnP Overview
❑ Provision Workflow
Day 0 Deployment Challenges
Cisco DNA Automation with Plug & Play delivers roughly 50% Day-0 OPEX savings* compared to deployment without automation. Order equipment, then deploy the device on site:
• Drop-ship devices
• Centralized device discovery (DHCP, DNS, Cloud)
• Non-technical installer at site
• Template based configurations
• Secure SUDI authentication
PnP Solution Components
1. Cisco DNA Center (PnP Server) – auto-provisions devices with images and configs
2. PnP Agent – runs on Cisco® switches, routers, and wireless APs
3. PnP Protocol – HTTPS/XML-based open-schema protocol between agent and server
4. PnP Connect – cloud-based device discovery; redirects devices to the on-prem Cisco DNA Center over SSL
5. PnP Helper App* – delivers bootstrap, status, and troubleshooting checks
Discovery options:
• DNS lookup – pnpserver.localdomain resolves to the Cisco DNA Center IP address (wireless Access Points, Catalyst® switches)
• USB-based bootstrapping – router-confg/router.cfg/ciscortr.cfg (routers and Cat9K switches only)
PnP Claim Approaches
Note: manual discovery is not supported for Access Points.
• Good – Manual: without a Smart Account, the device SN is entered by hand and the pre-provision workflow is securely pushed to the device.
• Better – Order with Smart Account in CCW: the device SN flows into Cisco DNA Center for provisioning, and the workflow is securely pushed to the device once the installer brings it up.
• Best – Cloud-sync workflow: PnP Connect syncs device SNs from the Smart Account and redirects devices at the customer site/branch.
Day-0 Automation – From Order To Provision
1. The admin places a CCW order referencing the customer Smart Account; the Cisco® supply chain adds the device SN to the customer Smart Account.
2. The device SNs per Smart Account become available in PnP Connect (cloud-based device discovery).
3. Cisco DNA Center registers its identity with PnP Connect and downloads the SNs per Smart Account from PnP Connect.
4. On power-up at the customer premises, the device contacts PnP Connect over SSL and is instructed to contact the on-prem controller; it then connects to Cisco DNA Center over SSL.
5. Cisco DNA Center deploys the image and configuration.
PnP Workflows = Design + Provision
• PnP is achieved through two major phases: Design and Provision.
• Design Phase – build sites, settings, images, templates, and profiles.
• Provision Phase – claim the device via PnP (apply device credentials, image, and the CLI template in the profile).
• The device is mapped to a site; network devices inherit the properties of the profile associated to the site.
Network Profiles
Intent Based Design
• Only one of each type of network profile is allowed per site.
• A profile combines user-defined configuration (CLI templates, device credentials) with system-generated configuration orchestrated by the Cisco DNA Center UI.
Network Settings:
• Device Credentials
• AAA (RADIUS and TACACS)
• DHCP and DNS
• Syslog, SNMP, and NetFlow Collector
• NTP Server
• Message of the Day
Device Onboarding Design Workflow
Switching/Routing
1. Create Sites
2. Define Network Settings
3. Define Golden Image (Optional)
4. Create Onboarding Templates (Optional)
5. Define Network Profile
6. Assign Network Profile to Sites
Device Onboarding Design Workflow
1) Create Sites: build the site hierarchy at the Area, Building, and Floor levels.
Device Onboarding Design Workflow
2a) AAA Settings: TACACS pointing at ISE (Policy Admin Node and Policy Service Node).
Device Onboarding Design Workflow
2b) Non-AAA Common Settings
Device Onboarding Design Workflow
2c) Site-Level Inheritance and Override: settings inherit down the site hierarchy (inheritance icon) and can be overridden per site (flagged "Overridden").
Device Onboarding Design Workflow
2d) Device Credentials
Note that users MUST select and save the credentials at the site or global level after creating them. During PnP onboarding, Cisco DNA Center will not proceed unless the credentials at the site are selected and saved.
Device Onboarding Design Workflow
Image Repository Key Points
• "Marking Golden Image" is a key concept in Cisco DNA Center to standardize the image version across the enterprise.
• Prior to the Cisco DNA Center 1.2.8 release, the "Marking Golden Image" functionality was only available when devices were part of the inventory.
• As of the Cisco DNA Center 1.2.8 release, this functionality extends to the Day-0 device onboarding scenario, where devices are not part of the inventory yet.
• Only a "Golden" image is eligible for Day-0 PnP claim after the 1.2.8 release.
• Only router and switch software upgrades are supported for Day-0 PnP claim.
Device Onboarding Design Workflow
3a) Import Image
Device Onboarding Design Workflow
3a) Import Image (continued)
A few minutes after the import completes, the imported image is shown under the new generic family called "Imported Images".
Device Onboarding Design Workflow
3b) Assign Device Family to Image
Device Onboarding Design Workflow
3c) Mark Golden Image
Device Onboarding Design Workflow
Template Editor Key Points
• There is a new out-of-box template project, "Onboarding Configuration".
• It is intended for Day-0 PnP claim only; only templates under it are eligible for Day-0 PnP claim.
• User-defined projects, and the templates under them, are for Day-N provisioning.
• PnP claim supports only a single template, not composite or multiple templates.
• Only routing and switching Day-0 onboarding templates are supported.
Device Onboarding Design Workflow
The Template Editor separates user-defined Day-N projects from the out-of-box Day-0 project; Day-0 templates may include variables.
Only the latest committed version of a template can be used for provisioning, including PnP claim.
Device Onboarding Design Workflow
How to use a desired static IP, rather than DHCP, as the device management IP after PnP claim?
Answer: Use "ip http client source-interface <>" in the template, which makes the device use this new IP address to call home to the PnP server after the config is applied. The PnP server then hands it to the inventory component of Cisco DNA Center as the management IP. This is especially useful for scenarios with multiple IPs or VRFs. It is also recommended to shut down or remove DHCP from the original interface used for PnP discovery, to ensure the previous HTTPS session with the DHCP IP is cleared.
Note that a switch uses VLAN 1 for DHCP and PnP discovery by default.
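A minimal sketch of a Day-0 onboarding template that applies this answer, with assumed interface and addressing values:

! Desired static management IP on a new SVI (values assumed)
interface Vlan100
 ip address 10.10.10.5 255.255.255.0
 no shutdown
!
! Make the device call home to the PnP server from the new static IP
ip http client source-interface Vlan100
!
! Clear DHCP from the default discovery interface (VLAN 1) so the old
! HTTPS session with the DHCP IP is cleared
interface Vlan1
 no ip address
 shutdown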
Device Onboarding Design Workflow
Define Network Profile: users can add an onboarding template defined previously in the "Template Editor".
Device Onboarding Design Workflow
Assign Network Profile to Sites: note that users can only apply one of each type of network profile per site.
PnP Provision Workflow – Switch
• Step 0 – Plan for PnP Discovery: plan DHCP Option 43 or DNS for devices to discover Cisco DNA Center.
• Step 1 – Claim to Site via PnP. What is provisioned? Part 1 – the PnP claim itself; Part 2 – add to inventory (plus Device Controllability, if it is enabled).
• Step 2 – Complete Profile Provisioning. What is provisioned? The network settings, device credentials, and CLI template(s) of the profile.
Device Provision Workflow – Step 0 PnP Discovery (DHCP Option 43)
DHCP Option 43 carries the Cisco DNA Center IP so devices can call home:
Option 43: 5A1D;B2;K4;I192.168.139.151;J80
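A minimal sketch of an upstream IOS DHCP pool carrying this Option 43 string (the pool addressing is assumed; the Option 43 value is the one shown above):

ip dhcp pool PNP_POOL
 network 192.168.139.0 255.255.255.0
 default-router 192.168.139.1
 option 43 ascii "5A1D;B2;K4;I192.168.139.151;J80"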
Device Provision Workflow – Step 1 PnP Claim
Switch Example
1b) Site Assignment – assign the device to a site.
1c) Configuration – select the image and template, and preview the template.
Device Provision Workflow – Step 1 PnP Claim
Switch Example – Stack
1c) Configuration
Device Provision Workflow – Step 1 PnP Claim
Switch Example
1d) Advanced Configuration
Device Provision Workflow – Step 1 PnP Claim
Switch Example
1e) Summary – click on a device to review detailed info on what will be done by PnP; a green checkmark indicates "Ready to Claim".
Device Provision Workflow – Step 1 PnP Claim
Switch Example
1e) Summary – user-defined CLI configuration.
Note that the Day-0 configuration generated by Cisco DNA Center is applied first, then the user-defined configuration.
Device Provision Workflow – Step 1 PnP Claim
Switch Example
1f) PnP Complete – Provision Success. The static IP is used as the management IP automatically.
Note that when "Device Controllability" is on in Cisco DNA Center, the following configurations are added when the switch is added into inventory:
• IP Device Tracking
• Cisco DNA Center Root CA
• SNMP server, and enable SNMP traps
• Syslog server, set to "Critical"
• SSH sourcing from the management IP
• Cisco TrustSec (CTS) credential
Device Provision Workflow – Step 2 Complete Provisioning
Switch Example
2a) Provision Device
2b) Assign to Site
2c) Advanced Configuration
2d) Summary
Streamlined Upgrade Process
Upgrade the base image, patches, and other add-ons in one single flow:
Select Devices → PreCheck Validations → Distribute Software → Create CR → Approve CR → Activate Software → Post Deploy Validations
Trustworthiness Integration: Cisco DNA Center assures that device images are not compromised in any way.
Patching Support: SWIM/SMU.
SWIM/SMU Benefits
• Golden Image based workflows drive software consistency
• Pre/post checks ensure that software updates do not have adverse effects on the network
• Patching provides small updates to react quickly to security fixes
Visualize Software Images
❖ Image Repository to centrally store software images, VNF images, and network container images
Image Repository: Suggested Images – Automatic
• Cisco DNA Center can display the Cisco-recommended software images for the devices that it manages (by device type).
• Cisco credentials are required.
• If the recommended image is marked as Golden, Cisco DNA Center automatically downloads it from cisco.com.
Manual Software Images Upload
▪ Import Images/SMUs from:
▪ Cisco.com
▪ URL (HTTP/FTP)
▪ Local PC
▪ Another managed network device
Devices Not Compliant with Golden Image
Built-in compliance checks automatically flag devices that do not run the Golden Image.
Image Update Readiness Check: provides a way to check whether a network device is ready for an upgrade with the Golden Image or not.
Image transfer protocols:
1) HTTPS
2) SCP
3) *SFTP (for WLCs and Nexus devices)
Managing Software Lifecycle
Select the Software Images section to get a detailed view of devices and software images.
SWIM/SMU Workflow Experience with DNA Center
1-2. Distribute the image to network devices; you can do it now or schedule it based on requirements.
3-4. Confirm the summary of the Distribution & Activation tasks.
Why SMU (Software Maintenance Upgrade)?
Traditional full-image updates are painful:
• Each device update causes a network outage, meaning business loss and downtime
• Copying new code images to sites over slow VPN tunnels is time consuming
• Requiring bug analysis and certification slows down software rollouts
• Reduced IT staff is stretched further, affecting quality and throughput
What is the problem? Where is the problem?
There are 100+ points of failure between the user and the services in the DC: the WAN, DHCP services, the office site network, mobile clients, and APs. With traditional tools (Cisco Prime™, traceroute, syslog, NetFlow), operators face a 4x replication challenge, slow resolution, and complex correlation against client baselines.
Cisco DNA Assurance – Strategic Investment Pillars
3 – Ecosystem Integration
• Wi-Fi 6 Readiness and Wi-Fi 6 Benefits
• Advanced Troubleshooting
• Event notifications via email or webhooks
• Device Ecosystem Integration (Samsung Analytics)
5 – Enterprise Robustness
• Deployment Flexibility (On-Prem, Cloud, Hybrid)
DNA Center – Maglev Logical Architecture
App Stack 1 | App Stack 2 | … | App Stack N
Maglev Services
IaaS (Bare metal, ESXi, AWS, OpenStack, etc.)
Cisco SD-Access (Fusion) Package Services

  Service                              Purpose
  apic-em-event-service                Trap events and host discovery; SNMP traps are handled here
  apic-em-inventory-manager-service    Communication between the inventory and discovery services; critical during provisioning
  apic-em-jboss-ejbca                  Certificate authority; enables controller authority on the DNAC
  apic-em-network-programmer-service   Configures devices; critical service to check during provisioning
  apic-em-pki-broker-service           PKI certificate authority
  command-runner-service               Command Runner related tasks
  distributed-cache-service            Infrastructure
  dna-common-service                   DNAC-ISE integration tasks
  dna-maps-service                     Maps related services
  dna-wireless-service                 Wireless
  identity-manager-pxgrid-service      DNAC-ISE integration tasks
  ipam-service                         IP address manager
  network-orchestration-service        Orchestration
  orchestration-engine-service         Orchestration service
  pnp-service                          PnP tasks
  policy-analysis-service              Policy related
  policy-manager-service               Policy related
  postgres                             Core database management system
  rbac-broker-service                  RBAC
  sensor-manager                       Sensor related
  site-profile-service                 Site profiling
  spf-device-manager-service           Core service during the provisioning phase
  spf-service-manager-service          Core service during the provisioning phase
  swim-service                         SWIM
Assurance Services

  Service                              Purpose
  cassandra                            Database
  collector-agent                      Collector agents
  collector-manager                    Collector manager
  elasticsearch                        Search
  ise                                  ISE data collector
  kafka                                Communication service
  mibs-container                       SNMP MIBs
  netflow-go                           NetFlow data collector
  pipelineadmin, pipelineruntime-jobmgr,
  pipelineruntime-taskmgr,
  pipelineruntime-taskmgr-data,
  pipelineruntime-taskmgr-timeseries   Various pipelines and task managers
  snmp                                 SNMP collector
  syslog                               Syslog collector
  trap                                 Trap collector

Base Services

  Service             Purpose
  cassandra           Core database
  catalogserver       Local catalog server for updates
  elasticsearch       Elastic Search container
  glusterfs-server    Core filesystem
  identitymgmt        Identity management container
  influxdb            Database
  kibana-logging      Kibana logging collector
  kong                Infrastructure service
  maglevserver        Infrastructure
  mongodb             Database
  rabbitmq            Communication service
  workflow-server, workflow-ui,
  workflow-worker     Various update workflow tasks
Most Commonly Used Maglev CLI

$ maglev
Usage: maglev [OPTIONS] COMMAND [ARGS]...
  Tool to manage a Maglev deployment
Options:
  --version           Show the version and exit.
  -d, --debug         Enable debug logging
  -c, --context TEXT  Override default CLI context
  --help              Show this message and exit.
Commands:
  backup                 Cluster backup operations
  catalog                Catalog Server-related management operations
  completion             Install shell completion
  context                Command line context-related operations
  cronjob                Cluster cronjob operations
  job                    Cluster job operations
  login                  Log into the specified CLUSTER
  logout                 Log out of the cluster
  maintenance            Cluster maintenance mode operations
  managed_service        Managed-Service related runtime operations
  node                   Node management operations
  package                Package-related runtime operations
  restore                Cluster restore operations
  service                Service-related runtime operations
  system                 System-related management operations
  system_update_addon    System update related runtime operations
  system_update_package  System update related runtime operations

$ magctl
Usage: magctl [OPTIONS] COMMAND [ARGS]...
  Tool to manage a Maglev deployment
Options:
  --version    Show the version and exit.
  -d, --debug  Enable debug logging
  --help       Show this message and exit.
Commands:
  api         API related operations
  appstack    AppStack related operations
  completion  Install shell completion
  disk        Disk related operations
  glusterfs   GlusterFS related operations
  iam         Identitymgmt related operations
  job         Job related operations
  logs        Log related operations
  maglev      Maglev related commands
  node        Node related operations
  service     Service related operations
  tenant      Tenant related operations
  token       Token related operations
  user        User related operations
  workflow    Workflow related operations
Logging
Live Log - Service Log Files
• To follow/tail the current log of any service:
  magctl service logs -r -f <service-name>
  e.g. magctl service logs -r -f spf-service-manager-service
Check Service Log in GUI: click on the Kibana icon.
Check Service Log using Log Explorer
Log messages can be browsed in the Log Explorer.
Changing DNA Center Logging Levels
How to change the logging level:
• Navigate to the Settings page: System Settings → Settings → Debugging Levels
• Select the service of interest
• Select the new logging level
• Set the duration DNA Center should keep this logging-level change
• Intervals: 15 / 30 / 60 minutes, or forever
Required Information to Report an Issue
• RCA file:
  • SSH to the server using the maglev user:
    ssh -p 2222 maglev@<dnacenter_ip_address>
  • Run: rca
  • The generated file can be copied using scp/sftp from an external server:
    scp -P 2222 maglev@<dnacenter_ip_address>:<rca_filename> .
• Error screenshot from the UI
• API debug log using browser debugging mode

Sample rca run:
[Sun Feb 11 14:26:00 UTC] maglev@<dnacenter_ip_address> (maglev-master-1)
$ rca
===============================================================
Verifying ssh/sudo access
===============================================================
[sudo] password for maglev: <passwd>
Done
mkdir: created directory '/data/rca'
changed ownership of '/data/rca' from root:root to maglev:maglev
===============================================================
Verifying administration access
===============================================================
[administration] password for 'admin': <passwd>
User 'admin' logged into 'kong-frontend.maglev-system.svc.cluster.local' successfully
===============================================================
RCA package created on Sun Feb 18 14:26:14 UTC 2018
===============================================================
Package Update
Upgrade Time Comparison – DNAC 1.3.1.0 (3-node summary)

  Upgrade path                  System Update time
  DNAC 1.2.8.0 to 1.2.10.4      2 Hr 30 Min
  DNAC 1.2.10.4 to 1.3.0.0      3 Hr 50 Min
  DNAC 1.3.0.3 to 1.3.1.0       5 Hr

See the Security Guide for the ports and hosts to be opened on the DNAC firewall.
Trust Status on Cisco DNA Center
• AAA server status (Settings – Auth/Policy Server): INIT, INPROGRESS, ACTIVE, FAILED, RBAC_FAILURE
• Identity source status (under System360): Available/Unavailable (pxGrid state), TRUSTED/UNTRUSTED
Discovery/Inventory/Provisioning Debugging
Troubleshooting – Discovery/Inventory
• Check for IP address reachability from DNAC to the device
• Check the username/password configuration in Settings
• Check whether the telnet/ssh option is properly selected
• Check using manual telnet/ssh to the device from DNAC or any other client
• Check that the SNMP community configuration matches on the switch and DNA-C
• The Discovery view will provide additional information
Process-Cfs-Step: persists the data and takes a snapshot for all namespaces in a single transaction.
Rfs-Merge-Step: on completion, updates the task with an aggregate merged message.
SDA Provisioning – Task Status Check
Click on "View Target Device List", then click on "Show Task Status" and check the status.
Fabric Domain (Overlay)
▪ Overlay Network
▪ Instance ID – 4099
▪ Dynamic EID – 10_2_1_0-Campus
▪ VLAN – 1021
Border: 10.2.100.1. Fabric Edges: FE1 (10.2.120.1), FE2 (10.2.120.2), FE3 (10.2.120.3). Hosts: 10.2.1.99 (on FE1), 10.2.1.89 (on FE3).
Here Is How You Begin
External Connectivity
Control plane (map-server) site configuration:

router lisp
 site site_sjc
 ...
 eid-prefix instance-id 4099 10.2.1.0/24 accept-more-specifics
 exit

Fabric Edge (FE1, host 10.2.1.99) dynamic-EID configuration:

router lisp
 ...
 eid-table Campus instance-id 4099
 dynamic-eid 10_2_1_0-Campus
 database-mapping 10.2.1.0/24 locator-set campus_fabric
 exit
Registration
Message Flow
1. The client sends an ARP, DHCP, or DATA packet.
2. The FE saves the host info in its local database and sends a registration message to the CP (Map-Server).
3. The CP receives the registration message, saves it in the host tracking database, and sends the reply.
1. MAC address entry?
FE1#show mac address-table
1021 0013.a91f.b2b0 DYNAMIC Te1/0/23
If you don't see the MAC address entry, then it's a SILENT HOST.
2. ARP entry?
FE1#show arp vrf Campus
Protocol  Address    Age (min)  Hardware Addr   Type  Interface
Internet  10.2.1.99  0          0013.a91f.b2b0  ARPA  Vlan1021

3. IP Device Tracking?
FE1#show device-tracking database
Network Layer Address  Link Layer Address  Interface  vlan
ARP 10.2.1.99          0013.a91f.b2b0      Te1/0/23   1021

The Fabric Edge can learn the IP address from ARP, DHCP, or DATA packets. If the device-tracking entry is missing, check whether the client got an IP.
4. LISP local database?
FE1#show ip lisp instance-id 4099 database
LISP ETR IPv4 Mapping Database for EID-table vrf Campus (IID 4099)
LSBs: 0x1
Entries total 3, no-route 0, inactive 0

Enable debug if the database entry is missing.
5. LISP Control Plane entry?
CP#show lisp site instance-id 4099
Site Name     Last      Up   Who Last      Inst  EID Prefix
              Register       Registered    ID
site_sjc      never     no   --            4099  10.2.1.0/24
              3d23h     yes# 10.2.120.1    4099  10.2.1.99/32

Map-request message?
debug lisp control map-request
*Jan 17 01:56:01.045: LISP: Send map request for EID prefix IID 4099 10.2.1.99/32
Verification at the FEs:
4b. FE1#show ip lisp instance-id 4099 database
10.2.1.99/32, locator-set rloc_021a8c01-5c45-4529-addd-b0d626971a5f
  Locator     Pri/Wgt  Source    State
  10.2.120.1  10/10    cfg-intf  site-self, reachable

4c. FE3#show ip lisp instance-id 4099 database
10.2.1.89/32, locator-set rloc_021a8c01-5c45-4529-addd-b0d626971a5f
  Locator     Pri/Wgt  Source    State
  10.2.120.3  10/10    cfg-intf  site-self, reachable