
Cisco SD-Access

Training

Mohamed Sarwar Shariff


Understanding LAN Network Architecture
Understanding LAN Network Architecture
Network Traffic Models
Traffic flow is an important consideration when designing scalable, efficient networks. Fundamentally,
this involves understanding two things:
· Where do resources reside?
· Where do the users reside that access those resources?
Legacy networks adhered to the 80/20 design, which dictated that:
· 80 percent of traffic should remain on the local network.
· 20 percent of traffic should be routed to a remote network.
To accommodate this design practice, resources were placed as close as possible to the users that
required them. This allowed most of the traffic to be switched, instead of routed, which reduced latency
in legacy networks.
The 80/20 design allowed VLANs to be trunked across the entire campus network, a concept known
as end-to-end VLANs:

3
Understanding LAN Network Architecture
End-to-end VLANs allow a host to exist anywhere on the campus network, while maintaining Layer-2
connectivity to its resources.

However, this flat design poses numerous challenges for scalability and performance:
· STP domains are very large, which may result in instability or convergence issues.
· Broadcasts proliferate throughout the entire campus network.
· Maintaining end-to-end VLANs adds administrative overhead.
· Troubleshooting issues can be difficult.
As network technology improved, centralization of resources became the dominant trend. Modern networks
adhere to the 20/80 design:
· 20 percent of traffic should remain on the local network.
· 80 percent of traffic should be routed to a remote network.
Instead of placing workgroup resources in every local network, most organizations centralize resources into
a datacenter environment. Layer-3 switching allows users to access these resources with minimal latency.

The 20/80 design encourages a local VLAN approach. VLANs should stay
localized to a single switch or switch block:
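As a rough sketch of this local-VLAN approach (the VLAN ID, name and interface numbers below are illustrative assumptions, not from the original slides), the user VLAN exists only in one switch block and the uplink trunk carries nothing else:

! Access switch in one switch block (illustrative values only)
vlan 110
 name USERS-BLDG1
!
interface GigabitEthernet1/0/10
 description User access port
 switchport mode access
 switchport access vlan 110
 spanning-tree portfast
!
interface TenGigabitEthernet1/1/1
 description Uplink to distribution
 switchport mode trunk
 switchport trunk allowed vlan 110
! The VLAN is never trunked beyond this switch block, keeping the STP and broadcast domain small.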

4
Understanding LAN Network Architecture

This design provides several benefits:


· STP domains are limited, reducing the risk of convergence issues.
· Broadcast traffic is isolated within smaller broadcast domains.
· Simpler, hierarchical design improves scalability and performance.
· Troubleshooting issues is typically easier.
There are nearly no drawbacks to this design, outside of a legacy application requiring Layer-2
connectivity between users and resources. In that scenario, it’s time to invest in a better application.

5
Understanding LAN Network Architecture
The Cisco Hierarchical Network Model
To aid in designing scalable networks, Cisco developed a hierarchical
network model, which consists of three layers:
· Access layer
· Distribution layer
· Core layer

6
Understanding LAN Network Architecture
Cisco Hierarchical Model – Access Layer
The access layer is where users and hosts connect into the network. Switches at the access layer typically
have the following characteristics:
· High port density
· Low cost per port
· Scalable, redundant uplinks to higher layers
· Host-level functions such as VLANs, traffic filtering, and QoS
In an 80/20 design, resources are placed as close as possible to the users that require them. Thus, most traffic
will never need to leave the access layer.

In a 20/80 design, traffic must be forwarded through higher layers to reach centralized resources.

7
Understanding LAN Network Architecture
Cisco Hierarchical Model – Distribution Layer
The distribution layer is responsible for aggregating access layer switches, and connecting the access layer to the
core layer. Switches at the distribution layer typically have the following characteristics:
· Layer-3 or multilayer forwarding
· Traffic filtering and QoS
· Scalable, redundant links to the core and access layers
Historically, the distribution layer was the Layer-3 boundary in a hierarchical network design:
· The connection between access and distribution layers was Layer-2.
· The distribution switches are configured with VLAN SVIs.
· Hosts in the access layer use the SVIs as their default gateway.
This remains a common design today.
However, pushing Layer-3 to the access-layer has become increasingly prevalent. VLAN SVIs are configured on
the access layer switch, which hosts will use as their default gateway.

A routed connection is then used between access and distribution layers, further minimizing STP convergence
issues and limiting broadcast traffic.
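A minimal sketch of this routed-access pattern, assuming hypothetical addressing and interface names: the host subnet's SVI (the default gateway) lives on the access switch and the uplink toward distribution is a routed link.

! Access switch with Layer-3 at the edge (illustrative values only)
ip routing
!
interface Vlan110
 description User subnet gateway on the access switch
 ip address 10.1.10.1 255.255.255.0
!
interface TenGigabitEthernet1/1/1
 description Routed uplink to distribution
 no switchport
 ip address 10.1.255.1 255.255.255.252
!
router ospf 1
 network 10.1.0.0 0.0.255.255 area 0
! STP stops at the access switch; only routed links run toward distribution.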

8
Understanding LAN Network Architecture
Cisco Hierarchical Model – Core Layer
The core layer is responsible for connecting all distribution layer switches. The core is often referred to as the network backbone, as it forwards traffic to and from every part of the network.

Switches at the core layer typically have the following characteristics:


· High-throughput Layer-3 or multilayer forwarding
· Absence of traffic filtering, to limit latency
· Scalable, redundant links to the distribution layer and other core switches
· Advanced QoS functions

Proper core layer design is focused on speed and efficiency. In a 20/80 design, most traffic will traverse the core layer.
Thus, core switches are often the highest-capacity switches in the campus environment.

Smaller campus environments may not require a clearly defined core layer separated from the distribution layer.
Often, the functions of the core and distribution layers are combined into a single layer. This is referred to as a
collapsed core design.

9
Understanding LAN Network Architecture
Cisco Hierarchical Model – Practical Application
A hierarchical approach to network design enforces scalability and manageability. Within this
framework, the network can be compartmentalized into modular blocks, based on function.

10
Understanding LAN Network Architecture
Cisco Hierarchical Model – Practical Application
The above example illustrates common block types:
· User block – containing end users
· Server block – containing the resources accessed by users
· Edge block – containing the routers and firewalls that connect users to the WAN or
Internet

Each block connects to the others through the core layer, which is often referred to as the core block. Connections from one layer to another should always be redundant.

A large campus environment may contain multiple user, server, or edge blocks. Limiting bottlenecks and broadcasts is a key consideration when determining the size of a block.

11
Understanding different Models of Switches and their role in Campus LAN network
Understanding different Models of Switches and
their role in Campus LAN network
Cisco Catalyst 9200 Series

Cisco Catalyst 9200 Series switches improve network performance and simplify IT operations. As part of the award-
winning Catalyst 9000 family, the platform provides best-of-breed capabilities not offered by other switches in its class.
Enjoy innovations in advanced telemetry, automation, and security, while achieving twice the performance of the
previous generation.
Features:
▪ Optional stacking, Layers 2 and 3, up to 160 Gbps
▪ Up to 48 ports full Perpetual PoE+ and multigigabit
▪ 1G/10G/25G/40G uplinks
▪ Entry level for intent-based networking

13
Understanding different Models of Switches and
their role in Campus LAN network
Cisco Catalyst 9300 Series

The Catalyst 9300 Series breaks new ground, with up to 1 Tbps of capacity in a stackable switching platform. And for
security, IoT, and the cloud, these switches form the foundation of Cisco Software-Defined Access, our leading
enterprise architecture.
Features:
▪ Stackable, Layers 2 and 3, up to 1 Tbps
▪ PoE, PoE+, UPOE, UPOE+, Cisco StackPower
▪ 25G/10G fiber, 1G/2.5G/5G/10G multigigabit; multigigabit/25G/40G/100G uplinks

14
Understanding different Models of Switches
and their role in Campus LAN network
Cisco Catalyst 9400 Series

The Catalyst 9400 Series is the next generation of modular access switches built for security, flexibility, IoT, and smart
buildings. They deliver high availability, support up to 9 Tbps, and provide the latest in 90-watt UPOE+, forming the
foundation for the return to the trusted workplace.
Features:
▪ Modular, Layers 2 and 3, up to 9 Tbps
▪ Cisco Multigigabit Technology, SFP/SFP+
▪ PoE, PoE+, UPOE, UPOE+
▪ Designed for Cisco DNA and Cisco SD-Access

15
Understanding different Models of Switches
and their role in Campus LAN network
Cisco Catalyst 1000 Series

Cisco Catalyst 1000 Series switches provide enterprise-grade network access sized for small businesses. With a wide
range of Power over Ethernet (PoE) and port combinations, these easy-to-manage switches provide the performance
a modern small office needs.
Features:
▪ Fanless design
▪ Data, PoE, or PoE+, 2 SFP uplinks
▪ Extended temperature range
▪ Managed with web UI or CLI

16
Understanding different Models of Switches and
their role in Campus LAN network
Cisco Catalyst PON (Passive Optical Network) Series

With enterprise-grade features such as power and uplink redundancy, Power over Ethernet (PoE+), and simple, low-cost operations, the Cisco Catalyst PON Series gives you what you need today in a simple, safe, and cost-effective GPON solution.
Features:
▪ 1G data, POTS, CATV, Wi-Fi, and PoE+
▪ Redundant fans and power supply
▪ Managed with CLI or free Cisco Catalyst PON Manager

17
Understanding different Models of Switches
and their role in Campus LAN network
Meraki Series Switches

Cloud Managed Network Switching.


Features:
▪ Layer 2 and Layer 3 models, up to 48 ports
▪ Cloud-managed, GUI-based configuration
▪ Stacking and non-stacking models
▪ PoE+ and UPOE support (by model)

18
Understanding Software-Defined Networking (SDN) and Cisco's Approach to SDN
SDN
Quick Overview
Software Defined Networking
SDN Definition (ONF): The physical separation of the network control plane from the forwarding plane, and
where a control plane controls several devices.

SDN architecture (diagram summary):
• Applications (e.g. OpenStack, Puppet/Chef, NSO) talk to the SDN controller through northbound REST APIs.
• The control plane is centralized in an SDN controller (ODL, OSC, APIC, Contrail); in SDN, not all processing happens inside the device.
• The controller programs devices through southbound APIs such as OpenFlow, NETCONF and OpFlex (e.g. an OpenFlow agent on each device).
• The data plane remains on the network devices.

Some SDN Controllers
Open Source SDN Controllers
▪ OpenDaylight
▪ ONOS
▪ NOX/POX
Commercial SDN Controllers
▪ Cisco Open SDN Controller (OSC)
▪ Cisco APIC (Application Centric Infrastructure
Controller)
▪ Cisco APIC-EM (APIC- Enterprise Module)
▪ VMware NSX Controller
▪ HP Virtual Application Networks (VAN) SDN Controller
▪ Nuage Virtualized Service Controller (VSC)
▪ Juniper Contrail
Network
Programmability
CLI to API
• The familiar manual, CLI-driven, device-by-device approach is inefficient
• Increased need for programmatic interfaces, which allow faster, automated execution of processes and workflows with fewer errors
• Need for programmatically readable data structures (a brief device-side example follows)
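As one hedged, device-side example of the shift from CLI to programmatic interfaces (exact commands vary by platform and release), recent Cisco IOS-XE software can expose model-driven NETCONF/RESTCONF interfaces backed by YANG data models:

! Enable model-driven programmability on an IOS-XE device (verify support for your platform/release)
username automation privilege 15 secret StrongPassword123
netconf-yang              ! NETCONF over SSH (port 830), YANG-modelled data
ip http secure-server     ! HTTPS transport required for RESTCONF
restconf                  ! RESTCONF over HTTPS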
Network Programmability Options (diagram summary)
1. Programmable APIs – applications (network management, monitoring, NSO/ESC) use CLI, SNMP, NetFlow and vendor-specific APIs (e.g. the Nexus API) against devices that keep their own control plane.
2a. Pure SDN – applications use open APIs on a controller, which programs the device data plane directly via OpenFlow, PCEP, I2RS or NETCONF; the control plane is centralized.
2b. Hybrid SDN – a controller with open APIs programs devices that also retain a local control plane and vendor-specific interfaces.
3. Overlay networks – applications use open APIs on a virtual-switch/overlay controller that builds overlay protocols (e.g. VXLAN) on top of the existing control and data planes.
Device Programmability Options – No Single Answer!
Interfaces range from C/Java, Python, NETCONF/YANG, REST/JSON and OpenFlow to ACI fabric, OpenStack and Puppet, mapped roughly to layers:
• Management – Puppet and similar tools
• Orchestration – OpenStack Neutron
• Network Services – RESTful APIs
• Control – "protocols" such as BGP, PCEP and OpFlex
• Forwarding – OpenFlow
Cisco Architectural Vision
SDN/NFV and Orchestration enable change:
• Service Orchestration – automation, provisioning and interworking of physical and virtual resources
• NFV – network functions and software running on any open, standards-based hardware
• SDN – control and data plane separation, centralized control, abstraction and programmability
• Traditional – distributed control-plane components, physical entities
Understanding Cisco's approach to SDN for LAN and WAN
Understanding Cisco's approach to SDN for LAN and WAN
Cisco's approach to SDN for LAN and WAN is divided into:
• SD-WAN – the SDN approach for the WAN
• SD-Access – the SDN approach for the LAN
Note: This training covers SD-Access in detail.

29
SD-Access Architecture Overview
Fabric Fundamentals
Architecture | Key Components | Fabric Constructs
Cisco’s Intent-Based Networking SAAS

Delivered by Cisco Software Defined Access ACI


Data Center

LEARNING Branch

Cisco DNA Center

SD-WAN Wireless
Policy Automation Analytics
Control

INTENT CONTEXT Fabric


Border
Fabric
Intent-Based Control
Network Infrastructure Cisco SD-
Access
Switch Route Wireless

Fabric
Edge
SECURITY

32
Cisco Software Defined Access
The Foundation for Cisco's Intent-Based Network (diagram summary)
• Cisco DNA Center – Policy, Automation and Assurance on top of the automated network fabric.
• Identity-Based Policy and Segmentation – policy definition decoupled from VLANs and IP addresses.
• Automated Network Fabric – a single fabric for wired and wireless with full automation.
• Insights and Telemetry – analytics and insights into user and application experience.
• User Mobility – policy follows the user.
• SD-Access Extension – extends the fabric toward the IoT network and the employee network.
What is Cisco SD-Access?
Campus Fabric + Cisco DNA Center (Automation and Assurance)
▪ Cisco SD-Access
• The GUI approach provides automation and assurance of all fabric configuration, management and group-based policy.
• Cisco DNA Center integrates multiple management systems (APIC-EM/NCP 1.X, ISE, NDP/PI) to orchestrate LAN, wireless LAN and WAN access.
▪ Campus Fabric
• A CLI or API approach to build a LISP + VXLAN + CTS fabric overlay for your enterprise campus networks.
• CLI provides backwards compatibility, but management is box-by-box; the API provides device automation via NETCONF/YANG.
• Management systems remain separated.


SD-Access
What exactly is a Fabric?
A Fabric is an Overlay.
An Overlay network is a logical topology used to virtually connect devices, built over an arbitrary physical Underlay topology.
An Overlay network often uses alternate forwarding attributes to provide additional services not provided by the Underlay.
Examples of network overlays: GRE, mGRE; MPLS, VPLS; IPsec, DMVPN; CAPWAP; LISP; OTV; DFA; ACI. A simple GRE sketch follows.
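A simple illustration of the overlay idea using the first example in the list, GRE (addresses are illustrative assumptions): the tunnel forms a logical link between two sites over whatever underlay routing exists between the loopbacks.

! Site A router - GRE overlay over an arbitrary IP underlay (illustrative values only)
interface Loopback0
 ip address 192.0.2.1 255.255.255.255    ! underlay-reachable tunnel endpoint
!
interface Tunnel100
 description Overlay link to Site B
 ip address 10.255.0.1 255.255.255.252   ! overlay addressing
 tunnel source Loopback0
 tunnel destination 192.0.2.2            ! Site B loopback in the underlay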

35
SD-Access
Fabric Terminology (diagram summary): an Overlay Network with its own Overlay Control Plane is built from encapsulation tunnels between Edge Devices, which connect the Hosts (end-points); it rides on an Underlay Network with its own Underlay Control Plane.

36
Cisco SD-Access
Fabric Roles & Terminology
▪ Network Automation – simple GUI automation and APIs (Cisco DNA Center) for intent-based automation of wired and wireless fabric devices
▪ Network Assurance – data collectors analyze endpoint-to-application flows and monitor fabric network status
▪ Identity Services – NAC and identity services (e.g. Cisco ISE) for dynamic endpoint-to-group mapping and policy definition
▪ Control-Plane Nodes – the map system that manages endpoint-to-device relationships
▪ Fabric Border Nodes – a fabric device (e.g. core) that connects external L3 network(s) to the SD-Access fabric
▪ Fabric Edge Nodes – a fabric device (e.g. access or distribution) that connects wired endpoints to the SD-Access fabric
▪ Fabric Wireless Controller – a fabric device (WLC) that connects fabric APs and wireless endpoints to the SD-Access fabric
▪ Intermediate Nodes – underlay devices between fabric nodes

38
Cisco SD-Access Architecture
Control-Plane Nodes – A Closer Look
The Control-Plane node runs a Host Tracking Database to map location information:
• A simple host database that maps Endpoint IDs to a current location, along with other attributes.
• The host database supports multiple Endpoint ID lookup types (IPv4, IPv6 or MAC).
• Receives Endpoint ID map registrations from edge and/or border nodes for "known" IP prefixes.
• Resolves lookup requests from edge and/or border nodes to locate destination Endpoint IDs.

39
For more details: cs.co/sda-compatibility-matrix
SD-Access Platforms – Fabric Control Plane (CRN Products of the Year 2017, 2018)
• Catalyst 9300 – 1/multigigabit RJ45; 10/25/40/multigigabit network modules
• Catalyst 9400 – Sup1XL; 9400 line cards
• Catalyst 9500 – 40/100G QSFP; 1/10/25G SFP
• Catalyst 9600 (new) – Sup1; 9600 line cards
• Catalyst 3650/3850 – 1/multigigabit RJ45; 1/10G SFP; 1/10/40G network modules
• Catalyst 6500/6800 – Sup2T/Sup6T; C6800 cards; C6880/6840-X
• ISR 4430/4450 and ISR 4330/4450; ENCS 5400; ISRv / CSRv
• ASR 1000-X / ASR 1000-HX – 1/10G RJ45; 1/10G SFP

41
Introduction to SD-Access Campus Fabric
Architecture | Key Components | Fabric Constructs
SD-Access Fabric
Edge Nodes – A Closer Look
The Edge Node provides first-hop services for users and devices connected to the fabric:
• Responsible for identifying and authenticating endpoints (e.g. static, 802.1X, Active Directory).
• Registers specific Endpoint ID information (e.g. /32 or /128) with the Control-Plane node(s).
• Provides an Anycast L3 gateway for the connected endpoints (the same IP address on all edge nodes).
• Performs encapsulation / de-encapsulation of data traffic to and from all connected endpoints.

50
SD-Access Fabric
Border Nodes
The Border Node is the entry and exit point for data traffic going into and out of a fabric.
There are 3 types of Border Node:
• Internal Border (Rest of Company) – connects ONLY to the known areas of the company.
• External Border (Outside) – connects ONLY to unknown areas outside the company; used for "unknown" routes such as the default path to the Internet.
• Internal + External (Anywhere) – connects to both known and unknown networks.

51
For more details: cs.co/sda-compatibility-matrix
SD-Access Platforms – Fabric Edge Node (CRN Products of the Year 2017, 2018)
• Catalyst 9200/9200L (new) – 1/multigigabit RJ45; 1G SFP uplinks
• Catalyst 9300 – 1/multigigabit RJ45; 10/25/40/multigigabit network modules
• Catalyst 9400 – Sup1/Sup1XL; 9400 line cards
• Catalyst 9500 – 1/10/25G SFP; 40/100G QSFP
• Catalyst 3650/3850 – 1/multigigabit RJ45; 1/10G SFP; 1/10/40G network modules
• Catalyst 4500E (new) – Sup8E/Sup9E (uplink); 4600/4700 cards (host)
• Catalyst 6500/6800 – Sup2T/Sup6T; C6800 cards; C6880/6840-X

53
SD-Access Fabric
Host Pools – A Closer Look
A Host Pool provides the basic IP functions necessary for attached endpoints:
• Edge nodes use a Switch Virtual Interface (SVI), with an IP address/mask, etc., per Host Pool.
• The fabric uses Dynamic EID mapping to advertise each Host Pool (per Instance ID).
• Fabric Dynamic EID allows host-specific (/32, /128 or MAC) advertisement and mobility.
• Host Pools can be assigned dynamically (via host authentication) and/or statically (per port).

54
SD-Access Fabric
Anycast Gateway – A Closer Look
The Anycast Gateway provides a single L3 default gateway for IP-capable endpoints:
• Similar principle and behavior to HSRP/VRRP, with a shared "virtual" IP and MAC address.
• The same Switch Virtual Interface (SVI) is present on EVERY edge node with the SAME virtual IP and MAC.
• The Control-Plane node, with Fabric Dynamic EID mapping, maintains the host-to-edge relationship.
• When a host moves from Edge 1 to Edge 2, it does not need to change its default gateway.

55
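A hedged sketch of what the anycast gateway looks like on an edge node (values are illustrative; in SD-Access this configuration is generated by Cisco DNA Center): the same SVI, virtual IP and virtual MAC are configured on every edge node, so a roaming host keeps its gateway.

! Configured identically on every fabric edge node (illustrative values only)
interface Vlan1021
 description Anycast gateway for host pool 10.2.0.0/16
 mac-address 0000.0c9f.f45c            ! same virtual MAC on all edge nodes
 ip address 10.2.0.1 255.255.0.0       ! same virtual IP on all edge nodes
 ip helper-address 10.10.10.5          ! shared DHCP server (assumption)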
SD-Access Fabric
Campus Fabric - Key Components
1. Control plane based on LISP
2. Data plane based on VXLAN
3. Policy plane based on CTS
Key differences:
• L2 + L3 overlay, rather than L2 or L3 only
• Host mobility with Anycast Gateway
• Adds VRF + SGT into the data plane
• Virtual tunnel endpoints (automatic)
• NO topology limitations (basic IP)

56
LISP - Locator / ID Separation Protocol
Location and identity separation:
• Traditional behavior – location and ID are combined. The device's IPv4 or IPv6 address represents both its identity and its location, so when the device moves it gets a new IPv4 or IPv6 address for its new identity and location.
• Overlay behavior – location and ID are separated. The device's IPv4 or IPv6 address represents its identity only (the Endpoint ID, EID); its location is the Routing Locator (RLOC) in the underlay address space. A mapping database ties each EID prefix to an RLOC (e.g. 189.16.17.89 → 171.68.226.120, 192.58.28.128 → 171.68.228.121). When the device moves it keeps the same IPv4 or IPv6 address; only the location changes.
57
SD-Access Fabric
LISP Control Plane
• Fabric nodes use LISP as the control plane for Endpoint Identifier (EID) and Routing Locator (RLOC) information, e.g. 172.16.101.11/16 → 192.168.1.11 and 172.16.101.12/16 → 192.168.1.13.
• The fabric Control-Plane node acts as a Map-Server / Map-Resolver for EID-to-RLOC mappings.
• Fabric edge and internal border devices register EIDs (database mapping entries) with the Map-Server.
• The external border node acts as a PXTR (LISP Proxy Tunnel Router) and provides the default gateway when no mapping exists.
(Example: Employee SGT and Contractor SGT endpoints 172.16.101.11/16 and 172.16.101.12/16 in the Corporate VN, behind edges 192.168.1.11/32 and 192.168.1.13/32.)
58
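For orientation only, a stripped-down sketch of the underlying LISP roles using the addresses from this slide (Cisco DNA Center generates the real, instance-id/VRF-aware configuration; the keys and the map-server address are assumptions): the control-plane node acts as map-server/map-resolver and an edge node registers its EID prefix against its own RLOC.

! Control-plane node - LISP map-server / map-resolver (illustrative sketch)
router lisp
 site SITE-1
  authentication-key cisco123
  eid-prefix 172.16.101.0/24 accept-more-specifics
  exit
 ipv4 map-server
 ipv4 map-resolver
!
! Edge node (RLOC 192.168.1.11) - registers EIDs with the control-plane node
router lisp
 database-mapping 172.16.101.0/24 192.168.1.11 priority 1 weight 100
 ipv4 itr map-resolver 192.168.1.20
 ipv4 etr map-server 192.168.1.20 key cisco123
 ipv4 itr
 ipv4 etr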
SD-Access Fabric
Key Components - LISP
1. Control plane based on LISP, providing host mobility.
BEFORE: with traditional routing protocols, the IP address is both location and identity, and every node carries topology plus endpoint routes, so tables are big and CPU usage is higher, with a local L3 gateway.
AFTER: LISP separates identity from location; endpoint routes are consolidated into the LISP mapping database, and each node keeps only a small LISP database plus cache (its local routes and the topology routes), so tables are small and CPU usage is lower, with an Anycast L3 gateway.

59
Fabric Operation
Control-Plane Roles & Responsibilities
• LISP Map-Server / Map-Resolver (Control Plane) – holds the EID-to-RLOC mappings (e.g. a.a.a.0/24 → w.x.y.1, b.b.b.0/24 → x.y.w.2, c.c.c.0/24 → z.q.r.5, d.d.0.0/16 → z.q.r.5); these mappings can be distributed across multiple LISP devices.
• LISP Tunnel Router – XTR (Edge & Internal Border) – registers EIDs with the Map-Server and performs ingress/egress tunneling (ITR / ETR); non-LISP destinations follow a next-hop (e.f.g.h).
• LISP Proxy Tunnel Router – PXTR (External Border) – provides a default gateway when no mapping exists and performs ingress/egress tunneling (PITR / PETR).
• EID = Endpoint Identifier (a host address or subnet, in the EID space); RLOC = Routing Locator (the local router address in the RLOC/underlay space).

60
Fabric Operation
Fabric Internal Forwarding (Edge to Edge)
1. The source host S (10.1.0.1, branch subnet 10.1.0.0/24) resolves the destination via DNS: D.abc.com A 10.2.2.2.
2. S sends traffic to its fabric edge (RLOC 1.1.1.1), which queries the mapping system for 10.2.2.2.
3. The mapping system returns the entry EID-prefix 10.2.2.2/32, locator-set 2.1.2.1 (priority 1, weight 100); path preference is controlled by the destination site.
4. The ingress edge encapsulates the traffic 1.1.1.1 → 2.1.2.1 across the IP network.
5. The fabric edge 2.1.2.1 de-encapsulates and delivers the traffic to D; the subnet 10.2.0.0 255.255.0.0 is stretched across the fabric edges (10.2.2.2/16, 10.2.2.3/16, 10.2.2.4/16, 10.2.2.5/16).

61
Fabric Operation
Forwarding from Outside (Border to Edge)
1. An outside host S (192.3.0.1) resolves the destination via DNS: D.abc.com A 10.2.2.2.
2. Traffic 192.3.0.1 → 10.2.2.2 arrives at the fabric border (RLOC 4.4.4.4), which queries the mapping system.
3. The mapping system returns the entry EID-prefix 10.2.2.2/32, locator-set 2.1.2.1 (priority 1, weight 100).
4. The border encapsulates the traffic 4.4.4.4 → 2.1.2.1 across the IP network.
5. The fabric edge 2.1.2.1 de-encapsulates and delivers the traffic to D; the subnet 10.2.0.0 255.255.0.0 is stretched across the fabric edges.

62
Fabric Operation
Host Mobility – Dynamic EID Migration
When host 10.2.1.10 moves from a fabric edge in Campus Bldg 1 (RLOC 12.1.1.1) to a fabric edge in Campus Bldg 2 (RLOC 12.2.2.1), the new edge detects it and sends a Map-Register for the EID to the fabric control plane. The mapping system updates its entry from 10.2.1.10/32 – 12.1.1.1 to 10.2.1.10/32 – 12.2.2.1; the new edge installs 10.2.1.10/32 as a local route, while the original edge keeps only the pool route 10.2.1.0/24 and now reaches the host via LISP.

63
SD-Access Fabric – VXLAN
VXLAN Data Plane
• Fabric nodes use VXLAN (Ethernet based) as the data plane, which supports both L2 and L3 overlay.
• The VXLAN header contains a VNID (VXLAN Network Identifier) field, which allows up to 16 million virtual networks/VRFs.
• The VXLAN header also carries a Group Policy ID for Scalable Group Tags (SGTs), allowing 64,000 SGTs.
Example: traffic 172.16.101.11 → 172.16.101.12 between an Employee SGT and a Contractor SGT in the Corporate VN is VXLAN-encapsulated between edge RLOCs 192.168.1.11 and 192.168.1.13.
64
SD-Access Fabric
Cisco TrustSec Policy Plane
• A Scalable Group Tag (SGT) is a logical construct defined/identified based on the user and/or device context.
• ISE dynamically assigns SGTs to the users and devices joining the network fabric.
• Fabric nodes add the SGT to the fabric encapsulation when users and devices communicate.
• Edge and border nodes enforce the SGACL policies and contracts for the SGTs they protect locally.
Examples: Lighting and Cameras SGTs in the IoT VN; Employee, Developer, Contractor and Supplier SGTs in the Corporate VN.
65
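As a rough idea of what enforcement looks like on a node (SGTs and SGACLs are normally pushed from ISE/Cisco DNA Center rather than typed by hand; the tags, names and addresses below are illustrative):

! Illustrative TrustSec enforcement on a fabric node (normally provisioned by ISE/DNA Center)
cts role-based enforcement
cts role-based sgt-map 172.16.101.11 sgt 10        ! static example: Employee = SGT 10
!
ip access-list role-based DENY-CONTRACTOR-TO-EMPLOYEE
 deny ip
!
cts role-based permissions from 20 to 10 ipv4 DENY-CONTRACTOR-TO-EMPLOYEE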
SD-Access Fabric – Virtual Networks
How VNs work in SD-Access
• Fabric device (underlay) connectivity is in the Global Routing Table (GRT).
• INFRA_VN is only for access points and extended nodes, and lives in the GRT.
• DEFAULT_VN is an actual "User VN" provided by default.
• User-defined VNs (user VRFs) can be added or removed on demand.

66
Introduction to Cisco DNA Center Policy App and Cisco ISE
AAA/ISE Integration
AAA Server - ISE Integration
Objectives and Key Points
• Single pane of management for all AAA/policy administration between
network devices and ISE
• Automate Radius/TACACS configuration for network devices.
• Support only one ISE cluster.
• Enable secure services between DNA-C and ISE:
o pxGrid Service to pull the info out of ISE (Uni-Directional)
Obtain TrustSec metadata such as SGT, IP-SGT mappings & TrustSec policy.
o ERS APIs (Bi-Directional Communication)
▪ Fetch deployment model from ISE, such as PAN and PSN info
▪ Add devices to ISE as network devices
▪ Create SGT, IP-SGT mappings & TrustSec policy on ISE
AAA Server - ISE Integration
Pre-Requisites

• The minimum supported ISE version is 2.3


• pxGrid service and SSH should be enabled on ISE.
• ISE super admin credential is used for trust establishment for SSH/ERS API
communication.
• ISE CLI and UI user accounts must use the same username and password
• ISE admin certificate must contain ISE IP or FQDN in either CN or SAN.
• DNA-C system certificate must contain DNAC IP or FQDN in either subject
name or SAN.
• pxGrid node should be reachable on eth0 IP of ISE from DNA-C.
AAA Server - ISE Integration
Adding ISE in Cisco DNA Center:
• Shared secret between ISE and the devices for TACACS or RADIUS
• FQDN from the ISE deployment
• Policy preview
AAA Server - (Non-ISE) Integration
Key Points:
• Non-ISE server definition: ISE running 2.2 or below, ACS, or any third-party RADIUS server.
• Only RADIUS/TACACS configuration for network devices is automated.
• Network devices must be added to the AAA server manually.
• Multiple AAA servers are supported.
ISE – Cisco DNA Center Operation (diagram summary)
• Cisco DNA Center administers and operates the network devices; it synchronizes configuration and policy with ISE over REST (ERS), while pxGrid carries context from ISE back to DNA Center.
• ISE personas involved: ISE-PAN (policy administration), ISE-PSN (policy service, user/device authorization), ISE-MnT (monitoring/logs) and ISE-pxGrid.
• Example authorization policy: if Employee then SGT 10; if Contractor then SGT 20; if Things then SGT 30.
• pxGrid exchange topics include TrustSec metadata (SGT Name: Employee = SGT 10, Contractor = SGT 20, ...) and SessionDirectory* (e.g. "Bob with Win10 on CorpSSID").
* Future plan

73
SD-Access Policy
Two-Level Hierarchy - Macro Level
Virtual Network (VN): first-level segmentation ensures zero communication between forwarding domains and provides the ability to consolidate multiple networks into one management plane (e.g. a Building Management VN and a Campus Users VN).

SD-Access Policy
Two-Level Hierarchy - Micro Level
Scalable Group (SG): second-level segmentation ensures role-based access control between groups within a Virtual Network, providing the ability to segment the network into lines of business or functional blocks (e.g. SG1-SG9 spread across the Building Management and Campus Users VNs).
75
Cisco SD-Access Fabric – Endpoint Registration
Host Registration
The control plane node has three built-in tables:
• IP to RLOC – stores the IP address of a host and its corresponding location
• MAC to RLOC – stores the MAC address of a host and its corresponding location
• Address Resolution – data from the two tables above is collated to form the IP-to-MAC bindings (the ARP table)

78
Cisco SD-Access Fabric Architecture
Host Registration
1. A wired host attaches to the fabric network on an edge node (FE1).
2. After the host gets an IP address, the fabric edge sends a map-register to the control plane node with the IP and MAC address of the host.
3. On receiving the map-register, the control plane node populates its database tables for the host, e.g. IP to RLOC: 1.1.1.1 → FE1; MAC to RLOC: aa.aa.aa.aa → FE1.
4. The control plane node then takes the information from the IP and MAC tables and populates the address resolution (ARP) table: 1.1.1.1 ↔ aa.aa.aa.aa.
Validating L3 EID
IP address of the Host

Border#sh lisp instance-id 4099 ipv4 server


LISP Site Registration Information
* = Some locators are down or unreachable
# = Some registrations are sourced by reliable transport

Site Name Last Up Who Last Inst EID Prefix


Register Registered ID
site_uci never no -- 4099 0.0.0.0/0
1d01h yes# 41.41.41.66:28795 4099 15.1.1.0/24
1d01h yes# 41.41.41.66:28795 4099 17.17.17.0/30
never no -- 4099 31.31.31.0/24
00:20:25 yes# 41.41.41.68:43813 4099 31.31.31.3/32
00:34:44 yes# 41.41.41.67:50394 4099 31.31.31.4/32
never no -- 4099 36.36.36.0/24

83
Validating L3 EID
IP address of the Host
Control-Plane#sh lisp site
LISP Site Registration Information
* = Some locators are down or unreachable
# = Some registrations are sourced by reliable transport

Site Name Last Up Who Last Inst EID Prefix


Register Registered ID
site_uci never no -- 4097 0.0.0.0/0
never no -- 4097 35.35.35.0/24
02:13:14 yes# 41.41.41.67:50394 4097 35.35.35.2/32
01:56:46 yes# 41.41.41.68:43813 4097 35.35.35.3/32
never no -- 4099 0.0.0.0/0
1d01h yes# 41.41.41.66:28795 4099 15.1.1.0/24
1d01h yes# 41.41.41.66:28795 4099 17.17.17.0/30
never no -- 4099 31.31.31.0/24
00:21:47 yes# 41.41.41.68:43813 4099 31.31.31.3/32
00:36:05 yes# 41.41.41.67:50394 4099 31.31.31.4/32
never no -- 4099 36.36.36.0/24

84
Validating L2 EID
MAC address of the Host

Border1Site1#sh lisp instance-id 8188 ethernet server

LISP Site Registration Information
* = Some locators are down or unreachable
# = Some registrations are sourced by reliable transport

Site Name   Last Up    Who Last Registered       Inst ID   EID Prefix
site_uci    never      no   --                   8188      any-mac
            00:22:50   yes# 41.41.41.68:43813    8188      0000.0c9f.f45c/48
            02:17:06   yes# 41.41.41.68:43813    8188      6cdd.30ee.0a75/48
            1d01h      yes# 41.41.41.67:50394    8188      bcc4.93b2.8f75/48
            00:37:41   yes# 41.41.41.67:50394    8188      d8eb.97b3.24d2/48

85
Validating Address Resolution Record
IP address to MAC

Border#sh lisp instance-id 8188 ethernet server address-resolution

Address-resolution data for router lisp 0 instance-id 8188

L3 InstID   Host Address                      Hardware Address
4099        31.31.31.3/32                     d8eb.97b7.1fca
4099        31.31.31.4/32                     d8eb.97b3.24d2
4099        FE80::3086:6444:451F:FADF/128     d8eb.97b7.1fca
4099        FE80::CC11:ADF9:5B2A:1FE2/128     d8eb.97b3.24d2

86
Introduction to Campus Fabric External Connectivity in SD-Access (Transit)
Cisco SD-Access: IP as Transit / Peer Network
IP Transit / Peer Network
Network Plane Analysis Perspectives

1. Control-Plane: How routes / prefixes are communicated


2. Data-Plane: Which encapsulation method is used to carry data
3. Policy Plane: How group and segmentation information is communicated
4. Management Plane: How Management Infrastructure is Integrated

89
Communicating to Peer Network – IP
Control / Data / Policy Plane
• Control plane: LISP in the fabric, eBGP at the handoff, and the external domain's own routing (BGP/IGP).
• Data plane: VXLAN in the fabric, VRF-lite at the handoff, and the external domain's transport (IP/MPLS/VXLAN).
• Policy plane: SGT carried in VXLAN in the fabric, SGT inline tagging* at the handoff, and IP ACL/SGT in the external domain.
* Manual, and every hop needs to support SGT propagation.

90
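A hedged sketch of this border-to-peer handoff for one virtual network (the VLAN, VRF name, addresses and AS numbers are illustrative assumptions; Cisco DNA Center automates the real handoff): the VN is extended on an 802.1Q VLAN/SVI in its VRF and eBGP exchanges that VRF's prefixes with the peer domain.

! Border node handoff toward the external/peer domain (illustrative values only)
vlan 3001
interface Vlan3001
 description Handoff for VN CORPORATE
 ip vrf forwarding CORPORATE
 ip address 172.31.0.1 255.255.255.252
!
router bgp 65001
 address-family ipv4 vrf CORPORATE
  neighbor 172.31.0.2 remote-as 65002
  neighbor 172.31.0.2 activate
  redistribute lisp                     ! advertise fabric EID prefixes (automated in SD-Access)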
Inter-Connecting Fabrics/Sites
IP-Based WAN
• Management & policy: Cisco DNA-Center manages both fabric sites; SGTs are carried between sites in SXP, per VRF.
• Control plane: LISP within each SD-Access fabric site, VRF-lite with BGP at the borders, and MP-BGP (or another protocol) across the WAN.
• Data plane: the VXLAN header (24-bit VNID, 16-bit SGT) within each fabric, 802.1Q (12-bit VLAN ID) at the handoff, and MPLS labels (20-bit VPN ID) across the WAN.

91
Inter-Connecting Fabrics/Sites
DMVPN
• Control plane: LISP within each fabric site, carried between sites over DMVPN/GRE.
• Data/policy plane: VXLAN+SGT within each fabric; IP with SGT inline tagging across the DMVPN tunnels over the IP network.

92
Cisco SD-Access: SD-Access as Transit
Cisco SD-Access Multi-Site
Consistent segmentation and policy across sites (e.g. campus 1, campus 2, campus 3 and HQ interconnected over a metro SD-Access transit, with data center and cloud access). Advantages:
• End-to-end segmentation and policy
• Smaller or isolated failure domains
• Horizontally scaled networks
• A single view of the entire network
• Local breakout at each site for Direct Internet Access (DIA) and other services
• Elimination of the fusion router at every site*

94
Cisco SD-Access Multi-Site
Key Considerations
The network used as Cisco SD-Access transit between sites should provide:
• A high-bandwidth connection (Ethernet full port speed with no sub-rate services)
• Low latency (less than 10 ms as a general guideline)
• An MTU that accommodates the setting used for SD-Access in the campus network (typically 9100 bytes)

95
Cisco SD-Access Multi-Site – SD-Access Transit
• Control plane: LISP end to end – the control plane of each fabric site plus dedicated transit control plane (TC) nodes in the Cisco SD-Access transit.
• Data and policy plane: VXLAN+SGT end to end across sites, managed by Cisco DNA-Center.

96
Cisco SD-Access Transit Control Plane for Global Scale
• The West site control plane holds West-site prefixes only and the East site holds East-site prefixes only; each site's borders register their prefixes to the transit control plane nodes, which hold East + West.
• Each site only maintains state for in-site endpoints; off-site traffic follows a default route to the transit.
• Survivability: each site is a fully autonomous resiliency domain.
• Each site has its own unique subnets.

97
Cisco SD-Access Multi-Site
Transit Control Plane Deployment Location
• The device must be dedicated to the transit control plane node role.
• It does not have to be physically deployed in the transit area.
• Ideally, the device should not sit in the data forwarding (transit) path between sites.
• It requires underlay IP connectivity from the site borders at all fabric sites.
• Deploy 2 transit control plane nodes for redundancy and load balancing.

98
Cisco SD-Access Multi-Site
Fabric Border Support Matrix

Border Node     Cisco SD-Access Transit   IP-Based Transit
C9K             YES                       YES
ASR1K/ISR4K     YES                       YES
C6K             NO                        YES
N7K             NO                        YES

99
Cisco SD-Access for Distributed Campus
Cisco SD-Access Transit – Key Decision Points
• Tends to be a metro area with multiple buildings or sites: remote buildings 1..N plus an HQ campus, each a fabric site (borders, control plane, edges) connected over the MAN.
• Requires direct Internet access at multiple sites (multiple exits).
• Requires local resiliency and smaller fault domains.
• Typical sizing: 2 transit control plane nodes; a 5-7 node Cisco DNA Center (NCP + NDP) cluster in the DC; ISE with 2 PAN, 2 pxGrid and 5-10 PSN; DDI (DHCP, DNS, IPAM); 2-4 site borders per site.
100
Understanding Handoff in Cisco SD-Access for External Connectivity
SD-Access Fabric
How VNs work in SD-Access
• Fabric device (underlay) connectivity is in the Global Routing Table (GRT).
• INFRA_VN is only for access points and extended nodes, and lives in the GRT.
• DEFAULT_VN is an actual "User VN" provided by default.
• User-defined VNs (user VRFs) can be added or removed on demand.

102
SD-Access Fabric – L3 Handoff, Fusion Router and Route Leaking
SD-Access designs connecting to an existing global routing table should use a "fusion" router with MP-BGP and VRF route-target import/export, for example:

ip vrf USERS
 rd 1:4099
 route-target export 1:4099
 route-target import 1:4099
 route-target import 1:4097
!
ip vrf DEFAULT_VN
 rd 1:4098
 route-target export 1:4098
 route-target import 1:4098
 route-target import 1:4097
!
ip vrf GLOBAL
 rd 1:4097
 route-target export 1:4097
 route-target import 1:4097
 route-target export 1:4099
 route-target export 1:4098

(Diagram summary: the edge and border nodes carry each VN on SVIs; the border hands each VRF off on an 802.1Q subinterface (e.g. G0/0/0.A, G0/0/0.B) to the fusion router, which runs per-VRF MP-BGP address families plus an IPv4 address family toward the shared-services switch, with IS-IS/BGP in the underlay global routing table.)

103
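To complement the VRF definitions above, a hedged sketch of the fusion-router side (the interface, AS numbers and addressing are illustrative assumptions): one 802.1Q subinterface and one per-VRF eBGP session per VN, with the route-target import/export shown above performing the actual leaking toward shared services.

! Fusion router (illustrative values only) - one subinterface + BGP session per VN
interface GigabitEthernet0/0/0.3001
 encapsulation dot1Q 3001
 ip vrf forwarding USERS
 ip address 172.31.0.2 255.255.255.252
!
router bgp 65002
 address-family ipv4 vrf USERS
  neighbor 172.31.0.1 remote-as 65001    ! border node
  neighbor 172.31.0.1 activate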
Cisco’s Intent-based Networking
Learning

DNA Center
The Network. Intuitive.
Policy Automation Analytics

Intent Context

Network Infrastructure
Powered by Intent.
Informed by Context.
Switching Routers Wireless

Security
Cisco DNA-Center - PnP Deep Dive
Agenda
❑ PnP Overview
❑ What is new in 1.3.1
❑ PnP Workflows = Design + Provision
❑ Design Workflow
❑ Provision Workflow
Day 0 Deployment Challenges without Automation
• Direct costs – pre-staging and shipping costs, travel costs
• Complexity – configuration errors; different products and IOS releases
• Security – manual processes; 3rd parties are not secure; rogue devices
• Time/Productivity – shipping, storage, travel
Typical flow: order equipment → staging site → manual installer → technician → deploy device on site.

Cisco DNA Center PnP
Cisco DNA Automation at Day 0 gives roughly 50% OPEX savings* with Plug & Play:
• Drop-ship devices and deploy them on site
• Centralized device discovery (DHCP, DNS, cloud)
• Non-technical installer at the site
• Template-based configurations
• Secure SUDI authentication
PnP Solution Components
1. Cisco DNA Center (PnP server) – auto-provisions devices with images and configurations.
2. PnP Agent – built into Cisco switches, routers and wireless APs (SUDI-capable devices).
3. PnP Protocol – HTTPS/XML-based open-schema protocol between agent and server.
4. PnP Connect – cloud-based device discovery that redirects devices over SSL to the on-premises (customer) Cisco DNA Center.
5. PnP Helper App* – delivers bootstrap status and troubleshooting checks.
* Cisco DNA Center support in roadmap


PnP Server Discovery Options
Automated:
1. DHCP with options 60 and 43 – the PnP string 5A1D;B2;K4;I172.19.45.222;J80 is added to the DHCP server (routers, switches, wireless access points).
2. DNS lookup – pnpserver.localdomain resolves to the Cisco DNA Center IP address.
3. Cloud redirection – https://devicehelper.cisco.com/device-helper; the Cisco-hosted cloud redirects to the on-premises Cisco DNA Center IP address.
Manual:
4. USB-based bootstrapping – router-confg / router.cfg / ciscortr.cfg (routers and Catalyst 9K switches only).
5. Manual discovery using the Cisco Installer App* (iPhone, iPad, Android). Manual discovery is not supported for access points.
* Cisco DNA Center support in roadmap
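A hedged example of option 1 on an IOS DHCP server (the pool name, network and the Cisco DNA Center IP below are assumptions; the PnP string format is the one from the slide): option 43 carries the PnP string that points the device at Cisco DNA Center.

! IOS DHCP server handing out the PnP discovery string (illustrative values only)
ip dhcp pool PNP-MGMT
 network 192.168.139.0 255.255.255.0
 default-router 192.168.139.1
 option 43 ascii "5A1D;B2;K4;I192.168.139.151;J80"   ! I = Cisco DNA Center IP, J = port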


Day 0 Provisioning Flows
• Good – Unclaimed workflow: the device is ordered without a Smart Account; the admin adds the device SN in Cisco DNA Center and the workflow is securely pushed to the device.
• Better – Pre-provision workflow: the device is ordered with a Smart Account in CCW, so the device SN is known in advance and the workflow is securely pushed once the device powers on.
• Best – Cloud-sync workflow: PnP Connect (cloud-based device discovery) ties the device SN to the Smart Account; the installer racks, cables and boots the device at the customer site/branch and the workflow is securely pushed to it.
Day-0 Automation – From Order to Provision
1. The customer Smart Account is added as part of ordering in CCW; the Cisco supply chain adds the device SN to the customer Smart Account and the SN label is applied to the device.
2. The device SNs per Smart Account become available in PnP Connect (cloud-based device discovery).
3. Cisco DNA Center registers its identity (controller profile) with PnP Connect over SSL and downloads the SNs from it; the corporate profile is mapped to the site.
4. The installer racks, cables and boots the device at the customer site/branch; the device reaches PnP Connect, which instructs it to contact the on-premises Cisco DNA Center controller.
5. Upon discovery and association, the device is provisioned: Cisco DNA Center deploys the image and configuration.
PnP Workflows = Design + Provision
PnP Workflows = Design + Provision
• PnP is achieved through two major phases, Design and Provision.

• Design Phase

• Define network sites, network settings and device credentials

• Define golden image

• Define network profile and assign to sites

• Provision Phase

• Plan for PnP discovery (not part of Cisco DNAC)

• Claim device via PnP (apply device credentials, image and CLI template in profile)

• Complete provisioning with network settings


Site in Cisco DNA Center – Heart of Simplified Deployment
• Design: a network profile mapped to a site/building creates a standardized network configuration applied across sites.
• Design: network settings are mapped to sites.
• Provision: software management gives an automated software compliance check and update.
• Provision: network devices mapped to a site inherit the properties of the profile associated with that site.
Intent-based workflows, automated deployment, Cisco best practices.

114
Network Profiles
Intent-Based Design
• A key concept in Cisco DNA automation to standardize configurations for routers, switches and WLCs across sites.
• Types of network profiles: NFVIS, routing, switching, wireless and firewall.
• Design and create once, then use across multiple sites.
• Only one of each type of network profile is allowed per site.
• Maintains the mapping of the network design to the network elements deployed.
• Version management of network profiles to track changes.
Network Profile - Switching
• CLI templates and device credentials: user-defined configuration.
• Network settings: configuration generated by Cisco DNA Center UI orchestration, including:
 • Device credentials
 • AAA (RADIUS and TACACS)
 • DHCP and DNS
 • Syslog, SNMP, and NetFlow collector
 • NTP server
 • Message of the day
Device Onboarding Design Workflow (Switching/Routing)
Create Sites → Define Network Settings → Define Golden Image (optional) → Create Onboarding Templates (optional) → Define Network Profile → Assign Network Profile to Sites
Device Onboarding Design Workflow
Step 1 – Create Sites: build the site hierarchy at the area, building and floor level.
Step 2a – Define Network Settings, AAA: TACACS and RADIUS against ISE (policy admin node and policy service nodes).
Step 2b – Define Network Settings, non-AAA common settings.
Step 2c – Site-level inheritance and override: settings are inherited from the parent level and can be overridden per site.
Step 2d – Device credentials. Note that users MUST select and save the credentials at the site or global level after creating them; during PnP onboarding, Cisco DNA Center will not proceed unless the credentials at the site are selected and saved.
Device Onboarding Design Workflow
Image Repository Key Points
• "Marking Golden Image" is a key concept in Cisco DNA Center to standardize the image version across the enterprise.
• Prior to the Cisco DNA Center 1.2.8 release, "Marking Golden Image" was only available for devices already in the inventory.
• From the 1.2.8 release onward, this functionality extends to the Day-0 device onboarding scenario, where devices are not yet part of the inventory.
• Only a "Golden" image is eligible for Day-0 PnP claim after 1.2.8.
• Only router and switch software upgrades are supported for Day-0 PnP claim.
Device Onboarding Design Workflow
Step 3a – Import image: upload the software image. A few minutes after the import completes, the imported image is shown under the new generic family called "Imported Images".
Step 3b – Assign a device family to the imported image.
Step 3c – Mark the image as Golden.
Device Onboarding Design Workflow
Template Editor Key Points
• There is a new out-of-box template project, "Onboarding Configuration", intended for Day-0 PnP claim only.
• Only templates under it are eligible for Day-0 PnP claim; user-defined projects and their templates are for Day-N provisioning.
• PnP claim supports only a single template, not composite or multiple templates.
• Only routing and switching Day-0 onboarding templates are supported.
Device Onboarding Design Workflow
Step 4 – Create Onboarding Templates (optional): Day-0 templates (with variables) live in the out-of-box "Onboarding Configuration" project, while user-defined projects hold Day-N templates. Only the latest committed version of a template can be used for provisioning, including PnP claim.
Device Onboarding Design Workflow
How can a desired static IP, rather than the DHCP address, be used as the device management IP after PnP claim?
Answer: Use "ip http client source-interface <interface>" in the template. This makes the device call home to the PnP server from the new IP address once the configuration is applied, and the PnP server then hands that address to the inventory component of Cisco DNA Center as the management IP. This is especially useful with multiple IPs or VRFs. It is also recommended to shut down, or remove DHCP from, the original interface used for PnP discovery so that the previous HTTPS session with the DHCP IP is cleared.

Why shut down VLAN 1? Could this cause PnP failure due to a partial configuration or loss of connectivity?
Answer: No, for two reasons:
1. PnP copies the whole configuration file from server to device via HTTPS, not line by line as SSH would, so the configuration is applied completely.
2. Although shutting down VLAN 1 briefly interrupts connectivity, PnP succeeds as long as the device can call home to the PnP server with the new IP address within 80 seconds.
Note that switch uses VLAN 1 for DHCP and PnP discovery by default.
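A hedged sketch of a Day-0 onboarding template applying that advice (the variable names such as $HOSTNAME and $MGMT_IP, the VLAN and the gateway are illustrative assumptions): a static management SVI is configured and the HTTP client is re-sourced from it so the device calls home on the new IP, while VLAN 1 is shut down.

! Illustrative Day-0 onboarding template body (variable names are examples)
hostname $HOSTNAME
!
interface Vlan100
 description Management
 ip address $MGMT_IP 255.255.255.0
 no shutdown
!
ip default-gateway $MGMT_GW
ip http client source-interface Vlan100   ! call home to the PnP server from the new IP
!
interface Vlan1
 shutdown                                 ! original DHCP/PnP discovery interface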
Device Onboarding Design Workflow
Step 5 – Define the Network Profile: add the onboarding template defined earlier in the Template Editor.
Step 6 – Assign the Network Profile to sites. Note that only one of each type of network profile can be applied per site.
PnP Provision Workflow (Switch)
• Step 0 – Plan for PnP discovery: plan DHCP option 43 or DNS so that devices can discover Cisco DNA Center (this part is outside Cisco DNA Center itself).
• Step 1 – Claim to site via PnP. Part 1, PnP claim, provisions the device credentials, golden image and CLI template(s) of the profile; part 2 adds the device to the inventory and applies Device Controllability settings, if enabled.
• Step 2 – Complete profile provisioning: the network settings of the profile are applied.
Device Provision Workflow – Step 0: PnP Discovery (DHCP Option 43)
The DHCP server returns option 43 with the PnP string (e.g. 5A1D;B2;K4;I192.168.139.151;J80), which points the router, switch or AP at the Cisco DNA Center (PnP server) IP. The device then calls home over HTTP/SSL, appears as "Unclaimed" in Cisco DNA Center, and exchanges PnP messages with it until it is claimed and provisioned.
Device Provision Workflow – Step 1: PnP Claim (Switch Example)
1a) Start claiming: select the unclaimed device in the new 1.3.1 UI (a stack icon indicates a switch stack).
1b) Site assignment: assign the device to a site; the device hostname can be changed here and does not need to be defined as a variable in the template.
1c) Configuration: by default, the golden image and the onboarding template are populated automatically for the device; click the device if any configuration update is required. For a stack:
• Click the backspace button to skip the image upgrade; the template can be previewed.
• Two stacking cabling schemes are supported starting with 1.3.1, defaulting to the 1B scheme; stack renumbering is only supported for the 1B cabling scheme from IOS-XE 16.6.4 onward.
• Select the top-of-rack switch serial number to renumber the stack. Starting with 1.3.1, if the renumbering result matches the existing numbering in the stack, PnP is intelligent enough not to reload the stack.
Device Provision Workflow – Step 1: PnP Claim
Switch Example – Stack
1c) Configuration (new 1.3.1 UI)
• Use Actions to apply the image, template and license to multiple devices.
• Use Actions to clear images, templates and licenses for multiple devices.
• Review the PnP tasks to be performed for claiming.
Device Provision Workflow – Step 1: PnP Claim
Switch Example
1d) Advanced Configuration (new 1.3.1 UI)
• Input values for the template variables.
Device Provision Workflow – Step 1: PnP Claim
Switch Example
1e) Summary (new 1.3.1 UI)
• Click on a device to review detailed information and a summary of what will be done by PnP.
• A green checkmark indicates the device is "Ready to Claim".
Device Provision Workflow – Step 1: PnP Claim
Switch Example
1e) Summary (new 1.3.1 UI)

The summary shows both the user-defined CLI configuration and the Day-0 configuration generated by Cisco DNA Center.

Day-0 configuration generated by Cisco DNA Center (sample switch configuration, release 1.3.1):
• Device credentials (CLI and SNMP)
• Enable SSH v2 and the SCP server
• Disable the HTTP and HTTPS servers
• For a switch, "vtp mode transparent" and "ip routing" are enabled in 1.2.8
• For a switch, only "vtp mode transparent" is enabled in 1.2.10 or later

Note that the Day-0 configuration generated by Cisco DNA Center is applied first, then the user-defined configuration.
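As a rough, hedged illustration of those bullets (usernames, secrets and community strings are placeholders; the exact configuration generated by Cisco DNA Center differs by release and by the Design settings):

username dnac-admin privilege 15 algorithm-type scrypt secret <cli-password>
snmp-server community <ro-community> RO
ip ssh version 2
ip scp server enable
no ip http server
no ip http secure-server
vtp mode transparent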
Device Provision Workflow – Step 1: PnP Claim
Switch Example
1f) PnP Complete – Provision Success (new 1.3.1 UI)
• After a few minutes, the device is provisioned successfully.
Device Provision Workflow – Step 1: PnP Claim
Switch Example
1f) PnP Complete – Add to Inventory
• The static IP is used as the management IP automatically.
• Note that when "Device Controllability" is enabled on Cisco DNA Center, the following configurations are added when the switch (or stack) is added into inventory:
  • IP Device Tracking
  • Cisco DNA Center Root CA
  • SNMP server and enable SNMP traps
  • Syslog server, with the logging level set to "Critical"
  • SSH sourcing from the management IP
  • Cisco TrustSec (CTS) credential
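A minimal sketch of a few of those lines as they might appear on the switch (the Cisco DNA Center IP reuses the example address from the Option 43 slide; everything else is a placeholder):

snmp-server host 192.168.139.151 version 2c <community>
snmp-server enable traps
logging host 192.168.139.151
logging trap critical
ip ssh source-interface Loopback0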
Device Provision Workflow – Step 2: Complete Provisioning
Switch Example
2a) Provision Device
2b) Assign to Site
2c) Advanced Configuration

Note that there is no Day-N template in this specific example.
Device Provision Workflow – Step 2: Complete Provisioning
Switch Example
2d) Summary

Configuration generated by Cisco DNA Center (sample switch configuration – Provision, network settings):
• TACACS and RADIUS
• DNS and domain name
• NTP
• CTS, service template and web-auth ACL
• Enable the HTTP and HTTPS servers
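A hedged sketch of the kind of configuration this step pushes (all server IPs, names and keys are placeholders):

ip name-server 192.0.2.10
ip domain name example.local
ntp server 192.0.2.20
radius server ISE-1
 address ipv4 192.0.2.30 auth-port 1812 acct-port 1813
 key <shared-secret>
ip http server
ip http secure-server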
Device Provision Workflow – Step 2: Complete Provisioning
Switch Example
2f) Provision Success
• Change to the "Provision" focus to verify the status.
Device Provision Workflow – Step 2: Complete Provisioning
Switch Example
2f) Provision Success – ISE
• The device is added to ISE by Cisco DNA Center as an AAA client for RADIUS and TACACS automatically, via the ERS API.
Additional Info / Troubleshooting
Cisco DNA Center – SWIM Deep Dive
Software Image Management

Intent-Based Network Upgrades
• Captures your upgrade intent to automate the process and drive consistency.

Streamlined Upgrade Process
• Upgrade the base image, patches, and other add-ons in one single flow.

Trustworthiness Integration
• Assures that device images are not compromised in any way.

Patching Support
• Pre/post checks ensure that updates do not have adverse effects on the network.

[Figure: SWIM upgrade cycle around Cisco DNA Center – Identify Golden Image, Update Golden Image, Select Devices, PreCheck Validations, Create CR, Approve CR, Distribute Software, Activate Software, Post Deploy Validations, Request Software Update.]
Automate your software upgrade cycle (SWIM/SMU)
• Use cases
  • Ensure consistency of software for all network devices (by platform type)
  • React to PSIRTs and bugs fast
  • Deploy software with confidence
• Benefits
  • Golden Image based workflows drive software consistency
  • Pre/post checks ensure that software updates do not have adverse effects on the network
  • Patching provides small updates to react quickly to security fixes
Visualize Software Images

❖ For a given device family, view:
  ▪ All images
  ▪ Image version
  ▪ Number of devices using a particular image

❖ Image Repository to centrally store software images, VNF images and network container images
Image Repository: Suggested Images – Automatic

Suggested Images:
• Cisco DNA Center can display the Cisco-recommended software images for the devices that it manages (by device type).
• Cisco.com credentials are required.
• If a recommended image is marked as Golden, Cisco DNA Center automatically downloads it from cisco.com.
Manual Software Images Upload

▪ Import images/SMUs from:
  ▪ Cisco.com
  ▪ URL (HTTP/FTP)
  ▪ Local PC
  ▪ Another managed network device

▪ Remote file server
  ▪ Localized file server for software distribution
  ▪ File server mapped
Image Standardization – "Golden Images"

Device Family
• Golden image per device family (includes routers, switches and WLCs)

Device Role
• Devices in the same family are classified by role
• Ex: a CAT3850 as an access switch vs. a distribution switch

Site Mapping
• The site hierarchy provides override of the golden image
• Ex: Amer uses v16.1 vs. APJC uses v3.8
Devices not Compliant with Golden Image

Built-in compliance checks automatically flag devices that are not running the golden image.
Devices not Compliant with Golden Image

Image Update Readiness Check: provides a way to check whether a network device is ready for an upgrade to the golden image. It runs pre-checks such as a flash (free space) check, a file transfer check, and so on.

The following protocols are used for file transfer, depending on what is enabled on the network devices:
1) HTTPS
2) SCP
3) *SFTP (for WLCs and Nexus devices)
Managing Software Lifecycle
Select the Software Images section to get a detailed view of devices and software images.
SWIM/SMU Workflow Experience with DNA Center

1. Select the device(s) to update and the image/SMU.
2. Distribute the image to the network devices; this can be done immediately or scheduled based on requirements.
3. Activate the image. This action may reboot the network device, depending on the software/patch upgrade.
4. Confirm the summary of the distribution and activation tasks.
5. Review the status of image distribution and activation. This also shows the pre-check and post-check scripts that are used in the activation process.
SMU (Software Maintenance Update)

What is an SMU?
▪ Point fixes for IOS-XE images (16.x onwards)
▪ Provides the ability to update just what is needed
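For reference, the device-level equivalent that DNA Center drives for you uses the IOS-XE install commands; a hedged one-liner follows (the SMU filename is a made-up placeholder):

Switch# install add file flash:cat9k_iosxe.16.09.04.CSCvx12345.SPA.smu.bin activate commit
Switch# show install summary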

Why SMU? Full-image upgrades bring several pain points:
• Each device update causes a network outage
• New code requires bug analysis and certification
• Copying images to sites over slow VPN tunnels
• Business loss and downtime
• Time consuming
• Reduced IT staff
• Slows down software rollouts
SMU (Software Maintenance Upgrade)
▪ SMU details on DNA Center
▪ Impact on the device – reboot (Yes/No)
Software Upgrade – Integrity Verification

[Figure: end-user deployment (network devices, integrity verification) tied to the Cisco development cycle (development, Known Good Value collection, publication on CCO).]

Is the software used by the device authentic?
This includes checks of the software files (Known Good Value) and the in-memory (Imprint Value) contents, and also covers shell access attempts (Event Occurrence).
Software Upgrade – Integrity Verification
• To provide a level of security and integrity, devices must run authentic and valid software.
• Cisco DNA Center Integrity Verification compares collected image integrity data to Known Good Values (KGV) for Cisco software.
• Cisco produces and publishes a KGV data file that contains KGVs for many of its products.
• The MD5 or SHA values of the images are validated against the KGVs.
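For a manual spot check of the same idea on a device, the hash of an image can be computed locally and compared against the value published on cisco.com (the filename below is a placeholder):

Switch# verify /md5 flash:cat9k_iosxe.16.09.04.SPA.bin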
Software Upgrade – Integrity Verification
• KGV file:
  • Standard JSON format, signed by Cisco
  • Can be retrieved from Cisco and uploaded into Cisco DNA Center
Use Case: ROMMON Upgrade
ROMMON can now be upgraded along with the software image upgrade as part of the same workflow.
ROMMON Upgrade

▪ CCO connectivity is mandatory in order to always get the latest version of the ROMMON images.
▪ ROMMON is upgraded to the latest ROMMON firmware only; users cannot select which ROMMON code to upgrade to.
▪ Newer devices like the Catalyst 9000 have ROMMON built into their base images, so no separate ROMMON upgrade is required.
ROMMON Upgrade

▪ ROMMON firmware upgrade is supported only together with a base IOS image upgrade, not on its own.
▪ Two reboots occur: the first reboot for ROMMON, then a reboot for the base image upgrade.
ROMMON Upgrade

▪ Show Tasks displays the device upgrade status for both the ROMMON and the software image.
Supported Devices for Routers / Switches

Routers:
PID                 Standalone   ISSU
ISR 4431            YES          NA
ISR 4221            YES          NA
ISR 4351            YES          NA
ISR 4451-X          YES          NA
ASR 1001-X          YES          NA
ASR 1002-X          YES          NA
ASR 1006-X (RP2)    YES          NA
ASR 1006-X (RP3)    YES          NA
ASR 1009-X (RP2)    YES          NA
ASR 1009-X (RP3)    YES          NA
ASR 1001-HX         YES          NA
ASR 1002-HX         YES          NA

Catalyst Switches (PID):
C4500-E/X (SUP 7E|7LE|8LE)
C4500-X (SUP 7E|7LE|8LE)
C4507R+E (SUP 7E|7LE|8LE)
C4503/6E (Sup 8E|9E)
C4507R+E (Sup 8E|9E)
C4510R+E (Sup 8E|9E)
C4500X Fixed Chassis
C6503/4/6/9E (Sup 2T|6T)
C6513E (Sup 2T|6T)
C6807-XL (Sup 2T|6T)
C6840-X and C6880-X
Use Case: SWIM Platform APIs/Events
REST APIs for automation and SWIM events for ServiceNow.

SWIM REST APIs
SWIM provides a wide range of APIs that customers can use to automate software image management.
SWIM REST APIs Example
• APIs can be used to automate the entire SWIM workflow.
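As a hedged sketch only (endpoint paths and payloads vary by Cisco DNA Center release, so verify them against the API documentation on your cluster before relying on them), authenticating and listing repository images looks roughly like this:

# Obtain a token
curl -k -u admin:<password> -X POST https://<dnac_ip>/dna/system/api/v1/auth/token
# Use the returned token to query the image repository
curl -k -H "X-Auth-Token: <token>" "https://<dnac_ip>/dna/intent/api/v1/image/importation"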
SWIM Events
▪ SWIM bundles can be used to configure integration with ITSM tools such as ServiceNow, or with any other REST endpoint.
▪ This integration publishes SWIM events (compliance, operations, etc.), which in turn enables automated ticket creation in ServiceNow.
▪ It also publishes SWIM events to REST endpoints, which customers can use to build their own SWIM events reports.
SWIM Events Example
• The event source is Cisco DNA Center, which performs the compliance check for software images.
• Cisco DNA Center provides the details of the alert; in this case, the software image of the network device is outdated.
Cisco DNA Assurance

Network Quality is a Complex, End-to-End Problem

There are 100+ points of failure between the user and the application: client firmware and density, AP coverage, RF noise/interference, WLC capacity, WAN uplink usage, WAN QoS and routing, end-user services and configuration, authentication (ISE), addressing (DHCP), CUCM, data-center services, and more. Some affect join/roam, some affect quality/throughput, and some affect both.

The operational questions are always the same:
• What is the problem?
• Where is the problem?
• How can I fix the problem fast?
IT Challenge: 43% of IT Time Spent in Troubleshooting

• 4x – network operators spend more time collecting data than analyzing it while troubleshooting.
• Replication challenge – it is impossible for IT to troubleshoot if they cannot replicate the issue or see it in real time.
• Slow resolution – half of Wi-Fi issues take more than 30 minutes to resolve.
(Source: McKinsey Study of Network Operations for Cisco, 2016)
Cisco DNA Assurance
From network data to business insights

Pipeline: network telemetry and contextual data → complex event processing (metadata extraction, correlation, stream processing, baselining) → correlated insights → suggested remediation.

Telemetry and context sources include: Syslog, SNMP/OID/MIB, NetFlow, AAA, DHCP, DNS, IPAM, Telnet/CLI, Traceroute, Ping, IP SLA, wireless data, AppDynamics and CMX.

Over 150 actionable insights. Everything acts as a sensor: Clients | Applications | Wireless | Switching | Routing.
Cisco DNA Assurance and Analytics
Is my network healthy, are my users happy… and if not, why not?

Common Use Cases
• How are my clients, applications, and network infrastructure doing?
• Has anything changed in my network?
• How can I troubleshoot a problem I am having now?
• How can I prevent problems from happening in the future?
• How am I doing compared to others in my industry?
Cisco DNA Assurance – Strategic Investment Pillars

1  Full Stack Health Visibility
2  Accelerated Issue Remediation
3  Ecosystem Integration
4  AI-Driven Network Analytics
5  Enterprise Robustness

Pillar 1 – Full Stack Health Visibility: access to more kinds of data for end-to-end visibility
• Wireless (e.g. Wi-Fi 6) + wired
• Application experience
• Security health
• Synthetic: wired and wireless sensors
• Cross-domain integration (future)

Application usage data from switches and WLCs – not just routers anymore
• Within Device 360 for switches and WLCs, you now get application experience data (usage and average throughput).

New Wi-Fi 6 Dashboard
• Wi-Fi 6 readiness
• Wi-Fi 6 benefits
• Advanced troubleshooting
• Industry trends and adoption
Pillar 2 – Accelerated Issue Remediation: streamline issue detection through resolution (coverage, flexibility and consumption)
• Increasing the types of issues and insights detected
• Customized thresholds and trigger conditions
• Optimized dashboards and issue display
• Better root cause analysis
• Closed-loop remediation
• Better analytics and reporting

New Issue Dashboard
• Shows the top sites needing attention
• New visual timeline showing when problems occurred
• Navigate directly to AI-driven issues

Health Score Customization
• Control which KPIs are included in the health score
• Adjust parameters to control good/poor health
Pillar 3 – Ecosystem Integration: Cisco apps, 3rd-party apps/devices/infrastructure, …
• DNA Spaces, Webex, AppDynamics, Stealthwatch, …
• Infoblox, BlueCat, S4B, …
• Samsung, Apple, Microsoft, …
• SDKs for HP/Aruba, Juniper, Huawei
• Data as a Service via external Assurance APIs
• External notifications, webhooks for Slack, Webex Teams, …

Event Notification
• Get event notifications via email or webhooks.

Device Ecosystem Integration (Samsung Analytics)
• Get client classification details from the Samsung client.
• Get 20+ onboarding error states from the Samsung client.
Pillar 4 – AI-Driven Network Analytics: helping humans work smarter
• AI-driven, on-prem and cloud network analytics
• Machine Reasoning Engine
• Device Classification Service
• Conversational Interface / Natural Language Processing (CI/NLP)
• Making DNA Center smarter – in production now

Cisco AI Network Analytics

Key customer benefits: highly personalized; see problems sooner; solve problems faster; cut out unwanted noise.

• AI-Driven Predictive Analytics – anticipate and prevent failures
• AI-Driven Comparative Analytics – compare KPIs internally and to peers
• AI-Driven Proactive Insights – find global patterns and systemic trends
• AI-Driven Anomaly Detection – surface and root-cause complex issues
• AI-Driven Baselining – define "normal" for a given network

Model training: 16K radios, 1.2M hours of radio traffic and 17M onboarding attempts seen weekly.
Infrastructure: physical | virtual | programmable | app hosting.

Visualization and Navigation Improvements
• New visualizations are integrated with the Device 360 pages (direct links from the Network Heatmap and Network Insights).
Pillar 5 – Enterprise Robustness
• Performance and scalability
• High availability and disaster recovery
• Quality, serviceability, telemetry
• Fit and finish / usability
• Deployment flexibility (on-prem, cloud, hybrid)
DNA Center – Maglev Logical Architecture

[Figure: layered stack – App Stack 1, App Stack 2, … App Stack N on top; APIs, SDK & Packaging Standards; Maglev Services; IaaS (bare metal, ESXi, AWS, OpenStack, etc.) at the bottom.]
Cisco SD-Access (Fusion) Package Services

Service                               Description
apic-em-event-service                 Trap events and host discovery; SNMP traps are handled here
apic-em-inventory-manager-service     Communication between the inventory and discovery services; critical during provisioning
apic-em-jboss-ejbca                   Certificate authority; enables controller authority on DNA Center
apic-em-network-programmer-service    Configures devices; critical service to check during provisioning
apic-em-pki-broker-service            PKI certificate authority
command-runner-service                Command Runner related tasks
distributed-cache-service             Infrastructure
dna-common-service                    DNAC-ISE integration tasks
dna-maps-service                      Maps related services
dna-wireless-service                  Wireless
identity-manager-pxgrid-service       DNAC-ISE integration tasks
ipam-service                          IP address manager
network-orchestration-service         Orchestration
orchestration-engine-service          Orchestration service
pnp-service                           PnP tasks
policy-analysis-service               Policy related
policy-manager-service                Policy related
postgres                              Core database management system
rbac-broker-service                   RBAC
sensor-manager                        Sensor related
site-profile-service                  Site profiling
spf-device-manager-service            Core service during the provisioning phase
spf-service-manager-service           Core service during the provisioning phase
swim-service                          SWIM
Assurance Services

Service                              Description
cassandra                            Database
collector-agent                      Collector agents
collector-manager                    Collector manager
elasticsearch                        Search
ise                                  ISE data collector
kafka                                Communication service
mibs-container                       SNMP MIBs
netflow-go                           NetFlow data collector
pipelineadmin                        Pipelines and task managers
pipelineruntime-jobmgr               Pipelines and task managers
pipelineruntime-taskmgr              Pipelines and task managers
pipelineruntime-taskmgr-data         Pipelines and task managers
pipelineruntime-taskmgr-timeseries   Pipelines and task managers
snmp                                 SNMP collector
syslog                               Syslog collector
trap                                 Trap collector

Base Services

Service              Description
cassandra            Core database
catalogserver        Local catalog server for updates
elasticsearch        Elasticsearch container
glusterfs-server     Core filesystem
identitymgmt         Identity management container
influxdb             Database
kibana-logging       Kibana logging collector
kong                 Infrastructure service
maglevserver         Infrastructure
mongodb              Database
rabbitmq             Communication service
workflow-server      Update workflow tasks
workflow-ui          Update workflow tasks
workflow-worker      Update workflow tasks
Most Commonly Used Maglev CLIs

$ maglev
Usage: maglev [OPTIONS] COMMAND [ARGS]...
  Tool to manage a Maglev deployment
Options:
  --version             Show the version and exit.
  -d, --debug           Enable debug logging
  -c, --context TEXT    Override default CLI context
  --help                Show this message and exit.
Commands:
  backup                 Cluster backup operations
  catalog                Catalog server-related management operations
  completion             Install shell completion
  context                Command line context-related operations
  cronjob                Cluster cronjob operations
  job                    Cluster job operations
  login                  Log into the specified CLUSTER
  logout                 Log out of the cluster
  maintenance            Cluster maintenance mode operations
  managed_service        Managed-service related runtime operations
  node                   Node management operations
  package                Package-related runtime operations
  restore                Cluster restore operations
  service                Service-related runtime operations
  system                 System-related management operations
  system_update_addon    System update related runtime operations
  system_update_package  System update related runtime operations

$ magctl
Usage: magctl [OPTIONS] COMMAND [ARGS]...
  Tool to manage a Maglev deployment
Options:
  --version     Show the version and exit.
  -d, --debug   Enable debug logging
  --help        Show this message and exit.
Commands:
  api          API related operations
  appstack     AppStack related operations
  completion   Install shell completion
  disk         Disk related operations
  glusterfs    GlusterFS related operations
  iam          Identitymgmt related operations
  job          Job related operations
  logs         Log related operations
  maglev       Maglev related commands
  node         Node related operations
  service      Service related operations
  tenant       Tenant related operations
  token        Token related operations
  user         User related operations
  workflow     Workflow related operations
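For example (a hedged illustration; sub-command options differ between releases, so confirm with --help on your cluster), two commands that are commonly run before digging into individual service logs are:

$ magctl appstack status      # list app stacks/services and their running state
$ maglev package status       # list installed packages and their versions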
Logging
Live Log – Service Log Files:
• To follow/tail the current log of any service:
  magctl service logs -r -f <service-name>
  EX: magctl service logs -r -f spf-service-manager-service

  Note: remove -f to simply display the current logs to the terminal.

• To get the complete logs of any service:
  • Get the container_id using:
    docker ps | grep <service-name> | grep -v pause | cut -d' ' -f1
  • Get the logs using: docker logs <container_id>
Check Service Log in GUI
• Click on the Kibana icon.
• Click on Service Counts.

Monitoring / Log Explorer / Workflow
• System Settings 🡪 System 360 🡪 Tools
• https://<dnacenter_ip>/dna/systemSettings

DNA Center's Monitoring Dashboard
• Monitors DNA Center memory, CPU and bandwidth.

Check Service Log using Log Explorer

Log Messages

Changing DNA Center Logging Levels
How to Change the Logging Level
• Navigate to the Settings page: System Settings 🡪 Settings 🡪 Debugging Levels
• Select the service of interest
• Select the new logging level
• Set the duration DNA Center should keep this logging-level change
  • Intervals: 15 / 30 / 60 minutes or forever
Required information to report an issue
• RCA file
  • SSH to the server using the maglev user:
    ssh -p 2222 maglev@<dnacenter_ip_address>
  • Run the rca command.
  • The generated file can then be copied to an external server using scp/sftp:
    scp -P 2222 maglev@<dnacenter_ip_address>:<rca_filename> <local_path>
• Error screenshot from the UI
• API debug log captured using the browser's debugging mode

Sample rca run:
[Sun Feb 11 14:26:00 UTC] maglev@<dnacenter_ip> (maglev-master-1)
$ rca
===============================================================
Verifying ssh/sudo access
===============================================================
[sudo] password for maglev: <passwd>
Done
mkdir: created directory '/data/rca'
changed ownership of '/data/rca' from root:root to maglev:maglev
===============================================================
Verifying administration access
===============================================================
[administration] password for 'admin': <passwd>
User 'admin' logged into 'kong-frontend.maglev-system.svc.cluster.local' successfully
===============================================================
RCA package created on Sun Feb 18 14:26:14 UTC 2018
===============================================================
Package Update
Upgrade time comparison – DNAC 1.3.1.0

(Also see the Security Guide for the ports and hosts that need to be opened on the firewall in front of DNA Center.)

3-Node Cluster Summary
Task               1.2.8.0 to 1.2.10.4   1.2.10.4 to 1.3.0.0   1.3.0.3 to 1.3.1.0
System update      2Hr 30Min             3Hr 50Min             5Hr
Package download   1Hr 10Min             1Hr 10Min             1Hr 10Min
Package upgrade    2Hr 55Min             2Hr 40Min             3Hr
Total time         6Hr 35Min             7Hr 40Min             9Hr 10Min

Single-Node Summary
Task               1.2.8.0 to 1.2.10.4   1.2.10.4 to 1.3.0.0   1.3.0.3 to 1.3.1.0
System update      1Hr 25Min             2Hr 20Min             4Hr
Package download   1Hr 10Min             1Hr 10Min             1Hr 10Min
Package upgrade    2Hr 40Min             2Hr 5Min              2Hr 35Min
Total time         5Hr 25Min             5Hr 35Min             7Hr 45Min
Monitoring Upgrade
Check the logs for the following services:
• system-updater
• workflow-server
• workflow-worker
• catalogserver
Also monitor the workflow.
DNAC-ISE Integration

Cisco DNA Center – ISE Integration (ISE: Administration 🡪 pxGrid Services)
• The pxGrid service should be enabled on ISE.
• SSH needs to be enabled on ISE.
• Super-admin credentials are used for trust establishment over SSH/ERS communication. By default, the ISE super admin has ERS credentials.
• The ISE CLI and UI user accounts must use the same username and password.
• The ISE admin certificate must contain the ISE IP or FQDN in either the subject name or the SAN.
• The DNAC system certificate must contain the DNAC IP or FQDN in either the subject name or the SAN.
• The pxGrid node should be reachable from DNAC on the eth0 IP of ISE.
• Bypass the proxy for DNAC on the ISE server.
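A couple of quick, hedged sanity checks from the Cisco DNA Center shell (9060 is assumed here as the usual ISE ERS port; adjust for your deployment):

$ ping <ise_eth0_ip>
$ echo | openssl s_client -connect <ise_fqdn>:9060 2>/dev/null | \
    openssl x509 -noout -text | grep -A1 "Subject Alternative Name"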

Trust Status on Cisco DNA Center
• AAA server status (Settings 🡪 Authentication and Policy Servers):
  • INIT
  • INPROGRESS
  • ACTIVE
  • FAILED
  • RBAC_FAILURE
• Identity source status (under System 360):
  • Available / Unavailable (pxGrid state)
  • TRUSTED / UNTRUSTED
Discovery/Inventory/Provisioning Debugging

Troubleshooting – Discovery/Inventory
• Check IP address reachability from DNAC to the device.
• Check the username/password configuration in Settings.
• Check whether the Telnet/SSH option is properly selected.
• Check using manual Telnet/SSH to the device from DNAC or any other client.
• Check that the SNMP community configuration matches on the switch and DNA Center.
• The Discovery view provides additional information.

Service involved on DNA Center: apic-em-inventory-manager-service
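A minimal sketch of those checks from the CLI (credentials and addresses are placeholders):

# From the Cisco DNA Center shell
$ ping <device_ip>
$ ssh <cli_user>@<device_ip>

! On the device
Switch# show ip ssh
Switch# show run | include snmp-server community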
SDA Provisioning – Workflow

Services involved (in order): NB API → SPF Service → Orchestration Engine → SPF Device Messaging → Network Programmer (NP)

Provisioning is started from the UI, then the following steps run:
• Pre-Process-Cfs-Step – determine all the namespaces this config applies to
• Validate-Cfs-Step – validate whether this config is consistent and conflict free
• Process-Cfs-Step – persist the data and take a snapshot for all namespaces in a single transaction
• Target-Resolver-Cfs-Step – determine the list of devices this config should go to
• Translate-Cfs-Step – per device, convert the config into the config that needs to go to the device
• Deploy-Rfs-Task – convert the config into a bulk provisioning message and send it to the NP
• Rfs-Status-Updater-Task – update the device config status based on the response from the NP
• Rfs-Merge-Step – update the task with an aggregate merged message (complete)
SDA Provisioning – Task Status Check
• Click on Show Task Status.
• Check the status.
• Click on View Target Device List.
• Click on See Details.
Troubleshooting – Device / Fabric Provision Issues
Services involved:
• orchestration-engine-service
• spf-service-manager-service
• spf-device-manager-service
• apic-em-network-programmer-service
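The logs of these services can be followed with the same magctl syntax shown in the Logging section, for example:

$ magctl service logs -r -f orchestration-engine-service
$ magctl service logs -r -f apic-em-network-programmer-service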
Cisco SD-Access Fabric
Troubleshooting DHCP
DHCP Packet Flow in Campus Fabric
1. The DHCP client generates a DHCP request and broadcasts it on the network.
2. The fabric edge (FE) uses DHCP snooping to add its RLOC as the remote ID in Option 82 and sets the giaddr to the Anycast SVI. Using DHCP relay, the request is forwarded to the Border.
3. The DHCP server replies with an offer to the Anycast SVI.
4. The Border uses the remote ID in Option 82 to forward the packet back to the correct FE.
5. The FE installs the DHCP binding and forwards the reply to the client.
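Two hedged verification commands on the fabric edge (the VLAN number follows the example topology used later in this section):

FE1# show ip dhcp snooping binding
FE1# show run interface Vlan1021 | include ip helper-address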
Cisco SD-Access Fabric
Troubleshooting Host Onboarding

Typical SD-Access Environment (fabric domain / overlay)
▪ Control Plane Node (CP) – 10.2.100.1
▪ Border Nodes (BDR) – 10.2.100.2 and 10.2.100.3
▪ Fabric Edges FE1 / FE2 / FE3 – RLOCs 10.2.120.1 / 10.2.120.2 / 10.2.120.3
▪ Wired clients – 10.2.1.99 (behind FE1) and 10.2.1.89 (behind FE3)

▪ Underlay network
  ▪ Routing Locator (RLOC) – IP address of the LISP router (underlay-facing)
▪ Overlay network
  ▪ Endpoint Identifier (EID) – IP address of a host
  ▪ VRF – Campus
  ▪ Instance ID – 4099
  ▪ Dynamic EID – 10_2_1_0-Campus
  ▪ VLAN – 1021
Here Is How You Begin
• Host Registration
• Host Resolution
• East-West Traffic
• External Connectivity
• Host Mobility
Case 1: Host Registration – Wired Client
(Client 10.2.1.99 behind FE1 [RLOC 10.2.120.1]; CP at 10.2.100.1 across the IP network)

Control Plane (Map Server) configuration:
router lisp
 site site_sjc
  ...
  eid-prefix instance-id 4099 10.2.1.0/24 accept-more-specifics
 exit

Fabric Edge (FE1) configuration:
router lisp
 ...
 eid-table Campus instance-id 4099
  dynamic-eid 10_2_1_0-Campus
   database-mapping 10.2.1.0/24 locator-set campus_fabric
  exit
Registration Message Flow
1. The client sends an ARP, DHCP or data packet.
2. The FE saves the host info in its local database and sends the registration message to the CP (Map Server).
3. The CP receives the registration message, saves it in the host tracking database and sends the reply.
Verification on the Fabric Edge (FE1)

1. MAC address?
FE1#show mac address-table
1021    0013.a91f.b2b0    DYNAMIC    Te1/0/23
If you don't see the MAC address entry, then it's a SILENT HOST.

2. ARP entry?
FE1#show arp vrf Campus
Protocol  Address     Age (min)  Hardware Addr   Type  Interface
Internet  10.2.1.99   0          0013.a91f.b2b0  ARPA  Vlan1021

3. IP device tracking?
FE1#show device-tracking database
    Network Layer Address  Link Layer Address  Interface  vlan
ARP 10.2.1.99              0013.a91f.b2b0      Te1/0/23   1021

The fabric edge can learn the IP address from ARP, DHCP or data packets. If the device tracking entry is missing, check whether the client got an IP address.
4. LISP local database?
FE1#show ip lisp instance-id 4099 database
LISP ETR IPv4 Mapping Database for EID-table vrf Campus (IID 4099)
LSBs: 0x1 Entries total 3, no-route 0, inactive 0

10.2.1.99/32, dynamic-eid 10_2_1_0-Campus, locator-set rloc_021
  Locator     Pri/Wgt  Source    State
  10.2.120.1  10/10    cfg-intf  site-self, reachable

(The EID 10.2.1.99 is registered against FE1's RLOC 10.2.120.1 in instance ID 4099.)
Enable debugs if the database entry is missing.
If there is no local database entry, enable these debugs on the FE:

debug lisp control-plane local-eid-database
*Jan 17 01:47:15.101: LISP-0: Local EID IID 4099 prefix 10.2.1.99/32, Setting state to active (state: inactive, rlocs: 0/0, sources: NONE).

debug lisp control-plane dynamic-eid
*Jan 17 01:47:15.102: LISP-0: Local dynEID 10_2_1_0-Campus IID 4099 prefix 10.2.1.99/32 RLOC 10.2.120.1 pri/wei=10/10, Created (IPv4 intf RLOC Loopback0) (state: active, rlocs: 1/1, sources: dynamic).

debug lisp forwarding data-signal-discover-dyn-eid
*Jan 17 01:47:15.102: LISP-0: DynEID IID 4099 10.2.1.99 [10_2_1_0-Campus:Vlan1021] Created.
5. LISP control plane entry?
CP#show lisp site instance-id 4099
Site Name  Last Register  Up    Who Last Registered  Inst ID  EID Prefix
site_sjc   never          no    --                   4099     10.2.1.0/24
           3d23h          yes#  10.2.120.1           4099     10.2.1.99/32

(FE1's RLOC 10.2.120.1 has registered EID 10.2.1.99/32 in instance ID 4099.)
Enable debugs on the FE and the Control Plane if the entry is missing.
Check whether the FE has sent the map request / registration message (on FE1):

debug lisp control map-request
*Jan 17 01:56:01.045: LISP: Send map request for EID prefix IID 4099 10.2.1.99/32

debug lisp forwarding data-signal-map-request
*Jan 17 01:56:02.204: LISP-0: EID-AF IPv4, Sending map-request from 10.2.1.99 to 10.2.1.99 for EID 10.2.1.99/32, ITR-RLOCs 1, nonce 0x0B5B0D11-0x5110DF55 (encap src 10.2.120.1, dst 10.2.100.1).

Verification of the registration message (on the Control Plane):

debug lisp control-plane map-server-registration
*Jan 17 01:57:27.716: LISP-0: MS EID IID 4099 prefix 10.2.1.99/32 site site_sjc, Forwarding map request to ETR RLOC 10.2.120.1
On FE1, confirm that the Map-Reply is received:

debug lisp forwarding eligibility-process-switching
*Jan 17 01:56:02.209: LISP: Processing received Map-Reply(2) message on TenGigabitEthernet1/0/1 from 10.2.100.1:4342 to 10.2.120.1:4342

Verification at the FEs

4b) FE1 (RLOC 10.2.120.1)
FE1#show ip lisp instance-id 4099 database
10.2.1.99/32, locator-set rloc_021a8c01-5c45-4529-addd-b0d626971a5f
  Locator     Pri/Wgt  Source    State
  10.2.120.1  10/10    cfg-intf  site-self, reachable

FE1#show ip lisp map-cache instance-id 4099
10.2.1.89/32, uptime: 00:00:06, expires: 23:59:53, via map-reply, complete
  Locator     Uptime    State  Pri/Wgt
  10.2.120.3  00:00:06  up     10/10

4c) FE3 (RLOC 10.2.120.3)
FE3#show ip lisp instance-id 4099 database
10.2.1.89/32, locator-set rloc_021a8c01-5c45-4529-addd-b0d626971a5f
  Locator     Pri/Wgt  Source    State
  10.2.120.3  10/10    cfg-intf  site-self, reachable

FE3#show ip lisp map-cache instance-id 4099
10.2.1.99/32, uptime: 00:00:06, expires: 23:59:53, via map-reply, complete
  Locator     Uptime    State  Pri/Wgt
  10.2.120.1  00:00:06  up     10/10
Thanks