
Deploying Your First

SD-Access Project
Based on the LISP/VXLAN stack

Sergey Nasonov, Solutions Engineer


CCIE R&S 62572
BRKENS-2824

#CiscoLive
Cisco Webex App
https://ciscolive.ciscoevents.com/ciscolivebot/#BRKENS-2824

Questions?
Use the Cisco Webex App to chat with the speaker after the session.

How:
1. Find this session in the Cisco Live Mobile App.
2. Click “Join the Discussion”.
3. Install the Webex App or go directly to the Webex space.
4. Enter messages/questions in the Webex space.

Webex spaces will be moderated by the speaker until June 7, 2024.
Introduction
• The session assumes fundamental knowledge of the SD-Access solution:
• BRKENS-2810 – Cisco SD-Access Solution Fundamentals
• BRKENS-2811 – Connecting Cisco SD-Access to the External World
• BRKENS-2814 – Role of ISE in SD-Access
• BRKENS-2827 – Cisco SD-Access Migration Tools and Strategies

• A practical session – no textbook examples.
• These are my opinions; different people will have different opinions based on their own experiences.
• “OK, I’ve watched all these videos and read the CVD – where do I start?”
Agenda
• Planning SD-Access Deployment
• Designing SD-Access Deployment
• Implementing or Migrating to SD-Access
• Take-aways or What’s Next?
Nomenclature
• Catalyst Center. Cisco network management solution, formerly known as DNA Center.
• Endpoint (EP). A connected device that is not routing traffic: a laptop, workstation, server, printer, BMS system and so on.
• SD-Access Fabric. For the purposes of this presentation, an overlay-based connectivity solution implemented by SD-Access Border Nodes, Control Plane Nodes, Edge Nodes and, optionally, fabric-enabled wireless controllers and Access Points using the LISP/VXLAN stack.
Planning SD-Access Deployment

Collect the Requirements
Cisco SD-Access has a few considerations that a network designer needs to be aware of:
• Deployment-wide – Catalyst Center: number of endpoints (EPs – concurrent/transient), number of network devices, number of interfaces, IP pools and L2 overlays.
• Site level – Border and/or Control Plane nodes and Catalyst Center:
• Logical: number of concurrent EPs (v4/v6, wired/wireless), RTT to controllers, IP pools, L2 handoffs.
• Physical: number of fabric devices per site.

All scalability limits are well documented in the Cisco Catalyst Center Data Sheet, but it’s hard to apply them to a design when doing it for the first time.
Meet ACME Corporation
Large manufacturing organization – legacy network refresh.
Main site – 3 sub-areas interconnected via dark fibre in a ring topology:
• 25,000 users with 45,000 concurrent devices.
• 2100 x WS-C2960X access switches in 1300 access switch cabinets.
• 5200 x AIR-CAP3702I wireless access points.
• 700 VLANs for user and device segmentation.
• L3 boundary at the distribution layer, MPLS for segmentation, DC firewall as the enforcement point.
• Multiple business units share the same network.

Two onsite active/active data centers with applications, Internet access and public cloud peering.

Remote sites – 70 small sites, currently connected via an MPLS network:
• 1 switch per site.
• 1-2 APs per site.
ACME Diagram
(Topology: Campus Areas 1, 2 and 3 interconnected in a ring; the Data Center with Internet (WWW) access; remote Sites 1–3 connected via MPLS. The legend distinguishes Layer 3 and Layer 2 links.)
Cisco SD-Access Design Tool
• The Cisco SD-Access Design tool is used once the high-level requirements (number of sites, number of EPs, Catalyst Center, ISE, wired/wireless, etc.) are collected.
• Input the requirements into the tool and it generates an HLD.
• Available to everyone with a Cisco.com account at http://cs.co/sda-design-tool.
ACME Business Drivers for Cisco Campus Fabric
• Network Resiliency: eliminate Layer 2 protocols from the network; all traffic is routed.
• Unified Wired/Wireless: user or device security level, visibility, QoS policy and traffic path are independent from the access medium.
• Segmentation: centrally defined security policy that is enforced at line rate at access switches, without tunnelling traffic to centralised firewalls.
• Network Automation: all network operations are performed from a central management controller; CLI is no more.
Designing SD-Access Deployment

External Dependencies
Before you spin up your first SD-Access fabric site, you will need:
• Catalyst Center – the automation engine for SD-Access.
• DHCP / DNS – if you intend to provide these services to users connecting to the SD-Access network.
• Cisco ISE – if you want to authenticate and authorize users or devices.
• Cisco WLC – if you want to provide wireless access. A WLC can enable fabric-enabled wireless for a single site only.
• Fusion device (typically a firewall) – to implement VRF route-leaking and enforce security policy at the leaking point.
External Dependencies
All external dependencies reside outside the fabric site and just need IP (Layer 3) connectivity to fabric devices. Latency requirements:
• Catalyst Center to fabric devices – 200ms RTT.
• ISE to fabric devices – 100ms RTT.
• Fabric WLC to fabric APs – 20ms RTT (put it onsite).
How Would You Carve Your Fabric Sites?
• A fabric site is an instance of an SD-Access Fabric.
• A collection of Edge Node switches using the same set of CP/BN switches.
• Typically defined by disparate geographical locations, but not always.
• Can also be defined by:
• Endpoint scale.
• Failure domain scoping.
• Underlay connectivity attributes (MTU, multicast).
• Fabric sites are typically interconnected by a “Transit”.
Site Limits – Endpoint Scale
The Control Plane Node keeps information about all site endpoints in RAM and uses CPU to process it (including wireless roaming events).
• C9300/C9300L switches can support up to 16,000 EPs as a CP node.
• C9500-32C / C9500-48Y4C / C9500-24Y4C switches can support up to 80,000 EPs as a CP node.
• Other C9K switches are possible in the CP role; sizing values are documented in the Catalyst Center Data Sheet.
Site Limits – Endpoint Scale
The Border Node keeps all EP information in TCAM as host routes. If an EP has multiple IP addresses (v4 + multiple v6), each address is counted as an individual entry.
• C9300/C9300L switches can support up to 16,000 IP host routes (/32 or /128) as a Border Node.
• C9500-32C / C9500-48Y4C / C9500-24Y4C switches can support up to 150,000 IP host routes as a Border Node.

Full Border Node sizing values for all SD-Access platforms are documented in the Catalyst Center Data Sheet.
Site Limits – Failure Domain Scoping
• All Edge Nodes in a site share the same set of Control Plane and Border Nodes. If all CP or BN nodes fail, the site fails*. An SD-Access site with fabric wireless can have 2 CP nodes max.
• A lot of configuration elements (VRF, VLAN, multicast, wireless, default switchport policy) are applied at the site level, to all** fabric site switches at the same time.
• A fabric site is underpinned by a single instance of an underlay routing protocol (IGP) as well as an overlay routing protocol (LISP), and is visible as a single BGP AS from the outside world.

*During a total CP failure, no new endpoints can be onboarded into the fabric and roaming events won’t work. Existing traffic flows will be cached for 24 hours.
**Some changes can be scoped to a limited subset of switches via Fabric Zones; see BRKENS-3833 for details.
Site Limits – Underlay Connectivity Attributes
• Avoid mixing different underlay connectivity attributes, such as MTU or multicast support, because you will end up dropping to the lowest common denominator within a fabric site.
(Example: dark fibre links with 9000 MTU and multicast vs a radio link with 1500 MTU and no multicast.)
Multiple Fabric Sites vs Single Fabric Site?
Make a large single fabric site within a single geographical area as long as:
• You stay under the fabric device limit (1200 logical switches for the –XL Catalyst Center) and the endpoint limit (~100,000 EPs).
• Links between parts of your fabric site can support increased MTU (from 1550 to 9000 bytes) and can be multicast-enabled.
Split into multiple fabric sites when:
• Part of your fabric site needs to be online even if the rest of the site is offline.
• Part of your fabric site needs to provide Direct Internet Access for users in the overlay.
Multiple Fabric Sites vs Single Fabric Site for ACME?
Requirement:
• 2100 x WS-C2960X access switches in 1300 switch cabinets.

Solution:
• Three fabric sites in the main campus because of the 1300 switch cabinets (the maximum fabric site is 1200 fabric devices).

Caveats:
• No seamless wireless roaming, as an IP subnet can exist in only one site.
• Each site needs its own set of WLCs and BN/CP nodes.
• Extra switching hardware for the SDA Transit CP nodes.
SD-Access Transit
Allows SD-Access fabric sites to communicate with each other using VXLAN tunnels between Border Nodes, leveraging a plain IP network in between.

Why VXLAN?
• VXLAN carries the VRF and SGT in the header over a plain IP network.
• The transit network just needs to provide IP connectivity between BN Loopback0 interfaces.

Requirements:
• MTU > 1550 bytes.
• Dedicated Transit Control Plane(s).
• Multicast in the transit network.*
*If overlay multicast is required.
What About 70 Small Sites?
Two main options:
• 70 small individual fabric sites.
• 1 “Stretched” fabric site.
Can always mix and match.

Individual site vs “Stretched” site:
• Device roles: Individual – SD-Access Fabric in a Box (FIAB). Stretched – Edge Nodes onsite, set of CP+BN nodes in a central location.
• Management overhead: Individual – high, need to manage 70 sites individually (VRF, BGP, subnets are defined per site). Stretched – low, all changes are performed on a single site.
• Survivability: Individual – high, each site runs its own set of CP/BN nodes. Stretched – low, all sites run a shared set of CP/BN nodes.
• Flexibility: Individual – high, each site can have DIA and a unique routing policy. Stretched – low, all sites have a single egress point (BN/CP at the central location).
WAN SD-Access Site for ACME
VXLAN tunnels run over MPLS between remote-site Edge Nodes and the WAN site BN/CP switches.
(Topology: Data Center and the SDA Fabric “Stretched” Site hub; remote sites 1, 2 … N connected across the MPLS WAN.)
WAN SD-Access Site for ACME
MPLS can be the SD-Access underlay, but:
• Consider MTU – VXLAN adds 50 bytes (ip tcp adjust-mss if required).
• Consider underlay multicast support, required for L2 flooding.
Control Plane – Pub/Sub or Not?
LISP Pub/Sub:
• Released in 2022 with Catalyst Center 2.2.3.X and IOS-XE 17.6.X.
• Reliable and stable.
• Less Control Plane load.
• Faster convergence.
• Requires a default route (0.0.0.0/0) from upstream to work in External Border capacity.
• No longer needs per-VN iBGP peering between Border Nodes.
• All sites connected via SDA Transit need to be on the same CP architecture (Pub/Sub or LISP/BGP).
Greenfield: deploy LISP Pub/Sub.
Control Plane – Colocate with Border or Not?
Scaling parameters: BN – TCAM*; CP – CPU + RAM; colocated BN+CP – TCAM + CPU + RAM.
• The Border Node downloads all fabric host routes into switch TCAM.
• Control Plane RAM is a non-issue from a scale perspective.
• The main CPU stress for the CP is handling wireless roaming for Fabric Enabled Wireless endpoints.
• It is safe to colocate up to 50,000 EPs**, even in a wireless-heavy environment.
• You can split BN and CP for architectural reasons (fault isolation, network modularity) rather than technical ones (scale).
• Avoid using routing platforms (C8K) as Control Plane and/or Border Nodes if possible.
*Number of host (/32 or /128) routes. **C9500H or above.
Underlay Design Options – LAN Automation vs DIY
Underlay build:
• Configure a Loopback0 interface (/32) on each SD-Access BN, CP and Edge node.
• Set increased MTU to accommodate the VXLAN header overhead, set VTP to transparent mode and enable multicast routing.
• Configure point-to-point routed links between each switch in the topology.
• Enable a routing protocol so that each switch in the topology can reach the Loopback0 of every other switch.
• Enable PIM on each point-to-point link and Loopback0, and configure anycast ASM RP on the CP/BN nodes.
• Configure SNMP and SSH credentials – and that’s it! (A minimal sketch follows.)
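For illustration, a minimal DIY underlay sketch for a single fabric node (IOS-XE). The hostname, addresses, credentials and the OSPF choice are assumptions for this example, not prescriptions from the deck:

hostname EDGE-1
! Increased MTU to absorb the 50-byte VXLAN overhead
system mtu 9100
vtp mode transparent
ip routing
ip multicast-routing
!
interface Loopback0
 ip address 10.250.255.11 255.255.255.255
 ip pim sparse-mode
!
interface TenGigabitEthernet1/1/1
 description P2P routed link towards distribution
 no switchport
 ip address 10.250.15.1 255.255.255.254
 ip pim sparse-mode
 ip ospf network point-to-point
!
router ospf 1
 router-id 10.250.255.11
 network 10.250.0.0 0.0.255.255 area 0
!
! Static anycast RP hosted on the BN/CP nodes (address assumed)
ip pim rp-address 10.250.250.1
!
! Credentials so Catalyst Center can discover and manage the switch
username dnac privilege 15 secret SDA-Lab-Pass1
snmp-server community SDA-RO ro
ip ssh version 2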

Underlay Design Options – LAN Automation vs DIY
• Solution approach: LAN Automation – turnkey automation. DIY – CLI template or CLI.
• Routing protocol: LAN Automation – IS-IS (single Level-2 area). DIY – any (most organisations deploy OSPFv2).
• IPv4 address allocations: LAN Automation – separate pools for loopbacks and P2P interfaces (in CC 2.3.5 and later). DIY – anything is possible (as long as it’s IPv4).
• Multicast, BFD, STP and MTU configuration: covered by both.
OSPF or IS-IS for SD-Access Underlay?
1. LISP needs the /32 host route for the destination VTEP Loopback0 to be present in the forwarding table.
2. The maximum tested/supported number of L3 switches in a link-state protocol area is 250.
3. More than 250 switches in the network will require a multi-area deployment.
4. IS-IS Level 1 areas filter all inter-area prefixes, including Loopback0 host routes (a 0/0 route is injected instead). OSPF areas allow inter-area routes by default.
5. Solution?
a) Implement an IS-IS multi-area design and configure Level 2 -> Level 1 route leaking (manually) – see the sketch after this list.
b) Implement OSPF multi-area design (manually).
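A hedged sketch of option (a) – IS-IS Level 2 -> Level 1 route leaking on an L1/L2 border switch (IOS-XE; the NET, ACL number and loopback range are illustrative assumptions):

! Match the fabric Loopback0 /32s that must be leaked into Level 1
access-list 100 permit ip 10.250.255.0 0.0.0.255 any
!
router isis
 net 49.0001.0102.5025.5011.00
 is-type level-1-2
 metric-style wide
 ! Leak Level 2 host routes into the Level 1 area
 redistribute isis ip level-2 into level-1 distribute-list 100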

Underlay Automation Demo

Demo Topology
• Catalyst Center: 10.66.181.10, reachable over the IP WAN.
• C9500-1: automation pool 10.250.15.0/24.
• C9500-2: automation pool 10.250.16.0/24.
• Loopback range: 10.250.255.0/24.
• Distribution switches: C9500. Access switches: C9300 (factory default).
Template sources: https://github.com/sergeynasonov/sda-underlay-templates
Underlay Multicast (for your reference)
Multicast in the underlay is no longer optional. It is required for:
• Layer 2 flooding (broadcasts) in user overlays – most deployments have this.
• Layer 2 Border functionality – most deployments have this.
• Multicast support in overlays.

Where should fabric underlay RPs go?
• Configure underlay anycast RPs for the SDA site on the BN/CP nodes (a sketch follows):
 o Use a separate Loopback (not Loopback0) interface as the RP source.
 o Set up MSDP between the two Border Nodes / RPs.
 o Configure static RPs (no BSR / Auto-RP).
 o Enable PIM sparse mode on all P2P links and Loopback interfaces.
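A minimal anycast-RP sketch for one BN/CP node (IOS-XE; all addresses and the loopback number are assumptions):

interface Loopback60000
 description Anycast RP – the same address exists on both BN/CP nodes
 ip address 10.250.250.1 255.255.255.255
 ip pim sparse-mode
!
! Static RP, no BSR / Auto-RP
ip pim rp-address 10.250.250.1
!
! MSDP between the two RPs, sourced from their unique Loopback0 addresses
ip msdp peer 10.250.255.2 connect-source Loopback0
ip msdp originator-id Loopback0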

Multicast with SD-Access Transit – Underlay (for your reference)
Underlay requirements:
• Underlay links between fabric sites support PIM-SSM.
• All fabric sites use the same set of underlay RPs (e.g. RP 10.0.0.1 in every site).
• RPs outside the fabric (external) are highly recommended.
• Minimum SW version is 17.10.1 / Catalyst Center 2.3.5.X+.
Overlay Unicast
• Broadcasts are suppressed by default in SD-Access -> make large subnets for users (10k hosts in an IP pool is fine).
• Avoid migrating subnets “as-is” into the fabric.*
• The sum of subnets and pure L2 overlays cannot exceed 1000 per fabric site with Catalyst Center –XL (100 and 300 on the smaller appliance versions).
• Catalyst Center has a deployment-wide 1.5M physical + logical interface limit. Each IP pool creates 2 interfaces (SVI + LISP tunnel) on each switch in the fabric.
• 700 IP pools would require (2*700*1300 + 48*2100) = 1,920,800 interfaces, which is above the 1.5M limit.
• 100 IP pools in a 1300 stacked-switch fabric** will contribute (2*100*1300 + 48*2100) = 360,800 ports to that limit.
*Requirement reference: 700 VLANs for user and device segmentation.
**Requirement reference: 2100 x WS-C2960X access switches in 1300 switch cabinets.
Overlay Multicast
• Overlay multicast requires a multicast-enabled underlay (avoid head-end replication).
• Overlay multicast is enabled per Virtual Network (VRF) rather than per IP pool (subnet) and needs an IP pool per multicast-enabled VN.
• Both internal and external RPs are supported (use an external RP if possible).
• Multicast route-leaking is not supported on the C9K platform.
• If you have sources/receivers in different Virtual Networks, use an external RP and perform route-leaking outside of the fabric (e.g. on the Fusion device).
(Example: external RP 10.0.0.1 in the GRT serving IOT VN 10.1.0.0/16 and CORP VN 10.2.0.0/16.)
• As of now, the SDA fabric supports all multicast flow variations in overlays:
• ASM and SSM (concurrently).
• Sources and receivers inside the fabric.
• Sources inside the fabric, receivers outside the fabric.
• Sources outside the fabric, receivers inside the fabric.
Overlay Multicast in SD-Access Transit
Multicast over SDA Transit (in VXLAN) is supported when:
• Multicast-enabled VNs in all sites are configured with the same set of RPs (per VN).
• All sites are configured to use native multicast (head-end replication is not supported).
• Links between fabric sites support PIM-SSM.
• Pub/Sub only; LISP/BGP is not supported.
Upstream Connectivity – Fusion Firewall
Active/Active Borders with two uplinks to HA firewalls (an active/passive pair).

Problem:
• Each BN will register itself as an active gateway for the fabric.
• Each BN will advertise the fabric subnets via BGP with identical AS-PATH length (and other BGP attributes) to the firewall.
• The firewall will receive two equal routes via two next-hops and will only install one of them (by default), e.g. 10.1.0.0/24 -> Eth1/1, next-hop BN Red.
• Inevitably, half the traffic will arrive at the firewall via the other interface (facing BN Green) and will get dropped.

Solution?
Upstream Connectivity – Fusion Firewall
Solution 1 – Make the Border Nodes Active/Passive too.
1. Configure Border Red to have a better LISP priority as the fabric exit (the smaller the better; the default value is 10). The firewall then routes 10.1.0.0/24 -> Eth1/1 (BN Red).
2. Configure Border Green to add an AS-PATH prepend while advertising fabric subnets to the firewall, as sketched below.
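A hedged sketch of step 2 on Border Green (IOS-XE; the AS numbers, VRF name and neighbor address are assumptions):

route-map FABRIC-PREPEND permit 10
 ! Make Border Green's advertisements less attractive than Border Red's
 set as-path prepend 65001 65001
!
router bgp 65001
 address-family ipv4 vrf CORP
  neighbor 10.50.0.2 remote-as 65000
  neighbor 10.50.0.2 activate
  neighbor 10.50.0.2 route-map FABRIC-PREPEND out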
Upstream Connectivity – Fusion Firewall
Solution 2 – ECMP on the firewall cluster.
Configure Equal-Cost Multipathing (ECMP) on the firewall so that both next-hops are installed in the firewall forwarding table (10.1.0.0/24 -> Eth1/1 BN Red and Eth1/2 BN Green):
• Each mainstream firewall vendor supports this functionality.
• Cisco FTD firewalls support this from FTD 6.5.
• Requires interaction with the firewall team (I know!).
• Pay attention to Multicast and ECMP interaction.
Upstream Connectivity – Fusion Firewall
Solution 3 – Intermediate hop.
Present only a single interface to the firewall by inserting another L3 hop (typically a stacked switch) between the BNs and the firewall pair (10.1.0.0/24 -> Po1, SVL stack). Repeat the configuration per fabric VN (VRF).

This approach:
• Creates a single logical point of failure in an otherwise highly-available network.
• Requires extra hardware to procure and configure.
• Adds more moving parts, making ongoing operational changes lengthy and more complex, ultimately driving down network uptime.
Upstream Connectivity – Fusion Firewall
Solution 4 – Stack the Border Nodes.
Present only a single interface to the firewall by stacking the Border Nodes (10.1.0.0/24 -> Po1, BN stack). Please avoid:
• Single point of failure, especially if you colocate the CP and BN roles.
• Hardware changes require an SVL reboot (= fabric outage).
• No In-Service Software Upgrade (ISSU) for SVL in SD-Access.
Upstream Connectivity – Fusion Firewall: General Observations (for your reference)
• A network link and per-VN iBGP peering between Border Nodes are no longer required with the Pub/Sub fabric control plane.
• Configure BFD to speed up BGP convergence.
• Research the firewall vendor’s High Availability implementation to make sure BFD does not trigger a BGP adjacency drop during firewall failover.
• Catalyst Center still provisions iBGP peering between the BNs in the GRT. Configure “bgp neighbor fall-over” on that peering to speed up upstream BGP convergence, as sketched below.
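A hedged sketch of both knobs on one Border Node (IOS-XE; AS numbers, VRF name and neighbor addresses are assumptions):

router bgp 65001
 ! iBGP peering to the other Border Node in the GRT, provisioned by Catalyst Center
 neighbor 192.168.255.2 remote-as 65001
 ! React when the peer's route disappears instead of waiting for the hold timer
 neighbor 192.168.255.2 fall-over
 !
 address-family ipv4 vrf CORP
  neighbor 10.50.0.2 remote-as 65000
  neighbor 10.50.0.2 activate
  ! BFD towards the firewall for fast failure detection
  ! (interface-level "bfd interval ..." configuration is also required)
  neighbor 10.50.0.2 fall-over bfd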
Upstream Connectivity – Fusion Firewall: General Observations (for your reference)
• A network link and per-VN iBGP peering between Border Nodes are no longer required with the Pub/Sub fabric control plane.
• Configure BFD to speed up BGP convergence.
• Research the firewall vendor’s High Availability implementation to make sure BFD does not trigger a BGP adjacency drop during firewall failover.
• A default route (0.0.0.0/0) advertisement is required from the firewall to enable External Border Node functionality, as sketched below.
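If the fusion device were an IOS-XE router rather than a firewall, the default route could be originated per VRF like this (a hedged sketch; AS numbers, VRF name and neighbor address are assumptions):

router bgp 65000
 address-family ipv4 vrf CORP
  neighbor 10.50.0.1 remote-as 65001
  neighbor 10.50.0.1 activate
  ! Advertise 0.0.0.0/0 to the Border Node to activate External Border functionality
  neighbor 10.50.0.1 default-originate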
Switchport Access Policy
• Closed authentication – 802.1X + MAB (IBNS 2.0 template). No DHCP/ARP before authentication.
• Open authentication – 802.1X + MAB. Even if you fail authentication, you are still allowed on.
• None – no authentication; all ports are statically configured.
• You can always start with None, then change later.
• Migrate the existing switchport policy as part of the fabric rollout. (A sketch of a closed-mode port follows.)
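For orientation, a hedged sketch of roughly what a closed-mode (IBNS 2.0) access port looks like. The policy-map Catalyst Center actually provisions is longer, and the names here are illustrative:

interface GigabitEthernet1/0/4
 switchport mode access
 ! IBNS 2.0 new-style commands: no DHCP/ARP before authentication
 access-session closed
 access-session port-control auto
 ! 802.1X and MAB enabled; ordering/fallback is defined in the subscriber policy-map
 dot1x pae authenticator
 mab
 service-policy type control subscriber DOT1X-MAB-POLICY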
Wireless Considerations (for your reference)
• Wireless configuration needs to be managed by Catalyst Center.
• You can have Fabric-Enabled Wireless (FEW) and Centrally Switched/Flex (OTT in SDA lingo) modes for the same SSID across different sites.
• You can have a mix of SSIDs (FEW vs OTT) on the same AP.
• You can have fabric APs and non-fabric APs on the same WLC.
• If multicast is required on an OTT SSID, the AP pool in INFRA_VN needs to be multicast-enabled via a CLI template (“ip pim sparse” under the AP pool SVI), as sketched below.
(Data paths: OTT – control CAPWAP, data CAPWAP. FEW – control CAPWAP, data VXLAN.)
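A hedged sketch of that CLI template (the VLAN number and description are assumptions):

interface Vlan1021
 description AP pool SVI in INFRA_VN
 ! Enable PIM so multicast for the OTT SSID can flow to the APs
 ip pim sparse-mode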
Wireless Considerations – My Take? (for your reference)
1. Keep wireless “as is” (OTT).
2. Finalise the switching migration first (block by block).
3. Then convert wireless to FEW in one go.
Key Design Decisions
• D1. Divide the main campus into 3 fabric sites. Rationale: a single fabric site cannot be implemented, as the number of fabric devices is >1200; three geographical sub-sites align with the proposed fabric site structure.
• D2. Implement SDA Transit between the 3 fabric sites in the main campus. Rationale: need to maintain a unified macro- and micro-segmentation policy across all three fabric sites that make up the ACME campus.
• D3. Use colocated BN/CP roles. Rationale: each individual site will not exceed 50,000 EPs; two BN/CP switches provide an adequate level of fabric site resilience.
• D4. Implement one “Stretched” fabric site for the 70 small branch sites across the WAN. Rationale: (1) the MPLS sites have no local server resources or DIA and access all resources via the centralised data center; (2) the MPLS carrier can support MTU > 1550; (3) the small branch sites have no overlay multicast or L2F requirements.
• D5. Use OSPFv2 as the underlay routing protocol for the fabric. Rationale: (1) LAN Automation (with IS-IS) cannot be used due to the scale of the deployment, which necessitates a multi-area design; (2) the ACME IT team has a lot of OSPF experience and is not comfortable with a manual IS-IS deployment.
• D6. Use an external set of multicast RPs for the overlay VNs. Rationale: ACME has multicast sources in the IoT VN (AppleTV & printers) and receivers in the Corp VN.
Final BOM
• Catalyst Center: 3 x DN3-HW-APL-XL.
• 4 fabric sites: 8 x C9500-32C core switches running the BN+CP roles, 2 per fabric site (including the “stretched” site).
• 4 fabric sites: 8 x C9800-40-K9 WLCs, 2 per fabric site (the “stretched” site still needs a WLC).
• SDA Transit Control Plane: 2 x C9500-24Y4C (per deployment).
• Existing ISE (make sure it has ISE Advantage licenses for the expected concurrent EP quantity).
• Distribution and access switches follow the traditional networking pattern.
Implementing SD-Access

Project Flow
M1. Build the management stack (Catalyst Center).
M2. Integrate Catalyst Center with the existing ISE.
M3. Deploy new core switches in parallel to the existing ones (new Border Nodes).
M4. Migrate the switching infrastructure per distribution block (building), keeping the existing L2 switchport policy (802.1X, MAB, open).
M5. Migrate wireless once the wired network is fully converted.
M1. Building the Management Stack
SD-Access requires Catalyst Center as the automation engine.
• If high availability (HA) is required – deploy a 3-node cluster.
• Avoid splitting the 3 cluster nodes across 2 separate locations: if you lose DC1, the single Catalyst Center node in DC2 will shut down automatically.
• Deploy Catalyst Center in 1:1 or 3:3 mode if disaster recovery (DR) is required.
• The virtual (AWS or ESXi) Catalyst Center appliance does not have native HA or DR capabilities as of today (June 2024).
Deep dive: BRKNMS-2426 and BRKOPS-2161.
M2. Integrate with the Existing ISE
Catalyst Center integrates with ISE via PxGrid, ERS and SSH.
• One Catalyst Center cluster can only be integrated with a single ISE cluster.
• Reuse existing authentication flows and add new SD-Access-specific authorization profiles.
• Changing the already-integrated ISE cluster requires removal of all SD-Access fabric sites in Catalyst Center.
M3. Parallel Core
1. Deploy new core switches in parallel to the old ones.
2. Add the new switches to Catalyst Center and enable the BN + CP roles for the new fabric site.
3. Configure the required VNs (= VRFs) in Catalyst Center and assign them to the new fabric site.
4. Configure BGP peerings for the underlay and the new VNs between the new Border Nodes and the fusion firewall, as sketched below.
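A hedged sketch of step 4 on one new Border Node – an IP handoff for a single VN (IOS-XE; the VLAN, VRF, AS numbers and addresses are assumptions, and Catalyst Center normally automates this):

vrf definition CORP
 address-family ipv4
 exit-address-family
!
! Dot1q handoff SVI towards the fusion firewall
interface Vlan3001
 vrf forwarding CORP
 ip address 10.50.0.1 255.255.255.252
!
router bgp 65001
 ! Underlay (GRT) peering
 neighbor 10.50.1.2 remote-as 65000
 !
 address-family ipv4 vrf CORP
  neighbor 10.50.0.2 remote-as 65000
  neighbor 10.50.0.2 activate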
M3. Test Configuration Before Migration
At this point, fabric configuration complexity does not increase with the growth of access switches:
• Create all final-state subnets/anycast gateways.
• Bring in and test all endpoint classes, focusing on authentication, multicast and exotic use-cases (PC imaging, Wake-on-LAN, etc.).
• Test fabric failover (shut down a border, unplug links, etc.). Border configuration does not change whether the fabric has 2 Edge Nodes or 200 Edge Nodes.
M3. Reuse the Existing Distribution
5. Establish a routable connection from the distribution switches to the new core.
6. Adjust MTU and enable multicast if required.
7. Decision point:
a) Parallel access build.
b) Convert existing access switches.
Parallel vs Incremental Access Migration (for your reference)
• Nature of the change: Parallel – build the SD-Access switch next to the traditional one and repatch. Incremental – convert the existing switch to be SD-Access enabled.
• Hardware requirements: Parallel – can migrate from a previous generation of Cisco switching (e.g., C2K, C3K). Incremental – need a C9K switch with DNA Advantage licenses already installed.
• Extra space, power and cabling requirements: Parallel – need additional space and power outlets for at least one switch, as well as additional fibre runs (usually 2 per switch). Incremental – none.
• Risk: Parallel – low; switch build and testing happen outside the maintenance window, EP migration can be incremental, and rollback is simple and incremental. Incremental – medium; switch build, testing and EP migration happen inside the maintenance window, and rollback requires a device wipe and loading the old config.
M4. Parallel Build
8. Deploy the new C9K switch in parallel with the old C2K:
• Routed P2P links and a Loopback0 interface, advertised in OSPFv2.
• MTU > 1550.
• PIM sparse mode on interfaces and multicast RP configuration.
• SNMP and SSH credentials.
9. Discover the new switch in Catalyst Center and assign the EN role.
10. Assign switchports to user VLANs if not using dynamic authentication via ISE.
11. Repatch the endpoints.
M4. Parallel Build
12. Continue deploying new Edge Node switches and migrating endpoints until all access switches are replaced.
M4. Remove Legacy Configuration
13. Once all access switches in a distribution block are migrated, remove the MPLS configuration from the distribution switches.
14. Remove the stacking configuration from the distribution switches.
15. Once all distribution blocks are “migrated” to the fabric, the legacy core switches can be removed.
Layer 2 Border – Gateway Inside the Fabric
Use-case: stretch a VLAN between the fabric and the traditional network when there are endpoints with static IP addresses (such as CCTV cameras on subnet 10.1.0.0/22).
1. Create the anycast gateway inside the fabric.
2. Shut down the corresponding SVI in the traditional network.
3. Configure a BN with Layer 2 handoff (gateway inside the fabric) – the L2 BN.
4. Optional: configure an external VLAN ID if it does not match the fabric VLAN ID.
5. Allow the VLAN on the trunk between the traditional network and the L2 BN.
6. Cameras on the old network will use the SVI on the L2 BN to reach the fabric and egress out.
7. A maximum of 6000 EPs can be connected outside the fabric.
Layer 2 Border – Gateway Outside the Fabric
Two use-cases:
• Endpoints that are not using IP (Profinet, BACnet, Modbus and other industrial protocols) and rely on the MAC layer / broadcasts for communication.
• Overlapping IP addresses in the overlay (multi-tenancy).
The VLAN (e.g. VLAN 300) is carried as an L2-only overlay through the fabric, with its SVI outside.

A dedicated L2 BN reduces the risk created by attaching the fabric to an external L2 domain:
• L2 forwarding loop.
• Link-local multicast flooding.
The L2 BN requires Layer 2 flooding to be enabled for the stretched segment.
Layer 2 Border – Deployment Model
STP BPDUs are not tunnelled inside the fabric, but broadcasts are -> the same VLAN on two L2 BN handoffs will create an L2 forwarding loop.
Layer 2 Border – Deployment Model
Dual-homing from a single L2 BN is supported (external STP root, with one uplink in STP blocking state).
Layer 2 Border – Deployment Model
Multi-chassis EtherChannel from a stacked (StackWise / StackWise Virtual) L2 BN is supported.
Broadcast Traffic in the Fabric
Also known as Layer 2 Flooding (L2F):
• Disabled by default, as not having flooding enables a large number of hosts in the same Layer 2 segment.
• Automatically enabled for segments stretched via a L2 Border Node with gateways outside the fabric.
• Can be enabled manually (per subnet) – broadcast rules apply.
• Enabling L2F floods Ethernet broadcast and link-local multicast (TTL=1) in the overlay.
• Requires multicast in the underlay.
• Every deployment will have hosts that need L2F, so put them in a separate VLAN/VNI and enable L2F there. Do not enable L2F on main VLANs with conventional endpoints.
Fragmentation in VXLAN (for your reference)
RFC 7348, “Virtual eXtensible Local Area Network (VXLAN): A Framework for Overlaying Virtualized Layer 2 Networks over Layer 3 Networks”, says:

“4.3 VTEPs MUST NOT fragment VXLAN packets. Intermediate routers may fragment encapsulated VXLAN packets due to the larger frame size. The destination VTEP MAY silently discard such VXLAN fragments.”

Solution?
• Increase link MTU – within the campus.
• Adjust TCP MSS – over the WAN (1300 is the magic number).

Frame layout: outer ETHERNET (14 bytes) + IP (20 bytes) + UDP (8 bytes) + VXLAN (8 bytes) = 50 bytes of VXLAN overhead, in front of the original ETHERNET + IP + PAYLOAD (up to 1500 bytes).
Fragmentation in VXLAN: ip tcp adjust-mss (for your reference)
• Per VLAN – pushed to all Edge Nodes within the fabric site.
• Adjust if the site links’ MTU cannot be set to >1550 bytes.
• Only helps with TCP traffic. (A sketch follows.)
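A hedged sketch of the per-VLAN knob on an Edge Node (the VLAN number is an assumption; 1300 is the deck’s suggested WAN-safe value):

interface Vlan1050
 description User anycast gateway SVI
 ! Clamp TCP MSS so VXLAN-encapsulated frames fit a 1500-byte path MTU
 ip tcp adjust-mss 1300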
Fragmentation in VXLAN: ip tcp adjust-mss (for your reference)
• Per BN – pushed to all L3 VRF handoff interfaces.
• Adjust if the WAN MTU cannot be adjusted and you use SDA Transit.
• Only helps with TCP traffic.
Lessons Learned From Previous Migrations
• Most struggles during SDA deployments are found in underlay routing design (IGP, BGP) and misbehaving endpoints – iron those out before the deployment.
• Using IS-IS without experience – do you really want to learn a new IGP while troubleshooting fabric operations?
• Migrating subnets into the fabric “as is” – you quickly reach the subnet limit in Catalyst Center.
• Trying to approach the project as “transformational”: HW refresh + fabric + fabric wireless + transition to 802.1X + micro-segmentation + changes in shared services (WLC, DHCP, authentication) in a single project. Better to split it into multiple phases.
What’s Next for ACME?

What’s Next for ACME? – Switchport Authentication Policy: None
Today, endpoints connect to fabric edge ports (e.g. G1/0/4) with Authentication: None.
What’s Next for ACME? – Upgrade the Switchport Policy to 802.1X + MAB
With Authentication: Closed on the edge port (e.g. G1/0/38), the endpoint authenticates first, and ISE then authorizes it into VLAN “Campus_VN_Users” and assigns SGT 4.
Cisco AI Endpoint Analytics
What if an endpoint does not support 802.1X?
• With Authentication: Closed, the unknown endpoint on the edge port (e.g. G1/0/13) initially gets a restrictive authorization – e.g. VLAN “Guest_VN_Users” and SGT 2.
• Cisco AI Endpoint Analytics profiles the endpoint and shares the result with ISE via PxGrid.
• ISE then issues a RADIUS CoA and re-authorizes the endpoint into the correct VLAN – e.g. “INFRA_VN_AP” for an access point.
What’s Next for ACME? – Implement Micro-Segmentation
Micro-segmentation “gotchas”:
• Default deny will deny everything, including broadcast/ARP/DHCP traffic.
• Micro-segmentation policy is applied to unicast traffic only:
• Broadcast (including DHCP) traffic is not filtered.
• Multicast traffic is not filtered.
• Statically assigned SGTs (in switch CLI) are not shown in the Policy Analytics visualisation (they are classified as “Unknown”).
• Avoid “default deny” unless you have very specific reasons.
Summary
• Thank you! Without you, SD-Access would remain in the CVD ☺
• Keep sharing feedback – we are listening.
• Go deep: check DGTL-BRKENS-3822 for brownfield migration details.
• Engage with Cisco Sales – we will always help you.
• Get the virtual Catalyst Center and try SD-Access next week!
Complete Your Session Evaluations
• Complete a minimum of 4 session surveys and the Overall Event Survey to be entered in a drawing to win 1 of 5 full conference passes to Cisco Live 2025.
• Earn 100 points per survey completed and compete on the Cisco Live Challenge leaderboard.
• Level up and earn exclusive prizes!
• Complete your surveys in the Cisco Live mobile app.
Continue your education:
• Visit the Cisco Showcase for related demos.
• Book your one-on-one Meet the Engineer meeting.
• Attend the interactive education with DevNet, Capture the Flag, and Walk-in Labs.
• Visit the On-Demand Library for more sessions at www.CiscoLive.com/on-demand.

Contact me at: [email protected]
Thank you

#CiscoLive
