M03 - Application Policy Infrastructure Controller

Cisco ACI

Application Policy Infrastructure Controller (APIC)
www.lumoscloud.com
[email protected]
Agenda

• APIC Introduction
• APIC Overview
• APIC Web UI
APIC Introduction
What is APIC?

APIC is the policy controller:
• It is not the control plane
• It is not in the data path
• It is a highly redundant cluster of 3-7 servers (N+2)
ACI Goal: Common Policy and Operations Framework

[Figure: four administrator perspectives on the same infrastructure]
• Cloud Admin: cloud view
• Application Admin: application view (Web Tier / App Tier / DB Tier)
• Security Admin: security view (External Zone / DMZ / Trusted Zone)
• Network Admin: network view

With APIC, all of these perspectives are expressed against a COMMON POOL OF RESOURCES under one policy and operations framework.
APIC Overview

APIC provides an HTML5 interface, an NX-OS style CLI, a management API, and a management console.

APIC functions:
• Policy: policy definition, policy consumption ("who can see whom, and where"), endpoint registry
• Topology: condensed state/health, physical topology, logical topology
• Observer: statistics, faults/alerts, state, health, logs/history
• Boot: DHCP, fabric node images, APIC images, catalog

All functions sit on a fast distributed datastore.

APIC Cluster: 3 – 31 appliances: scale-out processing, N+2 replication

The APIC performs policy rendering and instrumentation; each fabric node (fabric element) performs policy enforcement.
The Observer Functionality

The Observer collects and exposes:
• Statistics (e.g., link utilization at 67%, unicast packet drops)
• Faults and events
• Health scores
• Logs and forensics
• Diagnostics
Multi-tenancy

• Local & external AAA (TACACS+, RADIUS, LDAP) for authentication and authorization
• RBAC to control READ and WRITE for ALL managed objects
• RBAC to enforce separation between the Fabric Admin and per-Tenant Admins

The management information tree is rooted at the Universe and branches per scope:
• Tenant: Coke → App Profile → EPGs → L3 Networks
• Tenant: Pepsi → App Profile → EPGs → L3 Networks
• Fabric → Switch → Line Cards → Ports
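
In the REST API, these scopes appear as distinguished names (dn) along the tree; for illustration, with hypothetical object names:

  uni                           (Universe, the root)
  uni/tn-Coke                   (Tenant: Coke)
  uni/tn-Coke/ap-App1           (App Profile)
  uni/tn-Coke/ap-App1/epg-Web   (EPG)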
APIC controller is attached in-band

• Topology discovery through LLDP, using ACI-specific TLVs (ACI OUI)
• Loopback and VTEP IP addresses allocated from the "infra VRF" through DHCP from the APIC

• ACI Fabric supports discovery, boot, inventory, and systems maintenance processes through the APIC
  ‒ Fabric discovery and addressing
  ‒ Image management
  ‒ Topology validation through wiring diagram and systems checks
Designed around Open APIs & Open Source

Object-oriented Python SDK:
• Comprehensive access to the underlying information model

REST API:
• RESTful over HTTP(S), JSON + XML
• Consistent object naming, directly mapped to the URL
• Supports object, sub-tree, and class-level queries
• Unified: automatically delegates requests to the corresponding components
• Transactional
• Single management entity, yet fully independent components
• Scripting APIs for ACI-enabled L4-7 fabric devices
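
A minimal sketch of the REST API in Python (hostname and credentials are placeholders; the login and class-query endpoints follow the documented APIC URL scheme):

# Authenticate to the APIC REST API, then run a class-level query.
import requests

APIC = "https://apic.example.com"   # hypothetical APIC address
session = requests.Session()

# Log in: POST credentials to /api/aaaLogin.json
login = {"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}}
session.post(f"{APIC}/api/aaaLogin.json", json=login, verify=False)

# Class-level query: one call returns every instance of class fvTenant
resp = session.get(f"{APIC}/api/class/fvTenant.json")
for obj in resp.json()["imdata"]:
    print(obj["fvTenant"]["attributes"]["dn"])   # object name maps to the URL/dn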
APIC Hardware

• 1st generation: Cisco UCS C-Series C220 M3
• 2nd generation: Cisco UCS C-Series C220 M4
• Two models: Medium and Large
  – Medium (M2): 1.9 GHz 6-core Xeon E5
  – Large (L2): 2.4 GHz 6-core Xeon E5
• Hardware is ordered as a Cisco ACI appliance, not as Cisco UCS C-Series servers
• A C220 M3/M4 cannot be re-purposed
  – TPM module and certificates are used to secure the boot image
  – A spare can be ordered
• VIC cards are used to connect to leafs
  – VIC 1225T (2 x 10GBase-T)
  – VIC 1225 (2 x SFP+)
Sizing the Cluster

• Hardware configuration comes in two sizes
  – Medium: 1000 edge ports or fewer
  – Large: more than 1000 edge ports
• Other considerations for cluster hardware
  – Number of changes per day/hour/minute/second
• When more than 3 APIC nodes are required:
  – Up to 80 leafs: 3 APIC nodes
  – Up to 300 leafs (Multi-Pod): 5 APIC nodes
  – Up to 400 leafs (Multi-Pod): 7 APIC nodes
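
The leaf-count rule above reads as a simple lookup (a sketch of the table, not an official sizing tool):

def apic_nodes(leafs: int) -> int:
    """Rule-of-thumb cluster size from the sizing table above."""
    if leafs <= 80:
        return 3
    if leafs <= 300:
        return 5   # Multi-Pod
    if leafs <= 400:
        return 7   # Multi-Pod
    raise ValueError("beyond the scale covered by this table")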
MODERN DATABASE DESIGN

Database Evolution

Traditional Database:
• Active-standby configuration
• Scale-up (bigger boxes)
• One copy of data for redundancy (standby)

Modern Database:
• All-active configuration
• Scale-out (add more boxes for more capacity)
• Multiple copies of data
APIC Clustering

• A shard is the unit of data management
  – Data is placed into shards
  – Each shard has a primary and 2 replicas
  – Shards are evenly distributed across the cluster
• Sharding allows horizontal (scale-out) scaling and simplifies the scope of replication
• Shard data assignments are based on a pre-determined hash function (sketched below)
• A static shard layout determines the assignment of shards to appliances
• Each replica in a shard has a use preference (1..3)
• Writes happen to the highest-preference reachable replica
• In case of split-brain, automatic reconciliation is performed
• In a 3-node cluster, each APIC node runs all APIC functions (Policy, Topology, Observer, Boot); processing is evenly distributed
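
A toy sketch of hashing keys into a static shard layout (not APIC's actual hash function or layout):

import hashlib

NODES = ["apic1", "apic2", "apic3"]
NUM_SHARDS = 32

def shard_for(key: str) -> int:
    """Deterministically map a key to a shard."""
    return hashlib.sha256(key.encode()).digest()[0] % NUM_SHARDS

def replicas_for(shard: int) -> list[str]:
    """Static layout: rotate the node list so replicas spread evenly.
    Index 0 is the highest-preference (primary) replica."""
    return [NODES[(shard + i) % len(NODES)] for i in range(3)]

s = shard_for("tenant:Coke")
print(s, replicas_for(s))   # primary first, then the two replicas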
Data Sharding

A dataset covering A-Z is split into three shards: Shard A-G, Shard H-M, and Shard N-Z.
Shard Replication

Each shard is then replicated three times: three copies of A-G, three of H-M, and three of N-Z.

"Never go to sea with two chronometers; bring one or three." - The Mythical Man-Month
Replication Placement

With three shelves (nodes), the nine replicas are spread so that every shelf holds one copy each of A-G, H-M, and N-Z.
Data Resilience

The loss of one shelf still leaves two copies of everything: no data loss. The loss of two shelves still leaves one copy of each shard: no data loss.
Data Resilience

With three replicas spread over five shelves, only two shelves can be lost; if more are lost, some shards may be lost entirely. In this layout, the third failure loses the A-G shard.
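
A toy check of the same scenario (the shelf assignments are illustrative):

placement = {          # shard -> shelves holding its three replicas
    "A-G": {1, 2, 3},
    "H-M": {2, 3, 4},
    "N-Z": {3, 4, 5},
}
failed = {1, 2, 3}     # the third failure
for shard, shelves in placement.items():
    copies = len(shelves - failed)
    print(shard, "copies left:", copies, "(lost)" if copies == 0 else "")
# A-G ends with 0 copies: lost after the third shelf failure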
Shard Replica Rules

1. There should be three or more replicas
2. The count should be an odd number
3. Disputes between replicas are resolved by majority rule

With two replicas, a disagreement cannot be resolved:
  Replica 1: 1001    Replica 2: 1110    → conflict

With three replicas, the odd one out is discarded as the "minority report":
  Replica 1: 1001    Replica 2: 1110    Replica 3: 1110    → 1110 wins
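
Majority rule in miniature (illustration only):

from collections import Counter

def reconcile(replicas: list[str]) -> str:
    """Return the value held by a strict majority of replicas."""
    value, count = Counter(replicas).most_common(1)[0]
    if count <= len(replicas) // 2:
        raise ValueError("no majority - conflict cannot be resolved")
    return value

print(reconcile(["1001", "1110", "1110"]))   # -> "1110"
# reconcile(["1001", "1110"]) raises: two replicas cannot break a tie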
APIC Database

Managed Object Model

router# show run
interface Vlan2019
  no shutdown
  mtu 9216
  vrf member T01
  ip address 10.1.3.2/24
interface Vlan2028
  no shutdown
  mtu 9216
  vrf member T02
  ip address 10.2.5.2/24
interface Vlan2029
  no shutdown
  mtu 9216
  vrf member T02
  ip address 10.2.3.2/24

Instead of a single "show run" configuration file, the configuration is sharded/split into thousands of individual components, known as managed objects.
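
For illustration, one such managed object (a tenant) as the REST API renders it in JSON; the tenant name is chosen to match the VRFs above:

tenant_mo = {
    "fvTenant": {                    # object class
        "attributes": {
            "dn": "uni/tn-T01",      # distinguished name: its place in the tree
            "name": "T01",
        }
    }
}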
Cisco APIC and Sharding

• Cisco APIC shards configuration data
• Shards are evenly distributed among available nodes
• All nodes are active and can take configuration changes
• Three replicas per shard (not configurable at this time)
• Maximum supported nodes is five; typical is three; lab environments can run one (no redundancy)
• A spare node is supported, with one-click promotion as of the 2.3 release
Shard Precedence

Writes are done to the primary (highest-preference) replica of a shard. For example, modifying an entry in shard 3 is written to the primary replica for shard 3 and then replicated to the secondary replicas. Acknowledgement happens only after all three replicas are updated.

In a 3-node cluster the primaries are spread out: APIC 1 holds preference 1 for shard 1, APIC 2 for shard 2, and APIC 3 for shard 3.
Shard Proxy

Configuration is changed via the HTML5 GUI, API, or CLI on APIC 1, but the primary replica for shard 3 lives on APIC 3. APIC 1 therefore forwards the command to the primary replica holder, APIC 3. A sketch of both behaviors follows.
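
A toy model of precedence and proxying (illustration only, not APIC internals):

class Cluster:
    def __init__(self, layout):
        self.layout = layout        # shard -> nodes ordered by use preference
        self.data = {}              # (shard, node) -> value

    def write(self, received_on, shard, value):
        primary = self.layout[shard][0]              # highest preference
        if received_on != primary:
            print(f"{received_on} proxies shard-{shard} write to {primary}")
        for node in self.layout[shard]:              # update all three replicas
            self.data[(shard, node)] = value
        return "ack"                                 # ack only after all updated

cluster = Cluster({3: ["apic3", "apic2", "apic1"]})
cluster.write("apic1", 3, "new-entry")               # proxied to apic3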


APIC Web UI
Basic Elements – Main Navigation

Two-level top navigation

• Main Sections
  • System
  • Tenants
  • Fabric
  • Virtual Networking
  • L4-L7 Services
  • Admin
  • Operations
  • Apps
APIC GUI – System Dashboard

Most workflow views in the APIC GUI are separated into a navigation pane and a content pane. The navigation pane is on the left and gives access to all configuration elements under the tab selected at the top. Select an element in the navigation pane, and its properties are shown in the work pane.
Basic Elements – Tree (Explorer)

• Hierarchical organization
• Folders / tree nodes
• Context menu
• Workspace syncs with the navigation tree
• Consistent actions between right-click on the tree and the Actions button
Basic Elements – Summary View

• Summary of items configured (PC, vPC, SVI, etc.)
• Shows CPU/memory and temperature for physical hardware
• Status of sensors and links
Basic Elements – Table

• Pagination controls
• Sort + filter (same control location as Windows tables)
• Auto-update (WebSocket)
• Download to XML
Basic Elements – Properties

• Consistent set of tabs
• Properties page always up-to-date (WebSockets)
• Refresh (for peace of mind)
• Download Object
• Actions
Basic Elements – Stats

• Selectable properties & report interval
• Table / graph view
• Download data as XML
Basic Elements – Health

• Explore health information
• Drill down to the cause
• Examine problematic object(s)
Basic Elements – History

Historical records for:
• Faults (faults raised/cleared/etc.)
• Events (when the system did what)
• Health (when the object health score changed)
• Audit logs (who did what)
APIC CLI (NX-OS style)

NX-OS Style CLI

• Beginning with Cisco ACI 1.2, an NX-OS style command line interface (CLI) was introduced
• Like the GUI, it is a front-end to the REST API
• Provides a third configuration option and can be used in conjunction with the GUI and REST API
• The CLI is available on any APIC node in the cluster (changes to objects are synchronized automatically)
• Configuration mode is entered with "conf", not "conf t"
CLI Example
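
A hypothetical session in the NX-OS style CLI (tenant and VRF names are placeholders):

apic1# conf
apic1(config)# tenant T01
apic1(config-tenant)# vrf context VRF1
apic1(config-tenant-vrf)# exit
apic1(config-tenant)# exit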
