M03 - Application Policy Infrastructure Controller
Agenda:
• APIC Introduction
• APIC Overview
• APIC Web UI
APIC Introduction
What is APIC?
APIC is the policy controller:
• It's not the control plane
• It's not in the data path
ACI Goal: Common Policy and Operations Framework
[Diagram: the cloud admin's view (Web, App, and DB tiers), the security admin's view (DMZ, Trusted, and DB zones), and the network admin's view all converge onto a COMMON POOL OF RESOURCES managed through one policy framework.]
APIC Overview
APIC
• Management Console: HTML5 interface and NX-OS style CLI
• Management API

APIC functions:
• Policy: policy definition; policy consumption ("who can see whom, and where"); endpoint registry
• Topology: condensed state/health; physical topology; logical topology
• Observer: statistics; faults/alerts; state; health; logs/history
• Boot: DHCP; fabric node images; APIC images; catalog

Fabric node:
• Policy element
• Policy enforcement
The Observer Functionality
The Observer collects:
• Statistics (e.g., link utilization, unicast packet drops)
• Faults and events
• Health scores
• Logs and forensics
• Diagnostics
• Multi-tenancy
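The Observer's data can be read back over the REST API covered later in this module. Below is a minimal sketch that pulls the fabric-wide health score (the fabricHealthTotal class); the APIC address and credentials are placeholders:

# Minimal sketch: reading Observer data (fabric-wide health) over the
# REST API. The address and credentials below are placeholders.
import requests

APIC = "https://apic.example.com"  # hypothetical APIC address
s = requests.Session()
s.verify = False  # lab convenience only; use proper certificates in production

# Authenticate; the APIC session cookie is kept on the session object.
s.post(f"{APIC}/api/aaaLogin.json",
       json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

# Class-level query for the fabric-wide health score.
for obj in s.get(f"{APIC}/api/class/fabricHealthTotal.json").json()["imdata"]:
    attrs = obj["fabricHealthTotal"]["attributes"]
    print(attrs["dn"], "health:", attrs["cur"])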
• ACI Fabric supports discovery, boot, inventory, and systems maintenance processes through the APIC
‒ Fabric discovery and addressing
‒ Image management
‒ Topology validation through wiring diagram and systems checks
Designed around Open APIs & Open Source
REST API:
• Object oriented
• Comprehensive access to the underlying information model
• Consistent object naming, directly mapped to the URL
• Supports object, sub-tree, and class-level queries
• RESTful over HTTP(S)
• JSON + XML
• Unified: automatically delegates requests to the corresponding components
• Transactional
• Single management entity, yet fully independent components

API consumers include the Python SDK, scripting, and ACI-enabled L4-7 fabric devices.
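Because object names map directly to URLs, the three query styles are easy to show. A minimal sketch, assuming a hypothetical APIC and a tenant named Tenant1 (both placeholders):

# Sketch of the object, class, and sub-tree query styles. Hostname,
# tenant name, and credentials are illustrative placeholders.
import requests

APIC = "https://apic.example.com"
s = requests.Session()
s.verify = False  # lab convenience only

s.post(f"{APIC}/api/aaaLogin.json",
       json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

# Object-level query: one managed object, addressed by its DN in the URL.
tenant = s.get(f"{APIC}/api/mo/uni/tn-Tenant1.json").json()

# Class-level query: every instance of a class, fabric-wide.
tenants = s.get(f"{APIC}/api/class/fvTenant.json").json()

# Sub-tree query: the object plus everything beneath it.
subtree = s.get(f"{APIC}/api/mo/uni/tn-Tenant1.json",
                params={"query-target": "subtree"}).json()

print(len(tenants["imdata"]), "tenants found")

The same queries return XML instead of JSON by changing the .json suffix in the URL to .xml.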
APIC Hardware
• 1st generation
– C220 M3
• 2nd generation
– C220 M4 (Cisco UCS C-Series C220)
• Two models: Medium and Large
– Medium (M2): 1.9 GHz 6-core Xeon E5
– Large (L2): 2.4 GHz 6-core Xeon E5
• Hardware is ordered as a Cisco ACI appliance, not as Cisco UCS C-Series servers
• A C220 M3/M4 cannot be re-purposed
– TPM module and certificates are used to secure the boot image
– You can order a spare
• VIC cards are used to connect to the leafs
– VIC 1225T (2 x 10GBase-T)
– VIC 1225 (2 x SFP+)
Sizing the Cluster
• Hardware configuration in two sizes
– Medium: 1000 edge ports or fewer
– Large: more than 1000 edge ports
• Other considerations for cluster hardware
– Number of changes per day/hour/minute/second
• Number of APIC nodes by leaf count (see the sketch after this list):
– Up to 80 leafs: 3 APIC nodes
– Up to 300 leafs (multi-pod): 5 APIC nodes
– Up to 400 leafs (multi-pod): 7 APIC nodes
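The cutoffs collapse into a simple lookup; this sketch just restates the table above (scale beyond 400 leafs is not covered in this material):

# Cluster-sizing rule of thumb, restating the table above.
def apic_cluster_size(leaf_count: int) -> int:
    """Recommended number of APIC nodes for a given number of leafs."""
    if leaf_count <= 80:
        return 3
    if leaf_count <= 300:
        return 5  # multi-pod
    if leaf_count <= 400:
        return 7  # multi-pod
    raise ValueError("beyond the scale covered in this material")

print(apic_cluster_size(120))  # -> 5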
MODERN DATABASE DESIGN
Database Evolution
Traditional database:
• Active-standby configuration
• Scale-up (bigger boxes)
• One copy of data for redundancy (standby)

Modern database:
• All-active configuration
• Scale-out (add more boxes for more capacity)
• Multiple copies of data
APIC Clustering
• A shard is a unit of data management
• Data is placed into shards, which allows horizontal (scale-out) scaling and simplifies replication scope
• Each shard has a primary and 2 replicas
• Shards are evenly distributed across the nodes of the cluster
• Shard data assignment is based on a pre-determined hash function; a static shard layout determines the assignment of shards to appliances
• Each replica in a shard has a use preference (1..3); writes happen to the highest-preference replica that is reachable
• In case of split-brain, automatic reconciliation is performed
• Each APIC node has all APIC functions (Policy, Topology, Observer, Boot); processing, however, is evenly distributed across the 3-node cluster
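A toy sketch of the layout idea (not APIC's actual internal algorithm): keys hash to a shard via a fixed function, and a static layout spreads each shard's three replicas, with preferences 1..3, across the nodes:

# Toy model of a static shard layout: illustrative only, not APIC's
# real hash function or shard count.
import hashlib

NODES = ["APIC-1", "APIC-2", "APIC-3"]
NUM_SHARDS = 32  # illustrative shard count

def shard_for(key: str) -> int:
    """Pre-determined hash function: data key -> shard id."""
    return int(hashlib.sha256(key.encode()).hexdigest(), 16) % NUM_SHARDS

def replicas_for(shard: int) -> list[tuple[str, int]]:
    """Static layout: rotate each shard's replicas (preferences 1..3) over the nodes."""
    return [(NODES[(shard + i) % len(NODES)], pref)
            for i, pref in enumerate((1, 2, 3))]

shard = shard_for("tenant/Tenant1/policy")  # hypothetical data key
print(shard, replicas_for(shard))
# A write goes to the reachable replica with preference 1 (the primary).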
Data Sharding
[Diagram: data A-Z split across shelves into the ranges A-G, H-M, and N-Z]
1. A conflict between two disagreeing replicas cannot be resolved (Replica 1: 1001 vs. Replica 2: 1110)
2. The replica count should therefore be an odd number
3. Disputes between replicas are solved by majority rule
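Majority rule is easy to see in code. A toy resolver using the replica values from the example above:

# Toy majority-rule resolver for replica disputes.
from collections import Counter

def resolve(replica_values: list[str]) -> str:
    """Return the value held by a strict majority of replicas."""
    value, votes = Counter(replica_values).most_common(1)[0]
    if votes <= len(replica_values) // 2:
        raise RuntimeError("no majority - the dispute cannot be resolved")
    return value

# Two disagreeing replicas: resolve(["1001", "1110"]) raises RuntimeError.
# A third replica breaks the tie by majority rule:
print(resolve(["1001", "1110", "1001"]))  # -> "1001"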
• Acknowledgement happens only after all three replicas are updated
• Then replicated to secondary shards
[Diagram: Shard 2 and Shard 3 replicas, with preferences 1-3, spread across the three APIC nodes]
Shard Proxy
• Configuration is changed via HTML5, API, or CLI on APIC 1, but the primary replica of the entry ("Shard 3") is on APIC 3
• APIC 1 forwards the modify command to the primary replica holder, APIC 3
[Diagram: Shard 1 replicas with preferences 1/2/3 on APIC 1/2/3, Shard 2 with 2/1/3, Shard 3 with 3/2/1]
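Putting the proxy and acknowledgement behavior together, a toy model (the layout matches the diagram; everything else is illustrative):

# Toy model of the shard proxy: a write received on any APIC node is
# forwarded to the preference-1 (primary) replica holder, and only
# acknowledged once every replica is updated. Illustration only.
LAYOUT = {  # shard id -> [(node, preference), ...], as in the diagram
    1: [("APIC-1", 1), ("APIC-2", 2), ("APIC-3", 3)],
    2: [("APIC-2", 1), ("APIC-1", 2), ("APIC-3", 3)],
    3: [("APIC-3", 1), ("APIC-2", 2), ("APIC-1", 3)],
}

def write(shard: int, received_on: str) -> str:
    replicas = sorted(LAYOUT[shard], key=lambda r: r[1])
    primary = replicas[0][0]
    if received_on != primary:
        print(f"{received_on} forwards the command to primary holder {primary}")
    for node, _pref in replicas:  # every replica updated before the ack
        print(f"  replica on {node} updated")
    return "ack"

# Modify an entry in Shard 3 via APIC 1; the primary replica is on APIC 3.
print(write(3, received_on="APIC-1"))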
APIC Web UI
[Screenshot: select an element in the navigation pane]
Basic Elements – Tree (Explorer)
• Hierarchical organization
• Folders/tree nodes
• Context menu
• Workspace syncs with the navigation tree
• Consistency between right-click on the tree and the Action button
Basic Elements – Summary View
• Pagination controls
• Sort + filter (same control location as Windows tables)
• Auto-update (WebSocket)
• Download to XML
Basic Elements – Properties
• Consistent set of tabs