CVD Data Center Design Guide - August 2014
Table of Contents
Preface.........................................................................................................................................1
CVD Navigator..............................................................................................................................2
Use Cases................................................................................................................................... 2
Scope.......................................................................................................................................... 2
Proficiency................................................................................................................................... 3
Introduction..................................................................................................................................4
Technology Use Cases................................................................................................................ 4
Use Case: Flexible Ethernet Network Foundation for Growth and Scale.................................. 4
Use Case: Virtual Machine Mobility within the Data Center...................................................... 5
Use Case: Secure Access to Data Center Resources............................................................. 5
Design Overview.......................................................................................................................... 6
Data Center Foundation........................................................................................................... 6
Data Center Services.............................................................................................................. 6
User Services.......................................................................................................................... 7
Ethernet Infrastructure............................................................................................................. 8
Storage Infrastructure.............................................................................................................. 8
Compute Connectivity............................................................................................................. 8
Network Security..................................................................................................................... 9
Physical Environment..................................................................................................................10
Design Overview........................................................................................................................ 10
Power.................................................................................................................................... 10
Cooling.................................................................................................................................. 10
Equipment Racking.................................................................................................................11
Summary................................................................................................................................11
Ethernet Infrastructure................................................................................................................12
Design Overview.........................................................................................................................12
Resilient Data Center Core.....................................................................................................14
Ethernet Fabric Extension...................................................................................................... 15
Quality of Service.................................................................................................................. 16
Deployment Details.....................................................................................................................17
Configuring Ethernet Out-of-Band Management................................................................... 18
Configuring the Data Center Core Setup and Layer 2 Ethernet............................................. 26
Configuring the Data Center Core IP Routing........................................................................ 43
Configuring Fabric Extender Connectivity.............................................................................. 52
Storage Infrastructure.................................................................................................................60
Design Overview........................................................................................................................ 60
IP-based Storage Options..................................................................................................... 60
Fibre Channel Storage........................................................................................................... 60
VSANs................................................................................................................................... 61
Zoning .................................................................................................................................. 61
Device Aliases ...................................................................................................................... 62
Storage Array Tested ........................................................................................................... 62
Deployment Details.................................................................................................................... 63
Configuring Fibre Channel SAN on Cisco Nexus 5500UP..................................................... 63
Configuring Cisco MDS 9148 Switch SAN Expansion............................................................ 75
Configuring FCoE Host Connectivity..................................................................................... 82
Cisco Nexus 5500UP Configuration for FCoE....................................................................... 84
Compute Connectivity................................................................................................................88
Design Overview........................................................................................................................ 88
Cisco Nexus Virtual Port Channel.............................................................................................. 89
Cisco Nexus Fabric Extender..................................................................................................... 90
Cisco UCS System Network Connectivity.................................................................................. 92
Cisco UCS B-Series Blade Chassis System Components..................................................... 92
Cisco UCS Manager.............................................................................................................. 92
Cisco UCS B-Series System Network Connectivity.............................................................. 93
Cisco UCS C-Series Network Connectivity........................................................................... 93
Single-Homed Server Connectivity............................................................................................ 95
Server with Teamed Interface Connectivity................................................................................ 96
Enhanced Fabric Extender and Server Connectivity.................................................................. 96
Third-Party Blade Server System Connectivity.......................................................................... 98
Summary................................................................................................................................... 99
Network Security......................................................................................................................100
Design Overview...................................................................................................................... 100
Security Topology Design ....................................................................................................101
Security Policy Development .............................................................................................. 102
Deployment Details.................................................................................................................. 104
Configuring Cisco ASA Firewall Connectivity....................................................................... 104
Configuring the Data Center Firewall................................................................................... 109
Configuring Firewall High Availability..................................................................................... 114
Evaluating and Deploying Firewall Security Policy................................................................. 117
Promiscuous versus Inline Modes of Operation....................................................................125
Design Considerations..........................................................................................................125
Deploying Firewall Intrusion Prevention Systems (IPS)..........................................................125
Appendix A: Product List..........................................................................................................138
Appendix B: Device Configuration Files..................................................................140
Appendix C: Changes............................................................................................................... 141
Preface
Cisco Validated Designs (CVDs) present systems that are based on common use cases or engineering priorities.
CVDs incorporate a broad set of technologies, features, and applications that address customer needs. Cisco
engineers have comprehensively tested and documented each design in order to ensure faster, more reliable,
and fully predictable deployment.
CVDs include two guide types that provide tested design details:
Technology design guides provide deployment details, information about validated products and
software, and best practices for specific types of technology.
Solution design guides integrate existing CVDs but also include product features and functionality
across Cisco products and sometimes include information about third-party integration.
Both CVD types provide a tested starting point for Cisco partners or customers to begin designing and deploying
systems.
CVD Navigator
The CVD Navigator helps you determine the applicability of this guide by summarizing its key elements: the use cases, the
scope or breadth of the technology covered, the proficiency or experience recommended, and CVDs related to this guide.
This section is a quick reference only. For more details, see the Introduction.
Use Cases
This guide addresses the following technology use cases:
Flexible Ethernet Network Foundation for Growth and Scale: Organizations can prepare for the ongoing transition of server connectivity from 1-Gigabit Ethernet attachment to 10-Gigabit Ethernet by building a single-tier switching backbone that cleanly scales high-speed server and appliance connectivity from a single equipment rack to multiple racks.
Virtual Machine Mobility within the Data Center: Most organizations are migrating to hypervisor technology and using virtual machines to reduce costs, improve resiliency, and provide flexibility. The data center infrastructure must facilitate virtual machine moves from one server to another for Ethernet and storage connectivity.
Scope
This guide covers the following areas of technology and products:
The data center Ethernet backbone using Cisco Nexus 5500
switches and fabric extension to extend Ethernet connectivity
to server racks
Virtual port channel technology for providing a hub-and-spoke
topology for VLAN extension across the data center without
spanning tree loops and the associated complexity
Connectivity to centralized storage arrays using Fibre Channel,
Fibre Channel over Ethernet, or IP transport
Firewalls and intrusion detection and prevention with secure
VLANs
For more information, see the "Design Overview" section in this
guide.
Proficiency
This guide is for people with the following technical proficiencies or equivalent experience:
CCNP Data Center: 3 to 5 years designing, implementing, and troubleshooting data centers in all their components
CCNP Routing and Switching: 3 to 5 years planning, implementing, verifying, and troubleshooting local and wide-area networks
CCNP Security: 3 to 5 years testing, deploying, configuring, and maintaining security appliances and other devices that establish the security posture of the network
Introduction
Technology Use Cases
Organizations encounter many challenges as they work to scale their information-processing capacity to keep up
with demand. In a new organization, a small group of server resources may be sufficient to provide necessary
applications such as file sharing, email, database applications, and web services. Over time, demand for
increased processing capacity, storage capacity, and distinct operational control over specific servers can cause
a growth explosion commonly known as server sprawl.
Server virtualization technologies help to more fully utilize the organization's investment in processing capacity,
while still allowing each virtual machine to be viewed independently from a security, configuration, and
troubleshooting perspective. Server virtualization and centralized storage technologies complement one another,
allowing rapid deployment of new servers and reduced downtime in the event of server hardware failures. Virtual
machines can be stored completely on the centralized storage system, which decouples the identity of the
virtual machine from any single physical server. This allows the organization great flexibility when rolling out new
applications or upgrading server hardware. In order to support the virtualization of computing resources in the
data center, the underlying network must be able to provide a reliable, flexible, and secure transport.
Use Case: Flexible Ethernet Network Foundation for Growth and Scale
As an organization outgrows the capacity of the basic server-room Ethernet stack of switches, it is important to
be prepared for the ongoing transition of server connectivity from 1-Gigabit Ethernet attachment to 10-Gigabit
Ethernet. Using a pair of Cisco Nexus 5500 switches to form a single tier of switching, this design provides the ability to cleanly scale high-speed server and appliance connectivity from a single equipment rack to multiple
racks, connected back to a pair of data center core switches.
This design guide enables the following network capabilities:
High-density rack-mount server connectivity: Servers in a data center rack need only be wired to the top of the rack, where fabric extenders that connect to the data center core switches provide Ethernet connectivity.
Blade server system integration: Blade server systems requiring higher-density 10-Gigabit trunk connectivity can connect directly to the non-blocking data center core Ethernet switches.
Migration to high-speed connectivity: In-rack Ethernet connectivity to fabric extenders can accommodate older Fast Ethernet connections as well as 1-Gigabit and 10-Gigabit Ethernet connectivity.
Resilient core Ethernet: A pair of multiprotocol data center core Ethernet switches provides sub-second failover in case of an unexpected outage.
Simplified network configuration and operation: Configuration and monitoring of the data center Ethernet is done centrally on the data center core switches.
Server connectivity options: Single-homed, network adapter teaming, and EtherChannel options provide a wide range of ways to connect a server to the data center Ethernet.
Design Overview
The data center architecture consists of three primary modular layers with hierarchical interdependencies: data
center foundation, data center services, and user services. Figure 1 illustrates the data center architecture
layered services.
Figure 1 - Data center pyramid of service layers
(Pyramid, bottom to top: Data Center Foundation (routing, switching, storage, compute); Data Center Services (security, virtualization, application resilience); User Services (voice, email, CRM, ERP).)
The ultimate goal of the layers of the architecture is to support the user services that drive the organization's success.
User Services
User services sit at the top of the pyramid and rely on the data center foundation and services to work. User
services are those applications that allow a person to do their job and ultimately drive productivity for the
organization. In the context of a building, this may be the elevator that takes you to your office floor, the lights in
the office, or the message button on the phone that allows you to check messages. User services in the data
center include email, order processing, and file sharing. Other applications in the data center that rely on the data center foundation and services, such as database applications, modeling, and transaction processing, also sit at the top of the pyramid of services.
This data center design allows organizations to take an existing server room environment to the next level of performance, flexibility, and security. Figure 2 provides a high-level overview of this architecture.
Figure 2 - Data center design overview
(Topology: a Cisco Nexus 5500 Layer 2/3 Ethernet and SAN fabric forms the data center core, connecting the LAN core, Cisco ASA firewalls with IPS, Cisco UCS C-Series and third-party rack servers, an expanded Cisco MDS 9100 storage fabric, and Fibre Channel storage arrays on SAN A and SAN B over Ethernet and Fibre Channel links.)
This data center design can stand alone, if deployed at an offsite facility, or connect to the Layer 3 Ethernet LAN core. The following technology areas are included within this reference architecture. Each chapter of this design guide takes a deeper and more comprehensive look at the technologies and features used in that part of the overall design.
Ethernet Infrastructure
The Ethernet infrastructure forms the foundation for resilient Layer 2 and Layer 3 communications in the data
center. This layer provides the ability to migrate from your original server farm to a scalable architecture capable
of supporting Fast Ethernet, Gigabit Ethernet, and 10-Gigabit Ethernet connectivity for hundreds of servers in a
modular approach.
The core of the data center is built on the Cisco Nexus 5500UP Series switches. The Cisco Nexus 5500UP Series is a high-speed switch capable of Layer 2 and Layer 3 switching; the Layer 3 daughter card was tested in this design.
Cisco Nexus 5500UP series 48-port and 96-port models are suitable for use in this design based on data center
port density requirements. Cisco Nexus 5500UP supports Fabric Extender (FEX) technology, which provides a
remote line card approach for fan out of server connectivity to top of rack for Fast Ethernet, Gigabit Ethernet, and
10-Gigabit Ethernet requirements. The physical interfaces on the Cisco FEX are programmed on the Cisco Nexus
5500UP switches, simplifying the task of configuration by reducing the number of devices you have to touch to
deploy a server port.
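Because FEX ports appear as local interfaces on the parent switch, bringing up a server port is a single short configuration task on the Nexus 5500UP. The following is a minimal sketch only; the FEX number (102), port, and access VLAN are illustrative assumptions rather than values taken from this design. The interface name Ethernet102/1/1 denotes FEX 102, slot 1, port 1.
interface Ethernet102/1/1
description Single-homed server on FEX 102
switchport access vlan 148
spanning-tree port type edge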
The Cisco Nexus 5500UP series features Virtual Port Channel (vPC) technology, which provides a loop-free
approach to building out the data center in which any VLAN can appear on any port in the topology without
spanning-tree loops or blocking links. The data center core switches are redundant with sub-second failover so
that a device failure or maintenance does not prevent the network from operating.
Storage Infrastructure
Storage networking is key to managing the growing amount of data that an organization must store. Centralized storage reduces the amount of disk space trapped on individual server platforms and eases the
task of providing backup to avoid data loss. The data center design uses Cisco Nexus 5500UP series switches
as the core of the network. The importance of this model switch is that it has universal port (UP) capabilities. A
universal port is capable of supporting Ethernet, Fibre Channel, and Fibre Channel over Ethernet (FCoE) on any
port. This allows the data center core to support multiple storage networking technologies like Fibre Channel
storage area network (SAN), Internet Small Computer System Interface (iSCSI), and network attached storage
(NAS) on a single platform type. This not only reduces costs to deploy the network but saves rack space in
expensive data center hosting environments.
Cisco Nexus 5500UP Fibre Channel capabilities are based on the Cisco NX-OS operating system and
seamlessly interoperate with the Cisco MDS Series SAN switches for higher-capacity Fibre Channel
requirements. This chapter includes procedures for interconnecting between Cisco Nexus 5500UP series and
Cisco MDS series for Fibre Channel SAN. Cisco MDS series can provide an array of advanced services for Fibre
Channel SAN environments where high-speed encryption, inter-VSAN routing, tape services, or Fibre Channel
over IP extension might be required.
Compute Connectivity
There are many ways to connect a server to the data center network for Ethernet and Fibre Channel transport.
This chapter provides an overview of connectivity ranging from single-homed Ethernet servers to a dual-homed
Fabric Extender, and dual-homed servers that might use active/standby network interface card (NIC) teaming
or EtherChannel for resiliency. Servers that use 10-Gigabit Ethernet can collapse multiple Ethernet NICs and
Fibre Channel host bus adapters (HBAs) onto a single wire using converged network adapters (CNAs) and
FCoE. Dual-homing the 10-Gigabit Ethernet servers with FCoE provides resilient Ethernet transport and Fibre
Channel connections to SAN-A/SAN-B topologies. This chapter also provides an overview of how the integrated
connectivity of Cisco Unified Computing System (UCS) blade server systems works, and considerations for connecting a non-Cisco blade server system to the network.
Network Security
Within a data center design, there are many requirements and opportunities to include or improve security for
customer confidential information and the organization's critical and sensitive applications. The data center
design is tested with the Cisco ASA 5585-X series firewall. Cisco ASA 5585-X provides high-speed processing
for firewall rule sets and high bandwidth connectivity with multiple 10-Gigabit Ethernet ports for resilient
connectivity to the data center core switches. Cisco ASA 5585-X also has a slot for services, and in this design
provides an IPS module to inspect application layer data, to detect attacks and snooping, and to block malicious
traffic based on the content of the packet or the reputation of the sender. The Cisco ASA 5585-X firewalls with
IPS modules are deployed in a pair, which provides an active/standby resiliency to prevent downtime in the event
of a failure or platform maintenance.
Physical Environment
Design Overview
This data center design provides a resilient environment with redundant platforms and links; however, this cannot
protect your data center from a complete failure resulting from a total loss of power or cooling. When designing
your data center, you must consider how much power you will require, how you will provide backup power
in the event of a loss of your power feed from your provider, and how long you will retain power in a backup
power event. You also need to consider that servers, networking equipment, and appliances in your data center
dissipate heat as they operate, which requires that you develop a proper cooling design that includes locating
equipment racks to prevent hotspots.
Power
Know what equipment will be installed in the area. You cannot plan electrical work if you do not know what
equipment is going to be used. Some equipment requires standard 110V outlets that may already be available.
Other equipment might require much more power.
Does the power need to be on all the time? In most cases where servers and storage are involved, the answer
is yes. Applications don't react very well when the power goes out. To ride through a power interruption, you need an uninterruptible power supply (UPS). During a power interruption, the UPS will switch over the current load to
a set of internal or external batteries. Some UPSs are online, which means the power is filtered through the
batteries all the time; others are switchable, meaning they use batteries only during power loss. UPSs vary by
how much load they can carry and for how long. Careful planning is required to make sure the correct UPS is
purchased, installed, and managed correctly. Most UPSs provide for remote monitoring and the ability to trigger a
graceful server shutdown for critical servers if the UPS is going to run out of battery.
Distributing the power to the equipment can change the power requirements as well. There are many options
available to distribute the power from the outlet or UPS to the equipment. One example would be using a power
strip that resides vertically in a cabinet that usually has an L6-30 input and then C13/C19 outlets with the output
voltage in the 200-240V range. These strips should be, at a minimum, metered so one does not overload the
circuits. The meter provides a current reading of the load on the circuit. This is critical, because a circuit breaker
that trips due to being overloaded will bring down everything plugged into it with no warning, causing business
downtime and possible data loss. For complete remote control, power strips are available with full remote control
of each individual outlet from a web browser. These vertical strips also assist in proper cable management of the
power cords. Short C13/C14 and C19/C20 power cords can be used instead of much longer cords to multiple
110V outlets or multiple 110V power strips.
Cooling
With power comes the inevitable conversion of power into heat. To put it simply: power in equals heat out.
Planning for cooling of one or two servers and a switch with standard building air conditioning may work. Multiple
servers and blade servers (along with storage, switches, etc.) need more than building air conditioning for proper
cooling. Be sure to at least plan with your facilities team what the options are for current and future cooling.
Many options are available, including in-row cooling, overhead cooling, raised floor with underfloor cooling, and
wall-mounted cooling.
Equipment Racking
It's important to plan where to put the equipment. Proper placement and planning allow for easy growth. After
you have evaluated power and cooling, you need to install racking or cabinets. Servers tend to be fairly deep
and take up even more space with their network connections and power connections. Most servers will fit in
a 42-inch deep cabinet, and deeper cabinets give more flexibility for cable and power management within the
cabinet. Be aware of what rails are required by your servers. Most servers now come with rack mounts that use
the square-hole style vertical cabinet rails. Not having the proper rails can mean that you have to use adapters or
shelves, which makes managing servers and equipment difficult if not sometimes impossible without removing
other equipment or sacrificing space. Data center racks should use the square rail mounting options in the
cabinets. Cage nuts can be used to provide threaded mounts for such things as routers, switches, shelves, etc.
that you may need.
Summary
The physical environmental requirements for a data center require careful planning to provide for efficient use
of space, scalability, and ease of operational maintenance. For additional information on data center power,
cooling, and equipment racking, contact Cisco partners in the area of data center environmental products such
as Panduit and APC.
Ethernet Infrastructure
Design Overview
The foundation of the Ethernet network in this data center design is a resilient pair of Cisco Nexus 5500UP
Series switches. These switches offer the ideal platform for building a scalable, high-performance data center
supporting both 10-Gigabit and 1-Gigabit Ethernet attached servers. The data center is designed to allow
easy migration of servers and services from your original server room to a data center that can scale with your
organization's growth.
The Cisco Nexus 5500UP switches with universal port (UP) capabilities provide support for Ethernet, Fibre
Channel over Ethernet (FCoE), and Fibre Channel ports on a single platform. The Nexus 5500UP can act as the
Fibre Channel SAN for the data center and connect into an existing Fibre Channel SAN. The Cisco Nexus 5000
Series also supports the Cisco Nexus 2000 Series Fabric Extenders. Fabric Extenders allow the switching fabric
of the resilient switching pair to be physically extended to provide port aggregation in the top of multiple racks,
reducing cable management issues as the server environment expands.
This data center design leverages many advanced features of the Cisco Nexus 5500UP Series switch family to
provide a central Layer 2 and Layer 3 switching fabric for the data center environment:
The Layer 3 routing table can accommodate up to 8000 IPv4 routes.
The Layer 3 engine supports up to 8000 adjacencies or MAC addresses for the Layer 2 domain.
The solution provides for up to 1000 IP Multicast groups when operating in the recommended Virtual
Port Channel (vPC) mode.
A second generation of the Layer 3 engine for the Cisco Nexus 5548 and 5596 switches is now available. This
second generation hardware version of the Layer 3 module doubles the scalability for routing and adjacencies
when you are running Cisco NX-OS software release 5.2(1)N1(1) or later.
Reader Tip
More specific scalability design numbers for the Cisco Nexus 5500 Series platform can
be found at:
http://www.cisco.com/c/en/us/td/docs/switches/datacenter/nexus5500/sw/
Verified_Scalability/702N11/b_5500_Verified_Scalability_702N11/b_5500_Verified_
Scalability_702N11_chapter_01.html
The Layer 3 data center core connects to the Layer 3 LAN core as shown in Figure 3.
Figure 3 - Data center core and LAN core change control separation
(The data center core, serving data center servers and services, connects by Layer 3 to the LAN core, which in turn serves the WAN and the Internet and DMZ; the two cores form separate change control domains.)
The result of using Layer 3 to interconnect the two core layers is:
A resilient Layer 3 interconnect with rapid failover.
A logical separation of change control for the two core networks.
A LAN core that provides a scalable interconnect for LAN, WAN, and Internet Edge.
A data center core that provides interconnect for all data center servers and services.
Intra-data center Layer 2 and Layer 3 traffic flows between servers and appliances are switched locally on the data center core.
A data center that has a logical separation point for moving to an offsite location while still providing core
services without redesign.
This section provides an overview of the key features used in this topology and illustrates the specific physical
connectivity that applies to the example configurations provided in the Deployment Details section.
(Figure: VLAN 148 with a spanning-tree root switch and a spanning-tree blocked link.)
The Cisco Nexus 5500UP Series switch pair providing the central Ethernet switching fabric for the data center
is configured using vPC. The vPC feature allows links that are physically connected to two different Cisco Nexus
switches to appear to a third downstream device to be coming from a single device, as part of a single Ethernet
port channel. The third device can be a server, switch, or any other device or appliance that supports IEEE
802.3ad port channels. This capability allows the two data center core switches to build resilient, loop-free Layer
2 topologies that forward on all connected links instead of requiring Spanning Tree Protocol blocking for loop
prevention.
Cisco NX-OS Software vPC used in the data center design and Cisco Catalyst Virtual Switching Systems (VSS)
used in LAN deployments are similar technologies in that they allow the creation of Layer 2 port channels that
span two switches. For Cisco EtherChannel technology, the term multichassis EtherChannel (MCEC) refers
to either technology interchangeably. MCEC links from a device connected to the data center core using vPC provide spanning-tree loop-free topologies, allowing VLANs to be extended across the data center while maintaining a resilient architecture.
A vPC consists of two vPC peer switches connected by a peer link. Of the vPC peers, one is primary and one is
secondary. The system formed by the switches is referred to as a vPC domain.
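For reference, a minimal sketch of a vPC domain definition on one Cisco Nexus 5500UP peer follows; the domain ID and peer-link port-channel number are placeholders, the keepalive addresses assume the mgmt0 addressing used later in this guide, and the complete validated configuration appears in the Deployment Details.
feature vpc
vpc domain [domain id]
peer-keepalive destination 10.4.63.11 source 10.4.63.10
interface port-channel [peer-link number]
switchport mode trunk
spanning-tree port type network
vpc peer-link
The peer switch mirrors this configuration, with the keepalive source and destination addresses reversed.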
Figure 5 - Cisco NX-OS vPC design
(VLAN 148 carried over Layer 2 EtherChannels into the vPC domain formed by the two data center core switches.)
This feature enhances ease of use and simplifies configuration for the data center switching environment.
Reader Tip
For more information on vPC technology and design, refer to the documents Cisco
NX-OS Software Virtual PortChannel: Fundamental Concepts and Spanning-Tree
Design Guidelines for Cisco NX-OS Software and Virtual PortChannels, here:
http://www.cisco.com/c/en/us/support/switches/nexus-5000-series-switches/
products-implementation-design-guides-list.html
This data center design uses Hot Standby Router Protocol (HSRP) for IP default gateway resiliency for data center
VLANs. When combining HSRP with vPC, there is no need for aggressive HSRP timers to improve convergence,
because both gateways are always active and traffic to either data center core will be locally switched for
improved performance and resiliency.
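As a brief sketch of how such a gateway looks on one data center core switch (VLAN 148 is taken from the figures in this chapter, while the addresses, HSRP group number, and priority are placeholders; the peer switch carries the same virtual IP with a different priority):
interface Vlan148
no shutdown
ip address [switch IP address]/24
hsrp 148
priority 110
ip [virtual gateway IP address]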
(Figure: single-homed FEXs carrying VLAN 148, attached to the Nexus 5500UP Ethernet vPC switch fabric.)
Our reference architecture example shown in Figure 7 illustrates single-homed and dual-homed Cisco FEX
configurations with connected servers. Each Cisco FEX includes dedicated fabric uplink ports that are designed
to connect to upstream Cisco Nexus 5500UP Series switches for data communication and management. Any
10-Gigabit Ethernet port on the Cisco Nexus 5500UP switch may be used for a Cisco FEX connection.
Tech Tip
When the Cisco Nexus 5500UP Series switches are configured for Layer 3 and vPC
operation, they support up to sixteen connected Cisco FEXs as of Cisco NX-OS
release 5.2(1)N1(1). The Cisco FEX will support up to four or eight uplinks to the Cisco
Nexus 5500UP parent switches, depending on the model of Cisco FEX in use and the
level of oversubscription you want in your design. It is recommended, when possible, to
configure the maximum number of fabric uplinks leveraging either twinax (CX-1) cabling
or the Fabric Extender Transceiver (FET) and OM3 multi-mode fiber. At least two Cisco
FEX uplinks to the data center core are recommended for minimum resiliency.
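To illustrate how a FEX is associated with its fabric uplinks, the following is a minimal sketch on one Cisco Nexus 5500UP switch; the FEX number and Ethernet ports are assumptions, and for a dual-homed FEX the same configuration is repeated on both core switches with a matching vpc statement added under the port channel.
fex 102
pinning max-links 1
description FEX102
interface Ethernet1/25-26
channel-group 102
no shutdown
interface port-channel 102
switchport mode fex-fabric
fex associate 102
no shutdown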
(Figure 7: dual-homed servers, a dual-homed FEX 2248, and a single-homed FEX 2232 attached to the Nexus 5500UP Ethernet vPC switch fabric, which connects to the LAN core.)
Quality of Service
To support the lossless data requirement of FCoE on the same links as IP traffic, the Nexus 5500 switches and
the Nexus 2000 fabric extenders as a system implement an approach that uses Quality of Service (QoS) with
a data center focus. Much of the QoS for classification and marking in the system is constructed through the
use of the IEEE 802.1Q Priority Code Point, also known as Class of Service (CoS) bits in the header of the Layer
2 frame from hosts supporting FCoE and other trunked devices. As IP traffic arrives at an Ethernet port on the
Cisco Nexus 5500 Series switch, it can also be classified at Layer 3 by differentiated services code point (DSCP)
bits and IP access control lists (ACLs).
The traffic classifications are used for mapping traffic into one of six hardware queues, each appropriately
configured for desired traffic handling. One queue is predefined for default traffic treatment, while one hardware
queue is assigned for use by lossless FCoE traffic. The remaining four queues are available for use to support
queuing of multimedia and data center traffic. For example, a priority queue will be defined for jitter-intolerant
multimedia services in the data center.
Lacking the guarantee that all non-FCoE devices in the data center can generate an appropriate CoS marking
required for application of QoS policy at ingress to a FEX, this data center design takes the following QoS
approach:
FCoE traffic, as determined by Data Center Bridging Exchange (DCBX) negotiation with hosts, is given
priority and lossless treatment end-to-end within the data center.
Non-FCoE traffic without CoS classification for devices connected to a FEX is given default treatment
over available links on ingress toward the Cisco Nexus 5500 switch, with suitable aggregated link
bandwidth available to mitigate oversubscription situations. Traffic in the reverse direction toward the FEX
is handled by the QoS egress policies on the Cisco Nexus 5500 switch.
Classification by DSCP is configured at the port level and applied to IP traffic on ingress to the Cisco
Nexus 5500 switch, either directly or after traversing a FEX connection. This classification is used to map
traffic into the default queue or into one of the four non-FCoE internal queues to offer a suitable QoS
per-hop behavior.
To ensure consistent policy treatment for traffic directed through the Layer 3 engine, a CoS marking is
also applied per Cisco Nexus 5500 internal queue. The CoS marking is used for classification of traffic
ingress to the Layer 3 engine, allowing application of system queuing policies.
Non-FCoE devices requiring DSCP-based classification with guaranteed queuing treatment can be connected
directly to the Cisco Nexus 5500 switch, versus taking the default uplink treatment when connected to a Cisco
FEX port.
The QoS policy is also the method for configuring jumbo frame support on a per-class basis. Consistent per-CoS
maximum transmission unit (MTU) requirements are applied system-wide for FCoE, as opposed to the port-based MTU configuration typical of devices used outside of the data center. Increasing MTU size can increase
performance for bulk data transfers.
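As a sketch of how a per-class MTU is expressed in this QoS model (the class and policy names, the qos-group number, and the MTU value shown here are assumptions; the complete policy used in this design also carries the FCoE no-drop class and is configured in the Deployment Details):
class-map type network-qos BULK-NQ
match qos-group 3
policy-map type network-qos DC-NETWORK-QOS
class type network-qos BULK-NQ
mtu 9216
system qos
service-policy type network-qos DC-NETWORK-QOS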
Deployment Details
How to Read Commands
This guide uses the following conventions for
commands that you enter at the command-line
interface (CLI).
The following configuration procedures are required to configure the Ethernet switching fabric for this data center
design.
Process: Configuring Ethernet Out-of-Band Management
An increasing number of switching platforms, appliances, and servers utilize discrete management ports for
setup, monitoring, and keepalive processes. The typical mid-tier data center is an ideal location for an Ethernet
out-of-band management network, because the equipment is typically contained within a few racks and does
not require fiber-optic interconnect to reach far-away platforms.
This design uses a fixed-configuration Layer 2 switch for the out-of-band Ethernet management network. A
switch like the Cisco Catalyst 3750-X Series Switch is ideal for this purpose because it has dual power supplies
for resiliency.
The out-of-band network provides:
A Layer 2 path, independent of the data path of the Cisco Nexus 5500UP data center core switches, for
vPC keepalive packets running over the management interface
A path for configuration synchronization between Cisco Nexus 5500UP switches via the management
interfaces
A common connection point for data center appliance management interfaces like firewalls and load
balancers
A connectivity point for management ports on servers
Although the Layer 2 switch does provide a common interconnect for packets inside the data center, it needs to
provide the ability for IT management personnel outside of the data center to access the data-center devices.
The options for providing IP connectivity depend on the location of your data center.
If your data center is at the same location as your headquarters LAN, the core LAN switch can provide Layer 3
connectivity to the data center management subnet.
Figure 8 - Core LAN switch providing Layer 3 connectivity
(The out-of-band Ethernet switch connects to the mgmt0 ports of both data center core switches, with the core LAN switch providing Layer 3 connectivity.)
If your data center is located at a facility separate from a large LAN, the WAN router can provide Layer 3
connectivity to the data center management subnet.
Figure 9 - WAN router providing Layer 3 connectivity
(The out-of-band Ethernet switch connects to the mgmt0 ports of both data center core switches, with the WAN router providing Layer 3 connectivity.)
A third option for providing Layer 3 connectivity to the data center management subnet is to use the data center
core Cisco Nexus 5500UP switches, as illustrated in Figure 10. This is the configuration described in this guide.
Figure 10 - Providing Layer 3 connectivity by using core Cisco Nexus 5500UP switches
(The out-of-band Ethernet switch connects to the mgmt0 ports of both data center core switches, which also provide Layer 3 connectivity for the management subnet.)
Tech Tip
When you use the data center core Cisco Nexus 5500UP switches for Layer 3
connectivity, the Layer 2 path for vPC keepalive packets will use the Ethernet out-of-band switch, because the Nexus 5500UP management ports are in a separate management Virtual Routing and Forwarding (VRF) instance from the global packet
switching of the Cisco Nexus 5500UP switches. Also, the management ports are in
the same IP subnet, so they do not need a Layer 3 switch for packets between the
data center core switches. The Layer 3 switched virtual interface (SVI) will provide
connectivity for access outside of the data center.
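As a brief sketch of that SVI on one data center core switch (the switch address and HSRP group are assumptions; 10.4.63.1 matches the default gateway configured on the out-of-band switch, and the validated configuration is covered in the data center core process later in this chapter):
interface Vlan163
description Out-of-band management network
no shutdown
ip address 10.4.63.2/24
hsrp 163
ip 10.4.63.1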
Procedure 1
This procedure configures system settings that simplify and secure the management of the switch. The values
and actual settings in the examples provided will depend on your current network configuration.
Table 1 - Common network services used in the design examples
Domain name: cisco.local
DNS server: 10.4.48.10
Authentication (TACACS+) server: 10.4.48.15
NTP server: 10.4.48.17
Step 1: Configure the device host name to make it easy to identify the device.
hostname [hostname]
Step 2: Configure VLAN Trunking Protocol (VTP) transparent mode. This design uses VTP transparent mode
because the benefits of the alternative mode (dynamic propagation of VLAN information across the network) are not worth the potential for unexpected behavior due to operational error.
VTP allows network managers to configure a VLAN in one location of the network and have that configuration
dynamically propagate out to other network devices. However, in most cases, VLANs are defined once during
switch setup with few, if any, additional modifications.
vtp mode transparent
Step 3: Enable Rapid Per-VLAN Spanning Tree (Rapid PVST+). Rapid PVST+ provides an instance of RSTP (802.1w) per
VLAN. Rapid PVST+ greatly improves the detection of indirect failures or linkup restoration events over classic
spanning tree (802.1D).
Although this architecture is built without any Layer 2 loops, you must still enable spanning tree. By enabling
spanning tree, you ensure that if any physical or logical loops are accidentally configured, no actual Layer 2 loops
will occur.
spanning-tree mode rapid-pvst
Step 4: Enable Unidirectional Link Detection (UDLD) Protocol. UDLD is a Layer 2 protocol that enables devices
connected through fiber-optic or twisted-pair Ethernet cables to monitor the physical configuration of the cables
and detect when a unidirectional link exists. When UDLD detects a unidirectional link, it disables the affected
interface and alerts you. Unidirectional links can cause a variety of problems, including spanning-tree loops,
black holes, and non-deterministic forwarding. In addition, UDLD enables faster link failure detection and quick
reconvergence of interface trunks, especially with fiber-optic cables, which can be susceptible to unidirectional
failures.
udld enable
Step 5: Set EtherChannels to use the traffic source and destination IP address when calculating which link to
send the traffic across. This normalizes the method in which traffic is load-shared across the member links of
the EtherChannel. EtherChannels are used extensively in this design because they contribute resiliency to the
network.
port-channel load-balance src-dst-ip
Caution
If you configure an access list on the vty interface, you may lose the ability to use SSH
to log in from one router to the next for hop-by-hop troubleshooting.
Procedure 2
Procedure 3
To make configuration easier when the same configuration will be applied to multiple interfaces on the switch,
use the interface range command. This command allows you to issue a command once and have it apply to
many interfaces at the same time, which can save a lot of time because most of the interfaces in the access
layer are configured identically. For example, the following command allows you to enter commands on all 24
interfaces (Gig 1/0/1 to Gig 1/0/24) simultaneously.
interface range Gigabitethernet 1/0/1-24
Step 1: Configure switch interfaces to support management console ports. This host interface configuration
supports management port connectivity.
interface range [interface type] [port number]-[port number]
switchport access vlan [vlan number]
switchport mode access
Step 2: Configure the switch port for host mode. Because only end-device connectivity is provided for the
Ethernet management ports, shorten the time it takes for the interface to go into a forwarding state by enabling PortFast, disabling 802.1Q trunking, and disabling channel grouping.
switchport host
vlan 163
name DC_ManagementVLAN
!
interface vlan 163
description in-band management
ip address 10.4.63.5 255.255.255.0
no shutdown
!
ip default-gateway 10.4.63.1
!
spanning-tree portfast bpduguard default
!
interface range GigabitEthernet 1/0/1-22
switchport access vlan 163
switchport mode access
switchport host
Procedure 4
As described earlier, there are various methods to connect to Layer 3 for connectivity to the data center out-of-band management network. The following steps describe configuring an EtherChannel for connectivity to the
data center core Cisco Nexus 5500UP switches.
Step 1: Configure two or more physical interfaces to be members of the EtherChannel, and set LACP to active on both sides. Using active-mode LACP on both ends ensures that the EtherChannel negotiates properly.
interface [interface type] [port 1]
description Link to DC Core port 1
interface [interface type] [port 2]
description Link to DC Core port 2
Reader Tip
The configuration on the data center core Cisco Nexus 5500UP switches for Layer 3
connectivity to the out-of-band management network will be covered in Procedure 5,
Configure management switch connection, in the Configuring the Data Center Core
process later in this chapter.
Example
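A hedged sketch of this EtherChannel on the out-of-band Catalyst switch follows; the port-channel number, physical ports, and use of access mode on VLAN 163 are assumptions rather than validated values.
interface Port-channel21
description EtherChannel to data center core
switchport access vlan 163
switchport mode access
!
interface range GigabitEthernet 1/0/23-24
description Links to data center core
switchport access vlan 163
switchport mode access
channel-group 21 mode active
no shutdown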
Process: Configuring the Data Center Core Setup and Layer 2 Ethernet
Procedure 1
Complete the physical connectivity of the Cisco Nexus 5500UP Series switch pair according to the illustration
below.
(The Nexus 5500UP Ethernet vPC switch fabric connects to the LAN core and to single-homed and dual-homed FEXs, and carries vPC peer keepalive traffic over the out-of-band management network.)
Step 1: Connect two available Ethernet ports between the two Cisco Nexus 5500UP Series switches.
These ports will be used to form the vPC peer-link, which allows the peer connection to form and supports
forwarding of traffic between the switches if necessary during a partial link failure of one of the vPC port
channels. It is recommended that you use at least two links for the vPC peer-link resiliency, although you can add
more to accommodate higher switch-to-switch traffic.
Step 2: Connect two available Ethernet ports on each Cisco Nexus 5500UP Series switch to the LAN core.
Four 10-Gigabit Ethernet connections will provide resilient connectivity to the LAN core with aggregate
throughput of 40 Gbps to carry data to the rest of the organization.
Step 3: Connect to a dual-homed FEX.
The data center design uses pairs of dual-homed FEX configurations for increased resilience and uniform
connectivity. To support a dual-homed FEX with single-homed servers, connect fabric uplink ports 1 and 2 on
the Cisco FEX to an available Ethernet port, one on each Cisco Nexus 5500UP Series switch. These ports will
operate as a port channel to support the dual-homed Cisco FEX configuration.
Depending on the model Cisco FEX being used, up to four or eight ports can be connected to provide more
throughput from the Cisco FEX to the core switch.
Step 4: Connect to a single-homed FEX.
Support single-homed FEX attachment by connecting fabric uplink ports 1 and 2 on each FEX to two available
Ethernet ports on only one member of the Cisco Nexus 5500UP Series switch pair. These ports will be a port
channel, but will not be configured as a vPC port channel because they have physical ports connected to only
one member of the switch pair.
Single-homed FEX configurations are beneficial when FCoE-connected servers will be attached.
Depending on the model Cisco FEX being used, you can connect up to four or eight ports to provide more
throughput from the Cisco FEX to the core switch.
Step 5: Connect to the out-of-band management switch.
This design uses a physically separate, standalone switch for connecting the management ports of the Cisco
Nexus 5500 switches. The management ports provide out-of-band management access and transport for vPC
peer keepalive packets, which are a part of the protection mechanism for vPC operation.
Procedure 2
This procedure configures system settings that simplify and secure the management of the solution. The values
and actual settings in the examples provided will depend on your current network configuration.
Table 2 - Common network services used in the design examples
Domain name: cisco.local
DNS server: 10.4.48.10
Authentication (TACACS+) server: 10.4.48.15
NTP server: 10.4.48.17
EIGRP autonomous system: 100
Cisco Nexus 5500UP switch-A management address (mgmt0): 10.4.63.10
Cisco Nexus 5500UP switch-B management address (mgmt0): 10.4.63.11
Step 1: Connect to the switch console interface by connecting a terminal cable to the console port of the first
Cisco Nexus 5500UP Series switch (switch-A), and then powering on the system in order to enter the initial
configuration dialog box.
Step 2: Run the setup script and follow the Basic System Configuration Dialog for initial device configuration of
the first Cisco Nexus 5500UP Series switch. This script sets up a system login password, SSH login, and the
management interface addressing. Some setup steps will be skipped and covered in a later configuration step.
Do you want to enforce secure password standard (yes/no): y
Enter the password for "admin":
Confirm the password for "admin":
---- Basic System Configuration Dialog ----
This setup utility will guide you through the basic configuration of the system.
Setup configures only enough connectivity for management of the system.
Please register Cisco Nexus 5000 Family devices promptly with your supplier.
Failure to register may affect response times for initial service calls. Nexus
devices must be registered to receive entitled support services.
Press Enter at any time to skip a dialog. Use ctrl-c at anytime
to skip the remaining dialogs.
Would you like to enter the basic configuration dialog (yes/no): y
Create another login account (yes/no) [n]: n
Configure read-only SNMP community string (yes/no) [n]: n
Configure read-write SNMP community string (yes/no) [n]: n
Enter the switch name : DC5548UPa
Continue with Out-of-band (mgmt0) management configuration? (yes/no) [y]: y
Step 3: Enable the switch features required for this design.
feature udld
feature interface-vlan
feature lacp
feature vpc
feature eigrp
feature fex
feature hsrp
feature pim
feature fcoe
Tech Tip
Although it is not used in this design, if the Fibre Channel-specific feature NPV
is required for your network, you should enable it prior to applying any additional
configuration to the switch. The NPV feature is the only feature that when enabled or
disabled will erase your configuration and reboot the switch, requiring you to reapply
any existing configuration commands to the switch.
Step 4: Configure the name server command with the IP address of the DNS server for the network. At the
command line of the switch, it is helpful to be able to type a domain name instead of an IP address.
ip name-server 10.4.48.10
Step 5: Set local time zone for the device location. NTP is designed to synchronize time across all devices in a
network for troubleshooting. In the initial setup script, you set the NTP server address. Now set the local time for
the device location.
clock timezone PST -8 0
clock summer-time PDT 2 Sunday march 02:00 1 Sunday nov 02:00 60
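If the NTP server was not defined during the setup script, it can be added manually; the following one-line sketch assumes the server is reached through the mgmt0 interface, which is why the use-vrf management option is shown.
ntp server 10.4.48.17 use-vrf management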
Step 6: Define a read-only and a read/write SNMP community for network management.
snmp-server community [SNMP RO string] group network-operator
snmp-server community [SNMP RW string] group network-admin
Step 7: If you want to reduce operational tasks per device, configure centralized user authentication by using the
TACACS+ protocol to authenticate management logins on the infrastructure devices to the AAA server.
As networks scale in the number of devices to maintain, the operational burden to maintain local user accounts
on every device also scales. A centralized AAA service reduces operational tasks per device and provides an
audit log of user access for security compliance and root-cause analysis. When AAA is enabled for access
control, all management access to the network infrastructure devices (SSH and HTTPS) is controlled by AAA.
TACACS+ is the primary protocol used to authenticate management logins on the infrastructure devices to the
AAA server. A local AAA user database is also defined in the setup script on each Cisco Nexus 5500 switch to
provide a fallback authentication source in case the centralized TACACS+ server is unavailable.
feature tacacs+
tacacs-server host 10.4.48.15 key [key]
aaa group server tacacs+ tacacs
server 10.4.48.15
use-vrf default
aaa authentication login default group tacacs
Step 8: If operational support is centralized in your network, you can increase network security by using an
access list to limit the networks that can access your device. In this example, only devices on the 10.4.48.0/24
network will be able to access the device via SSH or SNMP.
ip access-list vty-acl-in
permit tcp 10.4.48.0/24 any eq 22
line vty
ip access-class vty-acl-in in
!
ip access-list snmp-acl
permit udp 10.4.48.0/24 any eq snmp
snmp-server community [SNMP RO string] use-acl snmp-acl
snmp-server community [SNMP RW string] use-acl snmp-acl
Caution
If you configure an access list on the vty interface, you may lose the ability to use SSH
to log in from one router to the next for hop-by-hop troubleshooting.
Step 9: Configure port operation mode. In this example, you enable ports 28 through 32 on a Cisco Nexus
5548UP switch as Fibre Channel ports.
slot 1
port 28-32 type fc
The Cisco Nexus 5500UP switch has universal ports that are capable of running Ethernet+FCoE or Fibre Channel
on a per-port basis. By default, all switch ports are enabled for Ethernet operation. Fibre Channel ports must
be enabled in a contiguous range and be the high-numbered ports of the switch baseboard and/or the high-numbered ports of a universal port expansion module.
(On the slot 1 baseboard and the slot 2 GEM, the low-numbered ports remain Ethernet and the contiguous high-numbered ports are configured as Fibre Channel.)
Tech Tip
Changing port type to FC requires a reboot to recognize the new port operation. Ports
will not show up in the configuration as FC ports if you did not enable the FCoE feature
in Step 3.
Step 10: Save your configuration, and then reload the switch. Because the Cisco Nexus switch requires a reboot
to recognize ports configured for Fibre Channel operation, this is a good point for you to reload the switch. If you
are not enabling Fibre Channel port operation, you do not need to reload the switch at this point.
copy running-config startup-config
reload
Step 11: On the second Cisco Nexus 5500UP Series switch (switch-B), repeat all of the steps of this procedure
(Procedure 2). In Step 2, use a unique device name (DC5548UPb) and IP address (10.4.63.11) for the mgmt0
interface; otherwise, all configuration details are identical.
Procedure 3
Configure QoS policies
QoS policies have been created for the data center to align with the QoS configurations in the CVD LAN and WAN to protect multimedia streams, control traffic, and FCoE traffic that flow through the data center. This is intended to be a baseline that you can customize to your environment if needed. At a minimum, it is recommended that FCoE QoS be configured to provide no-drop protected behavior in the data center.
QoS policies in this procedure are configured for Cisco Nexus 5500 and 2200 systems globally, and then later
defined for assignment to Ethernet port and Ethernet port-channel configurations. Cisco Nexus FEX ports can
use Layer 2 CoS markings for queuing. The Cisco Nexus 5500 ports can use Layer 2 CoS or Layer 3 DSCP
packet marking for queue classification.
The system default FCoE policies are integrated into the overall CVD policies, to allow for the integration of FCoE-capable devices into the data center without significant additional configuration. If there is not a current or future need to deploy FCoE in the data center, the QoS policy can be adapted to use the standard FCoE qos-group for other purposes.
The following configurations will be created:
Overall system classification via class-map type qos and policy-map type qos configurations will be
based on CoS to associate traffic with the system internal qos-groups.
Interface classification will be based on Layer 3 DSCP values via class-map type qos and policy-map
type qos configurations to associate specific IP traffic types with corresponding internal qos-groups.
System queue attributes based on matching qos-group are applied to set Layer 2 MTU, buffer queue-limit, and CoS mapping (for a Layer 3 daughter card) via class-map type network-qos and policy-map type network-qos.
System queue scheduling, based on matching qos-group, will be applied to set a priority queue for jitter-sensitive multimedia traffic and to apply bandwidth to weighted round-robin queues via class-map type queuing and policy-map type queuing. The bandwidth assignment for FCoE queuing should be adapted to the deployment requirements to guarantee end-to-end lossless treatment. For example, reallocating bandwidths to allow FCoE to assign bandwidth percent 40 would be more appropriate for 4-Gbps Fibre Channel traffic over a 10-Gbps Ethernet link to a server or storage array.
System-wide QoS service-policy will be configured in the system QoS configuration.
Interface QoS service-policy will be defined for later use when configuring Ethernet end points for
connectivity.
Apply the same QoS map to both data center core switches.
Step 1: Configure class-map type qos classification for global use, to match specific CoS bits. There is an
existing system class-default which will automatically match any unmarked packets, unmatched CoS values, and
packets marked with a CoS of zero. The FCoE class-map type qos class-fcoe is pre-defined and will be used in
the policy map for FCoE traffic to ensure correct operation.
class-map type qos PRIORITY-COS
  match cos 5
class-map type qos CONTROL-COS
  match cos 4
class-map type qos TRANSACTIONAL-COS
  match cos 2
class-map type qos BULK-COS
  match cos 1
Step 2: Configure policy-map type qos policy for global use. This creates the CoS-to-internal-qos-group
mapping. The system-defined qos-group 0 is automatically created and does not require definition.
policy-map type qos DC-FCOE+1P4Q_GLOBAL-COS-QOS
class type qos PRIORITY-COS
set qos-group 5
class type qos CONTROL-COS
set qos-group 4
class type qos class-fcoe
set qos-group 1
class type qos TRANSACTIONAL-COS
set qos-group 2
class type qos BULK-COS
set qos-group 3
Step 3: Configure class-map type qos classification for Ethernet interface use. This allows for the mapping of traffic based on IP DSCP into the internal qos-groups of the Cisco Nexus 5500 switch. The match cos statements are used to match inbound Layer 2 CoS-marked traffic, and also to classify traffic destined for the Cisco Nexus 5500 Layer 3 engine for prioritization. All non-matched traffic will be handled by the system-defined class-default queue.
class-map type qos match-any PRIORITY-QUEUE
  match dscp ef
  match dscp cs5 cs4
  match dscp af41
  match cos 5
class-map type qos match-any CONTROL-QUEUE
  match dscp cs3
  match cos 4
class-map type qos match-any TRANSACTIONAL-QUEUE
  match dscp af21 af22 af23
  match cos 2
class-map type qos match-any BULK-QUEUE
  match dscp af11 af12 af13
  match cos 1
Step 4: Configure policy-map type qos policy to be applied to interfaces, for mapping DSCP classifications into
internal qos-group. Interface policies are created to classify incoming traffic on Ethernet interfaces which are not
members of a port-channel. These policies will also be assigned to port-channel virtual interfaces, but not the
port-channel member physical interfaces.
policy-map type qos DC-FCOE+1P4Q_INTERFACE-DSCP-QOS
class PRIORITY-QUEUE
set qos-group 5
class CONTROL-QUEUE
set qos-group 4
class TRANSACTIONAL-QUEUE
set qos-group 2
class BULK-QUEUE
set qos-group 3
Step 5: Configure class-map type queuing classification for global use, in order to match a specific internal qos-group for setting queue attributes. Five internal qos-groups are available for assignment, plus an additional system qos-group 0 which is automatically created for default CoS traffic. The internal qos-group number is arbitrarily assigned, and does not necessarily match an equivalent CoS value. The FCoE class-map type queuing class-fcoe is pre-defined and will be used in the policy map for FCoE traffic to ensure correct operation.
class-map type queuing PRIORITY-GROUP
  match qos-group 5
class-map type queuing CONTROL-GROUP
  match qos-group 4
class-map type queuing TRANSACTIONAL-GROUP
  match qos-group 2
class-map type queuing BULK-GROUP
  match qos-group 3
Step 6: Configure policy-map type queuing policy for global use. This creates appropriate system-wide qos-group attributes of bandwidth, priority, or weight, and FCoE lossless scheduling.
policy-map type queuing DC-FCOE+1P4Q_GLOBAL-GROUP-QUEUING
class type queuing PRIORITY-GROUP
priority
class type queuing CONTROL-GROUP
bandwidth percent 10
class type queuing class-fcoe
bandwidth percent 20
class type queuing TRANSACTIONAL-GROUP
bandwidth percent 25
class type queuing BULK-GROUP
bandwidth percent 20
class type queuing class-default
bandwidth percent 25
Step 7: Configure class-map type network-qos class-maps for global use. This matches traffic for queue scheduling on a system-wide basis. As with the type queuing class-maps, the type network-qos class-maps can use one of five internal groups, along with an additional system configured qos-group 0 which is automatically created for default CoS. The internal qos-group number is arbitrarily assigned and does not necessarily match an equivalent CoS value. The FCoE class-map type network-qos class-fcoe is pre-defined and will be used in the policy map for FCoE traffic to ensure correct operation.
class-map type network-qos PRIORITY-SYSTEM
  match qos-group 5
class-map type network-qos CONTROL-SYSTEM
  match qos-group 4
class-map type network-qos TRANSACTIONAL-SYSTEM
  match qos-group 2
class-map type network-qos BULK-SYSTEM
  match qos-group 3
Step 8: Configure a policy-map type network-qos policy for global use. This applies system-wide queue scheduling parameters. The required FCoE queue behavior is configured with the recommended MTU of 2158, no-drop treatment, and the default buffer size of 79,360 bytes. The remaining queues take the default queue-limit of 22,720 bytes with an MTU of 1500, with two exceptions: the BULK-SYSTEM queue is assigned additional buffer space and a jumbo MTU of 9216 to improve performance for iSCSI and large data transfer traffic; by default, the class-default queue is assigned all remaining buffer space.
The Layer 3 routing engine requires CoS bits to be set for QoS treatment on ingress to and egress from the
engine. Setting CoS ensures that traffic destined through the engine to another subnet is handled consistently,
and the network-qos policy is where the CoS marking by system qos-group is accomplished.
policy-map type network-qos DC-FCOE+1P4Q_GLOBAL-SYSTEM-NETWORK-QOS
class type network-qos PRIORITY-SYSTEM
set cos 5
class type network-qos CONTROL-SYSTEM
set cos 4
class type network-qos class-fcoe
pause no-drop
mtu 2158
class type network-qos TRANSACTIONAL-SYSTEM
set cos 2
class type network-qos BULK-SYSTEM
mtu 9216
queue-limit 128000 bytes
set cos 1
class type network-qos class-default
multicast-optimize
set cos 0
Step 9: Apply the created global policies.
system qos
  service-policy type qos input DC-FCOE+1P4Q_GLOBAL-COS-QOS
  service-policy type network-qos DC-FCOE+1P4Q_GLOBAL-SYSTEM-NETWORK-QOS
  service-policy type queuing input DC-FCOE+1P4Q_GLOBAL-GROUP-QUEUING
  service-policy type queuing output DC-FCOE+1P4Q_GLOBAL-GROUP-QUEUING
The output queuing applied with system qos defines how the bandwidth is shared among different queues
for Cisco Nexus 5500 and Cisco Nexus FEX interfaces, and also defines how the bandwidth is shared among
different queues on Cisco Nexus 5500 Layer 3 engine.
Step 10: If iSCSI is being used, additional classification and queuing can be added to map iSCSI storage traffic
into the appropriate queue for bulk data. Classification of iSCSI traffic can be matched by well-known TCP ports
through an ACL. The iSCSI class of traffic can then be added to the existing policy map to put the traffic into the
correct qos-group.
ip access-list ISCSI
10 permit tcp any eq 860 any
20 permit tcp any eq 3260 any
30 permit tcp any any eq 860
40 permit tcp any any eq 3260
!
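The class-map and policy-map below are an illustrative sketch of completing this mapping; the class name ISCSI-QUEUE is not from the original configuration, and the qos-group value assumes the BULK queue (qos-group 3) used for large data transfers earlier in this procedure.
class-map type qos match-all ISCSI-QUEUE
  match access-group name ISCSI
policy-map type qos DC-FCOE+1P4Q_INTERFACE-DSCP-QOS
  class ISCSI-QUEUE
    set qos-group 3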
Tech Tip
Use only permit actions in the ACLs for matching traffic for QoS policies on Cisco
Nexus 5500. For more details on configuring QoS policies on Cisco Nexus 5500
please refer to:
http://www.cisco.com/c/en/us/td/docs/switches/datacenter/nexus5000/sw/qos/b_
Cisco_Nexus_5000_Series_NX-OS_Quality_of_Service_Configuration_Guide.html
Step 11: On the second Cisco Nexus 5500UP Series switch (switch-B), apply the same QoS configuration as
you did in Step 1 through Step 10.
Use the show queuing interface command to display QoS queue statistics.
The Interface QoS service-policy DC-FCOE+1P4Q_INTERFACE-DSCP-QOS created in Step 4 will be assigned
later in this guide to:
Non-FEX Ethernet interfaces on Cisco Nexus 5500.
Example
interface Ethernet1/1-27
service-policy type qos input DC-FCOE+1P4Q_INTERFACE-DSCP-QOS
Ethernet port-channel interfaces on Cisco Nexus 5500. The port-channel member physical links do not
require the policy; they will inherit the service policy from the logical port-channel interface. This service
policy is not required on port-channels connected to FEX network uplinks.
Example
interface port-channel 2-3 , port-channel 5 , port-channel 9
service-policy type qos input DC-FCOE+1P4Q_INTERFACE-DSCP-QOS
FEX host port Ethernet interfaces, which are not port-channel members.
Example
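A representative FEX host port assignment, using the FEX interface numbering that appears later in this guide, is:
interface Ethernet103/1/1
  service-policy type qos input DC-FCOE+1P4Q_INTERFACE-DSCP-QOS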
Procedure 4
Before you can add port channels to the switch in virtual port channel (vPC) mode, basic vPC peering
must be established between the two Cisco Nexus 5500UP Series switches. The vPC peer link provides a
communication path between the data center core switches that allows devices that connect to each core switch
for resiliency to do so over a single Layer 2 EtherChannel.
(Figure: data center core vPC switch pair with mgmt0 interfaces and Layer 2 EtherChannel peer link)
Step 1: Define a vPC domain number on switch-A. This identifies the vPC domain to be common between the
switches in the pair.
vpc domain 10
Step 2: Define a lower role priority for switch-A, the vPC primary switch.
role priority 16000
The vPC secondary switch, switch-B, will be left at the default value of 32,667. The switch with the lower priority will be elected as the vPC primary switch. If the vPC primary switch is alive and the vPC peer link goes down, the vPC secondary switch will suspend its vPC member ports to prevent potential looping, while the vPC primary switch keeps all of its vPC member ports active. If the peer link fails, each vPC peer detects the peer switch's state through the vPC peer keepalive link.
Step 3: Configure vPC peer keepalive on Cisco Nexus 5500 switch-A.
peer-keepalive destination 10.4.63.11 source 10.4.63.10
The peer-keepalive is ideally an alternate physical path between the two Cisco Nexus 5500UP switches running vPC, to ensure that they are aware of one another's health even in the case where the main peer link fails. The peer-keepalive source IP address should be the address being used on the mgmt0 interface of the switch currently being configured. The destination address is the mgmt0 interface on the vPC peer.
Step 4: Configure the following vPC commands in the vPC domain configuration mode. This will increase
resiliency, optimize performance, and reduce disruptions in vPC operations.
delay restore 360
auto-recovery
graceful consistency-check
peer-gateway
ip arp synchronize
The auto-recovery command has a default timer of 240 seconds. This time can be extended by adding the
reload-delay variable with time in seconds. The auto-recovery feature for vPC recovery replaces the need for
the original peer-config-check-bypass feature.
Step 5: Create a port channel interface on switch-A to be used as the peer link between the two vPC switches.
The peer link is the primary link for communications and for forwarding of data traffic to the peer switch, if
required.
interface port-channel 10
switchport mode trunk
vpc peer-link
spanning-tree port type network
service-policy type qos input DC-FCOE+1P4Q_INTERFACE-DSCP-QOS
Step 6: On Cisco Nexus 5500UP switch-A, configure the physical interfaces that connect the two Cisco Nexus
5500 switches together to the port channel. A minimum of two physical interfaces is recommended for link
resiliency. The channel-group number must match the port-channel number used in the previous step. Different
10-Gigabit Ethernet ports (as required by your specific implementation) may replace the interfaces shown in the
example.
interface Ethernet1/17
description vpc peer link
switchport mode trunk
channel-group 10 mode active
no shutdown
interface Ethernet1/18
description vpc peer link
switchport mode trunk
channel-group 10 mode active
no shutdown
Step 7: Configure the corresponding vpc commands on Cisco Nexus 5500UP switch-B. Change the destination
and source IP addresses for Cisco Nexus 5500UP switch-B.
vpc domain 10
peer-keepalive destination 10.4.63.10 source 10.4.63.11
delay restore 360
auto-recovery
graceful consistency-check
peer-gateway
ip arp synchronize
!
interface port-channel 10
switchport mode trunk
vpc peer-link
spanning-tree port type network
service-policy type qos input DC-FCOE+1P4Q_INTERFACE-DSCP-QOS
!
interface Ethernet1/17
description vpc peer link
switchport mode trunk
channel-group 10 mode active
no shutdown
interface Ethernet1/18
description vpc peer link
switchport mode trunk
channel-group 10 mode active
no shutdown
Step 8: Ensure that the vPC peer relationship has formed successfully by using the show vpc command.
DC5548UPa# show vpc
Legend:
       (*) - local vPC is down, forwarding via vPC peer-link

vPC domain id                     : 10
Peer status                       : peer adjacency formed ok
vPC keep-alive status             : peer is alive
Configuration consistency status  : success
Per-vlan consistency status       : success
Type-2 consistency status         : success
vPC role                          : primary
Number of vPCs configured         : 55
Peer Gateway                      : Enabled
Dual-active excluded VLANs        : -
Graceful Consistency Check        : Enabled
Tech Tip
Do not be concerned about the "(*) - local vPC is down, forwarding via vPC peer-link" statement at the top of the command output at this time. After you have defined vPC port channels, if one of a port channel's member links is down or not yet configured, this legend explains the meaning of the asterisk shown next to that port channel in the listing.
Procedure 5
The data center core requires basic core operational configuration beyond the setup script.
Table 3 - Data center VLANs
VLAN   VLAN name       IP address      Comments
148    Servers_1       10.4.48.0/24
149    Servers_2       10.4.49.0/24
150    Servers_3       10.4.50.0/24
153    FW_Outside      10.4.53.0/25
154    FW_Inside_1     10.4.54.0/24
155    FW_Inside_2     10.4.55.0/24
156    PEERING_VLAN    10.4.56.0/30
161    VMotion         10.4.61.0/24
162    iSCSI           10.4.62.0/24
163    DC-Management   10.4.63.0/24
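A minimal sketch of defining these VLANs on both data center core switches follows; only the first two server VLANs are shown, and the remaining VLANs use the same pattern with the names from Table 3.
vlan 148
  name Servers_1
vlan 149
  name Servers_2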
Procedure 6
Rapid Per-VLAN Spanning-Tree (PVST+) provides an instance of RSTP (802.1w) per VLAN. Rapid PVST+ greatly
improves the detection of indirect failures or linkup restoration events over classic spanning tree (802.1D). Cisco
Nexus 5500UP runs Rapid PVST+ by default.
Although this architecture is built without any Layer 2 loops, it is a good practice to assign spanning-tree root
to the core switches. This design assigns spanning-tree root for a range of VLANs that may be contained in the
data center.
BPDU Guard
Spanning-tree edge ports are interfaces that are connected to hosts and can be configured as either an access port or a trunk port. An interface configured as an edge port immediately transitions to the forwarding state, without moving through the spanning-tree blocking or learning states. (This immediate transition is also known as Cisco PortFast.) BPDU Guard protects against a user plugging a switch into an access port, which could cause a catastrophic, undetected spanning-tree loop.
An interface configured as an edge port receives a BPDU only when an invalid configuration exists, such as when an unauthorized device is connected. The BPDU Guard feature prevents loops by moving a nontrunking interface into an errdisable state when a BPDU is received on an interface with PortFast enabled.
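As an illustration only (the root assignment options are covered below), assigning spanning-tree root priority for the data center VLAN range and enabling BPDU Guard globally on edge ports might look like the following sketch on a data center core switch; the VLAN range and priority value shown are assumptions.
spanning-tree vlan 148-163 priority 8192
spanning-tree port type edge bpduguard default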
(Figure: vPC domain on the data center core carrying VLAN 148 over Layer 2 EtherChannels)
If you will have a hybrid of vPC and spanning-tree-based redundant connections, as shown in the figure below, the peer-switch feature is not supported, and you should follow Option 2: Configure standard spanning tree operation, below.
(Figure: hybrid topology with one access switch connected by a vPC EtherChannel and another connected by redundant spanning-tree links carrying VLAN 148, with one spanning-tree blocked link)
If the vPC peer-link fails in a hybrid peer-switch configuration, you can lose traffic. In this scenario, the vPC peers use the same STP root ID as well as the same bridge ID. The access switch traffic is split in two, with half going to the first vPC peer and the other half to the second vPC peer. With the peer link failed, there is no impact on north-south traffic, but east-west traffic will be lost (black-holed).
This data center design configures IP routing on the Cisco Nexus 5500 core switches to allow the core to
provide Layer 2 and Layer 3 switching for the data center servers and services.
Cisco Nexus 5500UP Series requires the N55-LAN1K9 license in order to enable full EIGRP routing support of
Layer 3 switching as used in the Cisco Validated Design. For more information about licensing, see the Cisco
NX-OS Licensing Guide, here:
http://www.cisco.com/en/US/docs/switches/datacenter/sw/nx-os/licensing/guide/b_Cisco_NX-OS_Licensing_
Guide.html
Procedure 1
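The SVI and core link configurations later in this chapter reference EIGRP autonomous system 100. A minimal sketch of enabling the routing features on each data center core switch, assuming the features implied by the later steps, is:
feature eigrp
feature interface-vlan
feature hsrp
feature pim
router eigrp 100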
Procedure 2
Every VLAN that needs Layer 3 reachability between VLANs or to the rest of the network requires a Layer 3
switched virtual interface (SVI) to route packets to and from the VLAN.
Step 1: Configure the SVI.
interface Vlan [vlan number]
Step 2: Configure the IP address for the SVI interface.
ip address [ip address]/mask
Step 3: Disable IP redirect on the SVI. It is recommended that Internet Control Message Protocol (ICMP) IP
redirects in vPC environments be disabled on SVIs for correct operation.
no ip redirects
Step 4: Configure the EIGRP process number on the interface. This advertises the subnet into EIGRP.
ip router eigrp 100
Step 5: Configure passive mode EIGRP operation. To avoid unnecessary EIGRP peer processing, configure
server VLANs as passive.
ip passive-interface eigrp 100
Step 6: Configure HSRP. The Cisco Nexus 5500UP switches use HSRP to provide a resilient default gateway
in a vPC environment. For ease of use, number the HSRP group number the same as the SVI VLAN number.
Configure a priority greater than 100 for the primary HSRP peer, and leave the second switch at the default
priority of 100.
hsrp [group number]
priority [priority]
ip [ip address of hsrp default gateway]
Tech Tip
Both data center core Cisco Nexus 5500UP switches can process packets for the
assigned ip address of their SVI and for the HSRP address. In a vPC environment,
a packet to either switch destined for the default gateway (HSRP) address is
locally switched and there is no need to tune aggressive HSRP timers to improve
convergence time.
The following is an example configuration for the Cisco Nexus 5500UP switch-A.
interface Vlan148
no ip redirects
ip address 10.4.48.2/24
ip router eigrp 100
ip passive-interface eigrp 100
ip pim sparse-mode
hsrp 148
priority 110
ip 10.4.48.1
no shutdown
description Servers_1
The following is an example configuration for the peer Cisco Nexus 5500UP switch-B.
interface Vlan148
no ip redirects
ip address 10.4.48.3/24
ip router eigrp 100
ip passive-interface eigrp 100
ip pim sparse-mode
hsrp 148
ip 10.4.48.1
no shutdown
description Servers_1
Procedure 3
The CVD Foundation LAN network enables IP Multicast routing for the organization by using pim sparse-mode
operation. The configuration of IP Multicast for the rest of the network can be found in the Campus Wired LAN
Design Guide.
Step 1: Configure the data center core switches to discover the IP Multicast rendezvous point (RP) from the LAN
core. Every Layer 3 switch and router must be configured to discover the IP Multicast RP. The ip pim auto-rp
forward listen command allows for discovery of the RP across ip pim sparse-mode links.
ip pim auto-rp forward listen
Step 2: Configure an unused VLAN for IP Multicast replication synchronization between the core Cisco Nexus
5500UP switches.
vpc bind-vrf default vlan 900
Tech Tip
The VLAN used for the IP Multicast bind-vrf cannot appear anyplace else in the
configuration of the Cisco Nexus 5500UP switches. It must not be defined in the VLAN
database commands and does not get included in the VLAN allowed list for the vPC
core. It will automatically program packet replication across the vPC peer link trunk
when needed.
Step 3: Configure IP Multicast to only be replicated across the vPC peer link when there is an orphan port of a
vPC.
no ip igmp snooping mrouter vpc-peer-link
Step 4: Configure all Layer 3 interfaces for IP Multicast operation with the pim sparse-mode command.
ip pim sparse-mode
It is not necessary to configure IP Multicast on the management VLAN interface (interface vlan 163).
Procedure 4
Virtual port channel (vPC) does not support peering to another Layer 3 router across a vPC. This design uses dual-homed point-to-point Layer 3 EtherChannel interfaces between each data center core Cisco Nexus 5500UP switch and the Cisco Catalyst 6000 Series VSS core LAN switches to carry data between the data center and the rest of the network. If your design has a single resilient Cisco Catalyst 4500 with redundant supervisors and redundant line cards, you will instead connect each data center Cisco Nexus 5500UP switch to each of the redundant line cards.
(Figure: Layer 3 EtherChannels 40 and 41 between the data center core switches, 10.4.40.50/30 and 10.4.40.54/30, and the LAN core, 10.4.40.49/30 and 10.4.40.53/30)
It is recommended you have at least two physical interfaces from each switch connected to the network core,
for a total port channel of four resilient physical 10-Gigabit Ethernet links and 40 Gbps of throughput. Each data
center core switch will use an EtherChannel link configured as a point-to-point Layer 3 link with IP multicast,
EIGRP routing, and QoS.
Step 1: On data center core Cisco Nexus 5500UP switch-A, configure a point-to-point Layer 3 EtherChannel
interface.
interface port-channel40
description EtherChannel link to VSS Core Switch
no switchport
ip address 10.4.40.50/30
ip router eigrp 100
ip pim sparse-mode
service-policy type qos input DC-FCOE+1P4Q_INTERFACE-DSCP-QOS
Step 2: On data center core Cisco Nexus 5500UP switch-A, configure the physical interfaces that belong to the
EtherChannel Interface.
interface Ethernet1/19
description EtherChannel link to VSS Core Switch Te1/4/7
channel-group 40 mode active
no shutdown
interface Ethernet1/20
description EtherChannel link to VSS Core Switch Te2/4/7
channel-group 40 mode active
no shutdown
Step 3: On data center core Cisco Nexus 5500UP switch-B, configure a point-to-point Layer 3 EtherChannel
interface.
interface port-channel41
description EtherChannel link to VSS Core Switch
no switchport
ip address 10.4.40.54/30
ip router eigrp 100
ip pim sparse-mode
service-policy type qos input DC-FCOE+1P4Q_INTERFACE-DSCP-QOS
Step 4: On data center core Cisco Nexus 5500UP switch-B, configure the physical interfaces that belong to the
EtherChannel Interface.
interface Ethernet1/19
description EtherChannel link to VSS Core Switch Te1/4/8
channel-group 41 mode active
no shutdown
interface Ethernet1/20
description EtherChannel link to VSS Core Switch Te2/4/8
channel-group 41 mode active
no shutdown
Step 5: On the Cisco LAN Core 6000 Series VSS switch pair, configure a corresponding point-to-point Layer 3
EtherChannel to Data Center Switch-A.
Configure the Layer 3 EtherChannel interface.
interface Port-channel40
description EtherChannel link to Data Center Switch DC5500-A
no switchport
ip address 10.4.40.49 255.255.255.252
ip pim sparse-mode
no shutdown
Configure the physical interfaces that are members of the EtherChannel.
interface range TenGigabitEthernet1/4/7, TenGigabitEthernet2/4/7
no ip address
logging event link-status
logging event trunk-status
logging event bundle-status
carrier-delay msec 0
macro apply EgressQoSTenOrFortyGig
channel-group 40 mode active
no shutdown
Step 6: On the Cisco LAN Core 6000 Series VSS switch pair, configure a corresponding point-to-point Layer 3
EtherChannel to Data Center Switch-B.
Configure the Layer 3 EtherChannel interface.
interface Port-channel41
description EtherChannel link to Data Center Switch DC5500-B
no switchport
ip address 10.4.40.53 255.255.255.252
ip pim sparse-mode
no shutdown
At this point, you should be able to see the IP routes from the rest of the network on the core Cisco Nexus
5500UP switches.
Procedure 5
The first process of this Ethernet Infrastructure chapter covered deploying an out-of-band Ethernet management switch. In that process, you configured the switch for Layer 2 operation, with uplinks to the data center core as an option for providing Layer 3 access to the management VLAN beyond the data center. If you have selected this option to provide Layer 3 access to the out-of-band Ethernet VLAN, follow this procedure to program the uplinks and the Layer 3 SVI on the Cisco Nexus 5500UP switches.
For resiliency, the Ethernet out-of-band management switch will be dual-homed to each of the data center core
switches by using a vPC port channel.
(Figure: out-of-band Ethernet management switch connected to the data center core switches and their mgmt0 interfaces)
Step 1: Configure the Ethernet out-of-band management VLAN. You will configure the same values on each
data center core Cisco Nexus 5500UP switch.
vlan 163
name DC_Management
Step 2: Configure vPC port channel to the Ethernet management switch. You will configure the same values on
each data center core Cisco Nexus 5500UP switch.
interface port-channel21
description Link to Management Switch for VL163
switchport mode trunk
switchport trunk allowed vlan 163
speed 1000
vpc 21
service-policy type qos input DC-FCOE+1P4Q_INTERFACE-DSCP-QOS
Step 3: Configure the physical ports to belong to the port channel. You will configure the same values on each
data center core Cisco Nexus 5500UP switch.
interface Ethernet1/21
description Link to Management Switch for VL163 Routing
switchport mode trunk
switchport trunk allowed vlan 163
speed 1000
channel-group 21 mode active
Step 4: Configure an SVI interface for VLAN 163.
Configure the data center core Cisco Nexus 5500UP switch-A.
interface Vlan163
description DC-Management
no ip redirects
ip address 10.4.63.2/24
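On the data center core Cisco Nexus 5500UP switch-B, a corresponding SVI would be configured. The following is a sketch only; the 10.4.63.3/24 address is an assumption based on the .2/.3 addressing pattern used for the server VLAN SVIs.
interface Vlan163
  description DC-Management
  no ip redirects
  ip address 10.4.63.3/24
  no shutdown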
Procedure 6
If you want to provide the ability to monitor the state of critical interfaces on a Cisco Nexus 5500 data center
core switch in order to influence vPC operations and prevent potential outages, you can track interfaces and
enable an action.
As an example, in the figure below, you can track the state of the Layer 3 links to the LAN core and the vPC peer
link port channel. You can then program the switch such that if all three of these tracked interfaces on the switch
are in a down state at the same time on a data center core Nexus 5500 switch, that switch will relinquish vPC
domain control to the peer data center core Nexus 5500 switch. The signaling of vPC peer switchover requires
the vPC peer keepalive link between the Nexus 5500 switches to remain operational in order to communicate
vPC peer state.
(Figure: vPC primary and secondary data center core Nexus 5500s, with Port-Channel 10 and the e1/19 and e1/20 uplinks to the LAN core marked as failed; the data center servers and services remain reachable through the vPC secondary switch)
Step 1: Configure the interfaces to track on each data center core switch.
track 1 interface port-channel10 line-protocol
track 2 interface Ethernet1/19 line-protocol
track 3 interface Ethernet1/20 line-protocol
Step 2: Configure a track list on each data center core switch that contains all of the objects being tracked in the
previous step. Use a boolean or condition in the command to indicate that all three objects must be down for the
action to take effect.
track 10 list boolean or
  object 1
  object 2
  object 3
Step 3: Configure the vPC process on each data center core switch to monitor the track list created in the
previous step.
vpc domain 10
track 10
Cisco Fabric Extender (FEX) ports are designed to support end host connectivity. There are some design rules to
be aware of when connecting devices to Cisco FEX ports:
Cisco FEX ports do not support connectivity to LAN switches that generate spanning-tree BPDU
packets. If a Cisco FEX port receives a BPDU packet, it will shut down with an Error Disable status.
Cisco FEX ports do not support connectivity to Layer 3 routed ports where routing protocols are exchanged with the Layer 3 core; they are only for Layer 2-connected end hosts or appliances.
The Cisco Nexus 5500UP switch running Layer 3 routing supports a maximum of sixteen connected
Cisco FEX on a switch.
Cisco Fabric Extender connections are also configured as port channel connections on Cisco Nexus
5500 Series for uplink resiliency and load sharing.
If the Cisco FEX is to be single-homed to only one member of the switch pair, it is configured as a
standard port channel.
If the Cisco FEX is to be dual-homed to both members of the vPC switch pair to support single-homed
servers or for increased resiliency, it is configured as a vPC on the port channel. Every end node or
server connected to a dual-homed FEX is logically dual homed to each of the Cisco Nexus 5500 core
switches and will have a vPC automatically generated by the system for the Ethernet FEX edge port.
(Figure: single-homed FEX 102 on PoCh-102, single-homed FEX 103 on PoCh-103, and dual-homed FEX 104 on PoCh-104 with vPC-104)
Procedure 1
When assigning Cisco FEX numbering, you have the flexibility to use a numbering scheme (different from the
example) that maps to some other identifier, such as a rack number that is specific to your environment.
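For example, a sketch of defining a FEX with a description that maps to its rack location (the description string here is hypothetical) is:
fex 103
  description Rack-3-FEX103
  pinning max-links 1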
!
interface Ethernet1/14
channel-group 103
no shutdown
!
interface port-channel103
description single-homed 2248
switchport mode fex-fabric
fex associate 103
no shutdown
!
interface port-channel104
description dual-homed 2232
switchport mode fex-fabric
fex associate 104
vpc 104
no shutdown
After configuration is completed for either FEX attachment model, you can power up the FEX and verify the
status of the fabric extender modules by using the show fex command and then looking for the state of online
for each unit.
DC5548UPa# show fex
  FEX        FEX              FEX       FEX                 FEX
Number    Description        State      Model               Serial
---------------------------------------------------------------------
102       FEX0102            Online     N2K-C2248TP-1GE     SSI14140643
104       FEX0104            Online     N2K-C2232PP-10GE    SSI142602QL
Tech Tip
It may take a few minutes for the Cisco FEX to come online after it is programmed,
because the initial startup of the Cisco FEX downloads operating code from the
connected Cisco Nexus switch.
Procedure 2
When configuring Cisco Nexus FEX Ethernet ports for server or appliance connectivity, you must configure the
port on one or both of the Cisco Nexus 5500UP core switches depending on the FEX connection (single-homed
or dual-homed).
Example
interface Ethernet103/1/1
switchport access vlan 148
spanning-tree port type edge
service-policy type qos input DC-FCOE+1P4Q_INTERFACE-DSCP-QOS
Tech Tip
You must apply the Ethernet interface configuration on both data center core Cisco Nexus 5500UP switches, because a host attached to a dual-homed Cisco FEX is logically dual-homed to both switches. Failure to configure the port on both Nexus 5500 switches with a matching VLAN assignment will prevent the Ethernet interface from being activated.
Step 2: When connecting a single-homed server to a dual-homed Cisco FEX, assign physical interfaces to
support servers or devices that require a VLAN trunk interface to communicate with multiple VLANs. Most
virtualized servers will require trunk access to support management access plus user data for multiple virtual
machines. Setting the spanning-tree port type to edge allows the port to provide immediate connectivity on
the connection of a new device. Enable QoS classification for the connected server or end node as defined in
Procedure 3, Configure QoS policies.
Because the server is connected to a dual-homed FEX, this configuration must be done on both Cisco Nexus
5500UP data center core switches.
Example
interface Ethernet103/1/2
switchport mode trunk
switchport trunk allowed vlan 148-163
spanning-tree port type edge trunk
service-policy type qos input DC-FCOE+1P4Q_INTERFACE-DSCP-QOS
(Figure: dual-homed server EtherChannel PoCh-600 with vPC-600, connected to single-homed FEX 102 (PoCh-102) and FEX 103 (PoCh-103) on the Nexus 5500UP Ethernet vPC switch fabric)
When connecting a dual-homed server that is using IEEE 802.3ad EtherChannel from the server to a pair of single-homed Cisco FEX, you must configure the Cisco FEX Ethernet interfaces as a port channel and assign a vPC interface to the port channel so that the attached server can run EtherChannel across both FEX.
Example
Step 1: On Cisco Nexus 5500 switch-A.
interface ethernet102/1/1-2
switchport mode trunk
switchport trunk allowed vlan 148-163
spanning-tree port type edge trunk
channel-group 600
no shutdown
interface port-channel 600
service-policy type qos input DC-FCOE+1P4Q_INTERFACE-DSCP-QOS
vpc 600
no shutdown
Step 2: On Cisco Nexus 5500 switch-B.
interface ethernet103/1/1-2
switchport mode trunk
switchport trunk allowed vlan 148-163
spanning-tree port type edge trunk
channel-group 600
no shutdown
interface port-channel 600
service-policy type qos input DC-FCOE+1P4Q_INTERFACE-DSCP-QOS
vpc 600
no shutdown
Tech Tip
When connecting ports via vPC, Cisco NX-OS does consistency checks to make sure that the VLAN list, spanning-tree mode, and other characteristics match between the ports configured on each switch that make up a vPC. If the configuration of each port is not identical to the other, the port will not come up.
When connecting a dual-homed server that is using IEEE 802.3ad EtherChannel from the server to a pair of dual-homed Cisco FEX, you must configure the Ethernet interface on each of the Cisco FEX interfaces as a port channel but not as a vPC. The Cisco Nexus 5500 switches will automatically create a vPC to track the dual-homed port channel.
(Figure: dual-homed server EtherChannel PoCh-1002 connected to dual-homed FEX 106 (PoCh-106, vPC-106) and FEX 107 (PoCh-107, vPC-107) on the Nexus 5500UP Ethernet vPC switch fabric)
In this configuration option, you use FEX numbers 106 and 107. Both FEX would have to be configured as dual-homed to the Cisco Nexus 5500 data center core switches as defined in Option 2: Configure dual-homed FEX.
Step 1: Configure the Ethernet interfaces of the first dual-homed FEX on Cisco Nexus 5500 switch-A for a port
channel to the server.
interface ethernet106/1/3-4
channel-group 1002
no shutdown
Step 2: Configure the Ethernet interfaces of the second dual-homed FEX on Cisco Nexus 5500 switch-A for a
port channel to the server.
interface ethernet107/1/3-4
channel-group 1002
no shutdown
Step 3: Configure the port-channel for the VLANs to be supported.
interface port-channel 1002
switchport mode trunk
switchport trunk allowed vlan 148-163
spanning-tree port type edge trunk
service-policy type qos input DC-FCOE+1P4Q_INTERFACE-DSCP-QOS
Step 4: Repeat the commands on Cisco Nexus 5500 switch-B with the same settings, because the server and
the FEX are dual-homed.
interface ethernet106/1/3-4
channel-group 1002
no shutdown
!
interface ethernet107/1/3-4
channel-group 1002
no shutdown
!
interface port-channel 1002
switchport mode trunk
switchport trunk allowed vlan 148-163
spanning-tree port type edge trunk
service-policy type qos input DC-FCOE+1P4Q_INTERFACE-DSCP-QOS
Storage Infrastructure
Design Overview
IP-based Storage Options
Many storage systems provide the option for access using IP over the Ethernet network. This approach allows
a growing organization to gain the advantages of centralized storage without needing to deploy and administer
a separate Fibre Channel network. Options for IP-based storage connectivity include Internet Small Computer
System Interface (iSCSI) and network attached storage (NAS).
iSCSI is a protocol that enables servers to connect to storage over an IP connection and is a lower-cost
alternative to Fibre Channel. iSCSI services on the server must contend for CPU and bandwidth along with other
network applications, so you need to ensure that the processing requirements and performance are suitable for
a specific application. iSCSI has become a storage technology that is supported by most server, storage, and
application vendors. iSCSI provides block-level storage access to raw disk resources, similar to Fibre Channel.
NICs also can provide support to offload iSCSI to a separate processor to increase performance.
Network attached storage (NAS) is a general term used to refer to a group of common file access protocols; the most common implementations use Common Internet File System (CIFS) or Network File System (NFS). CIFS originated in the Microsoft network environment and is a common desktop file-sharing protocol. NFS is a multi-platform protocol that originated in the UNIX environment and can be used for shared hypervisor storage. Both NAS protocols provide file-level access to shared storage resources.
Most organizations will have applications for multiple storage access technologies; for example, Fibre Channel for the high-performance database and production servers, and NAS for desktop storage access.
This design prevents failures or misconfigurations in one fabric from affecting the other fabric.
Figure 12 - Dual fabric SAN with a single disk array
(Figure: Server 1 and Server 2 each connect to SAN A / Fabric A and SAN B / Fabric B; a single storage array connects to both fabrics)
Each server or host on a SAN connects to the Fibre Channel switch with a multi-mode fiber cable from a host
bus adapter (HBA). For resilient connectivity, each host connects a port to each of the fabrics.
Each port has a port worldwide name (pWWN), which is the port's address that uniquely identifies it on the network. An example of a pWWN is: 10:00:00:00:c9:87:be:1c. In data networking this is comparable to a MAC address for an Ethernet adapter.
VSANs
The virtual storage area network (VSAN) is a technology created by Cisco that is modeled after the virtual
local area network (VLAN) concept in Ethernet networks. VSANs provide the ability to create many logical SAN
fabrics on a single Cisco MDS 9100 Family switch. Each VSAN has its own set of services and address space,
which prevents an issue in one VSAN from affecting other VSANs. In the past, it was a common practice to
build physically separate fabrics for production, backup, lab, and departmental environments. VSANs allow all of
these fabrics to be created on a single physical switch with the same amount of protection provided by separate
switches.
Zoning
The terms target and initiator will be used throughout this section. Targets are disk or tape devices. Initiators are
servers or devices that initiate access to disk or tape.
Zoning provides a means of restricting visibility and connectivity among devices connected to a SAN. The use
of zones allows an administrator to control which initiators can see which targets. It is a service that is common
throughout the fabric, and any changes to a zoning configuration are disruptive to the entire connected fabric.
Initiator-based zoning allows for zoning to be port-independent by using the world wide name (WWN) of the end host. If a host's cable is moved to a different port, it will still work if the port is a member of the same VSAN.
Device Aliases
When configuring features such as zoning, quality of service (QoS), and port security on a Cisco MDS 9000
Family switch, WWNs must be specified. The WWN naming format is cumbersome, and manually typing WWNs
is error prone. Device aliases provide a user-friendly naming format for WWNs in the SAN fabric (for example:
p3-c210-1-hba0-a instead of 10:00:00:00:c9:87:be:1c).
Use a naming convention that makes initiator and target identification easy. For example, p3-c210-1-hba0-a in
this setup identifies:
Rack location: p3
Host type: c210
Host number: 1
HBA number: hba0
Port on HBA: a
Tech Tip
Specific interfaces, addresses, and device aliases are examples from the lab. Your
WWN addresses, interfaces, and device aliases will likely be different.
Deployment Details
Deployment examples documented in this section include:
Configuration of a Cisco Nexus 5500UP-based SAN network to support Fibre Channel-based storage.
Configuration of a Cisco MDS SAN switch for higher-density Fibre Channel environments.
FCoE access to storage from Cisco UCS C-Series servers using Cisco Nexus 5500.
Complete each of the following procedures to configure the Fibre Channel SAN on the data center core Cisco
Nexus 5500UP switches.
Cisco Nexus 5500UP Series requires a Nexus Storage license to support FC or FCoE switching. For more
information about licensing, see the Cisco NX-OS Licensing Guide on:
http://www.cisco.com/en/US/docs/switches/datacenter/sw/nx-os/licensing/guide/b_Cisco_NX-OS_Licensing_
Guide.html
Procedure 1
The Cisco Nexus 5500UP switch has universal ports that are capable of running Ethernet+FCoE or Fibre Channel
on a per-port basis. By default, all switch ports are enabled for Ethernet operation. Fibre Channel ports must
be enabled in a contiguous range and be the high numbered ports of the switch baseboard and/or the high
numbered ports of a universal port expansion module.
Reader Tip
The first part of this procedure was outlined in Procedure 2 in the Configuring the
Data Center Core process in the Ethernet Infrastructure chapter of this design guide.
If you have already configured ports for Fibre Channel operation, you can skip through
parts of this procedure.
(Figure: Cisco Nexus 5548UP port allocation: Slot 1 (Baseboard) Ethernet ports with high-numbered FC ports, and Slot 2 GEM FC ports)
In this design, you enable ports 28 through 32 on the Cisco Nexus 5548UP switch as Fibre Channel ports.
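The port type configuration is the same as shown in Step 9 of that earlier procedure:
slot 1
  port 28-32 type fc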
Tech Tip
Changing port type to fc requires a reboot in Cisco Nexus 5500UP version 5.2(1)N1(1) software to recognize the new port operation. This is subject to change in later releases of software. Ports will not show up in the configuration as fc ports if you did not previously enable the FCoE feature.
Step 2: If you are changing the port type at this time, save your configuration and reboot the switch so that the
switch recognizes the new fc port type operation. If you have already done this, there is no need to reboot.
Step 3: If you have not done so, enable FCoE operation, which enables both native Fibre Channel and FCoE operation.
feature fcoe
Step 4: Enable SAN port-channel trunking operation and Fibre Channel N-Port ID Virtualization for connecting to
Cisco UCS fabric interconnects.
feature npiv
feature fport-channel-trunk
Reader Tip
More detail for connecting to a Cisco UCS B-Series fabric interconnect for Fibre
Channel operation can be found in the Unified Computing System Technology Design
Guide.
Procedure 2
Configure VSANs
Cisco Data Center Network Manager (DCNM) for SAN Essentials Edition is a no-cost application to configure and
manage Cisco MDS and Cisco Nexus SAN switches, available for download from www.cisco.com. DCNM for
SAN Essentials includes Cisco MDS Device Manager and Cisco SAN Fabric Manager. Managing more than one
switch at the same time requires a licensed version.
To manage a switch with Cisco DCNM Device Manager, connect to the switch's management IP address. The CLI can also be used to configure Fibre Channel operation.
Java Runtime Environment (JRE) is required to run Cisco DCNM Fabric Manager and Device Manager, and should be installed on your desktop before accessing either application.
By default, all ports are assigned to VSAN 1 at initialization of the switch. It is a best practice to create a separate
VSAN for production and to leave VSAN 1 for unused ports. By not using VSAN 1, you can avoid future problems
with merging of VSANs when combining other existing switches that may be set to VSAN 1.
Fibre Channel operates in a SAN-A and SAN-B approach, where you create two separate SAN fabrics. Fibre
Channel hosts and targets connect to both fabrics for redundancy. The SAN fabrics operate in parallel.
The example below describes creating two VSANs, one on each data center core Cisco Nexus 5500UP switch.
You can use the CLI or Device Manager to create a VSAN.
Step 1: Install Cisco DCNM for SAN Essentials.
Step 2: Using DCNM Device Manager, connect to Cisco Nexus data center core switch-A IP address
(10.4.63.10).
Step 3: Using Device Manager, click FC> VSANS, and then, in the Create VSAN General window, click Create.
Step 4: In the VSAN id list, choose 4, and in the name box, type General-Storage.
Step 5: Next to the Interface Members box, click the ellipsis (...) button.
Step 6: Select the interface members by clicking the port numbers you want.
Step 7: Click Create. The VSAN is created. You can add additional VSAN members in the Membership tab of the
main VSAN window.
The preceding steps apply this configuration in CLI.
vsan database
vsan 4 name General-Storage
vsan 4 interface fc1/31
Step 8: Repeat the steps in this procedure to create a VSAN 5 on Cisco Nexus 5500UP switch-B. Use the same
VSAN name.
Procedure 3
By default, the ports are configured for port mode Auto, and this setting should not be changed for most devices
that are connected to the fabric. However, you will have to assign a VSAN to the port.
Step 1: If you want to change the port mode by using Device Manager, right-click the port you want to configure,
and then click Configure.
You can see in the preceding figure that the PortVSAN assignment is listed in the top left of the General tab.
Step 2: Next to Status Admin, select up. This enables the port.
Step 3: In the PortVSAN list, choose 4 or 5, depending on which switch you are working on, and then click
Apply. This changes the VSAN and activates the ports.
The preceding steps apply this configuration in CLI.
vsan database
vsan 4 interface fc1/28
This step assigns ports to a VSAN similar to Step 5 in the previous procedure, Configure VSANs. If you have
already created VSANs, you can use this as another way to assign a port to a VSAN.
Step 4: Connect Fibre Channel devices to ports.
Reader Tip
For more information about preparing Cisco UCS B-Series and C-Series servers
for connecting to the Fibre Channel network see the Unified Computing System
Technology Design Guide.
Step 5: Display the fabric login (FLOGI) database by entering the show flogi database command on the switch CLI.
Tech Tip
When the initiator or target is plugged in or starts up, it automatically logs into the
fabric. Upon login, the initiator or target WWN is made known to the fabric. Until you
have storage arrays or servers with active HBAs plugged into the switch on Fibre
Channel ports, you will not see entries in the FLOGI database.
Example
Procedure 4
Device aliases map the long WWNs for easier zoning and identification of initiators and targets. An incorrect
device name may cause unexpected results. Device aliases can be used for zoning, port-security, QoS, and
show commands.
Tech Tip
Until you have storage arrays or servers with active HBAs plugged into the switch on
Fibre Channel ports, you will not see entries in the FLOGI database to use for device
alias configuration.
Step 3: In the Alias box, enter a name, and in the WWN box, paste in or type the WWN of the host, and then
click Create.
Step 4: After you have created your device aliases, click CFS > Commit. The changes are written to the database.
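The device aliases can also be defined from the CLI; a minimal sketch, using the example alias and pWWN shown in the Device Aliases overview earlier in this chapter, is:
device-alias database
  device-alias name p3-c210-1-hba0-a pwwn 10:00:00:00:c9:87:be:1c
device-alias commit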
Procedure 5
Configure zoning
Tech Tip
Until you have storage arrays or servers with active HBAs plugged into the switch on
Fibre Channel ports, you will not see entries in the FLOGI database to use for zone
configuration.
Tech Tip
A zoneset is a collection of zones. Zones are members of a zoneset. After you add all
the zones as members, you must activate the zoneset. There can only be one active
zoneset per VSAN.
(Figure: Server 1 and Server 2 connected to SAN A / Fabric A and SAN B / Fabric B, with the storage array connected to both fabrics)
Step 4: From the DCNM-SAN menu, choose Zone, and then click Edit Local Full Zone Database.
Step 5: In the Zone Database window, in the left pane, right-click Zones, and then click Insert. This creates a
new zone.
Step 6: In the Zone Name box, enter the name of the new zone, and then click OK.
Step 7: Select the new zone, and then, from the bottom of the right-hand side of the database window, choose
initiator or targets you want to add to the zone, and then click Add to Zone.
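The equivalent zoning can also be created in the CLI; the following sketch is for SAN A (VSAN 4) and uses the example host alias from the Device Aliases section, while the array alias and the zone and zoneset names are hypothetical.
zone name p3-c210-1-hba0-a_array vsan 4
  member device-alias p3-c210-1-hba0-a
  member device-alias array-controller-a
zoneset name SAN-A vsan 4
  member p3-c210-1-hba0-a_array
zoneset activate name SAN-A vsan 4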
Step 12: Configure SAN B the same way by using the procedures in this process to create VSAN 5 on data
center core Cisco Nexus 5500UP switch-B.
Procedure 6
If your Fibre Channel SAN environment requires a higher density of Fibre Channel port connectivity, you may
choose to use Cisco MDS 9100 series SAN switches.
(Figure: Cisco MDS 9100 Series storage fabrics for SAN A / Fabric A and SAN B / Fabric B connected to the Cisco Nexus 5500UP Series data center core through expansion Fibre Channel ports)
The following procedures describe how to deploy a Cisco MDS 9124 or 9148 SAN switch to connect to the data
center core Cisco Nexus 5500UP switches.
Procedure 1
Press Enter at anytime to skip a dialog. Use ctrl-c at anytime to skip the
remaining dialogs.
Would you like to enter the basic configuration dialog (yes/no): y
Create another login account (yes/no) [n]: n
Configure read-only SNMP community string (yes/no) [n]: n
Configure read-write SNMP community string (yes/no) [n]: n
Enter the switch name : MDS9124a
Continue with Out-of-band (mgmt0) management configuration? (yes/no) [y]: y
Mgmt0 IPv4 address : 10.4.63.12
Mgmt0 IPv4 netmask : 255.255.255.0
Configure the default gateway? (yes/no) [y]: y
IPv4 address of the default gateway : 10.4.63.1
Configure advanced IP options? (yes/no) [n]: n
Enable the ssh service? (yes/no) [y]: y
Type of ssh key you would like to generate (dsa/rsa) [rsa]: rsa
Number of rsa key bits <768-2048> [1024]: 2048
Enable the telnet service? (yes/no) [n]: n
Configure congestion/no_credit drop for fc interfaces? (yes/no) [q/quit] to quit [n]: n
Enable the http-server? (yes/no) [y]: y
Configure clock? (yes/no) [n]: n
Configure timezone? (yes/no) [n]: n
Configure summertime? (yes/no) [n]: n
Configure the ntp server? (yes/no) [n]: y
NTP server IPv4 address : 10.4.48.17
Configure default switchport interface state (shut/noshut) [shut]: noshut
Configure default switchport trunk mode (on/off/auto) [on]: on
Configure default switchport port mode F (yes/no) [n]: n
Configure default zone policy (permit/deny) [deny]: deny
Enable full zoneset distribution? (yes/no) [n]: y
Configure default zone mode (basic/enhanced) [basic]: basic
The following configuration will be applied:
password strength-check
switchname MDS9124a
interface mgmt0
ip address 10.4.63.12 255.255.255.0
no shutdown
ip default-gateway 10.4.63.1
ssh key rsa 2048 force
feature ssh
no feature telnet
feature http-server
ntp server 10.4.48.17
no system default switchport shutdown
system default switchport trunk mode on
no system default zone default-zone permit
system default zone distribute full
Tech Tip
NTP is critical to troubleshooting and should not be overlooked.
Step 2: Run the setup script for the second Cisco MDS 9100 switch (switch-B) using a unique switch name and
10.4.63.13 for the Mgmt0 IPv4 address.
Step 3: If you want to reduce operational tasks per device, configure centralized user authentication by using the
TACACS+ protocol to authenticate management logins on the infrastructure devices to the AAA server.
As networks scale in the number of devices to maintain, the operational burden to maintain local user accounts
on every device also scales. A centralized AAA service reduces operational tasks per device and provides an
audit log of user access for security compliance and root-cause analysis. When AAA is enabled for access
control, all management access to the network infrastructure devices (SSH and HTTPS) is controlled by AAA.
TACACS+ is the primary protocol used to authenticate management logins on the infrastructure devices to the
AAA server. A local AAA user database is also defined in the setup script on each MDS 9100 switch to provide a
fallback authentication source in case the centralized TACACS+ server is unavailable.
feature tacacs+
tacacs-server host 10.4.48.15 key [key]
aaa group server tacacs+ tacacs
server 10.4.48.15
aaa authentication login default group tacacs
Step 4: Set the SNMP strings in order to enable managing Cisco MDS switches with Device Manager. Set both
the read-only (network-operator) and read/write (network-admin) SNMP strings:
snmp-server community [SNMP RO string] group network-operator
snmp-server community [SNMP RW string] group network-admin
Step 5: Configure the clock. In the setup mode, you configured the NTP server address. In this step, configuring
the clock enables the clock to use the NTP time for reference and makes the switch output match the local time
zone.
clock timezone PST -8 0
clock summer-time PDT 2 Sunday march 02:00 1 Sunday nov 02:00 60
Procedure 2
Configure VSANs
To configure the Cisco MDS switches to expand the Fibre Channel SAN that you built on the Cisco Nexus
5500UP switches, use the same VSAN numbers for SAN A and SAN B, respectively. The CLI and GUI tools work
the same way for Cisco MDS as they do with Cisco Nexus 5500UP.
Step 1: In Device Manager, log in to the first Cisco MDS SAN switch, and then click FC> VSANS.
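The equivalent CLI mirrors the VSAN configuration applied on the Cisco Nexus 5500UP switches; a minimal sketch for the SAN-A MDS switch is shown below (interface membership is omitted because it depends on your port assignments).
vsan database
  vsan 4 name General-Storage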
Procedure 3
Connect the Cisco MDS switch to the existing Cisco Nexus 5500UP core Fibre Channel SAN.
Step 1: In Device Manager, navigate to the Cisco MDS switch.
Step 2: In the Device Manager screen, click Interfaces> Port Channels, and then click Create. Next, you
configure the trunk ports on Cisco MDS.
Step 3: Choose the port channel Id number, select trunk, select Mode E, and then select Force.
Step 4: In the Allowed VSANs box, enter 1,4. For the Cisco MDS switch for SAN Fabric B, the VSANs to enter
would be 1 and 5.
Step 5: To the right of the Interface Members box, click the ellipsis (...) button, and then select the interface members that will belong to this port channel.
interface fc1/14
switchport mode E
channel-group 1 force
switchport rate-mode dedicated
no shutdown
The preceding steps apply this Cisco MDS 9100 configuration to the MDS SAN-B switch.
interface port-channel 1
switchport mode E
switchport trunk allowed vsan 1
switchport trunk allowed vsan add 5
switchport rate-mode dedicated
interface fc1/13
switchport mode E
channel-group 1 force
switchport rate-mode dedicated
no shutdown
interface fc1/14
switchport mode E
channel-group 1 force
switchport rate-mode dedicated
no shutdown
Step 8: On the Cisco Nexus 5500UP switches, create the corresponding SAN port channel that connects to the Cisco MDS switch by following the preceding steps in this procedure.
The resulting Cisco Nexus 5500UP CLI for this SAN port channel is the following for the SAN-A switch.
interface san-port-channel 31
switchport trunk allowed vsan 1
switchport trunk allowed vsan add 4
interface fc1/31
switchport description Link to MDS9124a port fc-1/13
switchport mode E
channel-group 31 force
no shutdown
interface fc1/32
switchport description Link to MDS9124a port fc1/14
switchport mode E
channel-group 31 force
no shutdown
The resulting Cisco Nexus 5500UP CLI for this SAN port channel is the following for the SAN-B switch.
interface san-port-channel 31
switchport trunk allowed vsan 1
switchport trunk allowed vsan add 5
interface fc1/31
switchport description Link to MDS9124b port fc-1/13
switchport mode E
channel-group 31 force
no shutdown
interface fc1/32
switchport description Link to MDS9124b port fc1/14
switchport mode E
channel-group 31 force
no shutdown
Step 9: Distribute the zone database created on the Cisco Nexus 5500UP switch to the new Cisco MDS 9100
switch.
Configure the Cisco Nexus 5500UP CLI for SAN-A to distribute the zone database to the new Cisco MDS 9100
switch.
zoneset distribute full vsan 4
Configure the Cisco Nexus 5500UP CLI for SAN-B to distribute the zone database to the new Cisco MDS 9100
switch.
zoneset distribute full vsan 5
PROCESS
Cisco UCS C-Series rack-mount servers ship with onboard 10/100/1000 Ethernet adapters and Cisco
Integrated Management Controller (CIMC), which uses a 10/100 or 10/100/1000 Ethernet port. To get the most
out of the rack servers and minimize cabling in the Unified Computing architecture, the Cisco UCS C-Series
rack-mount server is connected to a unified fabric. The Cisco Nexus 5500UP Series switch that connects the
Cisco UCS 5100 Series Blade Server Chassis to the network can also be used to extend Fibre Channel traffic
over 10-Gigabit Ethernet. The Cisco Nexus 5500UP Series switch consolidates I/O onto one set of 10-Gigabit
Ethernet cables, eliminating redundant adapters, cables, and ports. A single converged network adapter (CNA)
card and set of cables connects servers to the Ethernet and Fibre Channel networks by using FCoE. FCoE and
CNA also allow the use of a single cabling infrastructure within server racks.
In the data center design, the Cisco UCS C-Series rack-mount server is configured with a dual-port CNA.
Cabling the Cisco UCS C-Series server with a CNA limits the cables to three: one for each port on the CNA and
one for the CIMC connection.
Tech Tip
A server connecting to Cisco Nexus 5500UP that is running FCoE consumes a Fibre
Channel port license. If you are connecting the FCoE attached servers to a Cisco FEX
model 2232PP, only the 5500UP ports connected to the Cisco FEX require a Fibre
Channel port license for each port connecting to the Cisco FEX. This way, you could
connect up to 32 FCoE servers to a Cisco FEX 2232PP and only use Fibre Channel
port licenses for the Cisco FEX uplinks.
A standard server without a CNA could have a few Ethernet connections or multiple Ethernet and Fibre Channel
connections. The following figure shows a topology with mixed unified fabric, standard Ethernet and Fibre
Channel connections, and optional Cisco MDS 9100 Series for Fibre Channel expansion.
Figure 13 - CVD data center design
The Cisco UCS C-Series server is connected to both Cisco Nexus 5500UP Series switches from the CNA with
twinax or fiber optic cabling. The Cisco UCS server running FCoE can also attach to a single-homed Cisco FEX
model 2232PP.
Tech Tip
At this time, FCoE-connected hosts can only connect over 10-Gigabit Ethernet and
must use a fiber optic or twinax connection.
The recommended approach is to connect the CIMC management port(s) to an Ethernet port on the out-of-band
management switch. Alternatively, you can connect the CIMC management port(s) to a Cisco Nexus 2248 fabric
extender port in the management VLAN (163).
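As a sketch of the recommended CIMC connection, the out-of-band management switch port is configured as an access port in the management VLAN. The interface number and description below are assumptions for illustration only; the exact port depends on your management switch.
interface GigabitEthernet1/0/15
description UCS C-Series server CIMC
switchport access vlan 163
switchport mode access
switchport host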
Procedure 1
Configuration is the same across both of the Cisco Nexus 5500UP Series switches with the exception of the
VSAN configured for SAN fabric A and for SAN fabric B.
The Cisco Nexus 5500UP does not preconfigure QoS for FCoE traffic.
Step 1: Ensure that the Cisco Nexus 5500UP data center core switches have been programmed with a QoS
policy to support lossless FCoE transport. The QoS policy for the data center core Nexus 5500UP switches was
defined in Procedure 3 Configure QoS policies.
Tech Tip
You must have a QoS policy on the Cisco Nexus 5500UP switches that classifies FCoE
for lossless operation.
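For reference, a minimal sketch of FCoE-capable QoS on a Cisco Nexus 5500UP applies the NX-OS built-in FCoE policies under system qos. This deployment instead uses the custom policy names defined in the earlier QoS procedure, so treat the default policy names below as illustrative only.
! Apply the NX-OS default FCoE policies system-wide (illustrative; this design uses its own policy names)
system qos
service-policy type qos input fcoe-default-in-policy
service-policy type queuing input fcoe-default-in-policy
service-policy type queuing output fcoe-default-out-policy
service-policy type network-qos fcoe-default-nq-policy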
Procedure 2
On the Cisco Nexus 5500UP switches, configure the Ethernet ports connected to the CNA on the dual-homed
host.
Step 1: Create a VLAN that will carry FCoE traffic to the host.
In the following, VLAN 304 is mapped to VSAN 4. VLAN 304 carries all VSAN 4 traffic to the CNA over
the trunk for Cisco Nexus 5500UP switch-A.
vlan 304
fcoe vsan 4
exit
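The VLAN-to-VSAN mapping is typically completed by binding a virtual Fibre Channel (vFC) interface to the server-facing port and adding it to the VSAN. The following is a sketch only; the vfc number is arbitrary, and the FEX port Ethernet103/1/3 is assumed to match the FCoE server example used later in this guide.
! Bind a vFC interface to the server-facing FEX port and place it in VSAN 4 (example values)
interface vfc103
bind interface Ethernet103/1/3
switchport trunk allowed vsan 4
no shutdown
!
vsan database
vsan 4 interface vfc103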
Tech Tip
The Cisco UCS C-Series server using the Cisco P81E CNA must have the FCoE
VSANs configured for virtual host bus adapter (vHBA) operation to connect to the Fibre
Channel fabric. For more information on configuring the C-Series server for FCoE
connectivity, please see the Unified Computing System Technology Design Guide.
Procedure 3
Step 1: On the Cisco Nexus 5500UP switches, use the show interface command to verify the status of the
virtual Fibre Channel interface. The interface should now be up if the host is properly configured to support the
CNA.
Reader Tip
Host configuration is beyond the scope of this guide. Please see CNA documentation
for specific host drivers and configurations.
Tech Tip
Much of the configuration of the Cisco Nexus 5500UP Series switch can also be done
from within Device Manager; however, Device Manager for SAN Essentials cannot
be used to configure VLANs or Ethernet trunks on the Cisco Nexus 5500UP Series
switches.
Compute Connectivity
Design Overview
Server virtualization offers the capability to run multiple application servers on a common hardware platform,
allowing an organization to focus on maximizing the application capability of the data center while minimizing
costs. Increased capability and reduced costs are realized through multiple aspects:
Multiple applications can be combined in a single hardware chassis, reducing the number of boxes that
must be accommodated in data-center space
Simplified cable management, due to fewer required cable runs and greater flexibility in allocating
network connectivity to hosts on an as-needed basis
Improved resiliency and application portability as hypervisors allow workload resiliency and load-sharing
across multiple platforms, even in geographically dispersed locations
Applications deployed on standardized hardware platforms, which reduces the number of platform-management consoles and minimizes hardware spare-stock challenges
Minimized box count reduces power and cooling requirements, because there are fewer lightly loaded
boxes idling away expensive wattage
The ability to virtualize server platforms to handle multiple operating systems and applications with hypervisor
technologies building virtual machines allows the organization to lower capital and operating costs by collapsing
more applications onto fewer physical servers. The hypervisor technology also provides the ability to cluster
many virtual machines into a domain where workloads can be orchestrated to move around the data center
to provide resiliency and load balancing, and to allow new applications to be deployed in hours versus days or
weeks.
The ability to move virtual machines or application loads from one server to the next, whether the server is a
blade server in a chassis-based system or a standalone rack-mount server, requires the network to be flexible
and scalable, allowing any VLAN to appear anywhere in the data center. Cisco Virtual Port Channel (vPC) and
Fabric Extender (FEX) technologies are used extensively in this data center design to provide flexible Ethernet
connectivity to VLANs distributed across the data center in a scalable and resilient manner.
Streamlining the management of server hardware and its interaction with networking and storage equipment is
another important component of using this investment in an efficient manner. Cisco offers a simplified reference
model for managing a small server room as it grows into a full-fledged data center. This model benefits from the
ease of use offered by Cisco UCS. Cisco UCS provides a single graphical management tool for the provisioning
and management of servers, network interfaces, storage interfaces, and the network components directly
attached to them. Cisco UCS treats all of these components as a cohesive system, which simplifies these
complex interactions and allows an organization to deploy the same efficient technologies as larger enterprises
do, without a dramatic learning curve.
The primary computing platforms targeted for the CVD Unified Computing reference architecture are Cisco
UCS B-Series Blade Servers and Cisco UCS C-Series Rack-Mount Servers. The Cisco UCS Manager graphical
interface provides ease of use that is consistent with the goals of CVD. When deployed in conjunction with the
CVD data center network foundation, the environment provides the flexibility to support the concurrent use
of the Cisco UCS B-Series Blade Servers, Cisco UCS C-Series Rack-Mount Servers, and third-party servers
connected to 1- and 10-Gigabit Ethernet connections.
The following sections describe features that enhance connectivity options in the data center.
The important point to remember about vPC orphan ports is that if the vPC peer link is lost and the secondary
vPC shuts down vPC ports, it will not shut down vPC orphan ports unless programmed to do so with the vpc
orphan-port suspend command on the switch interface.
Example
interface Ethernet103/1/2
description to_teamed_adapter
switchport mode access
switchport access vlan 50
vpc orphan-port suspend
interface Ethernet104/1/2
description to_teamed_adapter
switchport mode access
switchport access vlan 50
vpc orphan-port suspend
Reader Tip
The fundamental concepts of vPC are described in detail in the whitepaper titled
Cisco NX-OS Software Virtual PortChannel: Fundamental Concepts, located here:
http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9670/
C07-572835-00_NX-OS_vPC_DG.pdf
The complete vPC domain programming for the Cisco Nexus 5500UP switches is detailed in Procedure 4,
Configure virtual port channel, earlier in this guide.
The dual-homed (active/active) Cisco FEX uses vPC to provide resilient connectivity to both data center core
switches for single attached host servers. Each host is considered to be vPC connected through the associated
connectivity to a vPC dual-homed Cisco FEX. The Cisco FEX-to-core connectivity ranges from 4 to 8 uplinks,
depending on the Cisco FEX type in use, and the Cisco FEX uplinks can be configured as a port channel as well.
The host connected to a pair of single-homed Cisco FEXs can be configured for port channel operation to
provide resilient connectivity to both data center core switches through the connection to each Cisco FEX. The
Cisco FEX-to-core connectivity ranges from 4 to 8 uplinks, depending on the Cisco FEX type in use, and the
Cisco FEX uplinks are typically configured as a port channel as well.
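A condensed sketch of a dual-homed Cisco FEX attachment is shown below; it is applied identically on both data center core switches. The FEX number and uplink ports are assumptions chosen to avoid the single-homed FEX numbers used elsewhere in this guide, and the complete programming is covered in the Configure Fabric Extender Connectivity chapter.
! Dual-homed FEX example (same commands on both Nexus 5500UP core switches)
fex 106
description Dual-homed-FEX-106
!
interface Ethernet1/25-26
description FEX106 uplinks
switchport mode fex-fabric
fex associate 106
channel-group 106
!
interface port-channel106
switchport mode fex-fabric
fex associate 106
vpc 106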
Tech Tip
Devices such as LAN switches that generate spanning-tree bridge protocol data units
(BPDUs) should not be connected to Cisco FEXs. The Cisco FEX is designed for host
connectivity and will error disable a port that receives a BPDU packet.
The complete Cisco FEX connectivity programming to the Cisco Nexus 5500UP data center core switches and
Ethernet port configuration for server connection is detailed in the Configure Fabric Extender Connectivity
chapter, earlier in this guide.
Detailed configuration for Cisco UCS B-Series design can be found in the Unified Computing System Technology
Design Guide.
Connections for Fast Ethernet or 1-Gigabit Ethernet can also use the Cisco Nexus 2248TP Fabric Extender.
Figure 18 - Example Cisco UCS C-Series FEX Connections
The Cisco UCS C-Series server connectivity options to the Cisco FEX shown in Figure 18 all make use of vPC
connections by using IEEE 802.3ad EtherChannel from the host to single-homed Cisco Nexus 2232PP FEXs.
When using vPC for server connections, each server interface must be identically configured on each data
center core Cisco Nexus 5500UP switch. The Cisco FEX-to-data center core uplinks use a port channel to load
balance server connections over multiple links and provide added resiliency.
The Cisco UCS C-Series Server with 10-Gigabit Ethernet and FCoE connectivity uses a converged network
adapter (CNA) in the server and must connect to either a Cisco Nexus 2232PP FEX or directly to the Cisco
Nexus 5500UP switch. This is because FCoE uplinks must use a fiber optic or twinax connection to maintain bit
error rate (BER) thresholds for Fibre Channel transport. Cisco supports FCoE on 10-Gigabit Ethernet only at this
time. If used with vPC, the Ethernet traffic is load balanced across the server links with EtherChannel and Fibre
Channel runs up each link to the core, with SAN-A traffic on one link to the connected Cisco FEX and data center
core switch, and SAN-B traffic on the other link to the connected Cisco FEX and data center core switch, as is
typical of Fibre Channel SAN traffic.
Example
This example shows the configuration of the FEX interface on Cisco Nexus 5500UP switch-A.
interface Ethernet 103/1/3
description Dual-homed server FCoE link to SAN-A VSAN 304
switchport mode trunk
switchport trunk allowed vlan 148-163,304
spanning-tree port type edge trunk
no shut
This example shows the configuration of the FEX interface on Cisco Nexus 5500UP switch-B.
interface Ethernet 104/1/3
description Dual-homed server FCoE link to SAN-B VSAN 305
switchport mode trunk
switchport trunk allowed vlan 148-163,305
spanning-tree port type edge trunk
no shut
The Cisco UCS C-Series Server with 10-Gigabit Ethernet without FCoE can connect to a Cisco Nexus 2232 FEX
or directly to the Cisco Nexus 5500UP switch. These server connections can be fiber optic, copper, or twinax,
depending on the Cisco FEX and server combination used. If used with vPC, the Ethernet traffic is load balanced
across the server links with EtherChannel.
The Cisco UCS C-Series Server with multiple 1-Gigabit Ethernet uses vPC to load balance traffic over multiple
links using EtherChannel. The use of vPC is not a requirement. In a non-vPC server connection where you want
independent server interfaces, you may prefer connecting to a dual-homed Cisco FEX for resiliency unless the
server operating system provides resilient connectivity.
Configuration for the Cisco Nexus FEX to Cisco Nexus 5500UP switch connections is detailed in the Configure
Fabric Extender Connectivity chapter earlier in this guide. Detailed configuration for Cisco UCS C-Series
deployment can be found in the Unified Computing System Technology Design Guide.
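As a sketch of the vPC-attached server case, the same server port channel is defined on each core switch against its own single-homed FEX. The FEX, port, channel, and VLAN numbers below are assumptions for illustration.
! Cisco Nexus 5500UP switch-A (switch-B mirrors this configuration on Ethernet104/1/5)
interface Ethernet103/1/5
description 1-Gigabit server NIC using 802.3ad teaming
switchport mode access
switchport access vlan 148
spanning-tree port type edge
channel-group 600 mode active
!
interface port-channel600
switchport mode access
switchport access vlan 148
spanning-tree port type edge
vpc 600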
The vPC connection from the Cisco Nexus 2248TP FEX provides both control plane and data plane redundancy
for servers connected to the same Cisco FEX. This topology provides resiliency for the attached servers in the
event of a fabric uplink or Cisco Nexus 5500UP core switch failure; however, there is no resiliency in the event
of a Cisco Nexus 2248TP failure. All servers connected to the vPC dual-homed Cisco FEX are vPC connections
and must be configured on each data center core Cisco Nexus 5500UP switch. Although this approach does
provide added resiliency, single-homed servers hosting important applications should be migrated to dual-homed
connectivity to provide sufficient resiliency.
The vPC connection from the Cisco Nexus 2248TP FEX provides both control plane and data plane redundancy
for servers connected to each Cisco FEX. This topology provides resiliency for the attached servers in the event
of a fabric uplink or Cisco Nexus 5500UP core switch failure. In the event of a Cisco FEX failure, the NIC teaming
switches to the standby interfaces.
With Enhanced vPC, the dual-homed FEX uplinks are programmed with a port channel and vPC that connect
the FEX to both data center core switches, and the FEX Ethernet interfaces connected to the server are
programmed with a separate port channel for the server port channel. The Cisco Nexus 5500 switches then
automatically create a vPC to enable the server port channel that is connected to the dual-homed FEX pair. The
result is a more resilient and simplified FEX design in the data center that can support single- and dual-homed
servers with or without EtherChannel from the server.
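A brief sketch of an Enhanced vPC server port channel follows. Because the FEX pair is dual-homed, the same commands are entered on both core switches, and the vPC for the server port channel is created automatically. The FEX, interface, VLAN, and channel numbers are assumptions.
! Server EtherChannel across a dual-homed FEX pair (same commands on both core switches)
interface Ethernet106/1/10, Ethernet107/1/10
switchport mode access
switchport access vlan 148
spanning-tree port type edge
channel-group 1001
!
interface port-channel1001
switchport mode access
switchport access vlan 148
spanning-tree port type edge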
Enhanced vPC also supports a dual-homed server with EtherChannel running FCoE. However, this may not
be suitable for a high-bandwidth FCoE environment, because the FCoE traffic can only use a subset of the
FEX uplinks to the data center core as shown in Figure 22. The FCoE traffic can only use the FEX-to-Cisco
Nexus 5500 uplinks on the left side or right side, respectively, because SAN traffic must maintain SAN-A and
SAN-B isolation and therefore cannot connect to both data center core switches. Non-FCoE Ethernet traffic (for
example, IP connectivity) from the dual-homed FEX can utilize all FEX-to-data center core uplinks, maximizing
traffic load balancing and bandwidth.
Figure 22 - Enhanced vPC with FCoE traffic
A second option for connecting a non-Cisco blade server system to the data center involves a blade server
system that has an integrated Ethernet switch. In this scenario, the integrated switch in the blade server chassis
generates spanning-tree BPDUs and therefore cannot be connected to fabric extenders. Another consideration
is that a blade server with an integrated switch generally uses a few high-speed 10-Gigabit Ethernet uplinks
where direct connection to the Cisco Nexus 5500UP switch core, as shown in Figure 24, is recommended.
Figure 24 - Third-party blade server system with integrated switch
A third option is embedding Cisco Nexus fabric extenders directly into the non-Cisco blade server system
to connect to the data center core, as shown in Figure 25. Although this option has not been tested and
documented in this data center design guide, it has proven to be a desirable connectivity option for many
organizations.
Figure 25 - Non-Cisco blade server system with embedded Cisco Nexus fabric extenders
Summary
The compute connectivity options outlined in this chapter show how the CVD data center foundation design
integrates with Cisco UCS to build flexible and scalable compute connectivity. The data center architecture also
provides support for resilient, non-Cisco server and blade system connectivity. For further detail on deploying
Cisco UCS Server systems, please refer to the Unified Computing System Design Guide.
Network Security
Design Overview
To minimize the impact of unwanted network intrusions, firewalls and intrusion prevention systems (IPSs) should
be deployed between clients and centralized data resources.
Figure 26 - Deploy firewall inline to protect data resources
Because everything else outside the protected VLANs hosting the data center resources can be a threat, the
security policy associated with protecting those resources has to include the following potential threat vectors.
Data center threat landscape:
Internet
Remote access and teleworker VPN hosts
Remote office/branch networks
Business partner connections
Campus networks
Unprotected data center networks
Other protected data center networks
The data center security design employs a pair of Cisco Adaptive Security Appliance (ASA) 5585-X with SSP-20
firewall modules and matching IPS Security Service Processors (SSP) installed. This configuration provides up
to 10 Gbps of firewall throughput. The IPS and firewall SSPs deliver 3 Gbps of concurrent throughput. There is a
range of Cisco ASA 5585-X with IPS firewalls to meet your processing requirements.
All of the ports on modules installed in the Cisco ASA chassis are available to the firewall SSP, which offers
a very flexible configuration. The Cisco ASA firewalls are dual-homed to the data center core Cisco Nexus
5500UP switches using two 10-Gigabit Ethernet links for resiliency. The pair of links on each Cisco ASA is
configured as an EtherChannel, which provides load balancing as well as rapid and transparent failure recovery.
The Cisco NX-OS Virtual Port Channel (vPC) feature on the Cisco Nexus 5500UP data center core switches allows the
firewall EtherChannel to span the two data center core switches (multichassis EtherChannel) but appear to be
connected to a single upstream switch. This EtherChannel link is configured as a VLAN trunk in order to support
access to multiple secure VLANs in the data center. One VLAN on the data center core acts as the outside
VLAN for the firewall, and any hosts or servers that reside in that VLAN are outside the firewall and therefore
receive no protection from Cisco ASA for attacks originating from anywhere else in the organization's network.
Other VLANs on the EtherChannel trunk will be designated as being firewalled from all the other data center
threat vectors or firewalled with additional IPS services.
The pair of Cisco ASAs is configured for firewall active/standby high availability operation to ensure that access
to the data center is minimally impacted by outages caused by software maintenance or hardware failure. When
Cisco ASA appliances are configured in active/standby mode, the standby appliance does not handle traffic, so
the primary device must be sized to provide enough throughput to address connectivity requirements between
the core and the data center. Although the IPS modules do not actively exchange state traffic, they participate
in the firewall appliances' active/standby status by way of reporting their status to the firewall's status monitor. A
firewall failover will occur if either the Cisco ASA itself has an issue or the IPS module becomes unavailable.
The Cisco ASAs are configured in routing mode; as a result, the secure network must be in a separate subnet
from the client subnets. IP subnet allocation would be simplified if Cisco ASA were deployed in transparent
mode; however, hosts might inadvertently be connected to the wrong VLAN, where they would still be able to
communicate with the network, incurring an unwanted security exposure.
The data center IPSs monitor for and mitigate potential malicious activity that is contained within traffic allowed
by the security policy defined on the Cisco ASAs. The IPS sensors can be deployed in promiscuous intrusion
detection system (IDS) mode so that they only monitor and alert for abnormal traffic. The IPS modules can
be deployed inline in IPS mode to fully engage their intrusion prevention capabilities, wherein they will block
malicious traffic before it reaches its destination. The choice to have the sensor drop traffic or not is one that is
influenced by several factors: risk tolerance for having a security incident, risk aversion for inadvertently dropping
valid traffic, and other possibly externally driven reasons like compliance requirements for IPS. The ability to run
in IDS or IPS mode is highly configurable to allow the maximum flexibility in meeting a specific security policy.
As another example, services that are indirectly exposed to the Internet (via a web server or other application
servers in the Internet demilitarized zone) should be separated from other services, if possible, to prevent
Internet-borne compromise of some servers from spreading to other services that are not exposed. Traffic
between VLANs should be kept to a minimum, unless your security policy dictates service separation. Keeping
traffic between servers intra-VLAN will improve performance and reduce the load on network devices.
For this design, open VLANs without any security policy applied are configured physically and logically on the
data center core switches. For devices that need an access policy, they will be deployed on a VLAN behind the
firewalls. Devices that require both an access policy and IPS traffic inspection will be deployed on a different
VLAN that exists logically behind the Cisco ASAs. Because the Cisco ASAs are physically attached only to
the data center core Nexus switches, these protected VLANs will also exist at Layer 2 on the data center core
switches. All protected VLANs are logically connected via Layer 3 to the rest of the network through Cisco ASA
and, therefore, are reachable only by traversing the appliance.
Reader Tip
A detailed examination of regulatory compliance considerations exceeds the scope of
this document; you should include industry regulation in your network security design.
Non-compliance may result in regulatory penalties such as fines or suspension of
business activity.
Network security policies can be broken down into two basic categories: whitelist policies and blacklist policies.
A whitelist policy offers a higher implicit security posture, blocking all traffic except that which must be allowed
(at a sufficiently granular level) to enable applications. Whitelist policies are generally better positioned to meet
regulatory requirements because only traffic that must be allowed to conduct business is allowed. Other traffic is
blocked and does not need to be monitored to assure that unwanted activity is not occurring. This reduces the
volume of data that will be forwarded to an IDS or IPS, and also minimizes the number of log entries that must be
reviewed in the event of an intrusion or data loss.
Conversely, a blacklist policy only denies traffic that specifically poses the greatest risk to centralized data
resources. A blacklist policy is simpler to maintain and less likely to interfere with network applications. A whitelist
policy is the best-practice option if you have the opportunity to examine the network's requirements and adjust
the policy to avoid interfering with desired network activity.
Cisco ASA firewalls implicitly end access lists with a deny-all rule. Blacklist policies include an explicit rule, prior
to the implicit deny-all rule, to allow any traffic that is not explicitly allowed or denied.
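To illustrate the structure, a hedged sketch of a blacklist-style access list on Cisco ASA is shown below; the access-list name, the blocked services, and the protected subnet are examples only and not the validated policy of this design.
! Explicitly deny high-risk protocols to a protected subnet, then permit the rest
! (the implicit deny-all remains at the end of the list)
access-list DC-Blacklist extended deny tcp any4 10.4.54.0 255.255.255.0 eq telnet
access-list DC-Blacklist extended deny udp any4 10.4.54.0 255.255.255.0 eq snmp
access-list DC-Blacklist extended permit ip any4 any4
access-group DC-Blacklist global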
Whether you choose a whitelist or blacklist policy basis, consider IDS or IPS deployment for controlling
malicious activity on otherwise trustworthy application traffic. At a minimum, IDS or IPS can aid with forensics
to determine the origin of a data breach. Ideally, IPS can detect and prevent attacks as they occur and provide
detailed information to track the malicious activity to its source. IDS or IPS may also be required by the regulatory
oversight to which a network is subject (for example, PCI 2.0).
A blacklist policy that blocks high-risk traffic offers a lower-impact, but less secure, option (compared to
a whitelist policy) in cases where a detailed study of the network's application activity is impractical, or if
the network availability requirements prohibit application troubleshooting. If identifying all of the application
requirements is not practical, you can apply a blacklist policy with logging enabled to generate a detailed history
of traffic matching the policy. With details about its network's behavior in hand, an organization can more easily
develop an effective whitelist policy.
Deployment Details
Data center security deployment is addressed in five discrete processes:
Configuring Cisco ASA Firewall Connectivity - Describes configuring network connections for the Cisco
ASA firewalls on the Cisco Nexus 5500UP data center core.
Configuring the Data Center Firewall - Describes configuring Cisco ASA initial setup and the
connections to the data center core.
Configuring Firewall High Availability - Describes configuring the high availability active/standby state for
the firewall pair.
Evaluating and Deploying Firewall Security Policy - Outlines the process for identifying security policy
needs and applying a configuration to meet requirements.
Deploying Firewall Intrusion Prevention Systems (IPS) - Describes deploying the IPS modules that inspect
traffic permitted by the firewall policy.
PROCESS
Complete the following procedures to configure connectivity between the Cisco ASA chassis and the core. Note
that this design describes a configuration wherein the Cisco ASA firewalls are connected to the Nexus 5500UP
data center core switches by using a pair of 10-Gigabit Ethernet interfaces in an EtherChannel. The Cisco ASA
firewall connects between the routed interface on the data center core and the protected VLANs that also reside on the
switches.
Connect the interfaces on the primary Cisco ASA firewall to both Cisco Nexus 5500 data center core switches,
and the secondary Cisco ASA firewall to both Cisco Nexus 5500 data center core switches as shown in Figure
30. Cisco ASA network ports are connected as follows:
Firewall-A Ten Gigabit Ethernet 0/8 connects to the Cisco Nexus 5500UP switch-A Ethernet 1/1
Firewall-A Ten Gigabit Ethernet 0/9 connects to the Cisco Nexus 5500UP switch-B Ethernet 1/1
Gigabit Ethernet 0/1 connects via a crossover or straight-through Ethernet cable between the two
firewalls for the failover link
Firewall-B Ten Gigabit Ethernet 0/8 connects to the Cisco Nexus 5500UP switch-A Ethernet 1/2
Firewall-B Ten Gigabit Ethernet 0/9 connects to the Cisco Nexus 5500UP switch-B Ethernet 1/2
Table 5 - Data center firewall VLANs
VLAN   IP address      Trust state   Use
153    10.4.53.1 /25   Untrusted     Firewall outside VLAN
154    10.4.54.X /24   Trusted       Firewalled VLAN
155    10.4.55.X /24   Trusted       Firewalled VLAN with IPS
Procedure 1
Step 1: Configure the outside (untrusted) and inside (trusted) VLANs on Cisco Nexus 5500UP data center core
switch-A.
vlan 153
name FW_Outside
vlan 154
name FW_Inside_1
vlan 155
name FW_Inside_2
Step 2: Configure the Layer 3 SVI for VLAN 153 on Cisco Nexus 5500UP data center core switch-A. Set the
HSRP address for the default gateway to 10.4.53.1 and the HSRP priority for this switch to 110.
interface Vlan153
no shutdown
description FW_Outside
no ip redirects
ip address 10.4.53.2/25
ip router eigrp 100
ip passive-interface eigrp 100
ip pim sparse-mode
hsrp 153
priority 110
ip 10.4.53.1
Step 3: Configure static routes pointing to the trusted subnets behind the Cisco ASA firewall on Cisco Nexus
5500UP data center core switch-A.
ip route 10.4.54.0/24 Vlan 153 10.4.53.126
ip route 10.4.55.0/24 Vlan 153 10.4.53.126
Step 4: Redistribute the trusted subnets into the existing EIGRP routing process on the first Cisco Nexus
5500UP data center core switch. This design uses route maps to control which static routes will be redistributed.
route-map static-to-eigrp permit 10
match ip address 10.4.54.0/24
route-map static-to-eigrp permit 20
match ip address 10.4.55.0/24
!
router eigrp 100
redistribute static route-map static-to-eigrp
Step 5: Configure the outside (untrusted) and inside (trusted) VLANs on Cisco Nexus 5500UP data center core
switch-B.
vlan 153
name FW_Outside
vlan 154
name FW_Inside_1
vlan 155
name FW_Inside_2
Step 6: Configure the Layer 3 SVI for VLAN 153 on Cisco Nexus 5500UP data center core switch-B. Set the
HSRP address for the default gateway to 10.4.53.1 and leave the HSRP priority for this switch at the default
setting.
interface Vlan153
no shutdown
description FW_Outside
no ip redirects
ip address 10.4.53.3/25
ip router eigrp 100
ip passive-interface eigrp 100
ip pim sparse-mode
hsrp 153
ip 10.4.53.1
Step 7: Configure static routes pointing to the trusted subnets behind the Cisco ASA firewall on Cisco Nexus
5500UP data center core switch-B.
ip route 10.4.54.0/24 Vlan 153 10.4.53.126
ip route 10.4.55.0/24 Vlan 153 10.4.53.126
Step 8: Redistribute the trusted subnets into the existing EIGRP routing process on Cisco Nexus 5500UP data
center core switch-B. This design uses route maps to control which static routes will be redistributed.
route-map static-to-eigrp permit 10
match ip address 10.4.54.0/24
route-map static-to-eigrp permit 20
match ip address 10.4.55.0/24
!
router eigrp 100
redistribute static route-map static-to-eigrp
Procedure 2
The Cisco ASA firewalls protecting applications and servers in the data center will be dual-homed to each of the
data center core Cisco Nexus 5500UP switches by using EtherChannel links.
Figure 30 - Firewall to data center core switch connections
Dual-homed or multichassis EtherChannel connectivity to the Cisco Nexus 5500UP switches uses vPCs, which
allow Cisco ASA to connect to both of the data center core switches with a single logical EtherChannel.
Step 1: Configure the physical interfaces that will make up the port channels on Cisco Nexus 5500UP data
center core switch-A.
interface Ethernet1/1
description DC5585a Ten0/8
channel-group 53 mode active
!
interface Ethernet1/2
description DC5585b Ten0/8
channel-group 54 mode active
When you assign the channel group to a physical interface, it creates the logical EtherChannel (port-channel)
interface that will be configured in the next step.
Step 2: Configure the logical port-channel interfaces on data center core switch-A. The physical interfaces tied
to the port channel will inherit the settings from the logical port-channel interface. Assign the QoS policy created
in Procedure 3, Configure QoS policies, to the port channel interfaces.
interface port-channel53
switchport mode trunk
switchport trunk allowed vlan 153-155
service-policy type qos input DC-FCOE+1P4Q_INTERFACE-DSCP-QOS
vpc 53
!
interface port-channel54
switchport mode trunk
switchport trunk allowed vlan 153-155
service-policy type qos input DC-FCOE+1P4Q_INTERFACE-DSCP-QOS
vpc 54
The port channels are created as vPC port channels, because the fabric interfaces are dual-homed
EtherChannels to both Nexus 5500UP data center core switches.
Tech Tip
The default interface speed on the Cisco Nexus 5500 Ethernet ports is 10 Gbps. If
you are using a 1-Gigabit Ethernet SFP you must program the interface for 1-Gigabit
operation with the speed 1000 command on either the port-channel interface or the
physical interfaces.
Step 3: Apply the following configuration to Cisco Nexus 5500UP data center core switch-B.
interface Ethernet1/1
description DC5585a Ten0/9
channel-group 53 mode active
!
interface Ethernet1/2
description DC5585b Ten0/9
channel-group 54 mode active
!
interface port-channel53
switchport mode trunk
switchport trunk allowed vlan 153-155
service-policy type qos input DC-FCOE+1P4Q_INTERFACE-DSCP-QOS
vpc 53
!
interface port-channel54
switchport mode trunk
switchport trunk allowed vlan 153-155
service-policy type qos input DC-FCOE+1P4Q_INTERFACE-DSCP-QOS
vpc 54
ASA firewall failover status      Firewall IP address   IPS module management IP address
Primary                           10.4.53.126 /25       10.4.63.21 /24
Secondary                         10.4.53.125 /25       10.4.63.23 /24

Service                           Address
Domain name                       cisco.local
DNS server                        10.4.48.10
Authentication (TACACS+) server   10.4.48.15
NTP server                        10.4.48.17
Procedure 1
Connect to the console of the Cisco ASA firewall and perform the following global configuration.
Step 1: Select anonymous monitoring preference. When you enter configuration mode for an unconfigured unit,
you are prompted for anonymous reporting. You are given a choice to enable anonymous reporting of error and
health information to Cisco. Select the choice appropriate for your organization's security policy.
*************************** NOTICE ***************************
Help to improve the ASA platform by enabling anonymous reporting, which allows
Cisco to securely receive minimal error and health information from the device.
To learn more about this feature, please visit: http://www.cisco.com/go/smartcall
Would you like to enable anonymous error reporting to help improve the product?
[Y]es, [N]o, [A]sk later:N
Step 2: Configure the Cisco ASA firewall host name to make it easy to identify.
hostname DC5585a
Step 3: Disable the dedicated management port. This design does not use it.
interface Management0/0
shutdown
Step 4: Configure local user authentication.
username [username] password [password]
Tech Tip
All passwords in this document are examples and should not be used in production
configurations. Follow your company's policy, or, if no policy exists, create a password
using a minimum of eight characters with a combination of uppercase, lowercase, and
numbers.
Procedure 2
Two 10-Gigabit Ethernet links connect each Cisco ASA chassis to the two core Cisco Nexus switches. The two
interfaces are paired in a port channel group. Subinterfaces are created on the port channel for the outside
VLAN 153 and all the protected VLANs inside (154 and 155). Each interface created will be assigned the correct
VLAN, an appropriate name, a security level, and an IP address and netmask.
All interfaces on Cisco ASA have a security-level setting. The higher the number, the more trusted the interface,
relative to other interfaces. By default, the inside interface is assigned 100, the highest security level. The outside
interface is assigned 0. By default, traffic can pass from a high-security interface to a lower-security interface. In
other words, traffic from an inside network is permitted to an outside network, but not conversely.
Step 1: Configure the port channel group by using the two 10-Gigabit Ethernet interfaces.
interface Port-channel10
description ECLB Trunk to 5548 Switches
no shutdown
!
interface TenGigabitEthernet0/8
description Trunk to DC5548 eth1/1
channel-group 10 mode passive
no shutdown
!
interface TenGigabitEthernet0/9
description Trunk to DC5548 eth1/2
channel-group 10 mode passive
no shutdown
Step 2: Configure the subinterfaces for the three VLANs: VLAN 153 outside, VLAN 154 inside the firewall, and
VLAN 155 inside the firewall with IPS.
interface Port-channel10.153
description DC VLAN Outside the FW
vlan 153
nameif outside
security-level 0
ip address 10.4.53.126 255.255.255.128 standby 10.4.53.125
no shutdown
!
interface Port-channel10.154
description DC VLAN Inside the Firewall
vlan 154
nameif DC-InsideFW
security-level 75
ip address 10.4.54.1 255.255.255.0 standby 10.4.54.2
no shutdown
!
interface Port-channel10.155
description DC VLAN Inside the FW w/ IPS
vlan 155
nameif DC-InsideIPS
security-level 75
ip address 10.4.55.1 255.255.255.0 standby 10.4.55.2
no shutdown
Procedure 3
Because the Cisco ASAs are the gateway to the secure VLANs in the data center, the Cisco ASA pair is
configured to use a static route to the HSRP address of the Cisco Nexus switches on outside VLAN 153.
Step 1: Configure the static route pointing to the data center core HSRP address on the Cisco ASA pair.
route outside 0.0.0.0 0.0.0.0 10.4.53.1 1
Procedure 4
(Optional)
If you want to reduce operational tasks per device, configure centralized user authentication by using the
TACACS+ protocol to authenticate management logins on the infrastructure devices to the AAA server.
As networks scale in the number of devices to maintain, the operational burden to maintain local user
accounts on every device also scales. A centralized AAA service reduces operational tasks per device and
provides an audit log of user access for security compliance and root-cause analysis. When AAA is enabled for
access control, it controls all management access to the network infrastructure devices (SSH and HTTPS).
TACACS+ is the primary protocol used to authenticate management logins on the infrastructure devices to the
AAA server. A local AAA user database was defined already to provide a fallback authentication source in case
the centralized TACACS+ server is unavailable.
Step 1: Configure the TACACS+ server.
aaa-server AAA-SERVER protocol tacacs+
aaa-server AAA-SERVER (outside) host 10.4.48.15 [Key]
Step 2: Configure the appliance's management authentication to use the TACACS+ server first, and then the
local user database if the TACACS+ server is unavailable.
aaa authentication ssh console AAA-SERVER LOCAL
aaa authentication enable console AAA-SERVER LOCAL
aaa authentication http console AAA-SERVER LOCAL
aaa authentication serial console AAA-SERVER LOCAL
Tech Tip
User authorization on the Cisco ASA firewall, unlike Cisco IOS devices, does not
automatically present the user with the enable prompt if they have a privilege level of
15.
Procedure 5
Logging and monitoring are critical aspects of network security devices to support troubleshooting and policy-compliance auditing.
NTP is designed to synchronize time across a network of devices. An NTP network usually gets its time from an
authoritative time source, such as a radio clock or an atomic clock attached to a time server. NTP then distributes
this time across the organization's network.
Network devices should be programmed to synchronize to a local NTP server in the network. The local NTP
server typically references a more accurate clock feed from an outside source.
There is a range of detail that can be logged on the appliance. Informational-level logging provides the ideal
balance between detail and log-message volume. Lower log levels produce fewer messages, but they do
not produce enough detail to effectively audit network activity. Higher log levels produce a larger volume of
messages, but they do not add sufficient value to justify the number of messages logged.
Step 1: Configure the NTP server IP address.
ntp server 10.4.48.17
Step 2: Configure the time zone.
clock timezone PST -8 0
clock summer-time PDT recurring
Step 3: Configure which logs to store on the appliance.
logging enable
logging buffered informational
Procedure 6
Cisco Adaptive Security Device Manager (ASDM) requires that the appliance's HTTPS server be available. Be
sure that the configuration includes networks where administrative staff has access to the device through Cisco
ASDM; the appliance can offer controlled Cisco ASDM access for a single address or management subnet (in
this case, 10.4.48.0/24).
HTTPS and SSH are more secure replacements for the HTTP and Telnet protocols. They use SSL and TLS to
provide device authentication and data encryption.
Use SSH and HTTPS protocols in order to more securely manage the device. Both protocols are encrypted for
privacy, and the unsecure protocols, Telnet and HTTP, are turned off.
SNMP is enabled to allow the network infrastructure devices to be managed by a network management system
(NMS). SNMPv2c is configured for a read-only community string.
Step 1: Allow internal administrators to remotely manage the appliance over HTTPS and SSH.
domain-name cisco.local
http server enable
http 10.4.48.0 255.255.255.0 outside
ssh 10.4.48.0 255.255.255.0 outside
ssh version 2
Step 2: Specify the list of supported SSL encryption algorithms for Cisco ASDM.
ssl encryption aes256-sha1 aes128-sha1 3des-sha1
Step 3: Configure the appliance to allow SNMP polling from the NMS.
snmp-server host outside 10.4.48.35 community [SNMP RO string]
snmp-server community [SNMP RO string]
Procedure 7
PROCESS
Cisco ASAs are set up as a highly available active/standby pair. Active/standby is used, rather than an active/
active configuration, because this allows the same appliance to be used for firewall and VPN services if required
in the future (VPN functionality is disabled on the appliance in active/active configuration). In the event that the
active appliance fails or needs to be taken out of service for maintenance, the secondary appliance assumes all
active firewall and IPS functions. In an active/standby configuration, only one device is passing traffic at a time;
thus, the Cisco ASAs must be sized so that the entire traffic load can be handled by either device in the pair.
Both units in the failover pair must be the same model, with identical feature licenses and IPS (if the software
module is installed). For failover to be enabled, the secondary ASA unit needs to be powered up and cabled to
the same networks as the primary unit.
One interface on each appliance is configured as the state-synchronization interface, which the appliances use
to share configuration updates, determine which device in the high availability pair is active, and exchange state
information for active connections. The failover interface carries the state synchronization information. All session
state data is replicated from the primary to the secondary unit through this interface. There can be a substantial
amount of data, and it is recommended that this be a dedicated interface.
By default, the appliance can take from 2 to 25 seconds to recover from a failure. Tuning the failover poll times
can reduce that to 0.5 to 5 seconds. On an appropriately sized appliance, the poll times can be tuned down
without performance impact to the appliance, which minimizes the downtime a user experiences during failover.
It is recommended that you do not reduce the failover timer intervals below the values in this guide.
Procedure 1
Step 1: Enable failover on the primary appliance, and then assign it as the primary unit.
failover
failover lan unit primary
Step 2: Configure the failover interface. Enter a key for the failover that you will later enter on the secondary
appliance to match.
failover lan interface failover GigabitEthernet0/1
failover key [key]
failover replication http
failover link failover GigabitEthernet0/1
Step 3: If you want to speed up failover in the event of a device or link failure, you can tune the failover timers.
With the default setting, depending on the failure, Cisco ASA can take from 2 to 25 seconds to fail over to the
standby unit. Tuning the failover poll times can reduce that to 0.5 to 5 seconds, depending on the failure.
failover polltime unit msec 200 holdtime msec 800
failover polltime interface msec 500 holdtime 5
Step 4: Configure the failover interface IP address.
failover interface ip failover 10.4.53.130 255.255.255.252 standby 10.4.53.129
Step 5: Enable the failover interface.
interface GigabitEthernet0/1
no shutdown
Step 6: Configure failover to monitor the inside and outside interfaces so that the active firewall will defer to the
standby firewall if connectivity is lost on the data center VLANs.
monitor-interface outside
monitor-interface DC-InsideFW
monitor-interface DC-InsideIPS
Procedure 2
Step 1: On the secondary Cisco ASA, enable failover and assign it as the secondary unit.
failover
failover lan unit secondary
Step 2: Configure the failover interface.
failover lan interface failover GigabitEthernet0/1
failover key [key]
failover replication http
failover link failover GigabitEthernet0/1
               State           Last Failure Reason    Date/Time
This host  -   Primary
               Active          None
Other host -   Secondary
               Standby Ready   None

====Configuration State===
       Sync Done
====Communication State===
       Mac set
Step 6: Save your firewall configuration. On the CLI of the primary appliance, issue the copy running-config startup-config command. This will save the configuration on the primary appliance and replicate the
configuration to the secondary appliance.
copy running-config startup-config
PROCESS
This process describes the steps required to evaluate which type of policy fits an organization's data center
security requirements and provides the procedures necessary to apply these policies.
Procedure 1
Procedure 2
Network security policy configuration can vary greatly among organizations and is dependent on the policy and
management requirements of the organization. Thus, examples here should be used as a basis for security policy
configuration.
After the system setup and high availability is complete via CLI, you will use the integrated GUI management tool,
Cisco ASDM, to program security policies:
Network Objects, such as hosts and IP subnets
Firewall access rules
First, to simplify the configuration of the security policy, you create the network objects that are used in the
firewall policies.
Table 8 - Firewall network objects
Network object name        Object type   IP address          Description
IT_Web_Server              Host          10.4.54.80          IT Dept. server
Finance_Web_Server         Host          10.4.54.81          Finance Dept. server
HR_Web_Server              Host          10.4.55.80          HR Dept. server
Research_Web_Server        Host          10.4.55.81          Research Dept. server
IT_Management_Host_Range   Network       10.4.48.224 - 254   IT Management Systems
Step 1: Using a secure HTTP session (Example: https://10.4.53.126), navigate to the Cisco ASA firewall outside
interface programmed in Step 2 of Procedure 2, Configure firewall connectivity, and then click Run ASDM.
Cisco ASDM starts from a Java Web Start application.
Step 2: Enter the username and password configured for the Cisco ASA firewall in Step 4 of Procedure 1,
Configure initial Cisco ASA settings.
Step 3: In the Cisco ASDM work pane, navigate to Configuration> Firewall> Objects> Network Objects/
Groups.
Step 4: Repeat Step 5 through Step 9 for each object listed in Table 8. If an object already exists, then skip to
the next object listed in the table.
Step 5: Click Add> Network Object.
Step 6: The Add Network Object dialog box appears.
Step 7: In the Name box, enter the name. (Example: IT_Web_Server)
Step 8: In the Type list, choose Host.
Step 9: In the IP Address box, enter the address. (Example: 10.4.54.80)
Step 10: In the Description box, enter a useful description, and then click OK. (Example: IT Dept. server)
Step 11: After adding all of the objects listed in Table 8, on the Network Objects/Groups pane, click Apply.
Next, specify which resources certain users (for example, IT management staff or network users) can use to
access management resources. In this example, management hosts in the IP address range 10.4.48.224-254
are allowed SSH and SNMP access to server room subnets.
Step 12: Navigate to Configuration> Firewall> Objects> Network Objects/Groups.
Step 13: Click Add> Network Object.
Step 14: The Add Access Rule dialog box appears.
Step 15: In the Name box, enter the name. (Example: IT_Management_Host_Range)
Step 16: In the Type list, choose Range.
Step 17: In the Start Address box, enter the first address in the range. (Example: 10.4.48.224)
Step 18: In the End Address box, enter the last address in the range. (Example: 10.4.48.254)
Step 19: In the Description box, enter a useful description, and then click OK. (Example: IT Management
Systems Range)
Next you will create a service group containing SSH and SNMP protocols, and you create an access list to permit
the SSH and SNMP traffic service group from the network management range to the server subnets.
Step 20: Navigate to Configuration> Firewall> Objects> Service Objects/Groups.
Step 21: Click Add> Service Group.
Step 22: In the Group Name box, enter the name. (Example: Mgmt-Traffic)
Step 23: In the Description box, enter a useful description. (Example: Management Traffic SSH and SNMP)
Step 24: In the Existing Service/Service Group list, choose tcp> ssh and udp> snmp, click Add, and then
click OK.
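For reference, the CLI equivalent of the objects created in this procedure looks roughly like the following sketch; the object and group names match Table 8 and the Mgmt-Traffic group above, and the exact syntax generated by ASDM may differ slightly.
object network IT_Web_Server
 host 10.4.54.80
 description IT Dept. server
!
object network IT_Management_Host_Range
 range 10.4.48.224 10.4.48.254
 description IT Management Systems
!
object-group service Mgmt-Traffic
 description Management Traffic SSH and SNMP
 service-object tcp destination eq ssh
 service-object udp destination eq snmp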
Procedure 3
If you are deploying a whitelist security policy, complete Option 1 of this procedure. If you are deploying a
blacklist security policy, complete Option 2 of this procedure.
Table 9 - Whitelist policy rules
Interface   Action   Source                     Destination                                 Service               Description                            Logging Enable / Level
Any         Permit   any4                       IT_Web_Server                               tcp/http, tcp/https   Inbound web to IT Dept. Server         Selected / Default
Any         Permit   any4                       Finance_Web_Server                          tcp/http, tcp/https   Inbound web to Finance Dept. Server    Selected / Default
Any         Permit   any4                       HR_Web_Server                               tcp/http, tcp/https   Inbound web to HR Dept. Server         Selected / Default
Any         Permit   any4                       Research_Web_Server                         tcp/http, tcp/https   Inbound web to Research Dept. Server   Selected / Default
Outside     Permit   IT_Management_Host_Range   DC-InsideFW-network, DC-InsideIPS-network   tcp/ssh, udp/snmp     Management access to servers           Selected / Default
Step 2: Repeat Step 3 through Step 11 for all rules listed in Table 9.
Step 3: Click Add> Add Access Rule.
Step 4: The Add Access Rule dialog box appears.
Step 5: In the Interface list, choose the interface. (Example: Any)
Step 6: For the Action option, select the action. (Example: Permit)
Step 7: In the Source box, choose the source. (Example: any4)
Step 8: In the Destination box, choose the destination. (Example: IT_Web_Server)
Step 9: In the Service box, enter the service. (Example: tcp/http, tcp/https)
Step 10: In the Description box, enter a useful description. (Example: Inbound web to IT Dept. Server)
Step 11: Select or clear Enable Logging. (Example: Selected)
Step 12: In the Logging Level list, choose the logging level value, and then click OK. (Example: Default)
Step 13: After adding all of the rules in Table 9, in the order listed, click Apply on the Access Rules pane.
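Behind the scenes, ASDM writes rules created on the Any interface into the global access list. The sketch below shows the CLI form of the first web-server rule; the global_access list name is the ASDM default and is an assumption here.
access-list global_access extended permit tcp any4 object IT_Web_Server eq www
access-list global_access extended permit tcp any4 object IT_Web_Server eq https
access-group global_access global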
Table 10 - Blacklist policy rules
Interface   Action   Source                     Destination                                 Service or Service Group   Description                    Logging Enable / Level
Outside     Permit   IT_Management_Host_Range   DC-InsideFW-network, DC-InsideIPS-network   Mgmt-Traffic               Management access to servers   Selected / Default
Any         Deny     any4                       DC-InsideFW-network, DC-InsideIPS-network   tcp/ssh, udp/snmp                                         Selected / Default
Any         Permit   any4                       any                                         ip                                                        Selected / Default
Step 2: Repeat Step 3 through Step 12 for all rules listed in Table 10.
Step 3: Click Add> Add Access Rule.
Step 4: The Add Access Rule dialog box appears.
Step 5: In the Interface list, choose the interface. (Example: Outside)
Step 6: For the Action option, select the action. (Example: Permit)
PROCESS
From a security standpoint, intrusion detection systems (IDS) and intrusion prevention systems (IPS) are
complementary to firewalls because firewalls are generally access-control devices that are built to block access
to an application or host. In this way, a firewall can be used to remove access to a large number of application
ports, reducing the threat to the servers. IDS and IPS sensors look for attacks in network and application traffic
that is permitted to go through the firewall. If it detects an attack, the IDS sensor generates an alert to inform the
organization about the activity. IPS is similar in that it generates alerts due to malicious activity and, additionally, it
can apply an action to block the attack before it reaches the destination.
Design Considerations
Use IDS when you do not want to impact the availability of the network or create latency issues. Use IPS when
you need higher security than IDS can provide and when you need the ability to drop malicious data packets.
The secure data center design using a Cisco ASA 5585-X with IPS implements a policy for IPS, which sends all
traffic to the IPS module inline.
Your organization may choose an IPS or IDS deployment depending on regulatory and application requirements.
It is very easy to initially deploy an IDS, or promiscuous, design and then move to IPS after you understand
the traffic and performance profile of your network and you are comfortable that production traffic will not be
affected.
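A minimal sketch of sending all traffic to the IPS module inline with the ASA Modular Policy Framework follows; the class and access-list names are assumptions, and the choice of inline versus promiscuous and fail-close versus fail-open should follow your security policy.
! Match all traffic and hand it to the IPS module inline (example names)
access-list IPS-Match extended permit ip any4 any4
!
class-map IPS-Class
 match access-list IPS-Match
!
policy-map global_policy
 class IPS-Class
  ips inline fail-close
!
service-policy global_policy global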
Procedure 1
A LAN switch port on the data center Ethernet Out-of-Band Management switch provides connectivity for the
IPS sensors' management interfaces.
Step 1: Connect the IPS module's management port on each appliance to the data center Ethernet Out-of-Band
Management switch configured earlier in this guide in Procedure 3, Configure switch access ports.
Step 2: Ensure that the ports are configured for the management VLAN 163 so that the sensors can route to or
directly reach the management station.
interface GigabitEthernet1/0/32
description DC-5585X-IPSa
!
interface GigabitEthernet1/0/34
description DC-5585X-IPSb
!
interface range GigabitEthernet1/0/32, GigabitEthernet1/0/34
switchport
switchport access vlan 163
switchport mode access
switchport host
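A quick way to confirm that the management ports came up in the correct VLAN is to check the out-of-band switch; the interface numbers below are the ones from the example configuration above.
show interfaces status | include 1/0/32|1/0/34
show vlan id 163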
Tech Tip
In this design, Cisco ASA is managed in-band, and the IPS, either module or appliance,
is always managed from the dedicated management port.
Procedure 2
Use the sensor's CLI to set up basic networking information, specifically: the IP address, gateway
address, and access lists that allow remote access. After these critical pieces of data are entered, the rest of the
configuration is accomplished by using Cisco Adaptive Security Device Manager/IPS Device Manager (ASDM/
IDM), the embedded GUI console.
Table 11 - Cisco ASA 5585-X firewall and IPS module addressing

ASA firewall failover status | Firewall IP address | IPS module management IP address
Primary | 10.4.53.126/25 | 10.4.63.21/24
Secondary | 10.4.53.125/25 | 10.4.63.23/24
Step 1: Connect to the IPS SSP console through the serial console on the IPS SSP module on the front panel of
the Cisco ASA 5585-X primary firewall.
Tech Tip
You can also gain access to the console on the IPS SSP by using the session 1
command from the CLI of the Cisco ASA's SSP.
Step 2: Log in to the IPS device. The default username and password are both cisco.
login: cisco
Password:[password]
If this is the first time anyone has logged in to the sensor, you are prompted to change the password. Enter the
current password, and then enter a new password. Change the password to a value that complies with the
security policy of your organization.
Step 3: At the IPS module's CLI, launch the System Configuration Dialog.
sensor# setup
The IPS module enters the interactive setup.
Step 4: Define the IPS module's host name. Note that unlike Cisco IOS devices, where the host name instantly
changes the CLI prompt to reflect the new host name, the IPS displays the new host name in the CLI prompt
only upon the next login to the sensor.
--- Basic Setup ---
--- System Configuration Dialog ---
At any point you may enter a question mark '?' for help.
Use ctrl-c to abort configuration dialog at any prompt.
Default settings are in square brackets [].
Current time: Tue Jul 1 10:22:35 2014
Setup Configuration last modified: Wed Jun 25 12:33:28 2014
Enter host name [sensor]: IPS-SSP20-A
Step 5: Define the IP address and gateway address for the IPS module's external management port.
Enter IP interface [192.168.1.62/24,192.168.1.250]: 10.4.63.21/24,10.4.63.1
Step 6: Define the access list, and then press Enter. This controls management access to the IPS module. Press
Enter at a blank Permit prompt to go to the next step.
Modify current access list?[no]: yes
Current access list entries:
No entries
Permit: 10.4.48.0/24
Step 7: Configure the DNS server address, and then accept the default answer (no) for the next two questions.
Use DNS server for Auto-Updates from www.cisco.com and Global Correlation?[yes]:
yes
DNS server IP address[]: 10.4.48.10
Use HTTP proxy server for Auto-Updates from www.cisco.com and Global
Correlation?[no]: no
Modify system clock settings?[no]: no
Note the following:
An HTTP proxy server address is not needed for a network that is configured according to this guide.
You will configure time details in the IPS module's GUI console.
Step 8: For the option to participate in the SensorBase Network, enter partial and agree to participate based on
your security policy.
Participation in the SensorBase Network allows Cisco to collect aggregated
statistics about traffic sent to your IPS.
SensorBase Network Participation level? [off]: partial
.....
Do you agree to participate in the SensorBase Network?[no]: yes
.....
The IPS SSP displays your configuration and a brief menu with four options.
Step 9: On the System Configuration dialog box, save your configuration and exit setup by entering 2.
The following configuration was entered.
[removed for brevity]
exit
[0] Go to the command prompt without saving this configuration.
[1] Return to setup without saving this configuration.
[2] Save this configuration and exit setup.
[3] Continue to Advanced setup.
Enter your selection [3]: 2
Warning: DNS or HTTP proxy is required for global correlation inspection and
reputation filtering, but no DNS or proxy servers are defined.
--- Configuration Saved ---
Complete the advanced setup using CLI or IDM.
To use IDM, point your web browser at https://<sensor-ip-address>.
Step 10: Repeat this procedure for the IPS sensor installed in the other Cisco ASA chassis. In Step 4, be sure to
use a different host name (IPS-SSP20-B), and in Step 5, be sure to use a different IP address (10.4.63.23) on
the other sensor's management interface.
Procedure 3
After the basic setup in the System Configuration dialog box is complete, you will use the startup wizard in the
integrated management tool, Cisco ASDM/IDM, to complete the remaining tasks of a basic IPS
configuration:
Configure time settings
Configure DNS and NTP servers
Define a basic IDS configuration
Configure inspection service rule policy
Assign interfaces to virtual sensors
Using ASDM to configure the IPS module operation allows you to set up the communications path from the Cisco
ASA firewall to the IPS module, as well as configure the IPS module settings.
Step 1: Using a secure HTTP session (Example: https://10.4.53.126), navigate to the Cisco ASA firewall outside
interface programmed in Step 2 of the Configure firewall connectivity procedure, and then click Run ASDM,
which runs Cisco ASDM from a Java Web Start application.
Step 2: Enter the username and password configured for the Cisco ASA firewall in Step 4 of the Configure local
user authentication procedure.
Step 3: In the Cisco ASDM work pane, click the Intrusion Prevention tab, enter the IP address, username, and
password that you configured for IPS-SSP20-A access, and then click Continue.
Cisco ASDM downloads the IPS information from the appliance for IPS-SSP20-A.
Step 4: Click Configuration, click the IPS tab, and then click Launch Startup Wizard.
Step 5: On the next Sensor Setup page, in the Zone Name list, choose the appropriate time zone. Enter the NTP
Server IP address (Example: 10.4.48.17), ensure that Authenticated NTP is cleared, set the summertime
settings, and then click Next.
Tech Tip
NTP is particularly important for security event correlation if you use a Security Event
Information Manager product to monitor security activity on your network.
Skip the Virtual Sensors page and accept the defaults by clicking Next.
Skip the Signatures page and accept the defaults by clicking Next.
You must now decide the sensor mode. In IPS mode, the sensor is inline in the traffic path. In this mode,
the sensor inspects, and can drop, traffic that is malicious. Alternatively, in IDS mode, a copy of the traffic is
passively sent to the sensor, and the sensor inspects, and can send alerts about, traffic that is malicious. IPS
mode provides more protection from Internet threats and has a low risk of blocking important traffic at this point
in the network, particularly when it is coupled with reputation-based technologies. You can deploy IDS mode as a
temporary solution to see what kind of impact IPS would have on the network and what traffic would be stopped.
After you understand the impact on your network's performance and after you perform any necessary tuning,
you can easily change the sensor to IPS mode.
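Because the redirection itself is a Modular Policy Framework action on the ASA (see the earlier sketch), moving between the two modes is essentially a one-keyword change. The class and policy names below are illustrative assumptions, not values taken from this guide.
policy-map global_policy
 class IPS-ALL-TRAFFIC
  ips inline fail-open
! to run in IDS (promiscuous) mode instead, the equivalent line would be:
! ips promiscuous fail-open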
In the Specify traffic for IPS Scan window, in the Interface list, choose DC-InsideIPS, and next to Traffic
Inspection Mode, select Inline, and then click OK.
Configure the IPS device to automatically pull updates from Cisco.com. On the Auto Update page, select
Enable Signature and Engine Updates. Provide a valid cisco.com username and password that holds
entitlement to download IPS software updates. Select Daily, enter a time between 12:00 AM and 4:00
AM for the update Start Time, and then select Every Day. Click Finish.
Step 6: When you are prompted if you want to commit your changes to the sensor, click Yes. ASDM/IDM applies
your changes and replies with a message that a reboot is required.
Step 7: Click OK, proceed to the next step, and delay the reboot until the end of this procedure.
Next, you assign interfaces to your virtual sensor.
Step 8: Navigate to Sensor Setup > Policies > IPS Policies.
Tech Tip
With certain versions of Java, ASDM does not properly load the IPS Policies
configuration section. If you are unable to load the IPS Policies configuration section in
ASDM, use IDM. To launch IDM, enter the management IP address of the IPS module in
a web browser (Example: https://10.4.24.27). Navigate to Configuration > Policies > IPS
Policies. The following steps apply to both ASDM and IDM.
Step 9: Highlight the vs0 virtual sensor, and then click Edit.
Step 10: On the Edit Virtual Sensor dialog box, for the PortChannel0/0 interface, select Assigned, and then click
OK.
Step 11: At the bottom of the main work pane, click Apply.
Caution
Do not attempt to modify the firewall configuration on the standby appliance.
Configuration changes are only made on the primary appliance.
Procedure 4
(Optional)
If you opted to run inline mode on an IPS device, the sensor is configured to drop high-risk traffic. By default,
this means that if an alert fires with a risk rating of at least 90 or if the traffic comes from an IP address with
a negative reputation that raises the risk rating to 90 or higher, the sensor drops the traffic. If the risk rating is
raised to 100 because of the source address reputation score, then the sensor drops all traffic from that IP
address.
The chance of the IPS dropping traffic that is not malicious when using a risk threshold of 90 is very low.
However, if you want to adopt a more conservative policy, raise the risk threshold value to 100.
Step 1: In Cisco ASDM, navigate to Configuration > IPS > Policies > IPS Policies.
Step 2: In the Virtual Sensor panel, right-click the vs0 entry, and then click Edit.
Step 3: In the Event Action Rule work pane, select Deny Packet Inline (Inline), and then click Delete.
Step 5: On the Add Event Action Override dialog box, in the Risk Rating list, enter a new value of 100-100, select
Deny Packet Inline, and then click OK.
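If you prefer the sensor CLI to ASDM/IDM for this change, the equivalent configuration is roughly the following sketch; command modes and prompts vary slightly by IPS software version, so treat this as an outline rather than a verified transcript. When you exit the service configuration mode, the sensor asks whether to apply the changes.
configure terminal
 service event-action-rules rules0
  overrides deny-packet-inline
   risk-rating-range 100-100
   exit
  exit
 exit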
Appendix A: Product List

Functional Area | Product Description | Part Numbers | Software
Core Switch | | N5K-C5596UP-FA, N55-M160L30V2, N5K-C5548UP-FA, N55-D160L3, N55-LAN1K9, N55-8P-SSK9 | NX-OS 7.0(2)N1(1); Layer 3 License
Ethernet Extension | Cisco Nexus 2000 Series 32 1/10 GbE SFP+, FCoE capable Fabric Extender | N2K-C2232PP-10GE |
Ethernet Extension | | N2K-C2248TP-E, N2K-C2248TP-1GE |
Functional Area | Product Description | Part Numbers | Software
Firewall | Cisco ASA 5585-X Security Plus IPS Edition SSP-40 and IPS SSP-40 bundle | ASA5585-S40P40-K9 | ASA 9.1(5); IPS 7.3(2)E4
Firewall | Cisco ASA 5585-X Security Plus IPS Edition SSP-20 and IPS SSP-20 bundle | ASA5585-S20P20X-K9 | ASA 9.1(5); IPS 7.3(2)E4
Firewall | Cisco ASA 5585-X Security Plus IPS Edition SSP-10 and IPS SSP-10 bundle | ASA5585-S10P10XK9 | ASA 9.1(5); IPS 7.3(2)E4
Functional Area | Product Description | Part Numbers | Software
Fibre-channel Switch | | DS-C9148D-8G16P-K9 | NX-OS 5.2(8d)
Fibre-channel Switch | | DS-C9124-K9 | NX-OS 5.2(8d)
Computing Resources

Functional Area | Product Description | Part Numbers | Software
UCS Fabric Interconnect | | UCS-FI-6296UP, UCS-FI-6248UP | Cisco UCS Release 2.2(1d)
UCS B-Series Blade Servers | | N20-C6508, UCS-IOM2208XP, UCS-IOM2204XP, UCSB-B200-M3, N20-B6625-2, UCS-VIC-M82-8P, N20-AC0002 | Cisco UCS Release 2.2(1d)
UCS C-Series Rack-mount Servers | | UCSC-C220-M3S, UCSC-C240-M3S, UCSC-BASE-M2-C460 | Cisco UCS CIMC Release 2.0(1a)
UCS C-Series Rack-mount Servers | Cisco UCS 1225 Virtual Interface Card Dual Port 10Gb SFP+ | UCSC-PCIE-CSC-02 |
UCS C-Series Rack-mount Servers | Cisco UCS P81E Virtual Interface Card Dual Port 10Gb SFP+ | N2XX-ACPCI01 |
Appendix B: Device Configuration Files
For the configuration files from the CVD lab devices that we used to test this guide, please see the Data Center
Configuration Files Guide.
Appendix C: Changes
This appendix summarizes the changes Cisco made to this guide since its last edition.
We updated the following software:
Cisco NX-OS 7.0(2)N1(1) on the Cisco Nexus 5500 Series Switches
Cisco MDS NX-OS 5.2(8d) on the Cisco MDS 9100 Series Switches
Cisco UCS Release 2.2(1d) on the Cisco UCS 6200 Series Fabric Interconnects
Cisco UCS Release 2.2(1d) on the Cisco UCS B-Series Blade Servers
Cisco UCS CIMC 2.0(1a) on the Cisco UCS C-Series Rack-Mount Servers
Cisco ASA 9.1(5) and IPS 7.3(2)E4 on the Cisco ASA 5500 Series Firewalls
Feedback
Please use the feedback form to send comments and
suggestions about this guide.
Americas Headquarters
Cisco Systems, Inc.
San Jose, CA
Europe Headquarters
Cisco Systems International BV Amsterdam,
The Netherlands
Cisco has more than 200 offices worldwide. Addresses, phone numbers, and fax numbers are listed on the Cisco Website at www.cisco.com/go/offices.
ALL DESIGNS, SPECIFICATIONS, STATEMENTS, INFORMATION, AND RECOMMENDATIONS (COLLECTIVELY, DESIGNS) IN THIS MANUAL ARE PRESENTED AS IS,
WITH ALL FAULTS. CISCO AND ITS SUPPLIERS DISCLAIM ALL WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE WARRANTY OF MERCHANTABILITY, FITNESS FOR
A PARTICULAR PURPOSE AND NONINFRINGEMENT OR ARISING FROM A COURSE OF DEALING, USAGE, OR TRADE PRACTICE. IN NO EVENT SHALL CISCO OR ITS
SUPPLIERS BE LIABLE FOR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, OR INCIDENTAL DAMAGES, INCLUDING, WITHOUT LIMITATION, LOST PROFITS OR LOSS OR
DAMAGE TO DATA ARISING OUT OF THE USE OR INABILITY TO USE THE DESIGNS, EVEN IF CISCO OR ITS SUPPLIERS HAVE BEEN ADVISED OF THE POSSIBILITY OF SUCH
DAMAGES. THE DESIGNS ARE SUBJECT TO CHANGE WITHOUT NOTICE. USERS ARE SOLELY RESPONSIBLE FOR THEIR APPLICATION OF THE DESIGNS. THE DESIGNS
DO NOT CONSTITUTE THE TECHNICAL OR OTHER PROFESSIONAL ADVICE OF CISCO, ITS SUPPLIERS OR PARTNERS. USERS SHOULD CONSULT THEIR OWN TECHNICAL
ADVISORS BEFORE IMPLEMENTING THE DESIGNS. RESULTS MAY VARY DEPENDING ON FACTORS NOT TESTED BY CISCO.
Any Internet Protocol (IP) addresses used in this document are not intended to be actual addresses. Any examples, command display output, and figures included in the
document are shown for illustrative purposes only. Any use of actual IP addresses in illustrative content is unintentional and coincidental.
© 2014 Cisco Systems, Inc. All rights reserved.
Cisco and the Cisco logo are trademarks or registered trademarks of Cisco and/or its affiliates in the U.S. and other countries. To view a list of Cisco trademarks, go to this
URL: www.cisco.com/go/trademarks. Third-party trademarks mentioned are the property of their respective owners. The use of the word partner does not imply a partnership
relationship between Cisco and any other company. (1110R)
B-0000515-1 09/14