
Network Configuration & Troubleshooting in vSphere 5.1
Agenda

• Basics of Virtual Networking
• Network IO Control
• New Features in vSphere 5.1
• Configuration Best Practices
• Questions
Anatomy of Virtual Networking

[Diagram: an ESX/ESXi host in which VMs (VM0-VM3) attach through virtual NICs (vnic) to port groups on a vSwitch; the VMkernel attaches through vmkernel NICs (vmknic) and the Service Console through its interface (vswif); the vSwitch uses NIC teams of physical NICs (vmnic/pnic) as uplinks to the physical switch and the physical network.]
Three Types of Virtual Switches

• vNetwork Standard Switch (vSS)
  • Created and managed on a per-host basis
  • Supports basic features such as VLANs, NIC teaming, and port security
• vNetwork Distributed Switch (vDS)
  • Created and managed at vCenter
  • Supports all vSS features and more (PVLAN, traffic management, etc.)
  • NOTE: vSS and vDS share the same etherswitch module; only the control paths differ
• Third-party virtual switches, e.g., Cisco Nexus 1000V (N1K)
  • Created and managed by the VSM (either a VM or hardware/Nexus 1010)
  • Supports features typically available in Cisco hardware switches
NIC Teaming

• Originating Virtual Port ID Based
  • Default mode; distributes load on a per-vnic basis
  • Physical switches are not aware/involved
• MAC Based Teaming
  • Distributes load on a source-MAC-hash basis
  • Physical switches are not aware/involved
• IP Hash Based
  • Distributes load per source-IP/destination-IP pair (hash)
  • Requires a port channel / EtherChannel on the physical switches
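
For reference, here is a minimal pyVmomi sketch that switches a standard vSwitch to IP-hash teaming; the host name, credentials, and vSwitch name are placeholders, and the same change can be made in the vSphere Client.

```python
# Minimal pyVmomi sketch (placeholder host/credentials/switch name):
# switch a standard vSwitch to IP-hash teaming.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only; verify certs in production
si = SmartConnect(host="esxi.example.com", user="root",
                  pwd="secret", sslContext=ctx)
content = si.RetrieveContent()
host = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True).view[0]
net_sys = host.configManager.networkSystem

# Read the current vSwitch spec and change only the teaming policy.
vss = next(v for v in net_sys.networkInfo.vswitch if v.name == "vSwitch0")
spec = vss.spec
# Valid values: loadbalance_srcid (default), loadbalance_srcmac,
# loadbalance_ip (needs EtherChannel upstream), failover_explicit.
spec.policy.nicTeaming.policy = "loadbalance_ip"
net_sys.UpdateVirtualSwitch(vswitchName="vSwitch0", spec=spec)
Disconnect(si)
```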
Load Based Teaming

• Introduced in vSphere 4.1
• The only traffic-load-aware teaming policy
• Supported only with the vNetwork Distributed Switch (vDS)
• Reshuffles port bindings dynamically
• Moves a flow only when the mean send or receive utilization on an uplink exceeds 75% of capacity
• The default change-over interval is 30 seconds
• In combination with VMware Network IO Control (NetIOC), LBT offers a powerful solution

Refer: http://blogs.vmware.com/performance/2010/12/vmware-load-based-teaming-lbt-performance.html
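
As a rough illustration of the trigger described above (not VMware's actual implementation), the move criterion amounts to a mean-utilization check over the 30-second window:

```python
# Illustrative sketch of the LBT trigger, not VMware code: a flow is moved
# only when an uplink's mean send or receive utilization over the 30 s
# window exceeds 75% of capacity.
def should_rebalance(samples_mbps, capacity_mbps, threshold=0.75):
    """samples_mbps: per-second utilization samples over the 30 s window."""
    mean = sum(samples_mbps) / len(samples_mbps)
    return mean > threshold * capacity_mbps

# A 10 GbE uplink averaging 8 Gb/s over the window triggers a move.
print(should_rebalance([8000] * 30, capacity_mbps=10000))  # True
```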
VLAN Tagging Options

• VST – Virtual Switch Tagging: VLAN tags are applied in the vSwitch; port groups are assigned to a VLAN. VST is the preferred and most common method.
• VGT – Virtual Guest Tagging: VLAN tags are applied in the guest; the port group is set to VLAN "4095" to pass tagged frames through.
• EST – External Switch Tagging: the external physical switch applies the VLAN tags; the vSwitch sees untagged frames.
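
To make VST concrete, here is a minimal pyVmomi sketch that creates a port group whose VLAN tag is applied in the vSwitch; the host, credentials, port group name, and VLAN ID are all placeholders.

```python
# Minimal pyVmomi sketch of VST (placeholder names and VLAN ID): create a
# port group whose tag is applied in the vSwitch.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="esxi.example.com", user="root",
                  pwd="secret", sslContext=ctx)
content = si.RetrieveContent()
host = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True).view[0]

pg = vim.host.PortGroup.Specification(
    name="VM-VLAN100",
    vlanId=100,               # 1-4094 = VST; 4095 = VGT; 0 = EST/untagged
    vswitchName="vSwitch0",
    policy=vim.host.NetworkPolicy())
host.configManager.networkSystem.AddPortGroup(portgrp=pg)
Disconnect(si)
```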


Distributed Virtual Network (vNetwork)

[Diagram: per-host standard vSwitches vs. a vNetwork Distributed Switch spanning hosts, both shown under vCenter.]

vDistributed Switch Architecture

The control plane (CP) and the data plane (DP, also called the I/O plane) are separated:

• CP: configures dvSwitches, dvPortgroups, dvPorts, uplinks, NIC teaming, and so on, and coordinates the migration of ports; it runs on vCenter.
• DP: performs the actual forwarding; it runs inside the VMkernel of each ESX/ESXi host (the hidden per-host vSwitch).

[Diagram: the Distributed vSwitch control plane on vCenter, with the I/O plane implemented as a vSwitch inside each ESX host, presented together as one Distributed vSwitch.]
vSwitch vs. dvSwitch vs. Cisco N1K

Capability                   | vSwitch               | dvSwitch                 | Cisco N1K
-----------------------------|-----------------------|--------------------------|--------------
L2 Switch                    | Yes                   | Yes                      | Yes
VLAN Segmentation            | Yes                   | Yes                      | Yes
802.1Q Tagging               | Yes                   | Yes                      | Yes
Link Aggregation             | Static                | Static & LACP            | Static & LACP
TX Rate Limiting             | Yes                   | Yes                      | Yes
RX Rate Limiting             | No                    | Yes                      | Yes
Unified Management Interface | vSphere Client @ host | vSphere Client @ vCenter | Cisco CLI
PVLAN                        | No                    | Yes                      | Yes
Network I/O Control          | No                    | Yes                      | Yes
Port Mirroring               | No                    | Yes                      | Yes
SNMP, NetFlow, etc.          | No                    | Yes                      | Yes
Load Based Teaming           | No                    | Yes                      | No
Network IO Control

Introduction

vSphere Network IO Control (NIOC) prioritizes network access by continuously monitoring I/O load over the network and dynamically allocating available I/O resources according to specific business needs.

NIOC features:

• Isolation: ensures traffic isolation so that a given flow is never allowed to dominate others, preventing drops and undesired jitter
• Shares: allow flexible partitioning of networking capacity, to help users deal with overcommitment when flows compete aggressively for the same resources
• Limits: enforce traffic bandwidth limits over the VDS's set of dvUplinks
• Load-Based Teaming: efficiently uses the VDS's set of dvUplinks for networking capacity
• IEEE 802.1p tagging: tags packets leaving the vSphere host for proper handling by physical network resources
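
To make the shares model concrete, here is a small illustrative Python sketch (not VMware code) of how shares translate into guaranteed bandwidth under contention; the share values and link speed are hypothetical:

```python
# Illustrative arithmetic for NIOC shares, not VMware code: under
# contention, each active traffic class is guaranteed uplink bandwidth
# in proportion to its shares.
def guaranteed_mbps(shares, uplink_mbps):
    total = sum(shares.values())
    return {cls: uplink_mbps * s / total for cls, s in shares.items()}

# Hypothetical share values on a 10 GbE uplink:
print(guaranteed_mbps({"vm": 100, "vmotion": 50, "iscsi": 50}, 10000))
# -> {'vm': 5000.0, 'vmotion': 2500.0, 'iscsi': 2500.0}
```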
New Features in vSphere 5.1

Distributed Switch Enhancements – vSphere 5.1

Manageability

• Rollback and Recovery
  • Supported only for VMkernel interfaces
  • A great advantage in vDS environments
  • Quick automatic recovery by failing back within 30 seconds
• Config Backup & Restore
  • Back up a VDS or port group configuration asynchronously to disk
  • Restore a VDS or port group configuration from a backup
  • Create a new entity (VDS or port group) from a backup
• MAC Address Management
  • Removes the limit of 64K MAC addresses
  • Supports locally administered MAC addresses (see the sketch after this list)
  • Users can control all 48 bits of the MAC address
• Elastic Port Groups
  • Reduce management effort for static port binding
  • Automatically expand port allocation for vDS port groups
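
As referenced in the MAC Address Management item above, this small Python sketch shows what "locally administered" means at the bit level: set the locally-administered bit (0x02) and clear the multicast bit (0x01) in the first octet.

```python
# Sketch: generate a locally administered, unicast MAC address of the
# kind vSphere 5.1 now lets users assign.
import random

def random_laa_mac():
    octets = [random.randint(0, 255) for _ in range(6)]
    octets[0] = (octets[0] | 0x02) & ~0x01  # locally administered, unicast
    return ":".join(f"{o:02x}" for o in octets)

print(random_laa_mac())  # e.g. '5e:3a:91:0c:7f:21'
```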
Performance & Scale

• LACP
  • Plug and play – automatically configures and negotiates
  • Dynamic – detects link failures and cabling mistakes
• SR-IOV (Single Root I/O Virtualization)
  • One PCI Express (PCIe) adapter presented as multiple separate logical devices to VMs
  • SR-IOV-capable devices get the benefits of direct I/O
  • Lacks vSphere vMotion and HA benefits
• VDS Scale Enhancements
  • Number of VDS per vCenter Server: 32 → 128
  • Number of static port groups per vCenter Server: 5,000 → 10,000
  • Number of distributed ports per vCenter Server: 30,000 → 60,000
  • Number of hosts per VDS: 350 → 500
• Data Plane Performance Improvements
Visibility & Troubleshooting

• Network Health Check
  • Checks VLAN, MTU, and network adapter teaming every minute
  • Requires at least two NICs on the vSwitch for health checks on MTU, VLAN, and teaming
• RSPAN / ERSPAN
  • Allows port mirroring from a VM to a remote host (dedicated VLAN)
  • With ERSPAN, mirrored data can be encapsulated in a GRE tunnel for monitoring across an IP network
• Internet Protocol Flow Information eXport (IPFIX / NetFlow v10)
  • Users can employ templates to define the records
  • Template descriptions are communicated by the VDS to the collector engine
  • IPv6, MPLS, and VXLAN flows can be reported
• SNMP MIB
  • Better security through support for SNMPv3
  • Support for IEEE/IETF networking MIB modules that provide additional visibility into the virtual networking infrastructure
• NetDUMP – added support for VLAN tags and vDS
Network Health Check – Introduction

• Inconsistencies between virtual and physical network configuration lead to connectivity issues
• Prevents the common misconfiguration errors:
  • Mismatched VLAN trunks between virtual switch and physical switch
  • Mismatched MTU settings between vNIC, virtual switch, physical adapter, and the physical switch ports
  • Mismatched teaming configurations
• vSphere admins can now work more closely with network admins
  • Can provide hints and additional data to help resolve network misconfigurations
Security

• BPDU Filter
  • Filters BPDU packets generated by virtual machines
  • Prevents DoS attack situations
  • Feature is available on VSS and VDS
• ACL support via vCloud Networking & Security App
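
A minimal pyVmomi sketch of enabling the filter on one host, assuming it is exposed as the Net.BlockGuestBPDU advanced setting (as on ESXi 5.1); the host name and credentials are placeholders.

```python
# Minimal pyVmomi sketch (placeholder host/credentials): enable the guest
# BPDU filter via the Net.BlockGuestBPDU advanced setting.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="esxi.example.com", user="root",
                  pwd="secret", sslContext=ctx)
content = si.RetrieveContent()
host = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True).view[0]

# The value must match the option's declared type (an integer here).
host.configManager.advancedOption.UpdateOptions(changedValue=[
    vim.option.OptionValue(key="Net.BlockGuestBPDU", value=1)])
Disconnect(si)
```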


Configuration Best Practices

Choosing the Type of Switch

• Size of your deployment
  • If you have a small deployment and need basic network connectivity, a vSS should be sufficient
  • If you have a large deployment, consider vDS/N1K
• Organizational
  • If one group controls both VM deployment and network provisioning, choose vSS/vDS (integrated control via the vSphere Client UI)
  • If a separate network admin group, trained on the Cisco IOS CLI, wishes to maintain control over virtual and physical networking, choose N1K
• Other factors
  • Budget – vDS/N1K requires an Enterprise Plus license
  • Features – vSS features are frozen, vDS features are evolving (ask Cisco about N1K)
Configuration Best Practices: #1

• Enable on physical switch ports:
  • Spanning Tree Protocol – loop avoidance mechanism
  • PortFast – fast convergence after failure
  • Link State Tracking – detection of upstream port failures (on Cisco switches)
  • BPDU Guard
• Validate (a link-check sketch follows this list):
  • Duplex settings, NIC hardware status
  • Link status
  • Switch port status
  • Switch port configuration
  • Jumbo frames configuration
• Ensure adequate CPU resources are available
  • Heavy gigabit networking loads are CPU-intensive
  • Both native and virtualized
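
For the Validate step above, a minimal pyVmomi sketch that lists each physical NIC's link state, speed, and duplex; the host name and credentials are placeholders.

```python
# Minimal pyVmomi sketch (placeholder host/credentials): report link
# state, speed, and duplex for every physical NIC on a host.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="esxi.example.com", user="root",
                  pwd="secret", sslContext=ctx)
content = si.RetrieveContent()
host = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True).view[0]

for pnic in host.config.network.pnic:
    link = pnic.linkSpeed  # None when the link is down
    if link:
        duplex = "full" if link.duplex else "half"
        print(f"{pnic.device}: up, {link.speedMb} Mb/s, {duplex} duplex")
    else:
        print(f"{pnic.device}: link down")
Disconnect(si)
```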
Configuration Best Practices: #2

• Use separate networks to avoid contention
  • For the console OS (host management traffic), VMkernel (vMotion, iSCSI, NFS traffic), and VMs
  • For VMs running heavy networking workloads
  • With explicit failover, set Failback = 'No' to avoid traffic flapping between two network adapters
• Tune VM-to-VM networking on the same host
  • Use the same virtual switch to connect communicating VMs
  • Avoid buffer overflow in the guest driver: tune receive/transmit buffers (refer to KB 1428)
• Use the vmxnet3 virtual device in the guest
  • The default 32-bit guest vNIC is vlance, but vmxnet3 performs better
  • Install VMware Tools to get the vmxnet3 driver
  • e1000 is the default for 64-bit guests
  • Enhanced vmxnet3 is available for several guest OSes
Configuration Best Practices: #3

• Converge network and storage I/O onto 10GE
  • Reduces cabling requirements
  • Simplifies management and reduces cost
• Tools for traffic management
  1. Traffic Shaping (see the sketch after this list)
     • Limits the amount of traffic a vNIC may send/receive
  2. Network I/O Control (vDS + vSphere 4.1)
     • Isolates different traffic classes from each other
     • Each type of traffic is guaranteed a share of the pNIC bandwidth
     • Unused bandwidth is automatically redistributed to other traffic types
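
As a sketch of tool #1, the snippet below applies a traffic-shaping policy to a standard-switch port group via pyVmomi; all names and bandwidth values are placeholders (on a standard vSwitch, shaping applies to traffic the VM transmits).

```python
# Minimal pyVmomi sketch (placeholder names/values): cap a standard-switch
# port group with a traffic-shaping policy.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="esxi.example.com", user="root",
                  pwd="secret", sslContext=ctx)
content = si.RetrieveContent()
host = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True).view[0]
net_sys = host.configManager.networkSystem

pg = next(p for p in net_sys.networkInfo.portgroup
          if p.spec.name == "VM-VLAN100")
spec = pg.spec
spec.policy.shapingPolicy = vim.host.NetworkPolicy.TrafficShapingPolicy(
    enabled=True,
    averageBandwidth=100 * 1000 * 1000,  # bits/s, sustained rate
    peakBandwidth=200 * 1000 * 1000,     # bits/s, burst ceiling
    burstSize=150 * 1000 * 1000)         # bytes allowed in one burst
net_sys.UpdatePortGroup(pgName="VM-VLAN100", portgrp=spec)
Disconnect(si)
```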
Questions ??
Additional Resources – Documentation

Network Configuration & Administration
• http://pubs.vmware.com/vsphere-50/topic/com.vmware.ICbase/PDF/vsphere-ESXi-vcenter-server-50-networking-guide.pdf
• http://www.vmware.com/files/pdf/techpaper/Whats-New-VMware-vSphere-51-Network-Technical-Whitepaper.pdf
• http://blogs.vmware.com/performance/2010/12/vmware-load-based-teaming-lbt-performance.html
• VMware Technical Whitepaper – VMware vSphere Distributed Switch Best Practices
• vSphere 5 Configuration Maximums Guide

Performance Best Practices
• Technical Whitepaper – VMware Network I/O Control: Architecture, Performance and Best Practices for VMware vSphere 4.1
• http://www.vmware.com/pdf/Perf_Best_Practices_vSphere5.0.pdf

KB Articles
• ESX/ESXi hosts have intermittent or no network connectivity (1004109)
• Configuring networking from the ESX/ESXi service console command line (1000258)
• Verifying ESX/ESXi host networking configuration on the service console (1003796)
• Observed IP range does not show network in ESXi or ESX (1006744)
• Configuring the ESXi Management Network from the direct console (1006710)
• Configuring and troubleshooting basic software iSCSI setup (1008083)
• STP may cause temporary loss of network connectivity when a failover or failback event occurs (1003804)
• Multiple-NIC vMotion in vSphere 5 (2007467)
