VMware VCP5-DCV Study Guide
by Antun Peicevic
First edition
Technical editor: Marko Maslac
Copyright© 2015 Geek University Press
Disclaimer
This book is designed to provide information about selected topics for the VMware VCP5-
DCV exam. Every effort has been made to make this book as complete and as accurate as
possible, but no warranty is implied. The information is provided on an as-is basis. Neither
the author, Geek University Press, nor its resellers or distributors will be held liable for
any damages caused or alleged to be caused either directly or indirectly by this book. The
opinions expressed in this book belong to the author and are not necessarily those of Geek
University Press.
Note that this is an unofficial book. VMware, Inc. is in no way affiliated with this
book or its content.
Trademarks
Geek University is a trademark of Signum Soft, LLC, and may not be used without written
permission.
Feedback Information
At Geek University Press, our goal is to create in-depth technical books of the highest
quality and value. Readers’ feedback is a natural continuation of this process. If you have
any comments about how we could improve our books and learning resources for you, you
can contact us through email at [email protected]. Please include the book title
in your message. For more information about our books, visit our website at http://geek-
university.com.
About the author
Antun Peicevic is a systems engineer with more than 10 years of experience in the
internetworking field. His certifications include CCNA Routing and Switching, CompTIA
Network+, CompTIA Security+, and others. He is the founder and editor of geek-
university.com, an online education portal that offers courses covering various aspects of
IT system administration. Antun can be reached at [email protected].
About this book
This book was written to help you in preparation for the VCP5-DCV (VMware Certified
Professional – Data Center Virtualization) certification. VCP5-DCV is a certification
from VMware that deals with data center virtualization. This certification confirms that
you have the education needed to successfully install, manage, and deploy VMware
vSphere environments.
Audience
This book is designed for people with some experience in the world of virtualization.
Although the book presumes some knowledge about computer systems in general, it is
customized for beginners.
Prerequisites
You should have a basic understanding of computers. You should know how to download
and install a program in Windows and have some basic knowledge of system administration
(configuring IP addresses, connecting to the Internet, installing Windows, etc.).
What you’ll learn
You will learn how to set up your own virtual infrastructure using VMware vSphere. You
will learn how to deploy a virtual machine on ESXi, how to set up a virtual network,
migrate virtual machines using vMotion, set up Fault Tolerance, and more.
Here is the full list of topics covered in the book:
Chapter 6 - Storage
Storage technologies for ESXi
Storage protocols for ESXi
What is a datastore?
Virtual Machine File System (VMFS)
Raw Device Mapping (RDM)
Chapter 7 - iSCSI
iSCSI SAN components
iSCSI naming and addressing
iSCSI initiators
Network configuration for iSCSI
iSCSI target discovery
VMkernel port for iSCSI software initiator
Configure iSCSI software initiator
iSCSI CHAP overview
Configure iSCSI CHAP
Chapter 8 - NFS
NFS (Network File System) overview
NFS components
Access controls in NFS
Configure NFS datastore
Chapter 11 - Templates
What is a virtual machine template?
Create virtual machine template
Update virtual machine template
Customize guest OS
Deploy VM from template
Clone virtual machine
Chapter 13 - VM migration
What is VM migration?
VM migration types
vSphere vMotion explained
vSphere vMotion process
vMotion requirements
CPU compatibility for vMotion
Hide or expose NX/XD bit
VMware CPU identification utility
Create virtual switch and VMkernel port group for vMotion
Use vSphere vMotion
vSphere Storage vMotion explained
Use vSphere Storage vMotion
Enhanced vMotion explained
Chapter 14 - VM snapshots
Virtual machine snapshot
VM snapshot files
Take snapshots
Revert snapshot
Delete snapshot
Consolidate snapshots
Remove virtual machine
Chapter 15 - vApps
vApps explained
Create vApp
vApp settings
Chapter 16 - Security
Security Profile services
Configure ESXi firewall
Lockdown mode explained
Integrate ESXi host with Active Directory
Access control system
Users and groups
Roles explained
Create custom role
Objects explained
Assign permissions
Chapter 18 - Reporting
Performance charts in vCenter Server
Monitor CPU utilization
Monitor active memory utilization
Monitor disk usage
Monitor network performance
Real-time and historical statistics
Log levels in vCenter Server
Chapter 19 - Alarms
Alarms in vSphere
Alarm trigger types
Actions explained
Notifications explained
Create alarms
Acknowledge alarm
What is vCenter Operations Manager?
Chapter 22 - Scalability
vSphere Distributed Resource Scheduler (DRS) explained
vSphere Distributed Resource Scheduler (DRS) requirements
Set DRS automation level
Enhanced vMotion Compatibility (EVC) explained
Enhanced vMotion Compatibility (EVC) requirements
DRS affinity rules
Preferential DRS rules
Required DRS rules
Enable DRS
Create DRS affinity rule
Create DRS anti-affinity rule
Create VM to host rule
Maintenance mode explained
Chapter 1 - vSphere overview
VMware vSphere components
Why use virtualization?
Resource sharing explained
What is CPU virtualization?
Physical and virtualized host memory
Physical and virtual networking
VMware vSphere VMFS
VMware vSphere components
VMware vSphere is a software suite that includes components like ESXi, vCenter Server,
vSphere Client, vCenter Orchestrator, vSphere Update Manager, etc. vSphere components
provide virtualization, management, resource optimization and many other features useful
for a virtual environment. vSphere is used to virtualize and aggregate the underlying
physical hardware resources and to provide the pool of virtual resources to the data center.
It also supports some advanced virtualization features such as disaster recovery, high
availability, fault tolerance, dynamic resource allocation, etc.
People new to VMware’s virtualization platform sometimes get confused about
vSphere and its components. Remember that vSphere is a suite of products, just like
Microsoft Office (a suite of office products such as Word, Excel, Access), and not a single
product that you can install in your environment.
Here is a list and description of the most important components included in the vSphere
product suite:
ESXi - a type 1 hypervisor. A hypervisor is a piece of software that creates and runs
virtual machines. In vSphere, virtual machines are installed on ESXi servers.
vCenter Server - a centralized management platform and framework that lets you
manage virtual machines and ESXi hosts centrally.
vSphere Update Manager - an add-on package for vCenter Server that helps you keep
your ESXi hosts and VMs patched with the latest updates.
vSphere Web Client - a web-based user interface used for managing a virtual
infrastructure.
vSphere Client - a locally installed Windows application with a graphical user
interface (GUI) for all day-to-day management tasks and for the advanced
configuration of a virtual infrastructure.
vSphere is licensed in three editions, each offering a different feature set:
vSphere Standard
vSphere Enterprise
vSphere Enterprise Plus
Why use virtualization?
There are many reasons why you should consider using virtualization in your
environment. Here are some of them:
Resource sharing explained
With the default settings, all VMs on the same ESXi host receive an equal share of the
resources available.
What is CPU virtualization?
CPU virtualization involves a single CPU acting as if it were multiple separate CPUs. The
most common reason for doing this is to run multiple different operating systems on one
machine. CPU virtualization emphasizes performance and runs directly on the available
CPUs whenever possible. The underlying physical resources are used whenever possible
and the virtualization layer runs instructions only as needed to make virtual machines
operate as if they were running directly on a physical machine.
When many virtual machines are running on an ESXi host, those virtual machines might
compete for CPU resources. When CPU contention occurs, the ESXi host time-slices the
physical processors across all virtual machines so each virtual machine runs as if it has its
specified number of virtual processors.
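The time-slicing described above amounts to proportional-share scheduling: each contending VM receives CPU time in proportion to its shares. The sketch below is an illustrative model only (the function name and MHz figures are invented for the example; the real ESXi scheduler also honors reservations and limits):

```python
def cpu_time_slices(vm_shares, total_mhz):
    """Divide physical CPU capacity proportionally to each VM's shares.

    Illustrative sketch only -- the real ESXi CPU scheduler also
    applies reservations and limits, which are omitted here.
    """
    total_shares = sum(vm_shares.values())
    return {vm: total_mhz * s / total_shares for vm, s in vm_shares.items()}

# With the default (equal) shares, contending VMs split the CPU evenly:
equal = cpu_time_slices({"vm1": 1000, "vm2": 1000}, total_mhz=6000)

# A VM with twice the shares receives twice the CPU time under contention:
weighted = cpu_time_slices({"vm1": 2000, "vm2": 1000}, total_mhz=6000)
```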
Note that CPU virtualization is not the same thing as emulation. With emulation, all
operations are run in software by an emulator. The emulator emulates the original
computer’s behavior by accepting the same data or inputs and achieving the same
results.
Physical and virtualized host memory
In a nonvirtual environment, the operating system assumes it owns all physical memory
available. When an application starts, it uses interfaces provided by the OS to allocate or
release virtual memory pages during the execution. Virtual memory is a technique used in
most operating systems, and is supported by almost all modern CPUs. Virtual memory
creates a uniform virtual address space for applications and allows the OS and hardware to
handle the address translation between the virtual and physical address space. This
technique adapts the execution environment to support large address spaces, process
protection, file mapping, and swapping in modern computer systems.
In a vSphere environment, the VMware virtualization layer creates a contiguous
addressable memory space for the virtual machine when it is started. The allocated
memory space is configured when the virtual machine is created and has the same
properties as the virtual address space. This configuration allows the hypervisor to run
multiple virtual machines simultaneously while protecting the memory of each virtual
machine from being accessed by others.
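The extra translation layer can be pictured as two chained mappings: the guest OS maps guest-virtual to guest-physical pages, and the hypervisor maps guest-physical to machine pages. The toy model below (class and page numbers are invented for illustration; real systems use hardware-assisted nested paging) shows how this isolates each VM's memory:

```python
class GuestMemory:
    """Toy model of virtualized memory translation:
    guest-virtual -> guest-physical (guest OS page table), then
    guest-physical -> machine page (hypervisor mapping).
    Page numbers only; a sketch, not how ESXi is implemented."""

    def __init__(self, guest_page_table, hypervisor_map):
        self.guest_pt = guest_page_table   # guest virtual -> guest physical
        self.hv_map = hypervisor_map       # guest physical -> machine page

    def translate(self, guest_virtual_page):
        guest_physical = self.guest_pt[guest_virtual_page]
        return self.hv_map[guest_physical]

vm_a = GuestMemory({0: 0, 1: 1}, {0: 7, 1: 9})
vm_b = GuestMemory({0: 0, 1: 1}, {0: 3, 1: 5})

# Both guests believe they own guest-physical page 0, yet each is backed
# by a different machine page, so one VM cannot touch the other's memory:
assert vm_a.translate(0) != vm_b.translate(0)
```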
Physical and virtual networking
The key virtual networking components in virtual architecture are virtual Ethernet
adapters and virtual switches. A virtual machine can be configured with one or more
virtual Ethernet adapters. A virtual switch enables virtual machines on the same ESXi host
to communicate with each other using the same protocols used over physical switches,
without the need for additional hardware.
VMware technology lets you link local virtual machines to each other and to the external
network through a virtual switch. A virtual switch, just like any physical Ethernet switch,
forwards frames at the data link layer. An ESXi host can contain multiple virtual switches.
The virtual switch connects to the external network through physical Ethernet adapters.
The virtual switch is capable of binding multiple virtual network cards together, offering
greater availability and bandwidth to the virtual machines.
Virtual switches are similar to modern physical Ethernet switches in many ways. Like a
physical switch, each virtual switch is isolated and has its own forwarding table, so every
destination the switch looks up can match only ports on the same virtual switch where the
frame originated. This feature improves security, making it difficult for hackers to break
virtual switch isolation.
Virtual switches also support VLAN segmentation at the port level, so that each port can
be configured as an access or trunk port, providing access to either single or multiple
VLANs.
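The two properties just described, per-switch forwarding tables and port-level VLAN membership, can be sketched in a few lines. This is a toy model (class names and MAC addresses are invented for the example), not how the vmkernel implements vSwitches:

```python
class VirtualSwitch:
    """Toy virtual switch: each switch keeps its own forwarding table,
    so a lookup can only match ports on the same switch, and each port
    carries a VLAN ID that must match for delivery. Sketch only."""

    def __init__(self, name):
        self.name = name
        self.ports = {}            # MAC address -> (port, vlan)

    def connect(self, mac, port, vlan):
        self.ports[mac] = (port, vlan)

    def forward(self, dst_mac, vlan):
        """Return the destination port, or None if the MAC is unknown
        on this switch or belongs to a different VLAN."""
        entry = self.ports.get(dst_mac)
        if entry is None or entry[1] != vlan:
            return None
        return entry[0]

vswitch0 = VirtualSwitch("vSwitch0")
vswitch1 = VirtualSwitch("vSwitch1")
vswitch0.connect("00:50:56:aa:aa:aa", port=1, vlan=10)
vswitch1.connect("00:50:56:bb:bb:bb", port=1, vlan=10)

# A MAC learned on another virtual switch is invisible -- the isolation
# that makes it hard to break out of a virtual switch:
assert vswitch0.forward("00:50:56:bb:bb:bb", vlan=10) is None
```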
VMware vSphere VMFS
Physical file systems usually allow only one host to have read-write access to the same file
at a given time. By contrast, VMware vSphere VMFS enables a distributed storage
architecture that allows multiple ESXi hosts concurrent read and write access to the same
shared storage resources. VMFS is optimized for a virtualized environment and offers a
high-performance cluster file system designed specifically for virtual machines. It uses
distributed journaling of its file system metadata changes to allow fast and resilient
recovery in the event of a hardware failure. VMFS is also the foundation for distributed
infrastructure services such as live migration of virtual machines and virtual machine
files, dynamic balancing of workloads across available compute resources, automated
restart of virtual machines, and fault tolerance.
VMFS provides an interface to storage resources so that several storage protocols (Fibre
Channel, Fibre Channel over Ethernet, NAS, iSCSI) can be used to access datastores on
which virtual machines can reside. Dynamic growth of a VMFS datastore, through
aggregation of storage resources or expansion of an existing extent, enables you to
increase a shared storage resource pool with no downtime. In addition, you have a
means for mounting a point-in-time copy of a datastore.
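The "aggregation" idea above is simple arithmetic: a VMFS datastore can span multiple extents, and adding one grows the shared pool while the datastore stays online. A minimal sketch (function name and sizes are invented for the example):

```python
def datastore_capacity_gb(extents):
    """Total capacity of a VMFS datastore that spans multiple extents.
    Toy arithmetic only -- real extent management is done by vSphere."""
    return sum(extents)

extents = [500, 750]                      # two LUNs backing the datastore
before = datastore_capacity_gb(extents)
extents.append(250)                       # aggregate another storage resource
after = datastore_capacity_gb(extents)    # pool grew with no downtime
```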
Chapter 2 - Getting started
What is VMware ESXi?
ESXi installation
Basic ESXi configuration
GUI in vSphere
vSphere Client installation
vSphere Web Client installation
VMware Client Integration Plug-in installation
What is vCenter Server?
Communication between vCenter Server and ESXi
What is vCenter Server Appliance?
vCenter Server Appliance installation
vCenter Server Appliance configuration
Install vCenter Server (Simple Install)
What is VMware ESXi?
The core of the vSphere product suite is the hypervisor called ESXi. A hypervisor is a
piece of software that creates and runs virtual machines. Hypervisors are divided into two
groups:
Type 1 hypervisors - also called bare-metal hypervisors, these run
directly on the system hardware. A guest operating system runs at a level
above the hypervisor. VMware ESXi is a Type 1 hypervisor that runs on the host
server hardware without an underlying operating system.
Type 2 hypervisors - hypervisors that run within a conventional operating system
environment, where the host operating system provides I/O device support and memory
management. Examples of Type 2 hypervisors are VMware Workstation and Oracle
VirtualBox.
ESXi provides a virtualization layer that abstracts the CPU, storage, memory and
networking resources of the physical host into multiple virtual machines. That means that
applications running in virtual machines can access these resources without direct access
to the underlying hardware. VMware refers to the hypervisor used by VMware ESXi as
vmkernel. vmkernel receives requests from virtual machines for resources and presents the
requests to the physical hardware.
ESXi is supported on Intel processors (Xeon and above) and AMD Opteron processors.
ESXi includes a 64-bit VMkernel and hosts with 32-bit-only processors are not supported.
However, both 32-bit and 64-bit guest operating systems are supported. ESXi supports up
to 4,096 virtual processors per host, 320 logical CPUs per host, 512 virtual machines per
host, and up to 4 TB of RAM per host.
ESXi can be installed on a hard disk, USB device, or SD card. It has an ultralight footprint
of approximately 144 MB for increased security and reliability.
Prior to vSphere 5, the hypervisor was available in two forms: VMware ESX and
VMware ESXi. Starting with vSphere 5, ESXi is the only hypervisor architecture
option to deploy vSphere.
Here is an example of an ESXi host accessed through the Direct Console User Interface:
ESXi installation
VMware offers a free 60-day evaluation of ESXi. To download your copy of ESXi, go
to http://www.vmware.com and find vSphere Hypervisor (ESXi) under the Products tab.
You will need to register to download your copy of ESXi.
After you download the ESXi ISO image, burn that image to a CD. Next, you need to set
up the BIOS to boot from the CD-ROM device. Here are the steps:
First, when computer boots, enter the BIOS setup. This can be done in a number of
different ways, depending on the motherboard type of your computer. Entering the BIOS
is usually done by pressing the F2, F10 or ESC key while the computer is booting. Check
the documentation of your motherboard vendor for more information.
In the BIOS utility, go to the Boot tab:
Make sure that your CD-ROM drive is the first listed device. If not, follow the instructions
on the right side of the screen to move the CD-ROM entry to the top.
The next step is to save the changes you’ve just made. Go to the Exit tab and select Exit
saving changes:
Basic ESXi configuration
After the installation completes and the host reboots, press F2 to access the configuration
screen. You will need to provide the administrative login credentials that you’ve set up
during the ESXi installation:
In the System Customization menu, select Configure Management Network and press
Enter:
Now, select IP Configuration:
Select Set static IP address and network configuration and enter the network configuration
details:
Press Enter to accept the IP configuration changes. And that’s it! You can
now use vSphere Client to access your ESXi host using the IP address you’ve just
configured.
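Before typing the values into the Direct Console, it can help to sanity-check a static IPv4 configuration: the default gateway must lie inside the subnet defined by the IP address and netmask. A small sketch using Python's standard ipaddress module (the helper name and addresses are examples, not part of ESXi):

```python
import ipaddress

def check_static_ip(ip, netmask, gateway):
    """Sanity-check a static management-network configuration before
    applying it in the DCUI: the gateway must sit inside the subnet
    defined by the IP address and netmask. Illustrative helper only."""
    network = ipaddress.ip_network(f"{ip}/{netmask}", strict=False)
    return ipaddress.ip_address(gateway) in network

# A gateway in the same /24 is reachable; one in a foreign subnet is not:
assert check_static_ip("192.168.1.50", "255.255.255.0", "192.168.1.1")
assert not check_static_ip("192.168.1.50", "255.255.255.0", "10.0.0.1")
```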
GUI in vSphere
Besides the direct console user interface, there are two other interfaces that can be used to
configure your vSphere environment:
Here is how you can install vSphere Client using the vCenter Server installation media:
1. Go to the location of the installation software and double-click autorun.exe:
Now you can use vSphere Client to manage your ESXi host.
vSphere Web Client installation
vSphere Web Client is usually installed on a Windows Server instance using the Simple
Install method, which installs vCenter Single Sign-on, vSphere Web Client, vCenter
Inventory Service and vCenter Server on the same physical server or virtual machine.
This section describes the Custom Install method, which enables you to install each
vSphere component separately. If you are using vCenter Server Appliance, a
preconfigured Linux-based virtual machine with all vSphere components included, feel
free to skip this section, since vSphere Web Client is installed as part of the appliance.
vSphere Web Client is a cross-platform web application that enables you to connect to a
vCenter Server system to manage ESXi hosts. You can install it using VMware vCenter
Server Installer. Note that vCenter Single Sign-On (SSO) is a prerequisite.
Here are the steps for installing vSphere Web Client using the Custom Install method:
1. Go to the location of the vCenter Server installation software and double-click
autorun.exe.
2. In the VMware vCenter Installer window, select vSphere Web Client and click Install:
5. Select “I accept the terms in the license agreement” and click Next:
6. Choose the installation folder:
7. Enter the connection information for vSphere Web Client. Make sure that the ports in
question are not already used:
8. Enter the SSO administrator username and password and SSO Lookup Service URL.
The administrative user account is the administrative account used in vsphere.local, a
domain used internally by vSphere that you’ve created during the vCenter SSO
installation. The Lookup Service URL takes the form
https://SSO_host_FQDN_or_IP:7444/lookupservice/sdk, where SSO_host_FQDN_or_IP
is the system on which the SSO is installed and 7444 is the default vCenter Single
Sign-On HTTPS port number:
9. Review the SSL fingerprint of the SSO Lookup Service certificate and click Yes:
vSphere Client and vSphere Web Client - both tools can be used to manage your
vCenter Server. vSphere Web Client is the recommended way to manage an ESXi
host when the host is managed by vCenter Server.
vCenter Server database - stores the inventory items, security roles, resource pools,
performance data, and other information. Oracle and Microsoft SQL Server are
supported databases for vCenter Server.
vCenter Single Sign-On (SSO) - allows authentication against multiple user
repositories, such as Active Directory or OpenLDAP.
Managed hosts - ESXi hosts and their respective virtual machines.
1. Windows-based installation
2. vCenter Server Appliance deployment
Both options provide features such as inventory management, virtual machine migration,
high availability, distributed resource scheduling, etc. Both options offer an identical user
experience. In fact, users connecting to vCenter Server will not even know on which
platform vCenter Server is installed.
vCenter Server Appliance is a preconfigured SUSE Linux-based virtual machine
optimized for running vCenter Server and the associated services. It is a prepackaged 64-
bit application with an embedded PostgreSQL database that supports up to 100 hosts and
3000 virtual machines.
vCenter Server Appliance comes as an Open Virtualization Format (OVF) template. The
appliance is imported to an ESXi host and configured through the web-based interface. It
comes pre-installed with all the components needed to run a vCenter Server, including
vCenter SSO (Single Sign-on), Inventory Service, vSphere Web Client and the vCenter
Server itself.
Here are the main benefits of using the vCenter Server Appliance over the vCenter Server
Windows installation:
The vCenter Server Appliance can be downloaded from the vCenter Server 5.x download
page: https://my.vmware.com/web/vmware/downloads
Here are the system requirements for vCenter Server 5.5 Appliance:
Disk storage on the host machine:
vCenter Server Appliance 5.0.x / 5.1.x: at least 7 GB, a maximum of 80 GB
vCenter Server Appliance 5.5.x: at least 70 GB, a maximum of 125 GB
Memory:
Very small inventory (10 or fewer hosts, 100 or fewer virtual machines) - at least 8
GB.
Small inventory (10-50 hosts or 100-1500 virtual machines) - at least 16 GB.
Medium inventory (the maximum inventory supported with the embedded database;
50-100 hosts or 1500-3000 virtual machines) - at least 24 GB.
Large inventory (More than 400 hosts or 4000 virtual machines) - at least 32 GB.
Processor:
The vCenter Server Appliance must also have the JVM heap settings configured. Here are
the recommended values:
vCenter Server Appliance hardware | Tomcat | Query Service (QS) | Policy Based Storage Management (SPS)
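The memory sizing tiers listed above can be condensed into a simple lookup. The function below is a rough planning aid following the thresholds in the text, not an official VMware sizing tool:

```python
def recommended_ram_gb(hosts, vms):
    """Map inventory size to the minimum appliance RAM from the tiers
    above. Thresholds follow the text; treat this as a rough planning
    aid only, not official guidance."""
    if hosts <= 10 and vms <= 100:
        return 8      # very small inventory
    if hosts <= 50 and vms <= 1500:
        return 16     # small inventory
    if hosts <= 100 and vms <= 3000:
        return 24     # medium (embedded-database maximum)
    return 32         # large inventory

# A lab with 5 hosts and 50 VMs fits the very small tier:
assert recommended_ram_gb(5, 50) == 8
```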
In this example we will use the vCenter Server Appliance OVA package, which is an
archive file with the OVF directory inside.
Here are the steps to deploy the vCenter Server virtual appliance:
1. Connect to the ESXi host on which the appliance will be installed using the vSphere
Client:
username: root
password: vmware
Select whether you want to allow vCenter Server Appliance to send technical data to
VMware and click Next:
In the next screen, select Configure with default settings if you don’t want to assign a
static IP address to your appliance and configure options such as SSO and database
settings. You can configure these settings later.
vCenter Server is usually installed on Windows using the Simple Install method. This
method installs vCenter Single Sign-on, vSphere Web Client, vCenter Inventory Service
and vCenter Server on the same physical server or virtual machine.
To start the vCenter Server installation using the Simple Install method, launch the
vCenter Server Installer. Select the Simple Install option on the left and click Install:
Click Next to start the setup:
Choose a site name. The site name is used in environments where there are SSO servers in
multiple sites.
Next, you are prompted to choose the TCP port number for the SSO service. You can
leave the default value of 7444:
During the installation, you will be prompted to enter the license key, select the database
solution for vCenter Server (by default, Microsoft SQL Server 2008 Express), select the
user type that the vCenter Server should run with (by default, a local system account will
be used), and to select the inventory size, depending on the requirements of your
environment.
You can now connect to your vCenter Server instance using the following URL:
https://<server.domain.com>:9443/vsphere-client
Use the username [email protected] and the password you’ve configured
during the vCenter Server installation.
Chapter 3 - vCenter SSO and vCenter Inventory
What is vCenter Single Sign-On (SSO)?
SSO identity sources
Deployment modes in vCenter SSO
vCenter SSO installation
Configure vCenter SSO policies
vCenter Server and Active Directory
vCenter Server inventory explained
vCenter Inventory Service installation
Add ESXi host to vCenter Server Inventory
vCenter Server installation (Custom Install method)
What is vCenter Single Sign-On (SSO)?
With vCenter Single Sign-On (SSO), you can access everything you need across the
virtual infrastructure with a single username and password, which makes the
authentication process simpler and faster. vSphere components in your virtual
infrastructure, such as vCenter Server, vCenter Orchestrator and vCloud Director can use
SSO to securely communicate with each other using a secure token mechanism. vCenter
Single Sign-On (SSO) is a prerequisite for installing vCenter Server; you must install SSO
before installing the vCenter Server.
The vCenter SSO can be configured to authenticate against multiple user repositories, also
called identity sources, such as Active Directory and OpenLDAP.
Here is a description of vCenter SSO authentication:
The default identity source (vsphere.local)
The default identity source called vsphere.local is created when vCenter SSO is installed.
This identity source is used when a user logs in without a domain name. The user named
administrator is created in this domain and can be used to add identity sources, set the
default identity source, change the password and lockout policy and manage users and
groups in the vsphere.local domain.
Users who do not belong to the vsphere.local domain must specify their domain name in
one of two ways:
1. specifying the domain name prefix, for example, DOMAIN\john
2. including the domain, for example, [email protected]
Deployment modes in vCenter SSO
There are three different deployment modes to choose from when installing vCenter SSO:
1. Basic deployment mode - this is the most common SSO deployment option. This mode
contains only one vCenter SSO node. It is appropriate when you have a single
vCenter Server instance with an inventory of up to 1,000 hosts and 10,000 virtual
machines, or when you are using vCenter Server Appliance. This option is used with the vCenter
Simple Install process.
2. Multiple vCenter SSO instances in the same location - this deployment mode provides
HA (High Availability) for your vCenter SSO environment. In this mode, you install a
primary vCenter SSO instance and one or more additional vCenter SSO nodes. The SSO
nodes replicate information with each other.
3. Multiple vCenter SSO instances in multiple locations - this deployment mode is
required when you have geographically dispersed vCenter Servers and you must
administer them in Linked Mode. SSO nodes replicate information with each other.
vCenter SSO installation
You can install vCenter SSO using the VMware vCenter Installer. Two installation options
are supported:
1. use the Simple Install option to deploy the basic mode. This option installs vCenter
SSO, vCenter Server, vSphere Web Client and vCenter Inventory Service on the same host
and it is appropriate for most deployments.
2. use the Custom Install option to install the multisite or HA mode. This option enables
you to install vCenter SSO separately from vCenter Server, vCenter Inventory Service or
vSphere Web Client. This is often recommended for medium to large environments.
NOTE - You can also use vCenter Server Appliance, which is a preconfigured SUSE
Linux-based virtual machine optimized for running vCenter Server and the associated
services, such as vCenter SSO.
In this section we will describe how to install vCenter SSO using the Custom Install
option.
To start the installation, select vCenter Single Sign-On from the VMware vCenter
Installer:
Choose a password for the Single Sign-On (SSO) administrator user. This is the
administrative account used in the vsphere.local, which is a domain used internally by
vSphere:
Choose a site name. The site name is used in environments where there are SSO servers in
multiple sites.
Next, you are prompted to choose the TCP port number for the SSO service. You can
leave the default value of 7444 in most cases:
Choose the install location and click Next:
Review the install options and click Install to start the installation:
Configure vCenter SSO policies
There are three vCenter SSO policies that you can edit to conform to your company’s
security standards:
1. Password policy - a set of rules and restrictions on the format and lifespan of user
passwords. Note that this policy applies only to users in the vCenter Single Sign-On
domain (vsphere.local).
To edit the password policy parameters, log in to your vCenter Server with a user that has
vCenter Single Sign-On administrator privileges and go to Administration > Single Sign-
On > Configuration:
Under the Policies tab you can see the current password policies. For example, you can
see that vCenter Single Sign-On passwords are set to expire after 90 days. Click the Edit
button on the right to edit the password policy parameters:
You can configure the following parameters:
Click the Edit button on the right to edit the lockout policy parameters:
Max. number of failed login attempts - Maximum number of failed login attempts that are
allowed before the account is locked.
Time interval between failures (seconds) - Time period in which failed login attempts must
occur to trigger a lockout.
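How the two lockout parameters interact can be modeled in a few lines: the account locks only when the maximum number of failures falls within the time window. A toy model for illustration (class name and default values are invented, not vCenter's internals):

```python
class LockoutPolicy:
    """Toy model of the two lockout parameters above: lock the account
    when max_failures failed logins occur within window_seconds."""

    def __init__(self, max_failures=5, window_seconds=180):
        self.max_failures = max_failures
        self.window = window_seconds
        self.failures = []          # timestamps of failed attempts

    def record_failure(self, timestamp):
        """Record a failed login; return True if the account is now locked."""
        self.failures.append(timestamp)
        recent = [t for t in self.failures if timestamp - t < self.window]
        return len(recent) >= self.max_failures

policy = LockoutPolicy(max_failures=3, window_seconds=60)
assert policy.record_failure(0) is False
assert policy.record_failure(10) is False
assert policy.record_failure(20) is True    # 3 failures within 60 seconds
```

Failures that fall outside the time window no longer count toward the lockout, which is exactly why both parameters must be tuned together.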
Click the Edit button on the right to edit the token policy parameters:
vCenter Server and Active Directory
2. On the vCenter Server tab, click Authentication. Check the Active Directory
Enabled check box and type the domain name and domain administrator user name and
password:
3. Now you need to reboot your vCenter Server. Click on the System tab and click Reboot:
Now you need to add Active Directory as an identity source. Here are the steps:
1. Log in to vCenter Server at https://[WEB_CLIENT_FQDN]:9443/vsphere-client. In the
navigation bar on the left, click Administration. Under Single Sign-On, click
Configuration:
5. Click on the Add Identity source icon (the green plus sign) to add a new identity source.
You will need to provide the following information:
Identity source type - select Active Directory as a LDAP server.
Name - type the domain name.
Base DN for users - type the Base DN for users. This parameter describes where to load
users. If you’re using a default Active Directory setup, all users are located in the
Users folder under your domain. Our domain is mydomain.local, so in LDAP form, that’s
cn=Users, dc=mydomain, dc=local.
Domain Name - type the FQDN.
Domain alias - type the domain name.
Base DN for groups - type the Base DN for groups. This parameter describes where to
load groups. In our case, the groups are located inside the Users folder.
Primary server URL - type the URL of your domain controller. Precede the URL with
ldap://.
Secondary server URL - type the URL of your secondary domain controller, if you have
one.
Username - type the domain administrator username.
Password - type the domain administrator password.
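The LDAP form of the Base DN shown above follows a mechanical rule: each dot-separated part of the domain becomes a dc= component, prefixed by the container. A small sketch (the helper name is invented; mydomain.local matches the example in the text):

```python
def base_dn(domain, container="Users"):
    """Build the LDAP Base DN for a default Active Directory layout,
    as in the example above: mydomain.local -> cn=Users,dc=mydomain,dc=local.
    Illustrative helper only."""
    dcs = ",".join(f"dc={part}" for part in domain.split("."))
    return f"cn={container},{dcs}"

assert base_dn("mydomain.local") == "cn=Users,dc=mydomain,dc=local"
```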
7. Click Test Connection. If your parameters are correct, you should get the following
message:
Each location might have its own team of vSphere administrators, its own customers, and
its own set of hosts, virtual machines, networks, and other objects. Note that the interaction
across datacenters is limited. For example, you can migrate a virtual machine using
vSphere vMotion from one host to another in the same datacenter, but not to a host in a
different datacenter.
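The same-datacenter restriction above amounts to a simple validity check before a migration is allowed. A sketch with a hypothetical host-to-datacenter mapping (names are invented for the example):

```python
def can_vmotion(host_datacenters, src_host, dst_host):
    """vMotion, as described above, moves a running VM between two
    different hosts within the same datacenter; cross-datacenter
    migration is rejected. Illustrative check only."""
    return (src_host != dst_host
            and host_datacenters[src_host] == host_datacenters[dst_host])

dcs = {"esxi1": "DC-East", "esxi2": "DC-East", "esxi3": "DC-West"}

assert can_vmotion(dcs, "esxi1", "esxi2")        # same datacenter: allowed
assert not can_vmotion(dcs, "esxi1", "esxi3")    # different datacenter: rejected
```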
The topmost object in the vCenter Server inventory is called the root object and
represents the vCenter Server system itself. It cannot be removed from the inventory.
Items in a data center can be placed into folders to better organize the system. For
example, you can place virtual machines in folders that are based on function and ESXi
hosts in folders based on the CPU family:
vCenter Inventory Service installation
vCenter Inventory Service is usually installed on a Windows Server instance using the
Simple Install method, which installs vCenter Single Sign-on, vSphere Web Client,
vCenter Inventory Service and vCenter Server on the same physical server or virtual
machine. This section describes the Custom Install method, which enables you to
install each component separately. If you are using the vCenter Server Appliance, a
preconfigured Linux-based virtual machine with all vSphere components included, feel
free to skip this lesson, since vCenter Inventory Service is installed as part of the
appliance.
vCenter Inventory Service is used to manage the vSphere Web Client inventory objects
and property queries that the client requests when users navigate the vSphere environment.
In this lesson we will describe how you can install this service using the Custom Install
method in the vCenter Server installer. Note that the vCenter SSO is a prerequisite for
installing vCenter Inventory Service.
Start your vCenter Server Installer by running autorun.exe. Select vCenter Inventory
Service under the Custom Install option and click Install:
The Fully Qualified Domain Name (FQDN) of the host where vCenter Inventory Service
is being installed should be auto-populated:
Select the port numbers for vCenter Inventory Service and click Next:
Select the option that fits your environment and click Next:
Next, enter the SSO administrator username and password and SSO Lookup Service URL.
The administrative user account is the administrative account used in vsphere.local, a
domain used internally by vSphere that you’ve created during the vCenter SSO
installation. The Lookup Service URL takes the form
https://SSO_host_FQDN_or_IP:7444/lookupservice/sdk, where SSO_host_FQDN_or_IP
is the system on which SSO is installed and 7444 is the default vCenter Single
Sign-On HTTPS port number:
Review the SSL fingerprint of the SSO Lookup Service certificate and click Yes:
Now you can add an ESXi host to the vCenter Server inventory. Here are the steps:
1. Right-click the datacenter object you’ve created in the previous step and select Add
Host:
2. Type the following information in the Add Host wizard:
Host name or IP address - the fully qualified domain name or IP address of your ESXi
host:
User name and password - the username and password of the root account on the ESXi host.
Note that these are not the credentials of the vCenter Server administrator:
If you get the security alert about the authenticity of the host, click Yes to trust the host:
Host summary - review the summary information and click Next:
Assign license - select whether you want to assign a new license key or use an existing
one:
Lockdown mode - you can disable access to the ESXi host for the root account after
vCenter Server takes control of the host. This way the ESXi host can be managed only
through vCenter Server or the local console:
Enter the license key for your vCenter Server installation. If you don’t have one, leave the
field blank and vCenter Server will be installed in the 60-day evaluation mode:
Select the database solution for vCenter Server. In this example we will install a
new Microsoft SQL Server 2008 Express instance. If you are planning to have a larger
virtual environment, you should use an external database provider such as Oracle.
Select the user type that the vCenter Server should run with. By default, the vCenter
Server service will use the Windows Local System account:
Select the Linked mode option. This mode allows multiple vCenter Servers to share
information. If this is your first vCenter Server installation, select the first option:
Next, choose the ports for vCenter Server. It is recommended to leave the default values:
To set the JVM memory size, select the option that best describes your environment:
Next, enter the SSO administrator username and password and SSO Lookup Service URL.
The administrative user account is the administrative account used in vsphere.local, a
domain used internally by vSphere that you’ve created during the vCenter SSO
installation. The Lookup Service URL takes the form
https://SSO_host_FQDN_or_IP:7444/lookupservice/sdk, where SSO_host_FQDN_or_IP
is the system on which SSO is installed and 7444 is the default vCenter Single
Sign-On HTTPS port number:
Review the certificate fingerprint and click Yes:
A virtual machine interacts with installed hardware through a thin layer of software called
the hypervisor. The hypervisor provides physical hardware resources dynamically as
needed and allows virtual machines to operate with a degree of independence from the
underlying physical hardware. For example, a virtual machine can be moved from one
physical host to another. Also, its virtual disks can be moved from one type of storage to
another without affecting the functioning of the virtual machine.
By default, ESXi presents the following hardware to the VM:
Phoenix BIOS
Intel motherboard
Intel PCI IDE controller
IDE CD-ROM drive
BusLogic parallel SCSI, LSI Logic parallel SCSI, or LSI Logic SAS controller
AMD or Intel CPU, depending upon the physical hardware
Intel E1000, Intel E1000e, or AMD PCnet NIC
Standard VGA video adapter
Virtual machine files in ESXi
Each VM consists of several types of files stored on a storage device. Here is a list and a
description of some of the files that make up a virtual machine running on ESXi:
The first virtual disk's files have filenames <VM_name>.vmdk and <VM_name>-flat.vmdk.
If a virtual machine has more than one disk, the second and later disks have
the filenames <VM_name>_#.vmdk and <VM_name>_#-flat.vmdk, where # starts at 1. For
example, if a virtual machine named vmhost has two disks, the files would be called
vmhost.vmdk, vmhost-flat.vmdk, vmhost_1.vmdk, and vmhost_1-flat.vmdk.
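The naming convention above can be sketched in a few lines of shell, using the vmhost example from the text:

```shell
#!/bin/bash
# Print the .vmdk file pairs for a VM's disks. The first disk has no
# numeric suffix; later disks are numbered starting at 1.
vm="vmhost"
for i in 0 1; do
  if [ "$i" -eq 0 ]; then
    echo "${vm}.vmdk ${vm}-flat.vmdk"
  else
    echo "${vm}_${i}.vmdk ${vm}_${i}-flat.vmdk"
  fi
done
# prints:
# vmhost.vmdk vmhost-flat.vmdk
# vmhost_1.vmdk vmhost_1-flat.vmdk
```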
You can display a virtual machine’s files with the vSphere Web Client or vSphere Client
by browsing the datastore on which the virtual machine is stored. Here is a screenshot
from vSphere Web Client that shows the files of the virtual machine named linux:
Note that there is only one .vmdk file shown in the picture above. In reality, a virtual disk
consists of two files, .vmdk and -flat.vmdk. To see both files, you would have to go to a
command-line interface.
Virtual machine hardware
A virtual machine uses virtual hardware. Each guest operating system sees ordinary
hardware devices and is not aware that these devices are virtual. All virtual machines have
uniform hardware, which makes virtual machines portable across VMware virtualization
platforms.
You can configure virtual machine memory and CPU settings, add virtual hard disks and
network interface cards, add and configure virtual hardware, such as CD/DVD drives,
floppy drives, and SCSI devices. You can also add multiple USB devices to a virtual
machine that resides on an ESXi host to which the devices are attached.
Virtual machine hardware version
The virtual machine hardware version designates the features of the virtual hardware
(number of CPUs, maximum memory configuration, etc.). By default, new virtual
machines will be created with the latest version of the virtual hardware available on the
host where the VM is being created.
Here is a table that shows the highest hardware version that each vSphere version
supports:

Product          Highest hardware version
ESXi 5.x         8
ESXi/ESX 4.x     7
Note that you will need to use vSphere Web Client in order to configure a virtual machine
to use the hardware version 10. Virtual machines using virtual machine hardware versions
prior to this can still be created and run on ESXi 5.5 hosts, but they will not have all of the
features and capabilities of virtual machine hardware version 10.
Here are the virtual machine configuration maximums:
The following table identifies the differences between the virtual disk types in vSphere:
flexible - the virtual NIC identifies itself as a Vlance adapter, an emulated form of the
AMD 79C970 PCnet32 LANCE 10 Mbps NIC, with drivers available in most 32-bit
guest operating systems. If VMware Tools is installed, this virtual NIC functions as
the higher-performance vmxnet adapter, a virtual network adapter optimized for
performance in a virtual machine.
e1000 - an emulated version of the Intel 82545EM Gigabit Ethernet NIC. The driver
for this NIC is found in many modern guest operating systems, including Windows
XP and Linux version 2.4.19 and later. This is the default adapter type for virtual
machines running 64-bit guest operating systems.
e1000e - an emulated version of the Intel 82574L Gigabit Ethernet NIC. This adapter
type can be chosen on Windows 8 guest operating systems and newer.
vmxnet2 (Enhanced vmxnet) - based on the vmxnet adapter but offers some high-
performance features such as jumbo frames and hardware offload support.
vmxnet3 - the latest version of the paravirtualized driver, designed for performance.
It offers high-performance features such as jumbo frames, hardware offloads,
multiqueue support, IPv6 offloads, etc. vmxnet3 devices support fault tolerance
and record/replay. This virtual network adapter type is available only on virtual
machines with hardware version 7 or later. VMware Tools is required to provide the
driver.
The virtual network adapter type is chosen during the virtual machine creation:
2. Go to the Summary tab and click the blue Launch Console link:
3. The virtual machine console is now open and you should be able to access the VM’s
guest operating system:
When you click within a VM console, all keystrokes and mouse clicks will be directed
to that VM. To manually tell vSphere Web Client that you want to shift focus out of the
VM, use the vSphere Web Client’s special keystroke: Ctrl+Alt.
Create virtual machines
Here are the instructions to create virtual machines using vSphere Web Client:
1. Launch the vSphere Web Client, and connect to a vCenter Server instance. From a
datacenter or ESXi host in the inventory, right-click and select New Virtual Machine from
the Actions menu:
2. In the New Virtual Machine Wizard, select Create a new virtual machine and click Next:
3. Type the virtual machine name and select a location in the inventory where the VM
will reside:
4. Select a host, cluster or resource pool on which the VM will run:
9. Verify the information and click Finish to start creating the virtual machine:
Install a guest operating system
A new virtual machine is analogous to a physical computer with an empty hard drive and
without an operating system. To make your virtual machine fully functional, you need to
install a guest operating system. This is usually done through vSphere Web Client by
attaching a CD-ROM, DVD, or ISO image containing the installation image to the virtual
CD/DVD drive.
ISO images are the recommended way to install a guest operating system. If you want to
use an ISO image to install the guest OS, you must first put it in a location that ESXi can
access. Usually, ISO images are uploaded into a datastore accessible to the ESXi host on
which the guest OS installation will be performed.
Here are the steps to install a guest OS using an ISO image:
1. Connect to a vCenter Server using vSphere Web Client. Go to vCenter > Hosts And
Clusters. In the inventory tree, right-click the virtual machine and select the Edit Settings
menu option:
2. From the virtual machine properties window, expand the CD/DVD drive 1 hardware
option to reveal the additional properties:
3. Change the drop-down box to Datastore ISO File, and select the Connect At Power
On check box:
4. Click the Browse button to browse a datastore for the ISO file of the guest OS:
5. Right-click the virtual machine and select Power On from the menu:
6. Right-click the virtual machine and select the Open Console option:
7. Now you can install the guest operating system, just like you would do on a physical
machine.
VMware Tools explained
VMware Tools is a suite of utilities that enhances the performance of the virtual
machine's guest operating system and improves its management. It is not installed by
default and is not required for the guest OS to function, but it offers many benefits,
including:
time synchronization.
The following components are installed when you install VMware Tools:
a set of scripts that helps you automate guest operating system operations.
5. Back in the virtual machine console, click Run setup64.exe if the AutoPlay dialog box
appears. If the AutoPlay dialog box does not appear, open Windows Explorer and double-
click the CD/DVD drive icon. The AutoPlay dialog box should then appear:
6. Click Next on the Welcome to the installation wizard for VMware Tools page:
7. Select the appropriate setup type for the VMware Tools installation, and click Next. The
Typical installation option will suffice for most situations. The Complete installation
option installs all available features, while the Custom installation option lets you choose
which features to install:
8. Click Install to begin the installation:
2. Log into the Linux guest OS using the root account or some other account with
sufficient permissions.
3. Back in vSphere Web Client, right-click the virtual machine and choose All vCenter
Actions > Guest OS > Install VMware Tools.
4. Click Mount to mount the disk image:
5. Back in the virtual machine console, open the Linux shell and navigate to the location
of the VMware Tools mount point. The exact path may vary from distribution to
distribution:
6. Extract the compressed tar file to a directory of your choice, and then navigate to that
temporary directory using the following commands:
tar -zxf VMwareTools-[VERSION].tar.gz -C /tmp
cd /tmp/vmware-tools-distrib
7. The installer will ask you a number of questions, such as where to place the binary files,
where the init scripts will be located, and where to place the library files. In most cases,
you can use the default values:
8. After the installation is complete, the VMware Tools ISO will be automatically
unmounted. Remove the temporary installation directory using the rm command:
9. Reboot the Linux VM for the installation of VMware Tools to take full effect.
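The Linux steps above can be sketched as one shell sequence. This is a hedged outline, not a definitive procedure: the mount point (/mnt/cdrom), the bundle version in the tarball name, and the availability of the --default switch depend on your distribution and Tools release.

```shell
#!/bin/bash
# Sketch: installing VMware Tools in a Linux guest from the shell,
# assuming the Tools ISO has already been mounted for the VM.
mkdir -p /mnt/cdrom
mount /dev/cdrom /mnt/cdrom                        # mount the Tools ISO
tar -zxf /mnt/cdrom/VMwareTools-*.tar.gz -C /tmp   # extract the bundle
cd /tmp/vmware-tools-distrib
./vmware-install.pl --default                      # accept the default answers
cd /
rm -rf /tmp/vmware-tools-distrib                   # remove the temporary directory
reboot                                             # let the install take full effect
```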
Chapter 5 - Virtual networks
Virtual switch explained
Standard switch explained
Create standard switches
Configure VLANs
Configure speed and duplex
Switch network security policies
Switch traffic shaping policies
Switch load balancing policies
Network failover detection
How to handle network failures
Distributed switches explained
Virtual switch explained
VMware has designed the vSphere suite to mimic the functions of a physical network, so a
lot of the network hardware you’ll find in the real world, you will also find virtualized in
vSphere. Virtual switches work very much like their physical counterparts, Ethernet
switches, but lack some of their advanced functionality. They are used to establish a
connection between the virtual and the physical network. A virtual switch can detect
which virtual machines are logically connected to each of its virtual ports and use that
information to forward traffic to the correct virtual machines. A virtual switch is connected
to physical switches by using physical Ethernet adapters to join virtual networks with
physical networks.
Two connection types are possible on a virtual switch in vSphere:
1. virtual machine port groups - ports used to connect virtual machines to other VMs or
the physical network.
2. VMkernel ports - ports configured with their own IP address, subnet mask and default
gateway to allow hypervisor management traffic, vMotion, iSCSI storage access, network
attached storage (NAS) access, and vSphere Fault Tolerance (FT) logging.
All physical NICs are assigned at the virtual switch level, so all ports defined on a
virtual switch share the same hardware.
In the picture below you can see a graphical representation of a standard switch in vSphere
Web Client:
A standard switch has the ability to move layer 2 traffic between virtual machines
internally. This means that two virtual machines on the same subnet and on the same ESXi
host can communicate directly; the traffic does not need to leave the ESXi host. Standard
switches also support some advanced networking features, such as outbound traffic
shaping, NIC teaming, different security policies, Cisco Discovery Protocol (CDP)
support, etc. You can have a total of 4096 standard switch ports per host, a maximum of
1016 active ports per host, and 512 port groups per switch.
In the picture below you can see a graphical representation of a standard switch in vSphere
Web Client. Notice the different port groups for the virtual machines network and
VMkernel:
Create standard switches
During an ESXi installation a virtual switch named vSwitch0 is created. It contains two
port groups: one for virtual machines (named VM Network) and one for management
(named Management Network). You can use the vSphere Web Client to add a new virtual
switch. Here are the steps:
1. Log into vCenter Server through vSphere Web Client.
2. Go to vCenter > Hosts And Clusters and select an ESXi host from the inventory:
7. Click the green plus icon under Assigned adapters to assign a physical network adapter:
8. Under Failover Order Group, select Unused adapters and select an unused network
adapter:
9. Move the entry from the Unused adapters section to the Active adapters section using
the blue up arrow and click Next:
10. Type a network label for the default port group that will be added to this switch. If you
are using multiple ESXi hosts with vCenter and vMotion, make sure that this network
label is consistent across all of your ESXi hosts. You can keep the default setting for
VLAN ID.
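As a sketch, the same switch can be created from the ESXi Shell with esxcli (ESXi 5.x syntax); vSwitch1, vmnic1, and the port group name are example values, not defaults from the text:

```shell
# Create a new standard switch
esxcli network vswitch standard add --vswitch-name=vSwitch1
# Attach a physical adapter (uplink) to the switch
esxcli network vswitch standard uplink add --uplink-name=vmnic1 --vswitch-name=vSwitch1
# Add a virtual machine port group
esxcli network vswitch standard portgroup add --portgroup-name="Production" --vswitch-name=vSwitch1
# Verify the result
esxcli network vswitch standard list
```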
Configure VLANs
A VLAN (virtual LAN) logically segments a physical network into separate broadcast
domains. VLANs offer several benefits:
broadcast traffic will be received and processed only by devices inside the same
VLAN, which can improve network performance.
users can be grouped by department and not by physical location.
sensitive traffic can be isolated in a separate VLAN for the purpose of security.
ESXi supports 802.1Q VLAN tagging. A port group is given a VLAN ID, uniquely
identifying that VLAN across the network. Packets from a virtual machine are tagged as
they exit the virtual switch and untagged as they return to the VM. Since VLAN is a
switching technology, no configuration is required on the virtual machine. The port on the
physical switch to which the ESXi host is connected must be defined as a static trunk port
(a port that can carry traffic from and to all VLANs).
Here are the steps to configure a port group with a VLAN ID on a standard virtual switch
using vSphere Web Client:
1. Navigate to the ESXi host to which you want to add the port group. Select
the Manage tab, and then select Networking:
2. Select the virtual switch where the new port group should be created and click the Add
Host Networking icon:
3. The Add networking wizard starts. Select the Virtual Machine Port Group for a
Standard Switch and click Next:
4. Select the Select An Existing Standard Switch radio button and use the Browse button to
choose which virtual switch will host the new port group:
7. Type the name of the VM port group in the Network Label text box. In the VLAN ID
text box, type the VLAN ID:
8. Click Finish to end the wizard.
You can now change the VM port group in order to place the virtual machine in the new
VLAN:
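As a sketch, the same VLAN assignment can also be made from the ESXi Shell with esxcli (ESXi 5.x); the port group name and VLAN ID are example values:

```shell
# Tag an existing port group with VLAN ID 100
esxcli network vswitch standard portgroup set --portgroup-name="Production" --vlan-id=100
# Verify the VLAN IDs of all port groups
esxcli network vswitch standard portgroup list
```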
Configure speed and duplex
You can configure the speed and duplex of the ESXi host physical network adapter using
vSphere Web Client. Here is how you can do that:
1. Select your ESXi host from the inventory and select Manage > Networking:
2. Click the Physical adapters link and select the physical network adapter whose settings
you would like to modify. Click on the Edit button above the list of adapters:
If you are using a Gigabit Ethernet adapter, leave the default value of Auto negotiate,
because auto-negotiation is part of the Gigabit Ethernet standard.
Switch network security policies
There are three network security policies for virtual switches that enable you to protect
virtual machines from impersonation or interception attacks. These policies are:
1. Promiscuous Mode - set to Reject by default to prevent guest operating systems from
observing all traffic passing through a virtual switch. Set this mode to Accept only if you
use a packet sniffer or intrusion detection system in the guest operating system.
2. MAC Address Changes - when this option is set to Reject and the guest operating system
attempts to change the MAC address assigned to the virtual NIC, the virtual machine stops
receiving traffic. Set to Accept by default.
3. Forged Transmits - affects traffic that is transmitted from a virtual machine. When set to
Reject, the virtual NIC drops frames that the guest operating system sends if the source
MAC address is different from the one assigned to the virtual NIC. Set to Accept by
default.
Network security policies can be defined at the standard switch level or at the port
group level. The policies defined at the port group level override the policies set at the
standard switch level.
To set the security policies using the vSphere Web Client, go to the host’s Manage >
Networking tab. Choose the virtual switch you would like to modify and select the Edit
settings icon:
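As a sketch, all three security policies can also be set from the ESXi Shell with esxcli (ESXi 5.x; vSwitch1 is an example name, and these values match the documented defaults except Forged Transmits):

```shell
# Reject promiscuous mode, MAC address changes, and forged transmits
esxcli network vswitch standard policy security set --vswitch-name=vSwitch1 \
    --allow-promiscuous=false --allow-mac-change=false --allow-forged-transmits=false
# Review the effective security policy
esxcli network vswitch standard policy security get --vswitch-name=vSwitch1
</imports>
```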
Switch traffic shaping policies
Traffic shaping lets you restrict the network bandwidth available on a port. A traffic
shaping policy is defined by three parameters:
Average Bandwidth - the number of kilobits per second allowed across a port. This
number is measured over a period of time and represents the allowed average load.
Peak Bandwidth - the maximum number of kilobits per second allowed across a port
when it is sending a burst of traffic. This number is used to limit the bandwidth
during a burst and cannot be smaller than the average bandwidth number.
Burst Size - the maximum number of kilobytes allowed in a burst. This option can
allow a port that needs more bandwidth than is specified in the average bandwidth
value to gain a burst of higher-speed traffic if a burst bonus is available.
A traffic shaping policy can be defined at either the virtual switch level or the port group
level, with settings at the port group level overriding settings at the virtual switch level.
Here are the steps for configuring traffic shaping on a standard virtual switch using
vSphere Web Client:
1. Go to the host’s Manage > Networking tab. Select the virtual switch you would like to
modify and select the Edit icon:
The traffic shaping policies configured on a standard virtual switch shape only
outbound network traffic.
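A back-of-the-envelope check helps when picking the three values. The sketch below uses assumed example numbers (not vSphere defaults) to estimate how long a port can sustain its peak rate before the burst allowance is used up:

```shell
#!/bin/bash
# Example shaping values (assumptions for illustration only)
avg_kbps=10000     # Average Bandwidth: 10 Mbps sustained
peak_kbps=100000   # Peak Bandwidth: 100 Mbps during a burst
burst_kb=102400    # Burst Size: 100 MB of burst data

# Convert the burst size from kilobytes to kilobits
burst_kbit=$((burst_kb * 8))                # 819200 Kbit

# Rough time the port can run at peak rate before the burst is spent
secs_at_peak=$((burst_kbit / peak_kbps))
echo "${secs_at_peak} seconds at peak"      # prints: 8 seconds at peak
```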
Switch load balancing policies
The load-balancing policy determines how ESXi hosts will use their uplink adapters. Four
load-balancing methods are available when using a standard virtual switch:
1. Originating virtual port ID - a VM’s outbound traffic is mapped to a specific physical
NIC. The NIC is determined by the ID of the virtual port to which the VM is connected.
This is the default.
2. Source MAC hash - a VM’s outbound traffic is mapped to a specific physical NIC that
is based on the virtual NIC’s MAC address.
3. IP hash - a NIC for each outbound packet is selected based on its source and destination
IP address. This method requires the use of EtherChannel on the physical switch.
4. Explicit failover order - an adapter that is listed highest in the order of active adapters
and passes failover detection criteria will be used.
A load balancing policy can be defined at either the virtual switch level or the port group
level, with settings at the port group level overriding settings at the virtual switch level.
Here are the steps for configuring load balancing on a standard virtual switch using the
vSphere Web Client:
1. Go to the host’s Manage > Networking tab. Select the virtual switch you would like to
modify and select the Edit settings icon:
Select the Teaming and failover menu and specify the option under Load Balancing:
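As a sketch, the load-balancing policy can also be selected from the ESXi Shell with esxcli (ESXi 5.x; vSwitch1 is an example name):

```shell
# Choose a load-balancing method for the switch.
# Accepted values: portid (default), mac, iphash, explicit
esxcli network vswitch standard policy failover set --vswitch-name=vSwitch1 --load-balancing=iphash
```

Remember that iphash additionally requires EtherChannel on the physical switch, as noted above.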
Network failover detection
Network failover detection is a mechanism used to detect a network failure.
Two network failover detection methods are available in vSphere when using a standard
virtual switch:
1. Link status only - relies on the link status provided by the network adapter. This method
can detect failures such as cable pulls and physical switch power failures, but cannot detect
configuration errors (e.g. a wrong VLAN configuration on a physical switch port) or cable
pulls on the other side of a physical switch. This is the default.
2. Beacon probing - probes are sent out and listened for on all NICs in the team. This
method can determine link status and failures that the Link status only method cannot,
such as configuration errors and cable pulls on the other side of a physical switch. Beacon
probing should not be used in conjunction with the IP hash load-balancing policy.
Here are the steps for configuring network failover detection on a standard virtual switch
using the vSphere Web Client:
1. Go to the host’s Manage > Networking tab. Select the virtual switch you would like to
modify and select the Edit settings icon:
Select the Teaming and failover menu and choose the option under Network failure
detection:
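As a sketch, the detection method can also be changed from the ESXi Shell with esxcli (ESXi 5.x; vSwitch1 is an example name):

```shell
# Switch failover detection to beacon probing;
# use --failure-detection=link to revert to the default
esxcli network vswitch standard policy failover set --vswitch-name=vSwitch1 --failure-detection=beacon
```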
How to handle network failures
These three failover policies are used to determine how to handle network failures in
vSphere:
1. Notify switches - a physical switch can be notified when a virtual NIC is connected to a
virtual switch or a failover event occurs. The notification is sent out over the network so
the lookup tables on the physical switches can be updated. This policy is set to Yes by
default.
2. Failback - determines what a physical adapter does after recovering from a failure. If
set to Yes, the failed physical adapter is put back to active duty immediately after recovery,
and the standby adapter returns to being a standby adapter. If set to No, the failed physical
adapter is left out of service after recovery, until another active adapter fails.
3. Failover order - determines the physical adapter failover order. Three categories are
available:
Active adapters - the adapters listed here will be used as long as network connectivity
is available.
Standby adapters - the adapters listed here will be used if one of the active adapters
loses connectivity.
Unused adapters - the adapters listed here will never be used.
Here are the steps for configuring failover policies on a standard virtual switch using the
vSphere Web Client:
1. Go to the host’s Manage > Networking tab. Select the virtual switch you would like to
modify and select the Edit settings icon:
Select the Teaming and failover menu and select the failover option you would like to
configure:
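The three failover policies can be sketched from the ESXi Shell as well (ESXi 5.x esxcli; vSwitch1, vmnic1, and vmnic2 are example names):

```shell
# Notify physical switches on failover, keep a recovered adapter on
# standby (no failback), and define the active/standby uplink order
esxcli network vswitch standard policy failover set --vswitch-name=vSwitch1 \
    --notify-switches=true --failback=false \
    --active-uplinks=vmnic1 --standby-uplinks=vmnic2
# Verify the resulting teaming and failover configuration
esxcli network vswitch standard policy failover get --vswitch-name=vSwitch1
```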
Distributed switches explained
A distributed switch functions as a single virtual switch across all associated ESXi hosts
and allows virtual machines to maintain consistent network configuration as they migrate
across multiple hosts. Just like standard switches, distributed switches forward frames at
layer 2, support VLANs, NIC teaming, outbound traffic shaping, etc. The biggest
difference between these two types of virtual switches is that distributed switches are
configured using a central unified management interface through vCenter Server, which
greatly simplifies virtual machine network configuration and reduces complexity in
clustered ESXi environments. Distributed switches also support some advanced
networking features that standard switches don’t, such as network I/O Control, port
mirroring, network health check, and support for protocols such as NetFlow, Private
VLAN (PVLAN), Link Aggregation Control Protocol (LACP), Link Layer Discovery
Protocol (LLDP), etc.
In vCenter Server 5.5, five versions of distributed switches are available:
Distributed Switch 5.5.0 - compatible with vSphere 5.5 and newer. Supports Traffic
Filtering and Marking.
Distributed Switch 5.1.0 - compatible with vSphere 5.1 and newer. Supports
Management Network Rollback and Recovery, Health Check, Enhanced Port
Mirroring, and LACP.
Distributed Switch 5.0.0 - compatible with vSphere 5.0 and newer. Supports user-
defined network resource pools in Network I/O Control, NetFlow, and Port
Mirroring.
Distributed Switch 4.1.0 - compatible with vSphere 4.1 and newer. Supports load-
based teaming and Network I/O Control.
Distributed Switch 4.0 - compatible with vSphere 4.0 and newer. Doesn’t support
many of the features supported by later versions of distributed switches.
Distributed switches are not covered in the VCP-DCV curriculum, so if you are
studying for the exam, don’t expect questions about this topic.
Chapter 6 - Storage
Storage technologies for ESXi
Storage protocols for ESXi
What is a datastore?
Virtual Machine File System (VMFS)
Raw Device Mapping (RDM)
Storage technologies for ESXi
ESXi hosts support host-level storage virtualization, which logically abstracts the physical
storage layer from virtual machines. The following storage technologies are supported by
ESXi:
1. Direct-attached storage - internal hard disks or external storage systems attached to the
ESXi host through a direct connection using protocols such as SAS or SATA. This type of
storage does not require a storage network to communicate with your host, but prevents
you from using vSphere features that require shared storage, such as High Availability and
vMotion.
2. Fibre Channel - a high-speed network technology used for SANs. It works by
encapsulating SCSI commands and transmitting them between FC nodes. ESXi hosts
should be equipped with Fibre Channel host bus adapters (HBAs).
3. Fibre Channel over Ethernet - a network technology that encapsulates Fibre Channel
frames over Ethernet networks. The same Ethernet link carries both FC and Ethernet
traffic.
4. Internet SCSI - iSCSI is a protocol used for encapsulating SCSI control and data in
TCP/IP packets, enabling access to storage devices over standard TCP/IP networks.
5. Network-attached storage - a file-level storage shared over standard TCP/IP networks.
Files are usually accessed using the NFS (Network File System) protocol.
Virtual machines use virtual disks to store their operating system, program files, and other
data. Virtual disks are large files that can be copied, moved, deleted, and archived just like
any other file. Each virtual disk resides on a datastore that is deployed on the physical
storage. From the standpoint of a virtual machine, each virtual disk appears as if it were a
SCSI drive connected to a SCSI controller. Whether the actual physical storage is being
accessed through storage or network adapters on the host is typically transparent to the
guest operating system and its applications.
Storage protocols for ESXi
Direct-attached storage is sometimes used for the ESXi installation. It can also be used in
smaller environments that don’t require shared SAN storage. Noncritical data is
sometimes stored on direct-attached storage, for example CD-ROM ISO images, VM
templates, decommissioned VMs, etc.
Shared storage enables you to use some advanced vSphere features such as vMotion, High
Availability and Distributed Resource Scheduler. It can also be used as a central
repository for VM files and templates, for clustering VMs across ESXi hosts, and for
allocating large amounts of storage to the ESXi hosts.
The following table shows which vSphere features are supported by different storage
protocols (source: VMware):
What is a datastore?
Datastores in VMware vSphere are storage containers for files. They could be located on
a local server hard drive or across the network on a SAN. Datastores hide the specifics of
each storage device and provide a uniform model for storing virtual machine files.
Datastores are used to hold virtual machine files, templates, and ISO images. They can be
formatted with VMFS (Virtual Machine File System, a clustered file system from
VMware), or with a file system native to the storage provider (in the case of a NAS/NFS
device).
To display datastore information using the vSphere Web Client, go to vCenter >
Datastores:
In the picture above you can see that there are two datastores available, both of which are
formatted with VMFS5.
To view specific datastore details, double-click a datastore. To browse the files stored on
the datastore, right-click and select Browse Files. You can delete, move, or upload files:
Virtual Machine File System (VMFS)
VMFS (Virtual Machine File System) is a clustered file system from VMware that
provides storage virtualization. VMFS offers many virtualization-based features, such as:
concurrent access to shared storage. Multiple ESXi hosts can read and write to the
same storage device at the same time.
migration of powered-on virtual machines from one ESXi host to another without
downtime.
VMFS datastore size can be increased while VMs residing on the datastore are
running.
when combined with shared storage, advanced vSphere features such as vMotion,
DRS, HA, and FT are supported.
Physical - the VMkernel passes through all SCSI commands with the exception of
the REPORT LUNs command. This RDM mode is used when running SAN-aware
applications in a virtual machine. Note that physical mode RDMs can’t be included in
a vSphere snapshot.
Virtual - allows the guest OS to treat the RDM more like a virtual disk. Virtual mode
RDMs can be included in a vSphere snapshot.
Chapter 7 - iSCSI
iSCSI SAN components
iSCSI naming and addressing
iSCSI initiators
Network configuration for iSCSI
iSCSI target discovery
VMkernel port for iSCSI software initiator
Configure iSCSI software initiator
iSCSI CHAP overview
Configure iSCSI CHAP
iSCSI SAN components
iSCSI (Internet Small Computer System Interface) encapsulates SCSI control and data in
TCP/IP packets, allowing access to storage devices over the existing network
infrastructure. An iSCSI SAN usually consists of the following components:
The following picture gives you an overview of iSCSI components (source: VMware):
iSCSI naming and addressing
An iSCSI node (which can be either a target or an initiator) is identified by a unique name
so that storage can be managed regardless of address. iSCSI names are formatted in two
different ways:
1. iSCSI qualified name (IQN)
Takes the form iqn.yyyy-mm.naming-authority:unique-name, where:
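The IQN layout described above can be illustrated with a short, hypothetical parser (the field names and the example IQN value below are assumptions for illustration, not VMware tooling):

```python
# Toy parser for the IQN format iqn.yyyy-mm.naming-authority:unique-name.
# This is an illustrative sketch only; field names are made up.
def parse_iqn(iqn):
    prefix, rest = iqn.split(".", 1)
    if prefix != "iqn":
        raise ValueError("not an IQN name")
    date, rest = rest.split(".", 1)
    year, month = date.split("-")
    # The unique name (if present) follows the first colon.
    if ":" in rest:
        authority, unique = rest.split(":", 1)
    else:
        authority, unique = rest, ""
    return {"year": year, "month": month,
            "naming_authority": authority, "unique_name": unique}

# An example IQN in the VMware style (fictitious value):
print(parse_iqn("iqn.1998-01.com.vmware:esxi01-123456"))
```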
You can display the iSCSI name assigned to your iSCSI adapters using the vSphere Web
Client. To do this, select your ESXi host in the Inventory and go to Manage > Storage.
Under Storage Adapters, select your iSCSI software adapter. You should see the iSCSI
name under Adapter status:
iSCSI initiators
iSCSI initiators are used by ESXi hosts to access iSCSI targets. iSCSI initiators
encapsulate SCSI commands into Ethernet packets, enabling ESXi hosts to communicate
with an iSCSI SAN device over the standard Ethernet cabling.
Two types of initiators are supported by ESXi hosts:
1. software iSCSI initiator - the initiator code that is built into the VMkernel. The iSCSI
SAN device can be accessed using standard network adapters. With the software iSCSI
initiator, you don’t need to purchase a hardware iSCSI adapter to obtain iSCSI
connectivity, but all iSCSI processing is done by the ESXi host.
2. hardware iSCSI initiator - a specialized adapter capable of accessing an iSCSI SAN
device over the standard network. All iSCSI processing is done by the adapter. Hardware
iSCSI initiators are divided into two categories:
dependent hardware iSCSI adapter - an adapter that performs the iSCSI processing
but relies on the VMkernel for network access.
independent hardware iSCSI adapter - an adapter that performs both the iSCSI
processing and the networking. An adapter of this type implements its own interfaces
for networking, configuration, and management and does not depend on the VMkernel.
The following picture illustrates the differences between the two iSCSI initiator types
(source: VMware):
Network configuration for iSCSI
You need to create a VMkernel port on a virtual switch to access software iSCSI. A
VMkernel port is configured with its own IP address, subnet mask and default gateway to
allow the ESXi host to access the iSCSI SAN device.
You can choose one of two networking setups, depending on the number of physical
adapters used for software iSCSI:
1. one physical network adapter - you need a VMkernel port on a virtual switch.
2. two or more physical network adapters - you can use the adapters for host-based
multipathing.
It is recommended to physically isolate your iSCSI network from other networks for
performance and security reasons. If the physical isolation is impossible, configure a
separate VLAN for each network to logically isolate the networks.
In the picture below you can see a VMkernel port configured with an IP address for the
purpose of accessing iSCSI storage:
iSCSI target discovery
An iSCSI target is a logical target-side device that hosts iSCSI LUNs and masks them to
specific iSCSI initiators. You need to configure the iSCSI initiator to discover the iSCSI
target so that the ESXi host can access LUNs on the target.
ESXi hosts support two methods of iSCSI target discovery:
1. static discovery - the IP address (or the host name) and the iSCSI name of the target are
manually specified. The iSCSI initiator doesn’t have to perform the discovery.
2. dynamic discovery - all iSCSI targets associated with an IP address (or a host name) are
discovered automatically. The ESXi host issues the iSCSI-standard SendTargets command
to the target portal, which responds with a list of all available targets and LUNs.
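The difference between the two methods can be sketched as a toy model (the portal address and target names below are fictitious, for illustration only):

```python
# Toy model of iSCSI target discovery. A "portal" maps an IP address to the
# targets it hosts; the names and addresses here are made up.
PORTAL = {
    "192.168.1.100": ["iqn.2015-01.com.example:target1",
                      "iqn.2015-01.com.example:target2"],
}

def static_discovery(address, target_name):
    # Static discovery: both the address and the target name are given,
    # so no query is needed; the pair is simply recorded.
    return [(address, target_name)]

def dynamic_discovery(address):
    # Dynamic discovery: only the address is given; a SendTargets-style
    # query returns every target behind that portal.
    return [(address, name) for name in PORTAL.get(address, [])]

print(dynamic_discovery("192.168.1.100"))
```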
The following picture illustrates the iSCSI target-discovery process (source: VMware):
VMkernel port for iSCSI software initiator
An iSCSI software initiator requires a VMkernel port on a virtual switch to access
software iSCSI. A VMkernel port is configured with its own IP address, subnet mask and
default gateway to enable the ESXi host to access the iSCSI SAN device.
Here are the steps to create a VMkernel port on a standard virtual switch using vSphere
Web Client:
1. Select your ESXi host in the inventory and go to Manage > Networking. From the list
of virtual switches, select the switch on which you would like to create a VMkernel port
on and click the Add host networking icon:
2. The Add Network Wizard opens. Select VMkernel Network Adapter as the connection
type and click Next:
3. Select the Select an existing standard switch option and choose the switch you would
like to create a VMkernel port on:
4. In the Network label field, type iSCSI Storage. All other settings can keep the default
values:
5. It is recommended that you manually set up the IP settings. Select Use static IPv4
settings and type the IP address and subnet mask that the VMkernel port will use. Verify
that the default gateway and DNS server IP addresses are correct:
6. Click Finish to finish the wizard.
Configure iSCSI software initiator
Here are the instructions to enable a software iSCSI initiator on an ESXi host using
vSphere Web Client:
1. Select your ESXi host in the inventory and go to Manage > Storage. Under Storage
Adapters, click the Add new storage adapter icon and select Software iSCSI adapter:
2. Click OK in the Add Software iSCSI Adapter window that opens:
3. When the task is completed, select the vmhba<NUMBER> adapter that is listed under
iSCSI Software Adapter. Under Adapter Details > General, select the Edit button:
4. Note that the dialog box displays the iSCSI initiator name. Type a friendly name for the
iSCSI Alias and click OK:
5. You now need to associate the iSCSI software adapter with the VMkernel port created
in the previous step. Select the Network Port Binding tab and click the plus icon:
6. Select the VMkernel port you’ve created in the previous step and click OK:
7. Under Adapter Details, go to the Targets tab. Select Dynamic discovery and click Add:
8. Type the hostname (or the IP address) and the port of the iSCSI target and click OK:
9. When the task is completed, click the Rescan adapter icon and click OK to rescan the
adapters:
10. Select the iSCSI software adapter from the Storage Adapters list and go to the Paths
tab to make sure that the iSCSI target has been found:
iSCSI CHAP overview
Challenge Handshake Authentication Protocol (CHAP) is a widely supported
authentication method, where a password exchange is used to authenticate the source or
target of communication. CHAP uses a three-way handshake algorithm to verify the
identity of the ESXi host and, if applicable, of the iSCSI target. The verification is based
on a shared secret key that both the initiator and the target know. The actual
password is never sent over the wire; instead, CHAP uses the hash value of the secret.
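The hashed-secret idea can be sketched in a few lines. CHAP (RFC 1994) computes the response as an MD5 hash over the challenge identifier, the secret, and the challenge value, so the secret itself never crosses the wire (this is a minimal sketch of the algorithm, not ESXi code):

```python
import hashlib
import os

# Minimal CHAP-style exchange (per RFC 1994): the response is
# MD5(identifier + secret + challenge); the secret is never transmitted.
def chap_response(identifier, secret, challenge):
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

secret = b"shared-secret"      # known to both initiator and target
challenge = os.urandom(16)     # random challenge sent by the authenticator
ident = 1

# The initiator computes the response over the challenge...
response = chap_response(ident, secret, challenge)
# ...and the target verifies it against its own computation.
print(response == chap_response(ident, secret, challenge))  # True
```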
iSCSI initiators on ESXi hosts can use CHAP for authentication purposes. Two CHAP
authentication methods are available:
1. unidirectional CHAP - also called one-way CHAP. With this method, only the target
authenticates the initiator; the initiator does not authenticate the target. You need to
specify the CHAP secret that will be shared by both the initiator and the target.
2. bidirectional CHAP - also called mutual CHAP. With this method, the target
authenticates the initiator, and the initiator also authenticates the target. You need to
specify different target and initiator secrets.
ESXi hosts support CHAP authentication at the adapter level, where all targets receive the
same secret key from the iSCSI initiator. For software iSCSI and dependent hardware
iSCSI initiators, per-target CHAP authentication is also supported.
2. Click the Properties tab at the bottom of the screen. Scroll down to the Authentication
section and click Edit:
3. The Edit Authentication window opens. Under Authentication method, select the Use
Unidirectional CHAP option. This method forces the initiator to authenticate to the target.
Enter the username in the Name field and the secret in the Secret field:
4. Rescan the host bus adapter:
CHAP also needs to be enabled at the iSCSI storage system. Check the documentation
of your storage provider to find out how to enable CHAP.
Chapter 8 - NFS
NFS (Network File System) overview
NFS components
Access controls in NFS
Configure NFS datastore
NFS (Network File System) overview
NFS (Network File System) is a file-sharing protocol used by ESXi hosts to communicate
with a NAS (Network Attached Storage) device over a standard TCP/IP network. A NAS
device is a specialized storage device connected to a network, providing data access
services to ESXi hosts through protocols such as NFS.
NFS datastores are used in much the same way as VMFS datastores. They can hold virtual
machine files, templates, ISO images, and other data. An NFS volume supports advanced
vSphere features such as vMotion, DRS, and HA. ESXi includes a built-in NFS client
that uses NFS v3 to communicate with the NFS server.
To use NFS as a shared repository, you must create a directory on the NFS server and then
mount that directory as a datastore on all hosts. Note that the ESXi hosts must have
read/write permissions on the NFS server, and read/write access must be allowed for the
root system account. We will learn how to do that in a later section.
In the picture above you can see that the datastore My NFS Datastore is using NFS 3 as its
file system.
NFS components
NFS (Network File System) is a file-sharing protocol used by ESXi hosts to communicate
with a NAS (Network Attached Storage) device over a standard TCP/IP network. An NFS
device contains directories shared with ESXi hosts over the network. The shared
directories hold virtual machine files, templates, ISO images, and other data. ESXi hosts
use VMkernel ports defined on virtual switches to access NFS devices.
Here is a description of the NFS components used in a vSphere environment:
NFS device (server) - a storage device or a server that uses the NFS protocol to make
files available over the network.
NFS datastore - a shared folder on the NFS server that can be used to hold virtual
machine files.
NFS client - ESXi includes a built-in NFS client used to access an NFS device.
The following picture shows the NFS components used in a vSphere environment (source:
VMware):
Access controls in NFS
There are some things you need to be aware of before creating and accessing an NFS
datastore:
ESXi hosts must be able to access the NFS server in read-write mode.
Sometimes, to protect NFS volumes from unauthorized access, the NFS volumes are
exported with the root_squash option enabled. When root_squash is on, root users are
downgraded to unprivileged file system access and the NFS server might refuse the ESXi
host access to virtual machine files on the NFS volume. The no_root_squash option must
be used instead of root_squash to export an NFS volume. This option allows root on the
client (the ESXi host) to be recognized as root on the NFS server.
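The effect of root_squash can be modeled in a few lines (the uid values are the conventional Unix ones; this is an illustrative sketch, not NFS server code):

```python
# Toy model of NFS uid mapping. With root_squash, requests from uid 0 (root)
# are remapped to an unprivileged "nobody" uid; with no_root_squash they
# keep uid 0. Uid 65534 is the conventional "nobody" value.
NOBODY = 65534

def map_uid(client_uid, root_squash=True):
    if client_uid == 0 and root_squash:
        return NOBODY
    return client_uid

# An ESXi host issues its NFS requests as root (uid 0):
print(map_uid(0, root_squash=True))    # squashed to 65534 -> access denied
print(map_uid(0, root_squash=False))   # stays 0 -> full access
```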
Configure NFS datastore
An ESXi host requires a VMkernel port on a virtual switch in order to access an NFS
datastore. A VMkernel port is configured with its own IP address, subnet mask and default
gateway to allow the ESXi host to access the NFS datastore.
Here are the instructions to configure an NFS datastore on an ESXi host using vSphere
Web Client:
1. Select your ESXi host from the inventory and go to Related Objects > Datastores.
Click the Create a new datastore icon to start the wizard:
4. Name the new datastore. The name can be anything you want. In the Server field, type
the hostname or the IP address of the NFS server. In the Folder field, type the path to the
NFS datastore. Leave the Mount NFS read only check box deselected:
5. Click Finish.
You should see your new datastore in the Datastores window:
Chapter 9 - Fibre Channel
Fibre Channel overview
FC components
FCoE adapters explained
Fibre Channel overview
Fibre Channel (FC) is a high-speed network technology used to connect computer data
storage in an enterprise environment. FC works by encapsulating SCSI commands and
transmitting them between FC nodes. Each ESXi host is equipped with at least two Fibre
Channel host bus adapters (HBAs), devices that connect the ESXi host to the FC network
and support a very high throughput of up to 16 Gbps. Here is what a host bus adapter
looks like (source: Wikipedia):
Point-to-point (FC-P2P) - two devices connected directly to each other. Rarely used
today.
Arbitrated loop (FC-AL) - all devices are in a loop or ring. Rarely used today.
Switched fabric (FC-SW) - all devices are connected to FC switches, devices similar
to Ethernet switches, but compatible with the Fibre Channel (FC) protocol. Most
array designs today use this topology.
Although FC was initially developed to use only optical (fiber) cabling, copper cables
are also supported.
FC components
A Fibre Channel network consists of the following components:
Storage system - a set of physical hard disks (also called a disk array) and one or
more intelligent controllers. Storage systems support the creation of LUNs.
LUN (Logical Unit Number) - a number used to identify a device (logical unit)
addressed by the FC protocol.
SP (Storage Processor) - partitions a JBOD (Just a Bunch of Disks) or RAID set into
one or more LUNs. It can restrict access to a particular LUN to one or more server
connections.
HBA - a device that connects the ESXi host to the FC network. A minimum of two
HBA adapters are used for fault-tolerant configurations. Virtual machines see
standard SCSI connections and are not aware of the underlying FC SAN being
accessed.
Fibre Channel switch - a device similar to an Ethernet switch, but compatible with
the Fibre Channel (FC) protocol. Used to interconnect FC nodes.
hardware FCoE adapters - also called converged network adapters, these adapters
contain network and FC functionality on the same physical card.
software FCoE adapters - introduced in vSphere 5.x, a software FCoE adapter is
software code that performs the FCoE processing. A software FCoE adapter is
used with NICs that offer Data Center Bridging (DCB) and I/O offload capabilities.
Chapter 10 - Datastores
Create VMFS datastore
Browse VMFS datastores
Increase size of VMFS datastore
Expand VMFS datastore
Remove VMFS datastore
Unmount VMFS datastore
Delete VMFS datastore
VMware Virtual SAN overview
Requirements for Virtual SAN
Datastore in Virtual SAN
Configure Virtual SAN
Create VMFS datastore
VMFS datastores are used as repositories for virtual machines’ files. They can be set up on
any SCSI-based storage device that the ESXi host discovers, such as Fibre Channel,
iSCSI, or local storage devices.
Here is how you can create a VMFS datastore using vSphere Web Client:
1. Right-click the ESXi host in the inventory and select New Datastore:
2. The New Datastore wizard opens. First, select your ESXi host as the location and click
Next:
To show the content of a VMFS datastore, go to the Datastore inventory in vSphere Web
Client. You should see a list of datastores:
To browse a datastore, select a datastore from the inventory and go to Manage > Files:
To upload a file to the datastore, click the Upload a file to the Datastore icon on the right:
To delete a file or folder from the datastore, select the file and click the Delete selected file
or folder icon on the right:
add an extent to the VMFS datastore - you can add an extent (a partition on a LUN)
to any VMFS datastore. You can add up to 32 extents (LUNs) to the datastore, up to
a total of 64 TB.
expand the VMFS datastore - you can increase the size of the datastore within its
existing extent. Extents must have free space immediately after them. A LUN can be
expanded any number of times, up to 64 TB.
You don’t need to power off virtual machines when using either method of increasing the
VMFS datastore capacity.
2. The Increase Datastore Capacity wizard opens. Select the datastore you want to
expand:
3. In the Partition Configuration drop-down menu, change the value to Use ‘Free Space
XXXX GB’ to expand the datastore. The free space listed will be different in your
environment:
4. Click Finish to expand the datastore:
Remove VMFS datastore
You can remove a VMFS datastore from your ESXi host in two ways:
delete a VMFS datastore - destroys the pointers to the files on the datastore, so that
the files disappear from all ESXi hosts that have access to the datastore.
unmount a VMFS datastore - preserves the files on the datastore, but makes the
datastore inaccessible to the ESXi host.
Before deleting or unmounting a VMFS datastore, make sure that all virtual machines
whose disks reside on the datastore are powered off.
If you want to keep the data, back up the content of the datastore before you delete it.
Unmount VMFS datastore
Unmounting a VMFS datastore makes the datastore inaccessible to the ESXi host, but the
files on the datastore remain preserved. The datastore continues to appear on other hosts,
where it remains mounted.
Before unmounting a VMFS datastore, make sure that the following prerequisites are met:
Here is how you can unmount a VMFS datastore using vSphere Web Client:
1. Select your ESXi host from the Inventory and go to Related objects > Datastores.
Right-click the datastore you would like to unmount and select All vCenter Actions >
Unmount Datastore:
2. If the datastore is shared, specify which hosts should no longer access the datastore:
Up to five hard disk drives and one SSD drive can be used per disk group. VSAN
requires at least one SSD in each host for caching purposes.
Requirements for Virtual SAN
Although VSAN is easy to set up, there are certain things you need to be aware of before
deploying Virtual SAN in your environment:
you need to have vCenter Server installed. vCenter Server is used to manage VSAN.
a minimum of three ESXi hosts is required. The maximum number of ESXi hosts
that can use VSAN is eight.
a dedicated Virtual SAN network is required. A 1 Gbps network can be used, but a
10 Gbps network is recommended, with two NICs for fault tolerance.
all ESXi hosts with local storage must have at least one SSD and one hard disk.
the SSDs must make up at least 10 percent of the total amount of storage.
Not every host in a VSAN cluster needs to have local storage in order to take
advantage of VSAN storage resources. Hosts without local storage contribute only
compute resources.
Datastore in Virtual SAN
After you enable VSAN on a cluster, a single VSAN datastore is created. This datastore
uses storage from every ESXi host in the VSAN cluster and contains all VM files.
OSFS (Object Store File System) enables VMFS volumes from each ESXi host to be
mounted as a single datastore. Data on a VSAN datastore is stored in the form of data
containers called objects, which are logical volumes that have their data distributed across
the entire cluster. An object can be a vmdk file, a snapshot, or the VM home folder. For
each VM on a VSAN datastore, an object is created for each of its virtual disks. A
container object, which holds a VMFS volume storing the virtual machine metadata files,
is also created.
Although only a single VSAN datastore is created for the whole VSAN cluster, you
can have multiple datastore storage policies that can be configured with different
storage capabilities.
Configure Virtual SAN
Here is an overview of the steps required to configure Virtual SAN (VSAN) in your
vSphere environment:
1. You need to create a dedicated VMkernel network for VSAN. The network has to be
accessible by all ESXi hosts. A 1 Gbps network can be used, but a 10 Gbps network is
recommended, with two NICs for fault tolerance.
2. You need to create a VSAN cluster. When creating a cluster using vSphere Web Client,
the VSAN option is available:
Automatic mode - all local disks are claimed by VSAN for the creation of the VSAN
datastore.
Manual mode - you must manually select disks to add to the VSAN datastore.
4a. If you configure the VSAN cluster in the Automatic mode, all ESXi hosts are scanned
for empty disks that are then configured for VSAN.
4b. If you configure the VSAN cluster in the Manual mode, you need to create disk groups
for VSAN.
Chapter 11 - Templates
What is a virtual machine template?
Create virtual machine template
Update virtual machine template
Customize guest OS
Deploy VM from template
Clone virtual machine
What is a virtual machine template?
A virtual machine template is a master copy of a virtual machine that usually includes the
guest OS, a set of applications, and a specific VM configuration. Virtual machine
templates are used when you need to deploy many VMs and ensure that they are
consistent and standardized.
A virtual machine can be converted to a template in vCenter Server. The template can then
be used in vCenter Server to provide simplified provisioning of virtual machines. For
example, you can set up a master image of a frequently deployed server OS, such as
Windows Server 2012. This virtual machine can be customized to form a standard build
for your environment, and then all future Windows Server 2012 installations can be
deployed from the virtual machine template.
There are two options for creating virtual machine templates in vCenter Server:
Once the VM is in template format, the template cannot be powered on or have its
settings edited.
Create virtual machine template
Here are the instructions on how to create a virtual machine template using vSphere Web
Client:
1. Go to the VMs And Templates inventory view. Right-click the VM you want to use as a
template and select All vCenter Actions > Clone to Template or All vCenter Actions >
Convert to Template. Both options will create a template, but the original VM will be
retained only if you use the Clone to Template option:
2a. If you’ve selected the Convert to Template option, you should see the template in the
Inventory. Note that the original virtual machine was not retained:
2b. If you’ve selected the Clone to Template option, the wizard should open. Type the
template name and select the location for the template:
Same format as source - keeps the template’s virtual disks in the same format as the
VM that is being cloned.
Thick Provision Lazy Zeroed - the disk space will be fully allocated when the virtual
disk is created. The disk space will not be zeroed out upon creation.
Thick Provision Eager Zeroed - the disk space will be fully allocated when the virtual
disk is created. The disk space will be zeroed out upon creation.
Thin Provision - the virtual disks will occupy only as much space as is currently used
by the guest OS.
Unlike the Clone to Template option, the Convert to Template option doesn’t offer a
choice of format in which to store the VM’s virtual disks and leaves the VM disk files
intact.
Update virtual machine template
You can update your virtual machine template in order to include new patches or software.
Here are the steps for updating a VM template using vSphere Web Client:
1. Convert your template to a virtual machine by selecting the template from the inventory
and selecting the Convert to Virtual Machine option:
5. Enter the computer name. We’ve selected the Use the virtual machine name option to
keep the guest OS computer name matched up with the VM name:
6. Enter the license information and click Next:
9. If you have any commands you would like to run the first time a user logs on, supply
them here:
10. Choose the network settings:
2. The Deploy From Template wizard opens. Enter the name of the new virtual machine
and select a location (a datacenter or VM folder) for the VM:
3. Select the cluster or host on which the VM will run:
5. Select the clone options. If you want to customize the guest OS before deploying the
VM, select the Customize the operating system option:
6. (Optional) If you’ve selected the Customize the operating system option in the previous
step, select the customization specification:
2. Enter the new virtual machine name and select a location for the VM:
3. Select the cluster or host on which the cloned VM will run:
4. Select the virtual disk format and datastore for the cloned VM:
5. Select the clone options. If you want to customize the guest OS before deploying the
VM, select the Customize the operating system option:
6. If you’ve selected the Customize the operating system option, select the customization
specification:
7. Click Finish to start the cloning process:
Chapter 12 - Edit VM settings
Edit virtual machine settings
Add virtual hardware to VM
Remove virtual hardware from VM
Add raw LUN
Dynamically increase virtual disk’s size
Inflate thin disk
Edit VM options
VMware Tools options
VM boot options
Edit virtual machine settings
Sometimes you might want to modify an existing virtual machine’s configuration in order
to meet new performance demands. For example, a virtual machine might need an
additional virtual network adapter or an increase in CPU and memory resources. You
can edit these settings using the Edit Settings dialog box. You can edit the following VM
settings:
In most cases, modifying a VM requires that the VM is powered off, although some
hardware changes can be made while the VM is powered on, such as hot-adding a USB
controller, SATA controller, an Ethernet adapter, or a hard disk.
To open the Edit Settings dialog box in vSphere Web Client, right-click the VM and select
the Edit Settings option:
Add virtual hardware to VM
Sometimes you might want to add new virtual hardware to the virtual machine. In
some cases, virtual hardware can be added without powering off the virtual machine.
Examples of such hot-pluggable devices are USB and SATA controllers, Ethernet
adapters, hard disks, and SCSI devices.
In this lesson, we will show how to add an Ethernet adapter to the VM:
1. Launch the vSphere Web Client. From the Inventory, right-click the VM and select the
Edit Settings option:
2. From the New device drop-down box at the bottom of the screen, select Network and
click Add:
3. The New network options appear. Expand this option to gain access to additional
properties. You can choose the network adapter type, the network to which it should be
connected, and whether the network adapter should be connected at power on:
4. Review the settings and click OK to start adding new virtual hardware.
Remove virtual hardware from VM
You can remove virtual hardware from a virtual machine using the Edit Settings dialog
box. Here are the steps to remove an Ethernet adapter from a virtual machine using
vSphere Web Client:
1. Launch the vSphere Web Client. From the Inventory, right-click the VM and select the
Edit Settings option:
2. From the Virtual Hardware tab, select the network adapter you would like to remove
and click the Remove icon on the right:
3. Click OK to remove the network adapter.
Add raw LUN
Sometimes a storage device must be presented directly to the guest operating system. A
vSphere feature named Raw Device Mapping (RDM) enables a virtual machine to directly
access and use a logical unit number (LUN).
When you create an RDM, a file is created on a VMFS volume and acts as a proxy for the
raw physical device. This file has a .vmdk extension and contains only the mapping
information and not the actual data. Virtual machine data is stored directly on the storage
area network device. Storing the data this way is useful if you are running applications in
your virtual machines that are SAN-aware and must know the physical characteristics of
the SAN storage device.
Here is how you can add a raw LUN to a virtual machine using vSphere Web Client:
1. Select the VM from the inventory and choose the Edit Setting option:
2. The Edit Settings dialog box opens. From the New device drop-down box at the bottom
of the screen, select RDM Disk and click Add:
3. Choose the target LUN:
4. The New Hard disk options appear. Expand this option to gain access to additional
properties. You can set various RDM options. The most important one is the compatibility
mode. Two modes are available:
Physical - the VMkernel passes through all SCSI commands with the exception of
the REPORT LUNs command. This RDM mode is used when running SAN-aware
applications in a virtual machine. Physical mode RDMs can’t be included in a
vSphere snapshot.
Virtual - allows the guest OS to treat the RDM more like a virtual disk. Virtual mode
RDMs can be included in a vSphere snapshot.
5. Click OK to add a raw LUN to the virtual machine. After the process is finished, the
guest OS should see the new disk.
Dynamically increase virtual disk’s size
The size of a virtual disk can be dynamically increased while the virtual machine is
powered on. Note that, in order for this to be done, the virtual disk has to be a flat disk in
persistent mode, and the virtual machine must not have any snapshots.
Here are the steps to dynamically increase a virtual disk’s size using vSphere Web Client:
1. Select the VM from the inventory and select the Edit Setting option:
2. From the Virtual Hardware tab, select the hard disk and type the new hard disk size:
3. Click OK to save the changes.
After the hard disk size is increased, you need to log in to the guest operating system and
enable the file system to use the newly allocated disk space using tools such as the
diskpart utility in Windows and the fdisk or parted utilities in Linux.
Inflate thin disk
A virtual disk in thin format uses only as much space on the datastore as needed. This
means that, if you create a 10 GB virtual disk and place 3 GB of data in it, only the 3 GB
of space on the datastore will be used, but the performance will not be as good as with the
other two disk types.
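The space behavior of the three formats can be summarized with simple arithmetic (an illustrative sketch; real allocation also includes metadata overhead, and the format labels are made up for the example):

```python
# Illustrative datastore usage for a virtual disk, by provisioning format.
# provisioned_gb: the size chosen at creation; used_gb: data the guest wrote.
def datastore_usage_gb(fmt, provisioned_gb, used_gb):
    if fmt == "thin":
        return used_gb        # grows only as the guest writes data
    if fmt in ("thick-lazy", "thick-eager"):
        return provisioned_gb # fully allocated at creation time
    raise ValueError("unknown format")

# The 10 GB disk with 3 GB of data from the text above:
print(datastore_usage_gb("thin", 10, 3))         # 3
print(datastore_usage_gb("thick-eager", 10, 3))  # 10
```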
You can convert a thin disk to a thick disk by inflating it to its full size. Here is how to
do that using vSphere Web Client:
1. Shut down the guest OS.
2. Browse to the datastore on which the virtual disk is stored and find the disk’s .vmdk file.
Right-click the virtual disk file and select the Inflate option:
Edit VM options
You can configure a number of virtual machine options by accessing the VM Options tab
of the Edit Settings dialog box. Some of the options you can configure are:
Changing the VM name doesn’t change the names of the VM directory or the VM
files.
To change virtual machine options, log into vSphere Web Client. Select the VM from the
inventory and select the Edit Settings option:
In the Edit Settings dialog box, choose the VM Options tab:
VMware Tools options
The VM Options tab of the Edit Settings dialog box contains a panel called VMware Tools.
This panel has multiple options that specify how VMware Tools in the virtual machine
respond to certain external events, such as restart or power-off. Note that the VM has to be
powered off in order to change these settings.
VMware Tools can also be set to run scripts when a certain event (such as a power-off)
occurs. With these options, you can control when the VM checks to see whether scripts
should be run.
The two other options that can be set in the VMware Tools panel are the update checks
and time synchronization.
To change the VMware Tools options, log into vSphere Web Client. Select the VM from
the inventory and select the Edit Settings option:
In the Edit Settings dialog box, select the VM Options tab and expand the VMware Tools
panel:
VM boot options
To control how a virtual machine starts, you can use the Boot Options panel of the VM
Options tab in the Edit Settings dialog box. The boot options that can be changed are:
Firmware - the firmware used to boot the virtual machine. Two options are available:
BIOS and EFI. If the guest OS supports both options, you can select the option that
will be used here.
Boot Delay - specifies the delay between the time a virtual machine is powered on
and the guest OS starts to boot.
Force BIOS (EFI) setup - forces the virtual machine to enter the BIOS (or EFI) setup screen the next time it boots.
Failed Boot Recovery - when turned on, this option forces the virtual machine to retry
booting after 10 seconds (by default), if the VM fails to find a boot device.
To change the virtual machine boot options, log into vSphere Web Client. Select the VM
from the inventory and select the Edit Settings option:
In the Edit Settings dialog box, select the VM Options tab and expand the Boot Options
panel:
Chapter 13 - VM migration
What is VM migration?
VM migration types
vSphere vMotion explained
vSphere vMotion process
vMotion requirements
CPU compatibility for vMotion
Hide or expose NX/XD bit
VMware CPU identification utility
Create virtual switch and VMkernel port group for vMotion
Use vSphere vMotion
vSphere Storage vMotion explained
Use vSphere Storage vMotion
Enhanced vMotion explained
What is VM migration?
Migrating a virtual machine means moving it from one host or datastore to another host or
datastore. vSphere supports several VM migration types that can help you achieve better
resource utilization across multiple physical hosts and shift workloads between hosts in
order to balance resource utilization. The available migration types are:
cold migration - moving a powered-off VM to a different host or datastore.
migration of a suspended VM - moving a suspended VM to a different host or datastore.
vMotion - moving a powered-on VM to a different ESXi host, with no downtime.
Storage vMotion - moving a powered-on VM’s files to a different datastore, with no downtime.
Enhanced vMotion - moving a powered-on VM to a different host and datastore at the same time.
vSphere vMotion process
In the picture above you can see that the VM is being transferred from the source ESXi
host to the destination ESXi host (esx02). Here is a description of each step in the
migration process:
1. An ESXi administrator initiates a vMotion migration.
2. The VM’s memory state is copied from the source to the destination ESXi host over the
vMotion network. Users continue to access the VM and update pages in memory. A list of
modified pages is kept in a memory bitmap on the source host. This process occurs
iteratively.
3. After the VM’s memory is copied to the target host, the VM on the source host is
quiesced. This means that it is still in memory but is no longer servicing client requests for
data. The memory bitmap file and the VM device state are then transferred to the target.
4. The destination host (esx02) reads the addresses in the memory bitmap file and requests
the contents of those addresses from the source host.
5. After the content of the memory referred to in the memory bitmap file is transferred to
the destination host, the VM starts running on that host. A Reverse Address Resolution
Protocol (RARP) message is sent to notify the subnet that the VM’s MAC address is now
on a new switch port.
6. After the VM is successfully operating on the destination host, the memory the VM was
using on the source host is marked as free.
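The iterative pre-copy in steps 2 through 4 can be modeled with a few lines of Python (a simplified sketch; the page counts, dirty rate, and stop threshold are invented for illustration):

```python
# Simplified model of vMotion's iterative memory pre-copy (not VMware code).
def vmotion_precopy(total_pages, dirty_rate=0.2, threshold=100):
    """Return (number of copy passes, pages left in the memory bitmap
    when the VM is finally quiesced for the last transfer)."""
    to_copy = total_pages  # first pass copies the whole memory state
    passes = 0
    while to_copy > threshold:
        passes += 1
        # While these pages are transferred, the running guest dirties a
        # fraction of them; those land in the memory bitmap and must be
        # re-sent in the next pass.
        to_copy = int(to_copy * dirty_rate)
    # At this point the VM is quiesced and the small remainder is sent.
    return passes, to_copy

print(vmotion_precopy(10_000))  # (3, 80)
```

Each pass shrinks the set of pages to re-send, so the final quiesce-and-transfer step (step 3 above) only has to move a small remainder, which keeps the downtime very short.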
vMotion requirements
Before performing a vSphere vMotion migration, ensure that the following conditions are
met:
VM requirements
the VM must not be connected to any device physically available to only one ESXi
host, such as disk storage, CD/DVD drives, floppy drives, and serial ports.
the VM must have all disk, configuration, log, and NVRAM files stored on a
datastore accessible from both ESXi hosts.
if the VM uses RDM, the destination ESXi host must be able to access it.
Host requirements
at least a Gigabit Ethernet network interface card with a VMkernel port enabled for
vMotion on each ESXi host is required.
identically named virtual machine port groups connected to the same network. All
port groups to which the VM is attached must exist on both ESXi hosts. Note that the
port group naming is case sensitive.
CPUs in both ESXi hosts must be compatible. CPUs need to be from the same
vendor (AMD or Intel, for example) and CPU family, and must support the same
features. Note that some CPU features can be hidden by using compatibility
masks.
CPU compatibility for vMotion
In order to perform a vSphere vMotion operation, CPUs in both hosts must be
compatible. CPUs must be from the same vendor (AMD or Intel, for example), must be in
the same family (P4 or Opteron, for example), and must support the same features.
However, there are some hardware extensions from Intel and AMD (such as CPU
masking) that can help you mitigate the CPU differences.
When a VM is migrated between ESXi hosts, the VM has already detected the type of
processor it is running on when it booted. Because the VM is not rebooted during the
vMotion process, the guest assumes the CPU instruction set on the target host is the same
as on the source host. Because of that, the CPUs in the two hosts that perform vMotion
must meet the following requirements (image source: VMware):
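The compatibility rules can be summarized in a short Python sketch (a hypothetical helper, not a VMware API; the host names and feature sets below are invented):

```python
# Illustrative vMotion CPU compatibility check (invented helper, not a VMware API).
def vmotion_cpu_compatible(source_cpu, target_cpu):
    """Hosts must use CPUs from the same vendor and family, and the target
    must expose at least the features the guest detected on the source."""
    if source_cpu["vendor"] != target_cpu["vendor"]:
        return False
    if source_cpu["family"] != target_cpu["family"]:
        return False
    # Features the guest detected at boot must still be present on the target.
    return source_cpu["features"] <= target_cpu["features"]

esx01 = {"vendor": "Intel", "family": "Xeon", "features": {"SSE3", "NX"}}
esx02 = {"vendor": "Intel", "family": "Xeon", "features": {"SSE3", "NX", "SSE4.1"}}
esx03 = {"vendor": "AMD", "family": "Opteron", "features": {"SSE3", "NX"}}

print(vmotion_cpu_compatible(esx01, esx02))  # True  - target features are a superset
print(vmotion_cpu_compatible(esx01, esx03))  # False - different vendor
```

Note that the check is directional: migrating from esx02 back to esx01 would fail in this model, because the guest may have detected SSE4.1 on esx02.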
Hide or expose NX/XD bit
AMD NX (No Execute) and Intel XD (Execute Disable) are technologies used in CPUs to
mark certain areas of memory as non-executable, in order to prevent malicious software
exploits and buffer overflow attacks. These technologies are turned on (exposed) by
default for all guest operating systems that support them.
In order to increase the vMotion compatibility between hosts, you can mask (hide) the
NX/XD bit. For example, if you have two otherwise compatible hosts with the NX/XD bit
mismatch, you can mask the NX/XD bit from the VM. Masking this bit tells the VM that
there is no NX/XD bit present. If the VM doesn’t know about the NX or XD bit, it won’t
matter whether the target host has that bit when you migrate the VM using vMotion.
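Conceptually, masking works by clearing the NX/XD feature bit in the CPUID information presented to the guest. Here is a minimal Python sketch (assuming the NX/XD flag is bit 20 of the EDX register returned by CPUID leaf 0x80000001; the host values below are invented):

```python
# Illustrative CPUID feature masking (not VMware code).
NX_BIT = 1 << 20  # NX/XD flag: bit 20 of EDX from CPUID leaf 0x80000001

def apply_mask(host_edx, hide_nx):
    """Return the feature word the VM sees: with the mask applied,
    the NX/XD bit is cleared regardless of what the host supports."""
    return host_edx & ~NX_BIT if hide_nx else host_edx

host_with_nx = 1 << 20  # a host whose CPU exposes NX/XD
host_without_nx = 0     # a host whose CPU does not

# With the mask applied, both hosts present the same value to the VM,
# so the NX/XD mismatch no longer blocks vMotion:
print(apply_mask(host_with_nx, hide_nx=True) == apply_mask(host_without_nx, hide_nx=True))  # True
```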
You can change the NX/XD setting using vSphere Web Client:
1. Select the VM from the inventory and select the Edit Settings option:
2. On the Virtual Hardware tab, expand the CPU tab. The NX/XD bit settings are
specified under the CPUID Mask option:
VMware CPU identification utility
The CPUs on ESXi hosts need to be similar in order for vSphere vMotion to work.
Although you can use the server hardware’s CPU specification to determine the CPU
features, VMware also offers you a free tool called VMware CPU identification utility that
can help you determine whether the CPU contains features that can affect vMotion
compatibility.
To download this tool, go to vmware.com, search for CPU identification utility, and
download the .zip file.
Inside the downloaded .zip file you will find a file called cpuid.iso. Burn this image to a
CD and boot the ESXi server from it. If you are running ESXi inside another
virtualization solution, such as VMware Player or VMware Workstation, you can also run
the virtual machine with the ISO image attached and boot from it:
After you reboot your ESXi machine, you should see a report about the CPU features:
Create virtual switch and VMkernel port group for vMotion
A virtual switch with a VMkernel port enabled for vSphere vMotion must be created on
both ESXi hosts in order for a vMotion migration to work. Here is how you can create
them using vSphere Web Client:
1. Select the ESXi host from the inventory and go to Manage > Networking. In the
Networking window, click the Add Host Networking icon:
2. The Add Networking wizard opens. Select VMkernel Network Adapter as the connection
type and click Next:
3. Select New standard switch as the target device and click Next:
4. Under Assigned adapters, click the + sign and select the physical network adapter you
would like to add to the switch:
5. Type vMotion as the network label and select the vMotion traffic checkbox under
Available Services:
6. Specify whether you would like to obtain the IPv4 settings automatically or manually
enter the IP address that will be used for vMotion:
Use vSphere vMotion
vSphere vMotion is used to migrate a powered-on VM from one ESXi host to another.
Here is how you can migrate a VM with vMotion using vSphere Web Client:
1. (Optional) If your VM is powered-off, power it on.
2. Right-click the VM you want to migrate and select the Migrate option:
3. The Migrate Virtual Machine wizard opens. Select the Change host option as the
migration type:
4. Next, you need to select the ESXi host to which you want to migrate the virtual
machine. Expand the inventory view and select the destination ESXi host:
Note that a compatibility check is performed. If you receive an error message, the
migration will not continue. Warning messages will not prevent the migration.
5. Choose the vMotion priority. Mark the first option if you want the migration to receive
a reserved share of CPU resources:
vSphere Storage vMotion explained
vSphere Storage vMotion enables you to migrate a powered-on VM’s files from one
datastore to another, with no downtime. Typical use cases are:
storage maintenance - you can move your virtual machines from a storage device to
allow maintenance or reconfiguration of the storage device without downtime.
storage load redistribution - you can redistribute virtual machines or virtual disks to
different storage volumes to balance capacity and improve performance.
datastore upgrade - you can use Storage vMotion to migrate virtual machines when
you upgrade datastores from VMFS3 to VMFS5.
Migration with Storage vMotion renames virtual machine files on the destination datastore
to match the inventory name of the virtual machine. The migration renames all virtual
disk, configuration, snapshot, and .nvram files. This feature cannot be turned off.
During a migration, you can choose to transform virtual disks from Thick-Provisioned
Lazy Zeroed or Thick-Provisioned Eager Zeroed to Thin-Provisioned or the reverse.
The following requirements must be met in order for a Storage vMotion migration to
succeed:
virtual machine disks (.vmdk files) must be in persistent mode or be raw device
mappings (RDMs).
you cannot move virtual disks greater than 2TB from a VMFS5 datastore to a
VMFS3 datastore.
the host on which the virtual machine is running must have access to both the source
and destination datastores.
the host on which the virtual machine is running must be licensed to use Storage
vMotion.
Use vSphere Storage vMotion
vSphere Storage vMotion is a vSphere migration mechanism used to migrate a powered-on
VM’s files from one datastore to another. Here is how you can migrate VM’s files with
Storage vMotion using vSphere Web Client:
1. (Optional) If your VM is powered-off, power it on.
2. Right-click the VM whose virtual disks you want to migrate and select the Migrate
option:
3. The Migrate Virtual Machine wizard opens. Select the Change datastore option as the
migration type:
4. Select the desired virtual disk format and the destination datastore:
5. Review the settings and click Finish to start the migration:
After the migration is completed, the VM should reside on the new datastore:
Enhanced vMotion explained
Enhanced vMotion enables migration to another ESXi host and datastore, even in vSphere
environments without shared storage. This feature combines vSphere vMotion and Storage
vMotion into a single operation and can be used by VMware administrators to move
workloads from host to host, without the need for expensive shared storage solutions.
In order to use Enhanced vMotion, both hosts must be on the same layer 2 network and
vSphere Web Client must be used. Here are the steps:
1. (Optional) If your VM is powered-off, power it on.
2. Right-click the VM you want to migrate and choose the Migrate option:
3. The Migrate Virtual Machine wizard opens. Choose the Change both host and datastore
option as the migration type:
4. Next, you need to select the ESXi host to which you want to migrate the virtual
machine. Expand the inventory view and select the destination ESXi host:
Note that the compatibility check is performed. If you receive any error message, the
migration will not continue. Warning messages do not prevent the migration.
5. Select the desired virtual disk format and the destination datastore:
6. Select the vMotion priority. Mark the first option if you want the migration to receive a
reserved share of CPU resources:
After the migration is completed, the VM should reside on the new datastore and the new
host:
Chapter 14 - VM snapshots
Virtual machine snapshot
VM snapshot files
Take snapshots
Revert snapshot
Delete snapshot
Consolidate snapshots
Remove virtual machine
Virtual machine snapshot
VM snapshots enable you to preserve the state of a VM so that you can return to the same
state later. A snapshot captures the memory, settings, and disk state of a virtual machine.
Snapshots can be taken while a VM is powered on, powered off, or suspended.
VM snapshot files
Each virtual machine snapshot in vSphere consists of a number of files, such as:
delta disk file - holds the changes to the virtual disk’s data from the time the snapshot
was taken. The VM’s original .vmdk file is placed in read-only mode to preserve its
state, and all subsequent disk writes go to the delta disk. This file has a -delta.vmdk
suffix.
memory state file - holds the memory state at the time the snapshot was taken. The
size of this file can be up to the size of the VM’s configured memory. The memory
state file has a .vmsn extension.
disk descriptor file - a small text file that contains information about the snapshot.
snapshot list file - created at the time the VM is created, this file keeps track of the
VM’s snapshots.
Take snapshots
Here are the steps to take a snapshot using vSphere Web Client:
1. Right-click a VM in the inventory and choose the Take Snapshot option:
2. The Take Virtual Machine Snapshot wizard opens. Enter the name and description for
the snapshot. Two other options are available:
Snapshot the virtual machine’s memory - specifies whether the RAM of the VM
should also be captured.
Quiesce guest file system (Needs VMware Tools installed) - specifies whether to
quiesce the file system in the guest OS. Use this option if you want to ensure that the
data within the guest file system is consistent in the snapshot. Note that running
applications are not quiesced. The first option (Snapshot the virtual machine’s
memory) needs to be deselected in order for this option to become available.
After the process completes, you can view the VM’s snapshots by right-clicking the VM
in the inventory and selecting Manage Snapshots. This opens the Snapshot Manager:
Revert snapshot
The Snapshot Manager enables you to view or delete a VM’s active snapshots in vSphere.
You can also use it to revert to an earlier snapshot. Here is how you can do that using
vSphere Web Client:
1. Right-click the VM from the inventory and choose the Manage Snapshots option:
2. The Snapshot Manager window opens. Select the appropriate snapshot and click the
Revert to button:
3. Click Yes to confirm the action:
Any data that was written and any application that was installed since the snapshot was
taken will no longer be available after you revert to a snapshot.
Delete snapshot
You can delete a virtual machine’s snapshot using the Snapshot Manager. Note that
deleting a snapshot consolidates the changes between snapshots and the previous disk
states. Here are a couple of examples (image source: VMware):
1. If you delete a snapshot above the You are here moment, that snapshot is deleted and its
data is committed into the previous state, so the foundation for subsequent snapshots (in
this case snap02) is retained:
2. If you delete the current snapshot (at the You are here moment), the changes are
committed to the parent snapshot. In this case, the snap02 data is committed into the
snap01 data:
3. If you delete a snapshot below the You are here moment, the subsequent snapshot point-
in-time moments are deleted and you won’t be able to return to those moments:
4. If you use the Delete All option, all intermediate snapshots before the You are here
moment are committed to the base disk. All snapshots after the You are here moment are
discarded:
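The commit behavior described in these examples can be modeled in Python, with dictionaries standing in for the base disk and the delta files (an illustrative sketch, not VMware code):

```python
# Illustrative model of snapshot deletion: dictionaries stand in for the base
# disk and delta disks; keys are disk blocks, values are block contents.
def effective_state(chain):
    """Disk contents the VM sees: base disk plus all deltas, newest last."""
    state = {}
    for delta in chain:
        state.update(delta)
    return state

def delete_snapshot(chain, index):
    """Delete chain[index] and commit its data into the next delta in the
    chain, or into its parent when it is the newest element."""
    removed = chain.pop(index)
    if index < len(chain):
        # Child blocks win: they were written later than the removed delta.
        chain[index] = {**removed, **chain[index]}
    else:
        # Deleting the current snapshot: commit into the parent.
        chain[index - 1] = {**chain[index - 1], **removed}
    return chain

base   = {"b1": "v0", "b2": "v0"}   # original .vmdk (read-only)
snap01 = {"b1": "v1"}               # delta: block b1 changed
snap02 = {"b2": "v2"}               # delta: block b2 changed

before = effective_state([base, snap01, snap02])
after  = effective_state(delete_snapshot([base, snap01, snap02], 1))
print(before == after)  # True - deleting snap01 does not change what the VM sees
```

Deleting a snapshot never changes the disk contents the VM currently sees; it only removes the ability to return to that point in time.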
Consolidate snapshots
Most often, snapshot commit operations work as expected, but sometimes you may
encounter problems that can cause the snapshot delta files to remain on the datastore. If
that happens, you can use the Consolidate option, introduced in vSphere 5, to clean
unneeded snapshot delta files from a datastore. This option commits the chain of snapshots
indicated by the delta files to the original virtual machine disk and then removes them. If you
do not perform the snapshot consolidation, the snapshot delta files could continue to grow
and consume all space on the datastore.
If a snapshot commit operation fails, you will receive a warning on the virtual machine’s
Summary tab:
To perform the consolidation, right-click the VM from the inventory and select All
vCenter Actions > Snapshots > Consolidate:
In the Confirm Consolidate window, click Yes.
Remove virtual machine
Two options are available to remove a virtual machine in vSphere:
Remove from Inventory - this option unregisters the VM from the host and the
vCenter Server inventory, but the VM’s files remain on the datastore. You can later
re-register the VM to the inventory.
Delete from Disk - this option removes the VM from the inventory and deletes its files
from the datastore.
Here is how you can remove a VM from the inventory using vSphere Web Client:
1. To only remove a VM from the inventory, right-click the VM and select All vCenter
Actions > Remove from Inventory:
2. Click Yes to confirm the removal:
3. The VM will no longer be present in the inventory, but its files remain on the datastore
and the VM can later be re-registered. If you use the Delete from Disk option instead, the
VM’s files are also deleted from the datastore; note that that action is irreversible.
Chapter 15 - vApps
vApps explained
Create vApp
vApp settings
vApps explained
With vApps, you can combine multiple VMs into a single unit. vApps are represented as
objects in the vCenter Server inventory and can be managed like any other virtual machine
(powered on, powered off, cloned, etc.).
Why would you use a vApp? Well, today’s enterprise applications are rarely constrained to
a single VM and usually have components spread across multiple VMs. For example, you
might have a front-end web server running on one virtual machine, an application server
running on another VM, and a backend database server running on yet another VM.
Because these components have certain dependencies (such as a specific start order), you
can use vApps to combine multiple VMs into a single unit and manage them as such.
You must have vCenter Server installed in order to create vApps. A vApp is represented as
an object in the Hosts and Clusters view:
Create vApp
You must have vCenter Server installed in order to create vApps. Here are the steps
to create a vApp using vSphere Web Client:
1. Go to vCenter > vApps and click on the Create a New vApp icon:
2. The New vApp wizard opens. Select the Create a new vApp option and click Next:
3. Select the ESXi host or cluster on which the vApp will run:
4. Enter the vApp name and select the folder or datacenter where the vApp will be located:
5. Choose the resource allocation settings for the vApp. By default, a new vApp will be
given a normal priority, no reservation, and no limit on CPU or memory usage:
vApp settings
After a vApp is created, you can edit its settings by right-clicking it and selecting the Edit
Settings option. The following groups of settings are available:
CPU Resources - you can specify the priority and CPU limits and reservation for the
vApp.
Memory Resources - you can specify the priority and memory limits and reservation
for the vApp.
IP allocation - you can specify the IP allocation policy for the vApp. IP addresses can
be allocated for the vApp in three ways:
Start order - you can change the order in which the virtual machines in this vApp are
started and shut down.
Advanced settings - settings such as product and vendor information and custom
properties.
For example, to change the start order of the VMs inside a vApp, right-click the vApp
from the inventory and select the Edit Settings option:
Expand the Start order tab. The virtual machines are assigned to groups. All virtual
machines in the same group are started before the VMs in the next group. Note that the
shutdown is done in reverse order:
In the example above you can see that the VM named Linux-VM will start first. After 120
seconds, the second VM, Windows VM, will start.
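The group-based start and shutdown ordering can be sketched in Python (the group numbers, VM names, and delay are taken from the example above; the data structure itself is invented for illustration):

```python
# Illustrative model of a vApp start order (not a VMware API).
# VMs are grouped; all VMs in one group start before the next group,
# and shutdown proceeds in the reverse order.
start_groups = [
    {"group": 1, "vms": ["Linux-VM"], "delay_s": 120},
    {"group": 2, "vms": ["Windows VM"], "delay_s": 120},
]

def power_on_order(groups):
    order = []
    for grp in sorted(groups, key=lambda g: g["group"]):
        order.extend(grp["vms"])
    return order

def shutdown_order(groups):
    # Shutdown is simply the reverse of the power-on order.
    return list(reversed(power_on_order(groups)))

print(power_on_order(start_groups))   # ['Linux-VM', 'Windows VM']
print(shutdown_order(start_groups))   # ['Windows VM', 'Linux-VM']
```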
Chapter 16 - Security
Security Profile services
Configure ESXi firewall
Lockdown mode explained
Integrate ESXi host with Active Directory
Access control system
Users and groups
Roles explained
Create custom role
Objects explained
Assign permissions
Security Profile services
You can use the Security Profile window to manage the services (daemons) running on an
ESXi host. You can start, stop, or restart services and control their startup behaviour. You
can choose among three startup policies, based on the status of the firewall ports:
Start and stop with host - a service starts shortly after the host starts and closes
shortly after the host shuts down. The service will regularly attempt to complete its
tasks, such as contacting the NTP server. If the port was closed but is subsequently
opened, the client begins completing its tasks shortly thereafter.
Start and stop manually - the administrator determines the service status. Port
availability is not taken into consideration and the status of the service will be
preserved even after the ESXi host is rebooted.
Start and stop with port usage - this option (recommended by VMware) causes a
service to attempt to start if any port is open and continue to attempt to start until it
successfully completes. The service stops when all ports are closed.
You can manage the services running on your ESXi host using vSphere Web Client. Select
your ESXi host from the inventory, go to Manage > Settings > Security Profile and click
the Edit button for services:
In the Edit Security Profile window, select the service you would like to manage:
In the picture above you can see that the SSH service is running and it is configured to
start and stop with the host.
Configure ESXi firewall
The ESXi management interface is protected by a firewall that sits between the
management interface and the network. The firewall is enabled by default and blocks all
ports, except ports needed for the management services, such as SSH, DNS, DHCP, NFS,
vMotion, etc.
You can manage the ESXi firewall using vSphere Web Client. Here is how you can do
this:
Select your ESXi host from the inventory and go to Manage > Settings > Security Profile
and click the Edit button for the firewall:
To enable a particular type of traffic through the ESXi firewall, select the check box next
to that traffic type. You can also disable a type of traffic by deselecting the check box for
that traffic type. You can also specify the particular source addresses from which traffic
should be allowed:
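The effect of restricting a firewall rule to particular source addresses can be illustrated with Python’s ipaddress module (the networks and addresses below are invented examples, not ESXi defaults):

```python
import ipaddress

# Illustrative allowed-source check, in the style of an ESXi firewall rule
# that is restricted to specific source networks (addresses are invented).
allowed_sources = ["192.168.1.0/24", "10.0.0.5/32"]

def source_allowed(src_ip, networks=allowed_sources):
    """Return True if the source address falls inside any allowed network."""
    ip = ipaddress.ip_address(src_ip)
    return any(ip in ipaddress.ip_network(net) for net in networks)

print(source_allowed("192.168.1.42"))  # True  - inside 192.168.1.0/24
print(source_allowed("172.16.0.9"))    # False - not in any allowed network
```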
Lockdown mode explained
The Lockdown mode can be used to increase the security of an ESXi host by limiting the
access allowed to the host. When this mode is turned on, the ESXi host can only be
accessed through vCenter Server or the Direct Console User Interface (DCUI). The ESXi host
can no longer be managed using vSphere CLI commands, vSphere Management Assistant
(vMA), or vSphere Client.
You can enable the Lockdown mode using vSphere Web Client. Here is how you can do
that:
Select your ESXi host from the inventory and go to Manage > Settings > Security Profile
and click the Edit button for the Lockdown mode:
In the Lockdown Mode window that opens, check the checkbox beside Enable Lockdown
Mode and click OK:
Integrate ESXi host with Active Directory
An ESXi host can be configured to use a directory service (such as Active Directory) to
manage user and group permissions, in order to simplify the ESXi host’s administration
and security.
To configure an ESXi host to use Active Directory, the following prerequisites must be
met:
the Active Directory domain controllers and domain name must be resolvable by the
DNS servers configured for the host.
the ESXi hostname must be fully qualified with the domain name of the Active Directory
forest, for example, esxi1.mydomain.local.
the time has to be synchronized between the ESXi host and the domain controllers.
Here is how you can integrate an ESXi host with Active Directory using vSphere Web
Client:
1. Select your ESXi host from the inventory. Go to Manage > Settings > Authentication
Services and click the Join Domain button:
2. The Join Domain window opens. Enter the domain name and choose the method to join
the ESXi host to the Active Directory domain. Two methods are available:
Using credentials - the AD credentials and the domain name of the Active Directory
server are entered.
Using proxy server - the domain name of the Active Directory server and the IP
address of the authentication proxy server are entered. This method allows you to
avoid storing Active Directory credentials on the ESXi host.
Users and groups
Two types of users exist in vSphere: users that access an ESXi host directly (defined
locally on the host) and users that access hosts through vCenter Server (defined in
vCenter Server).
The user types listed above are entirely independent of each other. For example, a direct-
access user on an ESXi host could have no access to the vCenter Server used to manage
the same ESXi host.
You can also use a directory service, such as Active Directory, to manage users and groups
for both a vCenter Server system and ESXi hosts. Note that, by default, all Domain
Administrators in an Active Directory domain have full administrative privileges over all
ESXi hosts and VMs managed by vCenter Server.
Roles explained
In vSphere, roles are collections of privileges that enable users to perform tasks such as
power on a virtual machine, configure a network, create an alarm, etc. ESXi comes with
three built-in roles:
No Access - the user cannot view or change the assigned object.
Read-Only - the user can view the state and details of the assigned object, but cannot
make any changes to it.
Administrator - the user has full privileges on the assigned object.
The three roles described above are permanent, meaning that they cannot be modified in
any way. There are also six default sample roles that can be used as is or as guidelines for
creating custom roles. These roles are:
You can display the list of roles using vSphere Web Client. On the Home screen, select
Administration > Roles:
A role can be assigned to a user or a group.
Create custom role
Although you can use the three system roles and the six sample roles already included in
vCenter Server, you might want to create your own custom roles that will better suit
your needs. The roles you define should use the smallest number of privileges possible in
order to maximize your vSphere environment’s security. Also, the role name should
indicate its purpose.
For example, let’s say that we want to create a role that will allow a user to create virtual
machines. We can create that role using vSphere Web Client. Here are the steps:
1. From the Home screen, go to Administration > Roles and click on the Create Role icon:
2. The Create Role wizard opens. Enter the role name and assign the following privileges:
Assign permissions
A permission grants a user or a group the rights to perform the actions specified in the role
for the inventory object to which the role is assigned. Objects include datacenters, clusters,
ESX/ESXi hosts, vApps, resource pools, VMs, clusters, datastores, networks, and folders.
Here are the steps to assign a permission on a vCenter Server object using vSphere Web
Client:
1. Select an object from the inventory and go to Manage > Permissions. In the
Permissions window, click the green plus sign:
2. The Add Permission window opens. Click Add to select a user or group:
3. Choose the domain, find the desired user, and click Add:
In the picture above you can see that we’ve selected our AD domain named MYDOMAIN
and the user jdoe.
4. Next, you need to assign a role to the user. Select the desired role on the right. Notice
that you can force the permission to propagate down the object hierarchy by checking the
Propagate to children check box:
The new permission should now appear in the Permissions tab:
In the picture above you can see that the Administrator role for this object has been
granted to the root and Administrator users and to the domain group ESXi Administrators.
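The effect of the Propagate to children check box can be modeled as a walk down the inventory tree (an illustrative Python sketch; the inventory objects below are invented):

```python
# Illustrative model of permission propagation down the vCenter inventory
# hierarchy (object names are invented; not a VMware API).
inventory = {
    "Datacenter": ["Cluster"],
    "Cluster": ["esx01", "esx02"],
    "esx01": ["VM-1"],
    "esx02": ["VM-2"],
}

def effective_objects(root, propagate):
    """Objects a permission applies to when assigned on `root`."""
    if not propagate:
        return [root]          # permission applies to this object only
    result = [root]
    for child in inventory.get(root, []):
        # Depth-first walk: the permission flows down to every descendant.
        result.extend(effective_objects(child, propagate=True))
    return result

# Assigned on the cluster with 'Propagate to children' checked:
print(effective_objects("Cluster", propagate=True))
# ['Cluster', 'esx01', 'VM-1', 'esx02', 'VM-2']

# Without propagation, only the cluster object itself is affected:
print(effective_objects("Cluster", propagate=False))  # ['Cluster']
```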
Chapter 17 - Manage resources
Memory virtualization explained
Memory overcommitment explained
Memory management technologies
Virtual SMP (vSMP) explained
Enable hyperthreading
Resource management overview
Shares explained
Resource pools explained
Resource pool attributes
How resource pools work
Expandable reservation parameter
Create resource pool
Memory virtualization explained
VMkernel (the hypervisor used by ESXi) manages all machine memory. It dedicates part
of this managed machine memory for its own use, while the rest is available for use by
virtual machines.
VMkernel creates a contiguous addressable memory space for a running virtual machine.
The memory space has the same properties as the virtual memory address space presented
to applications by the guest operating system. This memory space enables VMkernel to
run multiple VMs simultaneously while protecting the memory of each VM from being
accessed by other VMs.
In vSphere, three layers of memory are present:
Guest operating system virtual memory - presented to applications by the guest OS.
Guest operating system physical memory - presented to the VM by VMkernel.
ESXi host machine memory - provides a contiguous addressable memory space for
use by the VM.
Memory overcommitment explained
Memory is overcommitted when the combined configured memory of all running virtual
machines exceeds the physical memory of the ESXi host. ESXi handles memory
overcommitment in the following ways:
memory from idle virtual machines is transferred to virtual machines that need more
memory.
memory compression is enabled by default on ESXi hosts in order to improve virtual
machine performance when memory is overcommitted.
memory overhead is stored in a swap file (.vswp) on the datastore.
Memory management technologies
VMkernel (the hypervisor used by ESXi) employs these five memory-management
technologies in order to economize the physical server’s RAM usage:
These technologies are transparent page sharing, memory ballooning, memory
compression, swapping to host cache, and regular host-level swapping.
Resource management overview
Three settings are available for controlling how CPU and memory resources are allocated
to a virtual machine:
shares - specify the relative priority of a VM’s access to a given resource. If an ESXi
host comes under contention and must decide which VM gets access to which
resources, VMs with higher shares assigned will have higher priority, and therefore
greater access, to the ESXi host’s resources.
limits - restrict the amount of a given resource that a VM can use. Examples are
maximum consumption of CPU cycles or host physical memory. This option is used
to prevent a virtual machine from using more resources than specified.
reservations - specify a specific amount of the resource for the virtual machine. This
option is used to guarantee a minimum allocation of CPU and memory for a virtual
machine. A VM will start only if its reservation can be guaranteed.
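The admission-control rule in the last bullet can be sketched in Python (an illustrative model; the capacity and reservation numbers below are invented):

```python
# Illustrative admission-control check: a VM powers on only if its CPU and
# memory reservations can still be guaranteed by the host (not a VMware API).
def can_power_on(vm_reservation, host_capacity, reserved_so_far):
    """Return True if the host can guarantee the VM's reservations on top
    of what is already reserved by running VMs."""
    return all(
        reserved_so_far[r] + vm_reservation[r] <= host_capacity[r]
        for r in ("cpu_mhz", "mem_mb")
    )

host = {"cpu_mhz": 10000, "mem_mb": 16384}      # host capacity
reserved = {"cpu_mhz": 8000, "mem_mb": 12288}   # already reserved by running VMs

print(can_power_on({"cpu_mhz": 1000, "mem_mb": 2048}, host, reserved))  # True
print(can_power_on({"cpu_mhz": 4000, "mem_mb": 2048}, host, reserved))  # False
```

In the second call the VM fails admission control because its 4000 MHz CPU reservation cannot be guaranteed on top of the 8000 MHz already reserved.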
Shares explained
Shares in vSphere specify the relative priority of a VM’s access to a given resource (such
as CPU, memory, or storage). If an ESXi host comes under contention and must decide
which VM gets access to which resources, VMs with higher shares assigned will have
higher priority, and therefore greater access, to the ESXi host’s resources.
Note that the share mechanism operates only when VMs are contending for the same
resource. If an ESXi host has plenty of the resource available, shares will not play a role.
However, when the resource is scarce and ESXi must decide which VMs should be given
access to it, shares establish a proportional share system. For example, if two VMs
want more of the resource than their reservations guarantee and the ESXi host can’t satisfy
both of them, the VM with the greater share value will get higher-priority access to the
resource than the other.
To understand what a proportional share system really means, consider the following
example (image source: VMware):
In the first row you can see that, in the beginning, each virtual machine has the same
number of shares (1000). This means that each virtual machine will receive an
equal quantity of the resource (33%) from the ESXi host, if the host comes under
contention.
In the second row you can see that the number of the shares has been changed. Now, the
total number of shares is 5000. Notice that VM B has more shares (3000) than the other
two VMs (1000). In the case of contention, VM B will receive 60% of the resource from
the ESXi host.
In the third row you can see that a fourth VM, VM D, has been powered on. Now, the
total number of shares is 6000. The entitlement of the other virtual machines declines:
VM B will receive only 50% of the resource, while the other three virtual machines will
receive about 16%, or one-sixth of the resource, each.
In the fourth row you can see that VM C has been deleted. Since fewer total shares
remain, the surviving VMs will receive a greater portion of the resource.
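The proportional-share arithmetic in the rows above can be sketched in a few lines of Python. This is just an illustration of the math, not a VMware API; the VM names and share values are taken from the example:

```python
def entitlements(shares):
    """Return each VM's fraction of a contended resource,
    proportional to its share value."""
    total = sum(shares.values())
    return {vm: s / total for vm, s in shares.items()}

# Row 1: three VMs with equal shares - each is entitled to ~33%
print(entitlements({"A": 1000, "B": 1000, "C": 1000}))

# Row 2: VM B raised to 3000 shares (total 5000) - B gets 60%
print(entitlements({"A": 1000, "B": 3000, "C": 1000}))

# Row 3: VM D powered on (total 6000) - B drops to 50%, others to ~16.7%
print(entitlements({"A": 1000, "B": 3000, "C": 1000, "D": 1000}))
```

Note that the entitlements always sum to 1: adding or removing a VM never changes any share value, only each VM's fraction of the total.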
Resource pools explained
In vSphere, resource pools can be used to partition the CPU and memory resources of
ESXi hosts or DRS clusters. Resource pools are created as objects on standalone hosts (or
clusters) to hierarchically partition available CPU and memory resources. Each resource
pool can have shares, limits, and reservations specified.
Resource pools offer a convenient way to separate resources along requirements
and control the resource usage of multiple virtual machines at once. For example, you
could create two resource pools: the Production resource pool and the Test resource pool
and place your production VMs and test VMs accordingly. You could then give higher-
priority access to the Production resource pool, in the case of contention.
Resource pools are hierarchically organized. Each ESXi host or DRS cluster has a hidden
root resource pool that groups the resources of that host or cluster:
In the picture above you can see that the root resource pool 192.168.5.116 contains two
child resource pools: Another example RP and Example RP.
Resource pool attributes
Just like VMs, resource pools can have shares, limits, and reservations specified:
shares - specify the relative priority of a resource pool’s access to a given resource. If an
ESXi host comes under contention, resource pools with higher shares assigned will have
higher priority, and therefore greater access, to the resource.
limits - specifies the maximum amount of a given resource that a resource pool can use.
Examples are maximum consumption of CPU cycles or host physical memory.
reservations - specify the minimum amount of resources required by a resource pool (for
example, the minimum amount of CPU that the pool must have).
There is one additional attribute specific to resource pools: the Expandable
reservation attribute. It can be used to allow a resource pool that does not have the
required resources to request resources from its parent or ancestors.
How resource pools work
To better understand how resource pools in vSphere work, consider the following example
(image source: VMware):
In the picture above you can see that the root resource pool is a standalone ESXi
host, Svr001. It has 12,000 MHz of CPU and 4 GB of RAM. A child resource pool named
Engineering pool has been created, with a CPU reservation of 1,000 MHz, a CPU limit
of 4,000 MHz, and the Expandable reservation parameter set. Two VMs are added to the
Engineering pool: Eng-Test VM and Eng-Prod VM. Note that Eng-Prod VM has more
shares (2000) than Eng-Test VM (1000). This means that, in the case of contention, Eng-
Prod VM will get more CPU than Eng-Test VM.
However, keep in mind that resource allocation occurs at each level in the hierarchy. In
this case, Eng-Prod VM will receive 66% of the Engineering pool CPU shares, not 66% of
the total number of CPU shares on the ESXi host. For example, if we create another
resource pool and assign 1000 CPU shares to it, the fraction of the host’s CPU available
to VMs in the Engineering pool will be reduced.
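The level-by-level allocation described above can be expressed as a simple multiplication. The sketch below is an illustration of the idea, not a VMware calculation; the sibling-pool scenario is hypothetical:

```python
def host_fraction(pool_fraction_of_host, vm_fraction_of_pool):
    """Resource allocation happens level by level: a VM's effective fraction
    of the host is its pool's fraction of the host multiplied by the VM's
    fraction of the shares inside that pool."""
    return pool_fraction_of_host * vm_fraction_of_pool

# Eng-Prod VM holds 2000 of the 3000 shares inside the Engineering pool (~66%).
eng_prod_in_pool = 2000 / (2000 + 1000)

# If the Engineering pool competed with a sibling pool holding equal shares,
# the pool itself would get only 50% of the host, so Eng-Prod VM's effective
# host-level entitlement would be halved:
print(host_fraction(0.5, eng_prod_in_pool))
```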
Expandable reservation parameter
The Expandable reservation parameter can be used to allow a resource pool that does not
have the required resources to request resources from its parent or ancestors. The search
for resources proceeds up through the pool’s ancestors until it reaches the root resource
pool or a resource pool that does not have the Expandable reservation option enabled.
Here is an example (image source: VMware):
Note that all resource pools have the Expandable reservation option enabled. What
happens if we power on all virtual machines in the eCommerce Apps pool? Because the
total VM CPU reservation (500+500+500+500=2000 MHz) in this pool is greater
than the amount of CPU reserved for the pool (1200 MHz), the remaining 800 MHz will be
taken from the Retail resource pool, which has 800 MHz to give. If the Retail pool had no
more reservation to give, the amount of CPU needed would be taken from its parent, the
root resource pool.
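The borrowing logic in this example can be sketched as a small admission check. This is a simplified one-level illustration (real vSphere walks the whole ancestor chain); the numbers come from the eCommerce Apps example:

```python
def reservations_satisfiable(vm_reservations_mhz, pool_reservation_mhz,
                             parent_available_mhz, expandable):
    """Check whether a pool can satisfy its VMs' total CPU reservation,
    borrowing the shortfall from its parent only when Expandable is set."""
    needed = sum(vm_reservations_mhz)
    shortfall = needed - pool_reservation_mhz
    if shortfall <= 0:
        return True          # the pool's own reservation is enough
    return expandable and parent_available_mhz >= shortfall

# eCommerce Apps: four VMs at 500 MHz each, pool reserves 1200 MHz,
# parent (Retail) has 800 MHz of unreserved capacity to give.
print(reservations_satisfiable([500] * 4, 1200, 800, expandable=True))   # True
print(reservations_satisfiable([500] * 4, 1200, 700, expandable=True))   # False
print(reservations_satisfiable([500] * 4, 1200, 800, expandable=False))  # False
```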
Use the Expandable reservation option carefully, since a single child resource pool can
use all of its parent’s resources, if configured incorrectly.
Create resource pool
Resource pools in vSphere offer a convenient way to separate resources along
requirements and control the resource usage of multiple virtual machines at once. Here are
the steps to create a resource pool using vSphere Web Client:
1. Go to Home > vCenter > Hosts and Clusters. Right-click the ESXi host and select All
vCenter Actions > New Resource Pool:
2. The New Resource Pool wizard opens. Type the name for the resource pool and choose
its settings.
3. After the resource pool has been created, you can add virtual machines to it. Simply
drag a VM to the resource pool you’ve created:
Chapter 18 - Reporting
Performance charts in vCenter Server
Monitor CPU utilization
Monitor active memory utilization
Monitor disk usage
Monitor network performance
Real-time and historical statistics
Log levels in vCenter Server
Performance charts in vCenter Server
vCenter Server offers various performance charts for ESXi hosts and virtual machines.
These charts can help you determine whether a VM is constrained by a resource, and they
can also be used for trend analysis. Two kinds of charts are available: Overview charts
and Advanced charts.
Overview charts
Overview charts provide a summary view of how your ESXi host or virtual machine is
performing. These charts consist of a predefined view that can be selected from a drop-
down menu.
To access the Overview charts from vSphere Web Client, select your VM or ESXi host
from the inventory and go to Monitor > Performance:
Advanced charts
Advanced charts are extremely customizable and can display data counters not shown in
the overview charts. They can also be exported or printed.
To access the Advanced charts from vSphere Web Client, select your VM or ESXi host
from the inventory and go to Monitor > Performance > Advanced:
Click the blue Chart Options link to create a custom chart.
Monitor CPU utilization
It is always a good idea to keep an eye on CPU utilization. If CPU usage is
continuously high, the VM might be constrained by CPU. A good indicator of a CPU-
constrained virtual machine is the CPU ready time value. This value shows how long a
VM is waiting to be scheduled on a logical processor. The acceptable value varies from
workload to workload, but a waiting time of thousands of milliseconds might indicate that
the ESXi host is overloaded or that the VM doesn’t have enough CPU shares.
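The Ready counter in the chart is a summation value in milliseconds per sample. A common rule of thumb (not from this book, so treat it as an assumption) is to convert it to a percentage of the sample interval, which for real-time charts is 20 seconds:

```python
def ready_percent(ready_summation_ms, interval_seconds=20):
    """Convert a real-time CPU Ready summation value (milliseconds) into a
    percentage of the sample interval (20 s for real-time charts)."""
    return ready_summation_ms / (interval_seconds * 1000) * 100

# A Ready value of 2000 ms within a 20 s real-time sample is 10% ready time,
# which is generally considered a sign of CPU contention.
print(ready_percent(2000))  # 10.0
```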
You can display the CPU ready time values using vSphere Web Client:
1. Select the ESXi host from the inventory and select Monitor > Performance >
Advanced. In the Advanced window, click the Chart Options link:
2. The Chart Options wizard opens. Select CPU as the chart metric. Set the timespan as
Real-time and Line Graph as the chart type. Select only your ESXi host under Select
object for this chart. Under Select counters for this chart, select Ready:
Your chart should look like this one below:
Monitor active memory utilization
Host active memory is the amount of physical memory that is actively being used by VMs
and VMkernel. It is recommended to monitor this memory counter, since high active
memory usage of certain VMs might cause those VMs to become memory-constrained.
You can display active memory using vSphere Web Client:
1. Select the ESXi host from the inventory and select Monitor > Performance >
Advanced. In the Advanced window, click the Chart Options link:
2. The Chart Options wizard opens. Select Memory as the chart metric. Set the timespan
as Real-time and Stacked Graph (per VM) as the chart type. Select the host and all virtual
machines under Select object for this chart. Under Select counters for this chart, select
Active:
Your chart should look like the one below:
Monitor disk usage
Disk-intensive applications can cause performance problems by saturating the storage.
The two disk latency data counters that should be monitored in order to determine disk
performance problems are:
Kernel command latency - the average time spent in VMkernel per SCSI command.
Numbers greater than 2 ms might indicate a problem.
Physical device command latency - the average time the physical device takes to
complete a SCSI command. Numbers greater than 15 ms might indicate a problem.
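The two rule-of-thumb thresholds above can be wrapped into a small checker. This is only an illustration of how the thresholds are applied when reading the chart, not a VMware tool:

```python
def disk_latency_flags(kernel_ms, device_ms):
    """Flag the two latency counters against the rule-of-thumb thresholds:
    > 2 ms kernel latency, > 15 ms physical device latency."""
    flags = []
    if kernel_ms > 2:
        flags.append("kernel command latency high - VMkernel queuing?")
    if device_ms > 15:
        flags.append("physical device latency high - storage overloaded?")
    return flags

# Kernel latency is fine here; only the physical device threshold is exceeded.
print(disk_latency_flags(1.2, 25))
```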
You can display these two values using vSphere Web Client:
1. Select the ESXi host from the inventory and select Monitor > Performance >
Advanced. In the Advanced window, click the Chart Options link:
2. The Chart Options wizard opens. Select Disk as the chart metric. Set the timespan as
Real-time and Line Graph as the chart type. Select the host and disk controllers under
Select object for this chart. Under Select counters for this chart, select Kernel command
latency and Physical device command latency:
Your chart should look something like this:
Monitor network performance
You can use vCenter Server performance charts to monitor network performance. For
example, you can measure outgoing and incoming network traffic from a VM or an ESXi
host to get an idea of how much network traffic is being generated. Here is how you can
display such statistics using vCenter Server charts:
1. Select the ESXi host from the inventory and select Monitor > Performance >
Advanced. In the Advanced window, click the Chart Options link:
2. The Chart Options wizard opens. Select Network as the chart metric. Set the timespan
as Real-time and Line Graph as the chart type. Select your ESXi host under Select object
for this chart. Under Select counters for this chart, select Data receive rate, Data transmit
rate, and Usage:
Your chart should look like the one below:
Network performance counters are available only for VMs and ESXi hosts; they are
not available for datacenter objects, clusters, or resource pools.
Real-time and historical statistics
You can use vSphere Web Client to display two kinds of statistics:
real-time statistics - information generated for the past hour at a 20-second
granularity.
historical statistics - information generated for the past day, week, month, or year, at
varying granularities.
The real-time statistics are stored in a flat file on the ESXi host and in memory on the
vCenter Server system. ESXi hosts collect the real-time statistics for the host and its
virtual machines every 20 seconds.
The historical statistics are stored in the vCenter Server database. It is possible
to configure how much statistical data is collected and stored.
The following table shows how many statistical samples are stored at different
granularities (image source: VMware):
In the picture above you can see that past-day statistics show one data point every five
minutes (288 samples), while past-year statistics show one data point per day (365
samples).
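The sample counts in the table follow directly from dividing each archived time span by its collection interval; a quick sanity check:

```python
# Samples stored per archive = archived time span / collection interval.
samples_per_day = 24 * 60 // 5   # past day, one sample every 5 minutes
samples_per_year = 365           # past year, one sample per day
print(samples_per_day, samples_per_year)  # 288 365
```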
Log levels in vCenter Server
It is possible to control the quantity and type of information being logged by vCenter
Server. By default, the info level is used. You might want to increase the log level
when troubleshooting your system or working with VMware Support. The available log
levels are:
You can change the log level using vSphere Web Client. Here are the steps:
1. Select your vCenter Server from the inventory and go to Manage > Settings > General.
On the General tab, click the Edit button:
2. The Edit vCenter Server Settings window opens. Select Logging settings on the left and
choose the new log level:
3. The change takes effect immediately. No restart of vCenter Server is required.
Chapter 19 - Alarms
Alarms in vSphere
Alarm trigger types
Actions explained
Notifications explained
Create alarms
Acknowledge alarm
What is vCenter Operations Manager?
Alarms in vSphere
In vSphere, alarms are notifications that occur in response to certain events or conditions
that occur with an object in vCenter Server. It is possible to create alarms for vCenter
Server objects such as VMs, ESXi hosts, networks, and datastores. Based on the object,
these alarms can monitor resource consumption or the state of the object and alert you
when certain conditions have been met, such as high resource usage or low disk space.
Alarms are very useful because they allow you to be more proactive in the administration
of your vSphere environment.
Each alarm type has three types of actions in common:
vCenter Server has a number of built-in alarms. They can be used for generic purposes,
such as informing you when a host’s power status changes, a datastore runs low on disk
space, a VM’s CPU usage is high, etc. You can also create your own alarms, if the default
alarms are too generic for your purposes. Alarms can be defined on the following vCenter
Server inventory objects:
Virtual Machines
Hosts
Clusters
Datacenters
Datastores
Distributed Switches
Distributed Port Groups
Datastore Clusters
vCenter Server
The Host alarm type has the following additional types of actions:
The Virtual Machine alarm type has the following additional types of actions:
Power on VM
Power off VM
Suspend VM
Reset VM
Migrate VM
Normal → Warning
Warning → Critical
Critical → Warning
Warning → Normal
The states listed above are denoted by colors and shapes in the Actions tab of the New
Alarm Definition wizard:
As you can see from the picture above, a green circle represents the Normal state, a
yellow triangle the Warning state, and a red diamond represents the Critical state.
An option can be specified for each color transition:
Notifications explained
You can configure an alarm to send an email notification when it is triggered. To do so,
select your vCenter Server from the inventory, go to Manage > Settings > General and
click the Edit button. The Edit vCenter Server Settings window opens. Select Mail on the
left and enter the SMTP server hostname and the sender account:
You can also configure an alarm to send an SNMP trap when it’s triggered. vCenter Server
includes an SNMP agent that needs to be configured to send SNMP traps. The
configuration can be done using vSphere Web Client. Select vCenter Server from the
inventory and go to Manage > General and click the Edit button on the right:
The Edit vCenter Server Settings window opens. Select SNMP Receivers on the left and
enter the SNMP receiver hostname and port:
Notifications are set up on the Actions page of the alarm definition wizard:
Create alarms
Alarms are notifications that occur in response to certain events or conditions that occur
with an object in vCenter Server. The objects can be VMs, ESXi hosts, networks, and
datastores in the vCenter Server inventory.
You can create a vCenter Server alarm for a virtual machine with vSphere Web Client:
1. Select your VM from the inventory and go to Manage > Alarm Definitions. Click on
the Add icon:
2. The New Alarm Definition window opens. Enter the name for the new alarm and select
Monitor for specific conditions or state, for example CPU usage:
3. Next, click the Add button (the green plus sign) and select the condition that will trigger
the alarm:
In the picture above you can see that we’ve specified the Warning Condition as the VM
CPU usage of more than 20% for 30 seconds. The Critical condition has been specified as
the VM CPU usage of more than 30% for 30 seconds.
4. Next, we need to define actions for the Warning and Critical conditions:
In the picture above you can see that an email will be sent to Administrator if the VM
CPU usage goes above 20% for more than 30 seconds (the Warning condition), but only
once. If the VM CPU usage goes above 30% for more than 30 seconds (the Critical
condition), an email will be sent again and then every 5 minutes, until the alarm is
manually reset to green or the CPU usage drops below 30%.
Acknowledge alarm
After the problem has been resolved, you can manually acknowledge the triggered alarm.
This action suppresses the alarm actions from occurring, but it doesn’t reset the alarm to
the Normal state. When an alarm is acknowledged, the time the alarm was acknowledged
and the user that acknowledged it are recorded.
To acknowledge an alarm, select an object from the vCenter Server inventory and go to
Monitor > Issues > Triggered Alarms. Right click an alarm and select Acknowledge:
Note that the Acknowledged and Acknowledged By columns now show when and who
acknowledged the alarm:
As long as the alarm condition persists, the alarm will not return to the Normal state. In
such cases, you can manually reset the alarm to return it to the Normal state. This can be
done using the Reset To Green option. This option removes the activated alarm from the
Triggered Alarms view, even if the event that caused the alarm hasn’t actually been
resolved:
What is vCenter Operations Manager?
vCenter Operations Manager is a VMware product that collects and analyzes performance
data from your vSphere environment, combining key metrics into single scores for
environmental health and efficiency.
This product is deployed as a vApp (in Open Virtualization Format) and includes two
virtual machines. Customers that have vSphere licensing are able to download the vApp
from the VMware website and install this product into the Foundation mode without the
need for a license key. There are three additional editions with extra features which can be
purchased: Standard, Advanced, and Enterprise.
The two virtual machines in the vApp are:
User Interface VM - provides the user interface for vCenter Operations Manager.
Analytics VM - provides the data collection and processing for vCenter
Operations Manager.
Chapter 20 - High Availability
vSphere High Availability explained
Protect against ESXi host failures
Create clusters
Enable vSphere HA
Host Monitoring option
Admission Control explained
Admission Control policy
VM Monitoring explained
Datastore Heartbeating explained
Advanced Options in vSphere HA
VM overrides
Network maintenance and vSphere HA
Redundant heartbeat networks
Monitor vSphere HA cluster
vSphere High Availability explained
A highly available system is one that is continuously operational for a desirably long
period of time. There are multiple ways to achieve high availability, such as using HA
applications, redundant NICs, server clusters, redundant power supplies, etc. You can also
achieve high availability at the virtualization layer. In vSphere, a feature called vSphere
High Availability (HA) is used to provide high availability at the virtualization layer.
vSphere HA protects against the following types of failures:
ESXi host failure - if an ESXi host fails, VMs that were running on that host are
automatically restarted on other ESXi hosts.
Guest OS failure - if the VM Monitoring option is enabled and the VM stops sending
heartbeats, the guest OS is reset. The VM stays on the same ESXi host.
Application failure - the agent on an ESXi host can monitor heartbeats of applications
running inside a VM. If an application fails, the VM is restarted, but it stays on the
same host. This type of monitoring requires a third-party application monitoring
agent and VMware Tools.
With vSphere HA, there is a certain period of downtime when a failure occurs. Another
VMware feature, vSphere Fault Tolerance, provides zero downtime.
Protect against ESXi host failures
Although vSphere High Availability can also be used to protect against VM- and
application-level failures, it is primarily used to protect against ESXi host failures. If an
ESXi host crashes or doesn’t see network traffic coming from other hosts in the cluster,
the VMs that were running on the affected host will be restarted on other hosts in the
cluster.
To implement vSphere HA, the following requirements must be met:
all ESXi hosts in a vSphere HA cluster must have access to the same shared
storage locations used by all VMs on the cluster. This includes all Fibre Channel,
FCoE, iSCSI, and NFS datastores.
if a new virtual switch is added to one host, the same switch must be added to all
hosts in the cluster.
With vSphere HA, there will be a period of downtime when an ESXi host fails. There
is also a possibility of data loss or filesystem corruption because of the unplanned VM
restart, so make sure you are using journaling filesystems in your guest operating
systems.
Create clusters
A cluster in vSphere is a collection of ESXi hosts configured to share their resources.
Clusters are used to enable some of the more powerful features in vSphere, such as High
Availability (HA), Distributed Resource Scheduler (DRS), Fault Tolerance (FT), and
vMotion.
The cluster resources are managed by vCenter Server as a single pool of resources. When
a host is added to a cluster, the host’s resources become part of the cluster’s resources.
Here is how you can create a cluster using vSphere Web Client:
1. Go to Home > vCenter > Hosts and Clusters. In the inventory, right-click your
datacenter and click New Cluster:
2. The New Cluster wizard opens. Type the name for the cluster and select whether you
would like to enable the DRS, vSphere HA, EVC, and Virtual SAN options:
3. After the cluster is created, you need to add ESXi hosts to it. Simply drag and drop
ESXi hosts to the cluster object in the inventory:
Enable vSphere HA
You can enable vSphere HA during a cluster creation or by modifying an existing cluster.
To enable vSphere HA on an existing cluster, select your cluster from the inventory, go to
Manage > Settings > vSphere HA and click the Edit button on the right:
The Edit Cluster Settings dialog box opens. Check the Turn ON vSphere HA checkbox:
VM restart priority - determines the relative order in which virtual machines are
restarted after a host has failed. This option allows you to prioritize VMs and assign
higher priority to the more important VMs. You can define a default restart priority
for the entire cluster and use the VM Overrides section of the cluster settings window
to define a per-VM restart priority. For example, you can set the VM restart priority
to Medium for the cluster and to Low for a particular VM that is less important. Note
that if Disabled is selected, the VMs will not be restarted on another ESXi host in a
case of an ESXi host failure.
Host isolation response - determines what happens when a host loses its management
network connection, but continues to run.
Host Monitoring can be disabled for network or ESXi host maintenance, in order to
avoid host isolation responses.
Admission Control explained
The Admission Control feature in vSphere HA ensures that sufficient resources are
available in an HA cluster to provide failover protection. You can use Admission Control
to determine whether a user will be allowed to power on more VMs than the HA cluster
has the capacity to support.
Admission Control ensures that resources will always be available on the remaining hosts
in the HA cluster to power on the virtual machines that were running on a failed host. If
you enable this feature, the VM power-on operations that violate availability constraints
will be disallowed.
To better understand the Admission Control concept, consider the following example.
Let’s say that we have a cluster of four identical ESXi hosts running identically configured
virtual machines. The cluster acts as a single pool of resources, and the VMs consume a
total of 75% of its resources. The cluster is configured to tolerate a single ESXi host
failure.
Let’s say that we want to power on one more VM. This means that resource consumption
would rise above 75%. If Admission Control is enabled, we will not be able to power on
the new VM. Why? Each of our four hosts represents 25% of the cluster capacity. Because
the cluster is already at the limit of what it can support if one host fails, Admission
Control will prevent us from starting more VMs than it has resources to protect.
If Admission Control were disabled, we would be able to power on VMs until all of the
cluster resources were allocated. But if an ESXi host then failed, it’s possible that some of
the VMs could not be restarted because there wouldn’t be sufficient resources to power on
all of them.
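The capacity check in this example can be sketched as follows. This is a simplified illustration of the reasoning (real admission control works with reservations, not raw utilization); the numbers come from the four-host example above:

```python
def power_on_allowed(used_fraction, new_vm_fraction, hosts, tolerated_failures=1):
    """With Admission Control on, usable capacity excludes the portion of the
    cluster reserved to absorb the configured number of host failures."""
    usable = 1 - tolerated_failures / hosts
    return used_fraction + new_vm_fraction <= usable

# Four identical hosts, one tolerated failure -> only 75% of the cluster is
# usable. With 75% already consumed, a new VM is rejected:
print(power_on_allowed(0.75, 0.05, hosts=4))  # False
# With only 70% consumed, a VM needing 5% still fits:
print(power_on_allowed(0.70, 0.05, hosts=4))  # True
```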
Admission Control policy
You can choose between these four policies to define how Admission Control will ensure
capacity for the cluster:
Define failover capacity by static number of hosts - a number of hosts that may fail is
specified. Spare capacity is calculated using a slot-based algorithm. A slot is a logical
block of memory and CPU sized to satisfy the requirements of any powered-on virtual
machine in the cluster. This option is recommended in vSphere environments that have
VMs with similar CPU and memory reservations.
Define failover capacity by reserving a percentage of the cluster resources - a
percentage of the cluster’s aggregate CPU and memory resources that will be
reserved for recovery from ESXi host failures is specified. The specified percentage
indicates the total amount of resources that will remain unused for vSphere HA
purposes. This option is recommended in vSphere environments that have VMs with
highly variable CPU and memory reservations.
Use dedicated failover hosts - one or more hosts are used exclusively for failover
purposes. The failover hosts cannot have powered-on virtual machines, because they
are used for failover purposes only.
Do not reserve failover capacity - VMs can be powered on, even if the availability
constraints are violated. This option basically disables Admission Control.
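The slot-based policy can be illustrated with a simplified calculation. Treat this as a sketch of the idea only: the host capacities and slot sizes below are made-up numbers, and the real algorithm has additional details (overhead, default slot sizes when no reservations exist):

```python
def slots_per_host(host_cpu_mhz, host_mem_mb, slot_cpu_mhz, slot_mem_mb):
    """A host holds as many slots as fit in BOTH its CPU and memory capacity;
    the more constrained resource wins."""
    return [min(cpu // slot_cpu_mhz, mem // slot_mem_mb)
            for cpu, mem in zip(host_cpu_mhz, host_mem_mb)]

# Slot size is derived from the largest CPU/memory reservation among
# powered-on VMs (here: 500 MHz / 2048 MB, illustrative values).
counts = slots_per_host([9000, 9000, 6000], [32768, 32768, 16384], 500, 2048)
print(counts)  # [16, 16, 8] - the first two hosts are memory-bound

# Failover capacity: slots that remain after losing the largest host(s).
print(sum(counts) - max(counts))  # slots left if the biggest host fails
```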
You can configure Admission Control using vSphere Web Client. Select your cluster from
the inventory, go to Manage > Settings > vSphere HA and click on the Edit button on the
right:
The Edit Cluster Settings dialog box opens. Check the Admission Control checkbox and
expand the panel. This should open up the Admission Control Policy window. In
our example, we will use the Define failover capacity by reserving a percentage of the
cluster resources option to reserve 30% of the cluster’s CPU and memory resources for
failover purposes:
VM Monitoring explained
vSphere HA can be used to monitor virtual machines and protect against guest OS failures
using a feature called VM Monitoring. It works by monitoring VMware Tools heartbeats
and the I/O activity of the guest OS. If heartbeats from the guest OS are not received and
there is no disk I/O activity for a period of time, the guest OS has likely failed and the VM
is restarted by vSphere HA. Note that the VM will stay on the same ESXi host.
You can enable VM Monitoring using vSphere Web Client. Select your cluster from the
inventory, go to Manage > Settings > vSphere HA and click the Edit button on the right:
The Edit Cluster Settings dialog box opens. Under VM Monitoring, choose VM
Monitoring Only to enable it:
The level of monitoring sensitivity can also be configured. You can adjust the slider bar to
use the predefined options, or select the Custom option and specify your own values. The
following parameters can be specified:
Failure interval - if no heartbeats or disk I/O activity is detected within this time
frame, the VM will be restarted.
Minimum uptime - the time vSphere HA will wait after the VM has been powered on
before starting to monitor VMware Tools heartbeats.
Maximum per-VM resets - the maximum number of times vSphere HA will restart a
VM within the specified Maximum resets time window. If, for example, this
parameter is set to 3 and a VM fails a fourth time within the specified Maximum
resets time window, it will not be automatically restarted. This prevents endless VM
resets.
Maximum resets time window - vSphere HA will restart the VM only a maximum
number of times (Maximum per-VM resets) within this time frame.
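The interaction between Maximum per-VM resets and the resets time window amounts to a rolling-window throttle. The sketch below illustrates that logic only (it is not how vSphere implements it); times are expressed in hours for simplicity:

```python
def may_reset(reset_times, now, max_resets=3, window_hours=1.0):
    """Allow another automatic VM reset only if fewer than max_resets
    already occurred within the rolling time window (times in hours)."""
    recent = [t for t in reset_times if now - t <= window_hours]
    return len(recent) < max_resets

# Three resets in the last hour: the throttle blocks a fourth.
print(may_reset([0.1, 0.4, 0.8], now=1.0))  # False
# An hour later the old resets have aged out of the window.
print(may_reset([0.1, 0.4, 0.8], now=2.0))  # True
```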
Datastore Heartbeating explained
Datastore Heartbeating in vSphere enables the master host to better determine the true
state of a slave host. It is used when the master can no longer communicate with a slave
over the management network (no network heartbeats from the slave are being received).
By using Datastore Heartbeating, the master can determine whether the slave has failed or
is merely isolated. If the slave is not generating datastore heartbeats, it is considered
failed and its VMs will be restarted on another host in the HA cluster.
You can specify which datastores should be used by vSphere HA for heartbeating using
vSphere Web Client:
1. Select your cluster from the inventory and go to Manage > Settings > vSphere HA and
click the Edit button on the right:
2. The Edit Cluster Settings dialog box opens. Expand the Datastore Heartbeating option.
You can choose between these three heartbeat datastore selection policies:
Automatically select datastores accessible from the host - heartbeat datastores are
automatically selected by vSphere HA.
Use datastores only from the specified list - only those datastores selected from the
list of datastores will be used for datastore heartbeating. If one of those datastores
becomes unavailable, vSphere HA will not perform datastore heartbeating through a
different datastore.
Use datastores from the specified list and complement automatically if needed - the
administrator selects the preferred datastores that vSphere HA should use for
datastore heartbeating. vSphere HA chooses from among the datastores in that list. If
one of the datastores becomes unavailable, vSphere HA will choose a different
datastore. When none of the preferred datastores are available, vSphere HA will
choose any available cluster datastore.
Advanced Options in vSphere HA
You can use vSphere HA Advanced Options to configure extra parameters for your HA
cluster, such as the address to ping to determine whether a host is isolated from the
network, the minimum amount of CPU sufficient for any VM to be usable, etc.
Here is a list of all available parameters (image source: VMware):
You can configure these parameters using vSphere Web Client. For example, here is how
we can configure the isolation response address:
1. Select your cluster from the inventory and go to Manage > Settings > vSphere HA and
click the Edit button on the right:
2. The Edit Cluster Settings dialog box opens. Expand the Advanced Options panel and
click the Add button:
3. Under Option, type das.isolationaddress. Under Value, type the IP address to be used as
the isolation response address:
VM overrides
During vSphere HA setup, options such as the VM restart priority or the Host isolation
response are configured for the entire cluster. You can override these settings for
individual virtual machines by using the VM Overrides section of the cluster settings
window. For example, you can set the VM restart priority to Medium for the cluster and
to Low for a less important VM. Here is how you can do that using vSphere Web Client:
1. Select your cluster from the inventory, go to Manage > Settings > VM Overrides and
click the Add button:
2. The Add VM Overrides window opens. Click the green plus icon to select the virtual
machine:
3. Select the VM and click OK:
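The override behavior can be summarized in a few lines of Python. This is a toy model of the precedence rule only (the setting names and VM names are made up, not vSphere identifiers):

```python
# Toy model of VM Overrides precedence: a per-VM override wins over the
# cluster-wide default. Names are hypothetical, not vSphere identifiers.

CLUSTER_DEFAULTS = {"restart_priority": "Medium"}
VM_OVERRIDES = {"test-vm-01": {"restart_priority": "Low"}}

def effective_setting(vm, key):
    """Return the per-VM override if one exists, else the cluster default."""
    return VM_OVERRIDES.get(vm, {}).get(key, CLUSTER_DEFAULTS[key])

print(effective_setting("prod-db-01", "restart_priority"))  # Medium (cluster default)
print(effective_setting("test-vm-01", "restart_priority"))  # Low (per-VM override)
```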
When you need to perform maintenance on a host in a vSphere HA cluster, you can do one
of the following:
disable the Host Monitoring feature to prevent unwanted failover of virtual machines
running on the affected host.
place the host in maintenance mode. vSphere HA will not fail over VMs to a host
that is in maintenance mode.
To disable the Host Monitoring feature, select your cluster from the inventory, go to
Manage > Settings > vSphere HA and click the Edit button on the right:
The Edit Cluster Settings dialog box opens. Under Host Monitoring, deselect the Enable
Host Monitoring checkbox:
To place a host in maintenance mode, right-click the host and select the Enter
Maintenance Mode option:
The VMs running on a host entering maintenance mode need to be migrated to
another host or shut down.
Redundant heartbeat networks
Heartbeats in vSphere HA are sent between the master and slave hosts in order to
detect a host failure. A host is deemed to have failed if these events occur:
As you can see from the picture above, the Hosts section lists the vSphere HA master and
the number of slave hosts connected to the master, along with other information about
hosts. The Virtual Machines section shows the number of protected and unprotected VMs.
The Heartbeat page displays which datastores are currently being used by vSphere HA for
heartbeating:
The Configuration Issues page displays configuration issues and errors:
Chapter 21 - Fault Tolerance
vSphere Fault Tolerance (FT) explained
vSphere FT requirements
VMware vLockstep
Enable vSphere FT
vSphere Replication explained
vSphere Fault Tolerance (FT) explained
vSphere Fault Tolerance (FT) provides a higher level of business continuity than vSphere
HA. It works by creating a duplicate (secondary) copy of the virtual machine on a
different host and keeping the two VMs in sync. The secondary VM can immediately take
over in the event of an ESXi host failure and the entire state of the virtual machine will be
preserved.
Because FT provides zero downtime and zero data loss, it is usually used for business-
critical applications that must be available all the time. It is also sometimes used for
applications that have no native capability for clustering.
vSphere FT also has some disadvantages. Here are the main ones:
increased resource usage. An FT-protected VM uses twice as many resources. For
example, if the primary VM uses 2 GB of RAM, the secondary VM will also use 2 GB
of RAM.
only virtual machines with a single vCPU are compatible with Fault Tolerance.
FT does not protect virtual machines from the guest OS or application failures. If the
guest OS in the primary VM fails, then the secondary VM will fail also.
vSphere FT requirements
vSphere Fault Tolerance (FT) has requirements and limitations at the cluster, host, and
virtual machine levels. Here is a list of these requirements:
vSphere FT cluster requirements
the ESXi hosts in the cluster must have access to the same datastores and networks.
a minimum of two FT-certified ESXi hosts with the same FT version or host
build number must be used.
the ESXi hosts must have FT logging and vMotion networking configured.
host certificate checking must be enabled in the vCenter Server settings. This is the
default for vCenter Server 4.1 and later.
the configuration for each host must have Hardware Virtualization (HV) enabled in
the BIOS.
only VMs with a single vCPU are supported with vSphere FT.
the VM’s virtual disks must be in thick provisioned format or a Virtual mode RDM.
Physical mode RDMs are not supported.
the VM cannot have any USB devices, sound devices, serial ports, or parallel ports in
its configuration.
In the picture above you can see that the host on which the primary VM was running
failed. The secondary VM becomes the new primary VM and a new secondary VM is
created on another host.
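The failover sequence just described can be sketched as a toy Python model: when the primary VM's host fails, the secondary is promoted and a replacement secondary is spawned on another surviving host. The data structures and host names are hypothetical, for illustration only:

```python
# Toy model of the FT failover sequence (hypothetical structures, not the
# vSphere FT implementation).

def ft_failover(pair, hosts_up):
    """If the primary's host has failed, promote the secondary VM and
    respawn a new secondary on a different surviving host."""
    if pair["primary_host"] in hosts_up:
        return pair  # primary's host is healthy; nothing to do
    new_primary_host = pair["secondary_host"]
    # Pick any surviving host other than the new primary's for the new secondary.
    candidates = [h for h in hosts_up if h != new_primary_host]
    return {"primary_host": new_primary_host,
            "secondary_host": candidates[0] if candidates else None}

pair = {"primary_host": "esxi1", "secondary_host": "esxi2"}
print(ft_failover(pair, hosts_up=["esxi2", "esxi3"]))
# {'primary_host': 'esxi2', 'secondary_host': 'esxi3'}
```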
Enable vSphere FT
Before you enable vSphere FT for a VM, a VMkernel port needs to be configured to support
Fault Tolerance Logging. You can do this using vSphere Web Client:
1. Select your ESXi host from the inventory and go to Manage > Networking > VMkernel
adapters. Select the VMkernel port and click the Edit settings icon:
2. Under Enable services, select the Fault Tolerance Logging check box and click OK:
At startup, DRS attempts to place each VM on the host that is best suited to run that
virtual machine.
If a DRS cluster becomes unbalanced, DRS can migrate VMs from overutilized ESXi
hosts to underutilized hosts. DRS performs these migrations of VMs across hosts in the
cluster without any downtime by using vMotion. You can determine whether DRS will
just display migration recommendations or automatically perform the migration when the
cluster becomes unbalanced by defining the automation level.
vSphere Distributed Resource Scheduler (DRS)
requirements
Before using vSphere DRS, the following requirements must be met:
to use DRS for load balancing, hosts in the DRS cluster must be part of a vMotion
migration network.
all hosts should use shared storage, with volumes accessible by all hosts.
shared storage needs to be large enough to store all virtual disks for your VMs.
Three DRS automation levels are available:
Manual - when a virtual machine is powered on, DRS will display a list of
recommended hosts on which you can place the VM. If the DRS cluster becomes
unbalanced, DRS will display recommendations for VM migration.
Partially Automated - when a VM is powered on, DRS will place it on the best-suited
host, without prompting the user. If the DRS cluster becomes unbalanced, DRS will
display recommendations for VM migration.
Fully Automated - when a VM is powered on, DRS will place it on the best-suited
host, without prompting the user. If the DRS cluster becomes unbalanced, DRS will
automatically migrate VMs from overutilized hosts to underutilized hosts.
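The three automation levels differ only in which actions DRS performs automatically versus merely recommends. A small Python lookup table makes the distinction explicit (a sketch for study purposes, not part of the vSphere API):

```python
# Sketch of the three DRS automation levels and their behavior at VM
# power-on (initial placement) and on cluster imbalance (load balancing).
# Illustrative only; not a vSphere API.

DRS_LEVELS = {
    "Manual":              {"placement": "recommend", "balancing": "recommend"},
    "Partially Automated": {"placement": "automatic", "balancing": "recommend"},
    "Fully Automated":     {"placement": "automatic", "balancing": "automatic"},
}

def on_power_on(level):
    """What DRS does when a VM is powered on at this automation level."""
    return DRS_LEVELS[level]["placement"]

def on_imbalance(level):
    """What DRS does when the cluster becomes unbalanced."""
    return DRS_LEVELS[level]["balancing"]

print(on_power_on("Partially Automated"))   # automatic
print(on_imbalance("Partially Automated"))  # recommend
```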
The Migration Threshold slider determines how aggressively DRS selects VMs to migrate.
For the Fully Automated level, five options are available:
Before enabling EVC, make sure that your VMs don’t use some advanced CPU
features that could be disabled after EVC is turned on.
Enhanced vMotion Compatibility (EVC) requirements
Before enabling EVC for a cluster, the following requirements must be met:
EVC only works with different CPUs in the same vendor family, for example with different
AMD Opteron generations. Mixing AMD and Intel processors is not allowed.
for Intel CPUs, use CPUs with the Core 2 microarchitecture or newer. For AMD
CPUs, use first-generation Opteron CPUs or newer.
enable AMD No eXecute (NX) or Intel eXecute Disable (XD) technology on all
hosts.
Three types of DRS rules are available:
affinity rules - DRS will try to keep certain VMs together on the same host. These
rules are often used in multi-virtual machine systems to localize the traffic between
virtual machines.
anti-affinity rules - DRS will try to keep certain VMs on different hosts. These
rules are often used to keep the VMs separated for availability reasons.
VM to host rules - specify whether VMs can or can’t be run on a host. They can be
preferential or required. These rules are used in conjunction with DRS groups for
ease of administration. A DRS group can either consist of one or more VMs or one or
more ESXi hosts.
If two rules are in conflict with each other, they will not be enabled.
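The effect of affinity and anti-affinity rules can be illustrated with a minimal Python check that evaluates a proposed VM placement against a list of rules. The rule and placement structures are hypothetical, not the DRS engine:

```python
# Minimal sketch of evaluating affinity/anti-affinity rules against a
# proposed placement. Hypothetical structures; not the DRS engine.

def placement_ok(placement, rules):
    """placement: {vm_name: host_name}; rules: list of (type, vm_a, vm_b).

    An affinity rule wants both VMs on the same host; an anti-affinity
    rule wants them on different hosts."""
    for rule_type, a, b in rules:
        same_host = placement[a] == placement[b]
        if rule_type == "affinity" and not same_host:
            return False
        if rule_type == "anti-affinity" and same_host:
            return False
    return True

rules = [("affinity", "web1", "app1"), ("anti-affinity", "db1", "db2")]
good = {"web1": "esxi1", "app1": "esxi1", "db1": "esxi1", "db2": "esxi2"}
bad = {"web1": "esxi1", "app1": "esxi2", "db1": "esxi1", "db2": "esxi1"}
print(placement_ok(good, rules))  # True
print(placement_ok(bad, rules))   # False (both rules violated)
```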
Preferential DRS rules
The VM to host DRS rules can either be preferential or required. A preferential rule is
softly enforced and can be violated if necessary, for example to ensure the proper
functioning of DRS, HA, or DPM. Consider the following example (image source:
VMware):
As you can see from the picture above, we have created two DRS groups for virtual
machines (Group A and Group B) and two DRS groups for ESXi hosts (Blade Chassis A
and Blade Chassis B).
The goal of this design is to force the virtual machines in Group A to run on the hosts in
Blade Chassis A and to force the VMs in Group B to run on the hosts in Blade Chassis B.
But if the hosts in Blade Chassis A fail, the VMs from Group A will be moved to hosts in
Blade Chassis B.
Required DRS rules
The VM to host DRS rules can either be preferential or required. A required rule is strictly
enforced and can never be violated, unlike a preferential rule. Required rules are often
used to enforce host-based licensing. For example, if the software that is running in your
virtual machines has licensing restrictions, you can use a required rule to run those VMs
only on hosts that have the required licenses.
Here is an example (image source: VMware):
In the picture above you can see that we’ve created a DRS group for virtual machines
named Group A and a DRS group for hosts named ISV-Licensed. The goal of this design
is to force the VMs from Group A to run only on hosts in the ISV-Licensed DRS group
because these hosts have the required licenses. But if the hosts in the ISV-Licensed group
fail, VMs from Group A will not be moved to other host DRS groups.
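The difference between the two blade-chassis example (preferential) and the licensing example (required) comes down to what happens when every host in the preferred group is down. A hypothetical Python sketch of that fallback behavior:

```python
# Sketch of preferential vs. required VM-to-host rule behavior
# (hypothetical model, not the DRS implementation): a preferential rule
# falls back to other hosts when the preferred group is unavailable; a
# required rule never does.

def eligible_hosts(preferred_group, hosts_up, required):
    """Return hosts a VM may run on under a VM-to-host rule."""
    available_preferred = [h for h in preferred_group if h in hosts_up]
    if available_preferred:
        return available_preferred
    # Preferred group is entirely down:
    return [] if required else list(hosts_up)

# Blade Chassis A ("a1", "a2") is down; a preferential rule lets the VMs
# move to Chassis B:
print(eligible_hosts(["a1", "a2"], ["b1", "b2"], required=False))  # ['b1', 'b2']
# A required (e.g. host-based licensing) rule leaves the VMs unplaced instead:
print(eligible_hosts(["a1", "a2"], ["b1", "b2"], required=True))   # []
```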
Enable DRS
vSphere Distributed Resource Scheduler (DRS) is a feature that enables a virtual
environment to automatically balance itself across the ESXi hosts in a cluster in an
effort to eliminate resource contention. Here are the steps to enable DRS on a cluster using
vSphere Web Client:
1. Select your cluster from the inventory, go to Manage > Settings > vSphere DRS, and
click the Edit button on the right:
2. The Edit Cluster Settings window opens. Select vSphere DRS on the left and check the
Turn ON vSphere DRS checkbox:
3. Expand the DRS Automation option. You can set the automation level and determine
how aggressively DRS will select VMs to migrate:
4. Click OK to enable DRS.
To verify DRS functionality, go to the Summary page of your cluster. You should see the
vSphere DRS panel:
Notice that the gauge shows that the cluster is imbalanced. To display DRS
recommendations, go to Monitor > vSphere DRS:
As you can see in the picture above, DRS recommends migrating the virtual machine to
another host.
Create DRS affinity rule
The DRS affinity rules are used in DRS clusters to keep certain virtual machines together
on the same ESXi host. You can create these rules using vSphere Web Client:
1. Select your cluster from the inventory, go to Manage > Settings > DRS Rules, and click
the Add button:
2. The Create DRS Rule window opens. Enter the name for the rule and choose the rule
type. In this example, we will create a rule that will keep two VMs on the same host. Click
Add to add the VMs:
3. Select the VMs you would like to run on the same host and click OK:
Create DRS anti-affinity rule
The DRS anti-affinity rules are used to keep certain virtual machines on separate ESXi
hosts. The steps are similar to creating an affinity rule:
1. Select your cluster from the inventory, go to Manage > Settings > DRS Rules, and click
the Add button:
2. The Create DRS Rule window opens. Enter the name for the rule and choose the rule
type. In this example, we will create a rule that will keep two VMs on separate hosts.
Click Add to add the VMs:
3. Select the VMs you would like to run on different hosts and click OK:
Create DRS groups
DRS groups are used in conjunction with the VM to host rules. Here is how you can create
a DRS group using vSphere Web Client:
1. Select your cluster from the inventory, go to Manage > Settings > DRS Groups, and
click the Add button:
2. The Create DRS Group window opens. First we will create a VM DRS group. Enter the
name for the group, select VM DRS Group as the group type, and click the Add button:
3. Select one or more VMs and click OK:
Hosts can also be placed in Standby mode. When a host is placed in this mode, it is
powered off. vSphere DPM (Distributed Power Management) uses Standby mode to
optimize power usage.