vSphere ICM 8 Lab 25
INSTALL, CONFIGURE, MANAGE
Contents
Introduction
Objective
Lab Topology
Lab Settings
1 Prepare the Lab Environment
2 Configure vSphere HA in a Cluster
3 View Information About the vSphere HA Cluster
4 Configure Management Network Redundancy
5 Reconfigure the VMkernel Gateway
6 Configure Storage Redundancy
7 Confirm vSphere HA Configuration
8 Test the vSphere HA Functionality
9 View the vSphere HA Cluster Resource Usage
10 Configure the Percentage of Resource Degradation to Tolerate
Introduction
In this lab, you will configure vSphere High Availability (HA) and test its functionality.
vSphere HA is a feature in VMware vSphere that provides automatic failover protection for Virtual
Machines (VMs) in the event of a host failure. It ensures that the VMs running on a failed host are
restarted on another host in the cluster, without manual intervention. This helps minimize downtime
and ensures that applications running in VMs are highly available. vSphere HA uses network heartbeats
and other methods to detect host failures, and automatically restarts VMs on alternative hosts within
the cluster.
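The restart behavior described above can be illustrated with a small sketch. The host and VM names below mirror this lab, but the greedy placement logic is a simplified assumption for illustration, not VMware's actual HA algorithm:

```python
# Hypothetical sketch of the vSphere HA restart decision (an assumption,
# not VMware's implementation): when a host fails, its powered-on VMs are
# restarted on surviving hosts that still have spare capacity.

def plan_failover(hosts, failed_host):
    """Return a {vm_name: target_host_name} restart plan for the failed host's VMs."""
    plan = {}
    survivors = [h for h in hosts if h["name"] != failed_host["name"]]
    for vm in failed_host["vms"]:
        # Greedy placement: pick the surviving host with the most free memory.
        target = max(survivors, key=lambda h: h["free_mb"])
        if target["free_mb"] < vm["mem_mb"]:
            continue  # no host can take this VM; it stays powered off
        plan[vm["name"]] = target["name"]
        target["free_mb"] -= vm["mem_mb"]
    return plan

hosts = [
    {"name": "sa-esxi-01", "free_mb": 4096,
     "vms": [{"name": "LinuxGUI-01", "mem_mb": 2048}]},
    {"name": "sa-esxi-02", "free_mb": 8192, "vms": []},
]
print(plan_failover(hosts, hosts[0]))  # {'LinuxGUI-01': 'sa-esxi-02'}
```

The real feature also accounts for restart priority, admission control, and compatibility constraints; the sketch only captures the core idea of restarting VMs elsewhere without manual intervention.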
Objective
Lab Topology
Lab Settings
The information in the table below is needed to complete the lab. The task sections that follow explain how this information is used.
In this task, you will increase the datastore capacity of iSCSI-Datastore and migrate LinuxGUI-01 and
LinuxGUI-02 to the iSCSI-Datastore.
To launch the console window for a VM, either click on the machine’s
graphic image from the topology page, or click on the machine’s
respective tab from the Navigator.
2. Launch the Mozilla Firefox web browser by either clicking on the icon found in the bottom toolbar
or by navigating to Start Menu > Internet > Firefox Web Browser.
If the VMware Getting Started webpage does not load, please wait an
additional 3 - 5 minutes, and refresh the page to continue. This is
because the vCenter Server Appliance is still booting up and requires
extra time to initialize.
4. To log in to the vCenter Server Appliance, enter [email protected] as the username and
NDGlabpass123! as the password. Click LOGIN.
5. In the Navigator, on the Hosts and Clusters tab, select sa-vcsa.vclass.local. In the right pane,
select Datastores and right-click iSCSI-Datastore. In the Actions pull-down menu, click Increase
Datastore Capacity….
6. In the Increase Datastore Capacity window on the Select Device step, select LUN 3 and click NEXT.
7. On the Specify Configuration step, leave the defaults, and click NEXT.
8. On the Ready to Complete step, review the information, and click FINISH.
9. Repeat steps 5 – 8, and expand iSCSI-Datastore using LUN 4 and LUN 2. For LUN 2, you will only
increase the size by 10 GB for lab purposes.
10. Ensure you are still viewing the Datastores tab. Verify the iSCSI-Datastore is showing a capacity of
59 GB and at least 32 GB of free space.
11. In the Recent Tasks pane, verify the iSCSI-Datastore tasks have successfully completed.
13. In the ICM-Datacenter main workspace, click the Virtual Machines tab, and select LinuxGUI-01 and
LinuxGUI-02.
16. In the 2 Virtual Machines – Migrate window, on the Select a migration type step, select Change
storage only. Click NEXT.
17. On the Select storage step, select the iSCSI-Datastore. In the Select virtual disk format drop-down
menu, select Thin Provision. Click NEXT.
18. On the Ready to complete step, review the information, and click FINISH.
19. Monitor the Recent Tasks pane, and confirm that both Linux-GUI VMs have successfully migrated.
This will take 2 - 4 minutes to complete.
Confirm that the LinuxGUI-01 and LinuxGUI-02 VMs are located on the iSCSI-
Datastore as their storage source.
21. In the Rename window, enter ICM-Compute-01 for the name, and click OK.
22. In the ICM-Compute-01 main workspace, select the Configure tab. Navigate to Configuration >
Quickstart. Click EDIT in the Cluster basics pane.
23. In the Edit Cluster Settings window, turn the vSphere DRS toggle button on. Click OK.
25. On the Add Hosts window, Add Hosts step, click the Existing hosts (0 from 2) tab.
26. Check the sa-esxi-01.vclass.local and sa-esxi-02.vclass.local boxes, and click NEXT.
27. On the Host summary step, review the information, and click NEXT.
29. Monitor the Recent Tasks pane, and wait for the tasks to complete.
30. In the Navigator, expand the ICM-Compute-01 cluster. Verify that sa-esxi-01.vclass.local and
sa-esxi-02.vclass.local are located in the cluster. Notice the icon, and that both hosts are in
Maintenance Mode.
31. Right-click on sa-esxi-01.vclass.local and navigate to Maintenance Mode > Exit Maintenance
Mode.
32. Right-click on sa-esxi-02.vclass.local and navigate to Maintenance Mode > Exit Maintenance
Mode.
33. Verify that each host has exited Maintenance Mode by the change in their icon.
34. In the Navigator, select the ICM-Compute-01 cluster. On the Configure tab, navigate to
Configuration > Quickstart.
35. In the Cluster Quickstart main workspace, locate the Configure cluster pane, and click CONFIGURE.
36. On the Configure cluster window, Distributed switches pane, select Configure networking settings
later. Click NEXT.
37. In the Advanced options window, select Manual from the Automation level drop-down menu. In
Manual Mode, vSphere DRS presents you with VM placement recommendations that you can
choose from. Select 5 from the Migration threshold drop-down menu. Click NEXT.
38. On the Review step, review the information, and click FINISH.
39. Verify that the vSphere DRS settings are correct. In the ICM-Compute-01 main workspace, click the
Summary tab. Locate the vSphere DRS pane, and click VIEW DRS SETTINGS.
40. In the DRS Parameters pane, verify that the Migration automation level is set to Manual. Since you
selected level 5 for the migration threshold, verify that recommendations from priority 1 through 5
will be applied. Click the Close icon.
41. Leave the vSphere Client open, and continue to the next task.
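The migration threshold you just set can be pictured with a short sketch. This is an illustrative assumption about how the threshold filters recommendations, not the DRS implementation itself: with the threshold at N, recommendations of priority 1 through N are applied, where priority 1 is mandatory and priority 5 offers only slight benefit.

```python
# Illustrative sketch (not VMware's code): DRS applies a recommendation
# only when its priority is within the configured migration threshold.

def applied(recommendations, threshold):
    """Return the recommendations DRS would apply at this threshold."""
    return [r for r in recommendations if r["priority"] <= threshold]

recs = [
    {"vm": "LinuxGUI-01", "priority": 1},  # mandatory move
    {"vm": "LinuxGUI-02", "priority": 3},  # moderate benefit
    {"vm": "LinuxCLI-02", "priority": 5},  # slight benefit
]
print([r["vm"] for r in applied(recs, 5)])  # all three at threshold 5
print([r["vm"] for r in applied(recs, 3)])  # only priorities 1-3
```

This is why step 40 has you verify that priorities 1 through 5 will be applied: threshold 5 is the most aggressive setting.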
In this task, you will configure vSphere HA on the Lab Cluster to achieve higher levels of VM availability
than each ESXi host can provide individually.
Business Continuity: By providing automatic failover protection for VMs, vSphere HA helps
ensure that business critical applications continue to run even in the event of a host failure. This
helps minimize downtime and avoids business disruptions.
Increased Reliability: vSphere HA helps ensure that VMs are highly available, which increases
the reliability of the infrastructure. This helps minimize the risk of unplanned downtime and
reduces the risk of data loss.
Improved Performance: vSphere HA helps distribute VMs evenly across multiple hosts in a
cluster, which helps to balance resource utilization and improve overall performance.
Cost Savings: By reducing downtime and increasing the availability of VMs, vSphere HA can help
save costs by reducing the need for manual intervention, reducing the risk of data loss, and
increasing the efficiency of resource utilization.
In summary, vSphere HA helps ensure that VMs are highly available and protected from host failures,
which helps improve business continuity, reliability, performance, and cost efficiency.
1. In the Navigator, select LinuxGUI-01. In the LinuxGUI-01 main workspace, click Power On.
2. In the Power On Recommendations window, select the recommendation that places LinuxGUI-01
on sa-esxi-01.vclass.local. Click OK.
3. In the Navigator, select LinuxGUI-02. In the LinuxGUI-02 main workspace, click Power On.
4. In the Power On Recommendations window, select the recommendation that places LinuxGUI-02
on sa-esxi-02.vclass.local. Click OK.
6. In the ICM-Compute-01 main workspace, select the Configure tab. Navigate to Services > vSphere
Availability. Click the EDIT button to the right of vSphere HA is Turned OFF.
7. In the Edit Cluster Settings window, turn the vSphere HA toggle button on. Click OK.
8. Monitor the Recent Tasks pane, and wait for the vSphere HA configuration tasks to complete. This
will take 2 - 3 minutes.
9. View the Configure tab, and verify that vSphere HA is Turned ON.
10. Leave the vSphere Client open, and continue to the next task.
In this task, you will view status and configuration information about the ICM-Compute-01 cluster. You
will notice that the ESXi hosts in the cluster have only one management VMkernel adapter.
Viewing information about the vSphere HA cluster helps administrators monitor the health and
status of the cluster, plan for future capacity needs, ensure compliance, and optimize
performance.
1. In the ICM-Compute-01 main workspace, click the Monitor tab. Navigate to vSphere HA >
Summary. Locate the Hosts pane. Record the name of the Primary host. If the Primary host is not
listed, click the Refresh icon at the top of the window.
2. In the Summary tab, locate the Virtual Machines pane and record the number of Protected VMs. You
may need to scroll down the Summary tab to locate the Virtual Machines pane.
If both hosts are added to the cluster and no errors occur on the cluster, the
number of protected VMs should equal the number of powered-on VMs. The
number of protected VMs includes the vCLS VMs.
5. In the ICM-Compute-01 main workspace, select the Monitor tab. Navigate to vSphere HA >
Heartbeat.
7. Under vSphere HA, select Configuration Issues and review errors or warnings that are displayed.
You should see warning messages on both sa-esxi-01 and sa-esxi-02 hosts.
The first warning you see is a management network redundancy error. To fix
this error, you will add a second vmnic to the management network.
Configuring management network redundancy is also a best practice.
The second warning you see is a network configuration error. To fix this error,
you will reconfigure the TCP/IP stack.
The third warning you see is a storage configuration error. To fix this error, you
will add another shared datastore.
It is important to fix all configuration issues with vSphere HA because they can
impact the overall availability and reliability of the virtual infrastructure.
Improperly configured vSphere HA components can lead to issues such as VM
downtime, network connectivity loss, and data corruption. This can have
serious consequences, including loss of productivity, revenue, and data. Fixing
configuration issues ensures that vSphere HA is configured correctly and can
provide the necessary level of HA for the virtual infrastructure. This helps
prevent downtime and data loss, improving the overall reliability and
availability of the virtual infrastructure.
8. Leave the vSphere Client open, and continue to the next task.
In this task, you will configure network management redundancy by adding a second physical adapter
(vmnic) to the Management Network port group. Adding a second vmnic creates redundancy and
removes the single point of failure.
Network redundancy is important in vSphere HA because it helps ensure that the virtual infrastructure
remains operational in the event of a network failure. With multiple redundant network connections,
vSphere HA can detect network failures and automatically switch to an alternate network path to
maintain network connectivity. This helps ensure that VMs continue to run and that services remain
available, even if a network component fails. Network redundancy is essential for maintaining the HA
of virtual infrastructure and preventing downtime in the event of a network failure.
1. In the Navigator, select sa-esxi-01.vclass.local. On the Configure tab, navigate to Networking >
Virtual Switches.
2. In the Virtual Switches pane, expand Standard Switch: vSwitch0. Click MANAGE PHYSICAL
ADAPTERS.
3. In the Manage Physical Network Adapters window, place vmnic1 under the Standby adapters
section by selecting it and clicking MOVE DOWN. Click OK.
4. Verify that vSwitch0 has two physical adapters, vmnic0 and vmnic1.
5. Repeat steps 1 - 4 for sa-esxi-02.vclass.local. Do not proceed to the next task until you have
confirmed that vSwitch0 on sa-esxi-02.vclass.local has two physical adapters.
6. Leave the vSphere Client open, and continue to the next task.
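The active/standby teaming you just configured can be modeled with a small sketch. This is a simplified assumption about failover order, not ESXi's NIC teaming code: traffic uses an active adapter while its link is up, and a standby adapter is promoted if every active adapter loses link.

```python
# Simplified model of active/standby uplink failover on vSwitch0
# (an illustrative assumption, not ESXi's implementation).

def pick_uplink(active, standby, link_up):
    """Return the adapter that carries traffic given per-NIC link state."""
    for nic in active + standby:       # active adapters are preferred
        if link_up.get(nic, False):
            return nic
    return None                        # no working uplink: redundancy exhausted

print(pick_uplink(["vmnic0"], ["vmnic1"], {"vmnic0": True,  "vmnic1": True}))   # vmnic0
print(pick_uplink(["vmnic0"], ["vmnic1"], {"vmnic0": False, "vmnic1": True}))   # vmnic1
```

With only vmnic0 attached, a single link failure would take down the management network; adding vmnic1 as standby removes that single point of failure.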
In this task, you will reconfigure the gateway of the default TCP/IP stack.
Configuring and confirming that the networking is correct in vSphere HA is important to ensure the
proper functioning of the vSphere HA cluster. Networking is critical for communication between
vSphere HA components and for ensuring that VMs can be restarted on other hosts in the event of a
failure. Incorrect networking configuration can result in vSphere HA failing to restart VMs or not
functioning properly, leading to downtime and potential data loss. Therefore, it is important to confirm
that the networking is configured correctly and that all network settings and connectivity are working
as expected before deploying vSphere HA.
1. In the Navigator, select sa-esxi-01.vclass.local. On the Configure tab, navigate to Networking >
TCP/IP configuration.
2. In the TCP/IP Configuration pane, review the Default TCP/IP Stack. Click the ellipsis (three dots)
icon and select Edit.
3. In the Edit TCP/IP Stack Configuration window, select the Routing step. Change the VMkernel
gateway to 172.20.10.10 and click OK.
4. In the TCP/IP Configuration main workspace, verify that the Default TCP/IP stack has an IPv4
Gateway of 172.20.10.10.
5. Repeat Steps 1 - 4 for sa-esxi-02.vclass.local. Do not proceed to the next task until you have
confirmed the gateway has been changed to 172.20.10.10 on sa-esxi-02.vclass.local.
6. Leave the vSphere Client open, and continue to the next task.
In this task, you will configure storage redundancy by adding NFS storage as a shared datastore to
ensure VM availability in case of a storage failure.
Two datastores are recommended for vSphere HA to provide redundancy and ensure that VMs can
continue to run in case of a single datastore failure. By having two datastores, the VMs can be moved
to another datastore in the event of a failure, ensuring no data loss and minimizing downtime.
Additionally, having multiple datastores can also improve performance and better manage storage
utilization in a vSphere environment.
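One reason vSphere HA wants at least two shared datastores is datastore heartbeating: when network heartbeats stop, HA checks whether the host is still updating its heartbeat datastores to tell an isolated host apart from a dead one. The sketch below is a simplified assumption about that decision, not the FDM agent's real logic:

```python
# Hedged sketch of datastore heartbeating (an assumption, not VMware's
# implementation): storage heartbeats act as a second channel so HA does
# not restart VMs belonging to a host that is merely network-isolated.

def host_state(network_heartbeat, datastore_heartbeats):
    if network_heartbeat:
        return "alive"
    if any(datastore_heartbeats):
        return "isolated"   # reachable via storage: leave its VMs running
    return "failed"         # no heartbeats at all: restart VMs elsewhere

print(host_state(True,  [True, True]))    # alive
print(host_state(False, [True, False]))   # isolated
print(host_state(False, [False, False]))  # failed
```

Adding the NFS datastore alongside iSCSI-Datastore gives HA a second heartbeat datastore, which is why the storage configuration warning clears after this task.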
b. In the New Datastore window, Type step, select NFS. Click NEXT.
d. On the Name and configuration step, use the information below to configure the
settings:
i. Name: NFS-Datastore
ii. Folder: /mnt/NFS-Pool
iii. Server: 172.20.13.10
iv. Click NEXT
f. On the Ready to complete step, review the information, and click FINISH.
g. Monitor the progress in the Recent Tasks pane. On the Storage tab, expand sa-
vcsa.vclass.local and ICM-Datacenter. Verify that your NFS-Datastore is listed in the
Navigator to confirm that it is now included in the inventory.
3. Leave the vSphere Client open, and continue to the next task.
In this task, you will confirm that vSphere HA does not have any configuration errors present.
Confirming that there are no configuration errors for vSphere HA is important to ensure that the
vSphere HA cluster is properly configured and functioning as expected. Configuration errors can cause
vSphere HA to not work as intended or even fail, leading to downtime and potential data loss. This is
why it is important to confirm that there are no configuration errors before deploying vSphere HA and
to regularly check for errors after deployment. By eliminating configuration errors, you can ensure that
vSphere HA is functioning correctly and that VMs are protected and available in the event of a host
failure.
1. In the Navigator, on the Hosts and Clusters tab, right-click on sa-esxi-01.vclass.local and select
Reconfigure for vSphere HA.
2. Monitor the Recent Tasks pane, and confirm that the reconfiguration task is complete.
4. Monitor the Recent Tasks pane, and confirm that the reconfiguration task is complete.
5. In the Navigator, select the ICM-Compute-01 cluster. Click the Monitor tab and navigate to
vSphere HA > Configuration Issues. Click the Refresh icon at the top right of the window. Notice
there are no issues or warnings present.
Please allow 5 minutes to pass if you do not immediately see all of the
configuration issues cleared.
6. Leave the vSphere Client open, and continue to the next task.
In this task, you will set up vSphere HA to monitor the cluster environment and detect hardware
failures.
When an ESXi host outage is detected, vSphere HA automatically restarts the VMs on the other ESXi
hosts in the cluster.
1. While in the ICM-Compute-01 cluster pane, click on the Monitor tab. Under vSphere HA, select
Summary and verify that the Primary host is sa-esxi-01.vclass.local.
2. On the Hosts and Clusters tab, select sa-esxi-01.vclass.local. Click on the Virtual Machines tab and
verify that LinuxGUI-01 appears under sa-esxi-01.vclass.local.
3. Simulate a host failure by rebooting the primary ESXi host. In the Navigator, right-click
sa-esxi-01.vclass.local and select Power > Reboot.
Ensure that you reboot the system. Do not shut down the system.
4. In the Reboot Host window, a warning message appears stating that you chose to reboot the host,
which is not in Maintenance Mode. Enter Testing vSphere HA for the reason, and click OK.
5. In the Navigator, select the ICM-Compute-01 cluster, and click the Monitor tab. Navigate to Tasks
and Events > Events in the middle pane.
6. Notice the cluster entries are sorted by time, and that the entries appear when the host failure was
detected. The initial messages from the hosts might show failures. These messages indicate that
the VMs on the downed host have failed. The VMs may take 3 - 5 minutes to restart on the new
host successfully. You may need to refresh the vSphere Client to view these events.
7. Wait for the sa-esxi-01.vclass.local host to reboot and successfully come back online. Please allow
2 - 4 minutes for the reboot to complete.
8. In the Navigator, select sa-esxi-01.vclass.local. Select the VMs tab and ensure you are on the
Virtual Machines pane. Take notice of the virtual machines listed.
9. Select the ICM-Compute-01 cluster in the inventory, and click the Monitor tab. Navigate to vSphere
HA > Summary in the middle pane.
11. Leave the vSphere Client open, and continue to the next task.
In this task, you will examine the CPU and memory resource usage information of the ICM-Compute-01
cluster.
It is important to view the resource usage information of a cluster in vSphere HA for several reasons:
Capacity planning: Viewing resource usage information can help you plan for future capacity
needs and ensures that the cluster has enough resources to support the VMs running on it.
Performance monitoring: Resource usage information can help you monitor the performance
of the cluster and identify any potential performance issues, such as over-utilization of CPU,
memory, or storage resources.
Troubleshooting: If there are performance issues, resource usage information can help you
troubleshoot the cause of the problem and find a solution.
Right-sizing VMs: By monitoring resource usage, you can determine if VMs are appropriately
sized, and make adjustments to optimize their performance and resource utilization.
In summary, viewing resource usage information is important for ensuring that the cluster is running
efficiently, has enough resources to support the VMs, and that VMs are performing optimally.
1. Ensure ICM-Compute-01 is selected in the Navigator. Click the Monitor tab and navigate to
Resource Allocation > CPU.
2. On the CPU Reservation Details pane, review the information such as Total Reservation Capacity,
Used Reservation by other, and Available Reservation. In the Virtual Machines pane, verify that the
CPU reservation is not set on the VMs. The Reservation column should show 0 (MHz).
4. On the Memory Reservation Details, review the information such as Total Reservation Capacity,
Used Reservation by other, and Available Reservation. In the Virtual Machines pane, verify that the
memory reservation is not set on the VMs. The Reservation column should show 0 (MB).
5. Leave the vSphere Client open, and continue to the next task.
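The numbers in the Resource Allocation view follow simple arithmetic. The sketch below uses made-up capacities, not the lab's real values, purely to show the relationship:

```python
# Illustrative arithmetic behind the Resource Allocation view (capacities
# below are hypothetical, not this lab's actual numbers):
# Available Reservation = Total Reservation Capacity - Used Reservation.

def available_reservation(total_mhz, used_mhz):
    return total_mhz - used_mhz

total = 11_000   # hypothetical cluster CPU reservation capacity, in MHz
used = 500       # hypothetical reservation used by system VMs (e.g. vCLS)
print(available_reservation(total, used))  # 10500
```

Because the lab VMs have no reservations set (0 MHz / 0 MB), only system overhead consumes reservation capacity, which is why Available Reservation stays close to the total.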
In this task, you will specify the percentage of resource degradation to tolerate, and you will verify that
a message appears when the reduction threshold is met.
Resource degradation tolerance in vSphere HA refers to the ability of a vSphere HA cluster to tolerate
failures of individual components and continue to function correctly. Resource degradation can occur
when a host, network, or storage component experiences issues, and vSphere HA must determine
whether it is still possible to provide HA protection for VMs.
vSphere HA can tolerate resource degradation by continuously monitoring the state of the cluster and
VMs, and taking action when necessary. For example, if a host experiences a failure, vSphere HA can
automatically restart the VMs on other available hosts, ensuring that the VMs remain available and
running. If a network component experiences a failure, vSphere HA can detect the failure and take
action to ensure that the network remains available for VM traffic.
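The warning you will trigger in this task can be sketched as a capacity check. This is my simplification of the condition, not VMware's exact admission-control math: with "Performance degradation VMs tolerate" at 0%, a warning appears as soon as current VM utilization could not be satisfied by the capacity remaining after a host failure.

```python
# Hedged sketch of the 0% degradation-tolerance warning (an assumption,
# not VMware's exact formula). Capacities are in arbitrary GB units.

def degradation_warning(total_capacity, failed_host_capacity,
                        vm_utilization, tolerate_pct=0):
    """True when utilization exceeds post-failover capacity plus the tolerated shortfall."""
    remaining = total_capacity - failed_host_capacity   # capacity after one host fails
    allowed_shortfall = total_capacity * tolerate_pct / 100
    return vm_utilization > remaining + allowed_shortfall

# Two equal 8 GB hosts; VMs currently consuming 10 GB vs 6 GB:
print(degradation_warning(16, 8, 10, tolerate_pct=0))  # True: warning shown
print(degradation_warning(16, 8, 6, tolerate_pct=0))   # False: load fits on one host
```

Running stress workloads in the Linux VMs pushes utilization past what a single surviving host could deliver, which is what produces the informational message in step 15.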
1. In the Navigator, select ICM-Compute-01 and click the Configure tab. Navigate to Services >
vSphere DRS. Confirm that vSphere DRS is Turned ON and the Automation level is set to Manual.
2. Under Services, select vSphere Availability and click EDIT beside vSphere HA is Turned ON.
3. Select the Admission Control tab, scroll to find Performance degradation VMs tolerate, and enter 0
for the percentage. If you reduce the threshold to 0%, a warning is generated when cluster usage
exceeds the available cluster capacity. Click OK.
4. Select the LinuxGUI-01 VM and select the Summary tab. In the Guest OS pane, click LAUNCH WEB
CONSOLE. If the LinuxGUI-01 VM is sleeping, click inside the web console to wake it up.
6. In the Terminal window, enter the command below, and press Enter.
8. Select the LinuxGUI-02 VM and select the Summary tab. In the Guest OS pane, click LAUNCH WEB
CONSOLE. If the LinuxGUI-02 VM is sleeping, click inside the web console to wake it up.
10. In the Terminal window, enter the command below, and press Enter.
11. Return to the vSphere Client by clicking the vSphere-LinuxGUI-02-*** tab in Firefox.
12. Let the stress-ng command run for a couple of minutes before continuing to the next step.
13. In the Navigator, select LinuxCLI-02. In the LinuxCLI-02 main workspace, click Power On.
14. In the Power On Recommendations window, select the recommendation that places LinuxCLI-02 on
sa-esxi-02.vclass.local. Click OK.
15. Select the ICM-Compute-01 cluster in the Navigator, and click the Summary tab. The informational
message may take 3 - 5 minutes to appear. Do not continue to the next task until you see this
message.
You should see an informational message that says Running VMs utilization
cannot satisfy the configured failover resources on the cluster ICM-Compute-01
in ICM-Datacenter.
16. In the vSphere Client, shut down the LinuxCLI-02 VM by clicking the Shut Down Guest OS icon.
18. Repeat steps 12 and 13, and shut down the LinuxGUI-01 and LinuxGUI-02 VMs.
19. In the vSphere Client, click the Refresh icon at the top of the window.
20. In the ICM-Compute-01 cluster main workspace, click the Summary tab. Verify that the message
about the configured failover resources no longer appears in the cluster's Summary tab.
21. The lab is now complete; you may end your reservation.