Oracle Linux Virtualization Manager
Administrator's Guide
F52197-06
May 2023
1 Preface
Conventions
Documentation Accessibility
Access to Oracle Support for Accessibility
Diversity and Inclusion
2 Global Configuration
Administering User Accounts from the Administration Portal
Adding VM Portal Permissions to a User
Removing Users and Groups
Assigning Permissions to Users and Groups
Creating a Custom Role
Administering User and Group Accounts from the Command Line
Creating a New User Account
Setting the Password for a User Account
Editing User Information
Viewing User Information
Removing a User
Disabling User Accounts
Creating Group Accounts
Removing a Group Account
Querying Users and Groups
Managing Account Settings
Creating a Scheduling Policy
3 Administration Tasks
Data Centers
Creating a New Data Center
Clusters
Creating a New Cluster
Hosts
Moving a Host to Maintenance Mode
Activating a Host from Maintenance Mode
Removing a Host
Networks
Customizing vNIC Profiles for Virtual Machines
Attaching and Configuring a Logical Network to a Host Network Interface
Storage
Preparing Local Storage for a KVM Host
Configuring a KVM Host to Use Local Storage
Preparing NFS Storage
Attaching an NFS Data Domain
Adding an FC Data Domain
Detaching a Storage Domain from a Data Center
Configuring iSCSI Multipathing
Migrating a Logical Network to an iSCSI Bond
Virtual Machines
Live Editing a Virtual Machine
Migrating Virtual Machines between Hosts
Configuring Your Environment for Live Migration
Automatic Virtual Machine Migration
Setting Virtual Machine Migration Mode
Manually Migrate a Virtual Machine
Importing an Oracle Linux Template
Creating a Snapshot of a Virtual Machine
Restoring a Virtual Machine from a Snapshot
Creating a Virtual Machine from a Snapshot
Deleting a Snapshot
Encrypted Communication
Replacing the Oracle Linux Virtualization Manager Apache SSL Certificate
Event Notifications
Configuring Event Notification Services on the Engine
Creating Event Notifications in the Administration Portal
Canceling Event Notifications in the Administration Portal
Configuring the Engine to Send SNMP Traps
4 Deployment Optimization
Optimizing Clusters, Hosts and Virtual Machines
Configuring Memory and CPUs
Configuring Cluster Memory and CPUs
Changing Memory Overcommit Manually
Configuring Virtual Machine Memory and CPUs
Configuring a Highly Available Host
Configuring Power Management and Fencing on a Host
Preventing Host Fencing During Boot
Checking Fencing Parameters
Configuring a Highly Available Virtual Machine
Configuring High-Performance Virtual Machines
Creating a High Performance Virtual Machine
Configuring Huge Pages
Hot Plugging Devices on Virtual Machines
Hot Plugging vCPUs
Hot Plugging Virtual Memory
7 Disaster Recovery
Active-Active Disaster Recovery
Network Considerations
Storage Considerations
Configuring a Standalone Engine Stretch Cluster Environment
Configuring a Self-Hosted Engine Stretch Cluster Environment
Active-Passive Disaster Recovery
Network Considerations
Storage Considerations
Creating the Ansible Playbooks
Simplifying Ansible Tasks Using the ovirt-dr Script
Generating the Mapping File Using an Ansible Playbook
Creating Failover and Failback Playbooks
Executing a Failover
Cleaning the Primary Site
Executing a Failback
Testing the Active-Passive Configuration
Discreet Failover Test
Discreet Failover and Failback Tests
Full Failover and Failback Tests
Mapping File Attributes
1 Preface
Oracle Linux Virtualization Manager Release 4.4 is based on oVirt, which is a free, open-
source virtualization solution. The product documentation comprises:
• Release Notes - A summary of the new features, changes, fixed bugs, and known issues
in the Oracle Linux Virtualization Manager. It contains last-minute information, which
might not be included in the main body of documentation.
• Architecture and Planning Guide - An architectural overview of Oracle Linux
Virtualization Manager, prerequisites, and planning information for your environment.
• Getting Started Guide - How to install, configure, and get started with the Oracle Linux
Virtualization Manager. The document includes an example scenario covering basic
procedures for setting up the environment, such as adding hosts and storage, creating
virtual machines, configuring networks, working with templates, and backup and restore
tasks. In addition, there is information on upgrading your engine and hosts as well as
deploying a self-hosted configuration.
• Administration Guide - Provides common administrative tasks for Oracle Linux
Virtualization Manager, plus information on setting up users and groups; configuring
high availability, memory, and CPUs; configuring and using event notifications; and
configuring vCPUs and virtual memory.
You can also refer to:
• REST API Guide, which you can access from the Welcome Dashboard or directly
through its URL https://manager-fqdn/ovirt-engine/apidoc.
• Upstream oVirt Documentation.
PDFs of the Release 4.3.10 documentation are available at:
• https://www.oracle.com/a/ocom/docs/olvm43/olvm-43-releasenotes.pdf
• https://www.oracle.com/a/ocom/docs/olvm43/olvm-43-gettingstarted.pdf
• https://www.oracle.com/a/ocom/docs/olvm43/olvm-43-architecture-planning.pdf
• https://www.oracle.com/a/ocom/docs/olvm43/olvm-43-administration.pdf
Conventions
The following text conventions are used in this document:
Convention   Meaning
boldface     Boldface type indicates graphical user interface elements associated with an action, or terms defined in text or the glossary.
italic       Italic type indicates book titles, emphasis, or placeholder variables for which you supply particular values.
monospace    Monospace type indicates commands within a paragraph, URLs, code in examples, text that appears on the screen, or text that you enter.
Documentation Accessibility
For information about Oracle's commitment to accessibility, visit the Oracle Accessibility Program website at http://www.oracle.com/pls/topic/lookup?ctx=acc&id=docacc.
For information about the accessibility of the Oracle Help Center, see the Oracle Accessibility Conformance Report at https://www.oracle.com/corporate/accessibility/templates/t2-11535.html.
2 Global Configuration
For Oracle Linux Virtualization Manager, global configuration options are set from the
Configure dialog box. This dialog box is accessed by selecting Administration and then
clicking Configure. From the Configure dialog box, you can configure a number of global
resources for your virtualization environment, such as users, roles, system permissions,
scheduling policies, instance types, and MAC address pools. You can also customize the way
in which users interact with resources in the environment and configure options that can be
applied to multiple clusters from a central location.
Administering User Accounts from the Administration Portal
Removing Users and Groups
1. Click Administration and then select Users.
2. On the Users pane, select either the User or Group tab to display the added
users or groups.
3. Select the user or group to be removed.
4. Click Remove.
The Remove User(s) dialog box opens.
5. Click OK to confirm the removal of the user.
The user or group is removed and no longer appears on the Users pane.
Administering User and Group Accounts from the Command Line
Note:
For more information about the default set of roles provided by the Manager, see
the Administration Guide in oVirt Documentation.
Note:
Changes made using the ovirt-aaa-jdbc-tool command utility take effect
immediately and do not require you to restart the Manager.
To view a full list of options available for creating a user account, run the ovirt-aaa-jdbc-tool user add --help command.
The following example shows how to create a new user account and add a first
and last name to associate with the account.
# ovirt-aaa-jdbc-tool user add test1 --attribute=firstName=John --attribute=lastName=Doe
adding user test1...
user added successfully
Note: by default created user cannot log in. see:
/usr/bin/ovirt-aaa-jdbc-tool user password-reset --help.
Note:
After creating a new user account, you must set a password so that the
user can log in. See Setting the Password for a User Account.
3. Add the newly created user in the Administration Portal and assign the user
appropriate roles and permissions. See Assigning Permissions to Users and
Groups.
Note:
You must set a value for the --password-valid-to option; otherwise the
password expiry time defaults to the time of the last login.
By default, the password policy for user accounts on the internal domain has the
following restrictions:
• A user password must be a minimum length of 6 characters.
• When resetting a password, you cannot use the three previous passwords
used for the user account.
For more information on the password policy and other default settings, run the ovirt-aaa-jdbc-tool settings show command.
The following example shows how to set a user password. In the example, -0800 stands for GMT minus 8 hours.
# ovirt-aaa-jdbc-tool user password-reset test1 --password-valid-to="2025-08-01 12:00:00-0800"
Password:
Reenter password:
updating user test1...
user updated successfully
To view a full list of options available for editing user information, run the ovirt-aaa-jdbc-tool user edit --help command.
The following example shows how to edit a user account by adding an email address to associate with this user.
# ovirt-aaa-jdbc-tool user edit test1 [email protected]
updating user test1...
user updated successfully
The following example shows how to view details about a user account.
# ovirt-aaa-jdbc-tool user show test1
-- User test1(e9e4b7d0-8ffd-45a3-b6ea-1f519238e766) --
Namespace: *
Name: test1
ID: e9e4b7d0-8ffd-45a3-b6ea-1f519238e766
Display Name:
Email: [email protected]
First Name: John
Last Name: Doe
Department:
Title:
Description:
Removing a User
The ovirt-aaa-jdbc-tool user delete command is used to remove a user.
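For example, removing the test1 account created earlier looks similar to the following (the confirmation text shown is illustrative and can vary between versions):
# ovirt-aaa-jdbc-tool user delete test1
deleting user test1...
user deleted successfully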
Disabling User Accounts
Important:
Make sure you have at least one user in the environment with full
administrative permissions before disabling the default internal administrative
user account (admin user). The SuperUser role gives a user full
administrative permissions.
To disable a user:
1. Log in to the host that is running the Manager.
2. Disable the user.
ovirt-aaa-jdbc-tool user edit username --flag=+disabled
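For example, disabling the test1 account created earlier produces output in the same pattern as the other ovirt-aaa-jdbc-tool examples in this chapter (exact wording can vary):
# ovirt-aaa-jdbc-tool user edit test1 --flag=+disabled
updating user test1...
user updated successfully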
Note:
If for some reason you need to re-enable the internal admin user after it has
been disabled, you can do so by running the ovirt-aaa-jdbc-tool user edit
admin --flag=-disabled command.
Creating a Group
To create a group account:
1. Log in to the host that is running the Manager.
2. Create a new group account.
ovirt-aaa-jdbc-tool group add group-name
Note:
To view a full list of the options for adding or removing members to and from groups, run
the ovirt-aaa-jdbc-tool group-manage --help command.
The following example shows how to add users to a group.
# ovirt-aaa-jdbc-tool group-manage useradd group1 --user test1
updating user group1...
user updated successfully
The following example shows how to display details about a group account.
# ovirt-aaa-jdbc-tool group show group1
-- Group group1(f23ca27c-1d6a-4f6e-8c3e-1e03e8e56829) --
Namespace: *
Name: group1
ID: f23ca27c-1d6a-4f6e-8c3e-1e03e8e56829
Display Name:
Description:
5. Add the newly created group in the Administration Portal and assign the group
appropriate roles and permissions. See Assigning Permissions to Users and
Groups.
The users in the group inherit the roles and permissions of the group.
The following example shows how to create a second group account.
# ovirt-aaa-jdbc-tool group add group2
adding group group2...
group added successfully
The following example shows how to add the second group to the first group.
# ovirt-aaa-jdbc-tool group-manage groupadd group1 --group=group2
updating group group1...
group updated successfully
5. Add the first group in the Administration Portal and assign the group appropriate
roles and permissions. See Assigning Permissions to Users and Groups.
The following example shows sample output from the ovirt-aaa-jdbc-tool query
--what=user command.
# ovirt-aaa-jdbc-tool query --what=user
-- User test2(35e8b35e-2320-45da-b59e-1076b521d13f) --
Namespace: *
Name: test2
ID: 35e8b35e-2320-45da-b59e-1076b521d13f
Display Name:
Email:
First Name: Jane
Last Name: Doe
Department:
Title:
Description:
Account Disabled: false
Account Locked: false
Account Unlocked At: 1970-01-01 00:00:00Z
Account Valid From: 2019-09-06 16:51:32Z
Account Valid To: 2219-09-06 16:51:32Z
Account Without Password: false
Last successful Login At: 2019-09-06 17:12:08Z
Last unsuccessful Login At: 1970-01-01 00:00:00Z
Password Valid To: 2025-08-01 20:00:00Z
-- User admin(89559d7f-3b48-420b-bd4d-2790122c199b) --
Namespace: *
Name: admin
ID: 89559d7f-3b48-420b-bd4d-2790122c199b
Display Name:
Email:
First Name: admin
Last Name:
Department:
Title:
Description:
The following example shows how to filter the output of the ovirt-aaa-jdbc-tool
query command to display only user account details that start with the character J.
# ovirt-aaa-jdbc-tool query --what=user --pattern="firstName=J*"
-- User test1(e75956a8-6ebf-49d7-94fa-504afbfb96ad) --
Namespace: *
Name: test1
ID: e75956a8-6ebf-49d7-94fa-504afbfb96ad
Display Name:
Email: [email protected]
First Name: John
Last Name: Doe
Department:
Title:
Description:
Account Disabled: false
Account Locked: false
Account Unlocked At: 1970-01-01 00:00:00Z
Account Valid From: 2019-08-29 18:15:20Z
Account Valid To: 2219-08-29 18:15:20Z
Account Without Password: false
Last successful Login At: 1970-01-01 00:00:00Z
Last unsuccessful Login At: 1970-01-01 00:00:00Z
Password Valid To: 2025-08-01 20:00:00Z
-- User test2(35e8b35e-2320-45da-b59e-1076b521d13f) --
Namespace: *
Name: test2
ID: 35e8b35e-2320-45da-b59e-1076b521d13f
Display Name:
Email:
First Name: Jane
Last Name: Doe
Department:
Title:
Description:
Account Disabled: false
Account Locked: false
Account Unlocked At: 1970-01-01 00:00:00Z
Account Valid From: 2019-09-06 16:51:32Z
Account Valid To: 2219-09-06 16:51:32Z
Account Without Password: false
Last successful Login At: 2019-09-06 17:12:08Z
Last unsuccessful Login At: 1970-01-01 00:00:00Z
Password Valid To: 2025-08-01 20:00:00Z
The following example shows how to filter the output of the ovirt-aaa-jdbc-tool query command to display only group account details that match the description documentation-group.
# ovirt-aaa-jdbc-tool query --what=group --pattern="description=documentation-group"
-- Group group1(f23ca27c-1d6a-4f6e-8c3e-1e03e8e56829) --
Namespace: *
Name: group1
ID: f23ca27c-1d6a-4f6e-8c3e-1e03e8e56829
Display Name:
Description: documentation-group
Creating a Scheduling Policy
Note:
To learn about the default scheduling policies and for conceptual information,
see High Availability and Optimization in the Oracle Linux Virtualization
Manager: Architecture and Planning Guide. For detailed information on
scheduling policies and other policy types, refer to the Administration Guide
in oVirt Documentation.
• Drag and drop modules from the Disabled Filters section to the Enabled Filters
section.
• Optionally, set the module priority by right-clicking a filter module name, hovering
over Position, and then selecting First or Last.
6. In Weights Modules:
• Drag and drop modules from the Disabled Weights section to the Enabled Weights
& Factors section.
• Optionally, use the plus (+) and minus (-) to increase or decrease module weight.
7. In Load Balancer:
• Select the load balancing policy.
• Select a load balancing property and then enter a property value.
• Optionally, use the plus (+) and minus (-) to add or remove additional properties.
8. Click OK to create the scheduling policy.
3 Administration Tasks
The following are common Oracle Linux Virtualization Manager administration tasks. For
conceptual information about these topics, refer to the Oracle Linux Virtualization Manager:
Architecture and Planning Guide.
For additional administrative tasks, see the oVirt Documentation.
Data Centers
Oracle Linux Virtualization Manager creates a default data center during installation. You can
configure the default data center, or set up new appropriately named data centers.
A data center requires a functioning cluster, host, and storage domain to operate in your
virtualization environment.
Clusters
Oracle Linux Virtualization Manager creates a default cluster in the default data center during
installation. You can configure the default cluster, or set up new appropriately named clusters.
Note:
For more information on compatibility versions, see Changing Data
Center and Cluster Compatibility Versions After Upgrading.
9. From the Switch Type drop-down list, choose the type of switch to be used for the
cluster.
By default, Linux Bridge is selected from the drop-down list.
10. From the Firewall Type drop-down list, choose the firewall type for hosts in the
cluster.
The firewall types available are either iptables or firewalld. By default, the
firewalld option is selected from the drop-down list.
11. The Enable Virt Service check box is selected by default. This check box
designates that the cluster is to be populated with virtual machine hosts.
12. (Optional) Review the other tabs to further configure your cluster:
a. Click the Optimization tab on the sidebar to select the memory page sharing
threshold for the cluster, and optionally enable CPU thread handling and
memory ballooning on the hosts in the cluster. See Deployment Optimization.
b. Click the Migration Policy tab on the sidebar menu to define the virtual
machine migration policy for the cluster.
c. Click the Scheduling Policy tab on the sidebar to optionally configure a
scheduling policy, configure scheduler optimization settings, enable trusted
service for hosts in the cluster, enable HA Reservation, and add a custom
serial number policy.
d. Click the Fencing policy tab on the sidebar to enable or disable fencing in the
cluster, and select fencing options.
e. Click the MAC Address Pool tab on the sidebar to specify a MAC address pool other
than the default pool for the cluster.
13. Click OK to create the cluster.
The cluster is added to the virtualization environment and the Cluster - Guide Me menu
opens to guide you through the entities that are required to be configured for the cluster
to operate.
You can postpone the configuration of these entities by clicking the Configure Later
button. You can resume the configuration of these entities by selecting the respective
cluster and clicking More Actions and then choosing Guide Me from the drop-down
menu.
Hosts
Hosts, also known as hypervisors, are the physical servers on which virtual machines run.
Full virtualization is provided by using a loadable Linux kernel module called Kernel-based
Virtual Machine (KVM). KVM can concurrently host multiple virtual machines. Virtual
machines run as individual Linux processes and threads on the host machine and are
managed remotely by the engine.
Note:
Virtual machines that are pinned to the host and cannot be migrated are shut down.
You can check which virtual machines are pinned to the host by clicking Pinned to
Host in the Virtual Machines tab of the host’s details view.
By default, the Engine checks that the Gluster quorum is not lost when the host is
moved to maintenance mode, and that there is no self-heal activity that would be
affected by the move. If the Gluster quorum would be lost, or if self-heal activity
would be affected, the Engine prevents the host from being placed into maintenance
mode. Select the Ignore Gluster Quorum and Self-Heal Validations option to skip
these checks. Only use this option if there is no other way to place the host in
maintenance mode.
Select the Stop Gluster Service option to stop all Gluster services while moving
the host to maintenance mode.
These fields will only appear in the host maintenance window when the selected
host supports Gluster.
6. Click OK to initiate maintenance mode.
All running virtual machines are migrated to alternative hosts. If the host is the
Storage Pool Manager (SPM), the SPM role is migrated to another host. The
Status field of the host changes to Preparing for Maintenance, and finally
Maintenance when the operation completes successfully. VDSM does not stop
while the host is in maintenance mode.
Note:
If migration fails on any virtual machine, click Management and then
select Activate on the host to stop the operation placing it into
maintenance mode, then click Cancel Migration on the virtual machine
to stop the migration.
Removing a Host
You may need to remove a host from the Oracle Linux Virtualization Manager
environment when upgrading to a newer version.
1. Click Compute and then select Hosts.
2. Select the host.
Networks
With Oracle Linux Virtualization Manager, you can create custom vNICs for your virtual
machines.
Important:
Since virtual machines can start on any host in a data center/cluster, all hosts
must have the customized VM network assigned to one of their NICs. Ensure that
you assign this customized VM network to each host before booting the virtual
machine. For more information, see Assigning a Virtual Machine Network to a
KVM Host in the Oracle Linux Virtualization Manager: Getting Started Guide.
8. Highlight the virtual machine where you added the network and then click Run to
boot the virtual machine.
The red down arrow icon to the left of the virtual machine turns green and the
Status column displays UP when the virtual machine is up and running on the
network.
Note:
Before assigning logical networks, check the configuration. To help
detect to which ports and on which switch the host’s interfaces are
patched, review Port Description (TLV type 4) and System Name (TLV
type 5). The Port VLAN ID shows the native VLAN ID configured on the
switch port for untagged ethernet frames. All VLANs configured on the
switch port are shown as VLAN Name and VLAN ID combinations.
Note:
If you change the host’s management network IP address, you must
reinstall the host for the new IP address to be configured.
Each logical network can have a separate gateway defined from the
management network gateway. This ensures traffic that arrives on the
logical network is forwarded using the logical network’s gateway instead of
the default gateway used by the management network.
Set all hosts in a cluster to use the same IP stack for their management
network; either IPv4 or IPv6 only.
c. To configure a network bridge, click the Custom Properties tab, select bridge_opts
from the list, and enter a valid key and value with the syntax of key=value.
The following are valid keys with example values:
forward_delay=1500
group_addr=1:80:c2:0:0:0
group_fwd_mask=0x0
hash_max=512
hello_time=200
max_age=2000
multicast_last_member_count=2
multicast_last_member_interval=100
multicast_membership_interval=26000
multicast_querier=0
multicast_querier_interval=25500
multicast_query_interval=13000
multicast_query_response_interval=1000
multicast_query_use_ifaddr=0
multicast_router=1
multicast_snooping=1
multicast_startup_query_count=2
multicast_startup_query_interval=3125
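For example, a single bridge_opts value can carry several of these keys separated by whitespace; the values below are illustrative only:
forward_delay=1500 multicast_snooping=1 multicast_querier=0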
You can use a wildcard to apply the same option to all of a network's interfaces,
for example:
--coalesce * rx-usecs 14 sample-interval 3
The ethtool_opts option is not available by default; you need to add it using
the engine configuration tool. To view ethtool properties, from a command
line type man ethtool to open the man page. For more information, see How
to Set Up oVirt Engine to Use Ethtool in oVirt Documentation.
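As a sketch of that setup, the linked article adds the property with the engine configuration tool and restarts the engine; verify the key name and cluster version against your release before running:
# engine-config -s UserDefinedNetworkCustomProperties="ethtool_opts=.*" --cver=4.4
# systemctl restart ovirt-engine.service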
e. To configure Fibre Channel over Ethernet (FCoE), click the Custom
Properties tab, select fcoe from the list, and enter enable=yes. Separate
multiple entries with a whitespace character.
The fcoe option is not available by default; you need to add it using the engine
configuration tool. For more information, see How to Set Up oVirt Engine to
Use FCoE in oVirt Documentation.
f. To change the default network used by the host from the management network
(ovirtmgmt) to a non-management network, configure the non-management
network’s default route. For more information, see Configuring a Non-
Management Logical Network as the Default Route in oVirt Documentation.
g. If your logical network definition is not synchronized with the network
configuration on the host, select the Sync network check box. For more
information about unsynchronized hosts and how to synchronize them, see
Synchronizing Host Networks in oVirt Documentation.
8. To check network connectivity, select the Verify connectivity between Host and
Engine check box.
Note:
The host must be in maintenance mode.
9. Click OK.
Note:
If not all network interface cards for the host are displayed, click
Management and then Refresh Capabilities to update the list of
network interface cards available for that host.
Storage
Oracle Linux Virtualization Manager uses a centralized storage system for virtual machine
disk images, ISO files, and snapshots. You can use Network File System (NFS), Internet
Small Computer System Interface (iSCSI), or Fibre Channel Protocol (FCP) storage. You can
also configure local storage attached directly to hosts.
The following administration tasks cover preparing and adding local, NFS, and FCP storage.
For information about attaching iSCSI storage, see Attaching an iSCSI Data Domain in the
Oracle Linux Virtualization Manager: Getting Started Guide.
2. Ensure that the directory has permissions that allow read-write access to the vdsm user
(UID 36) and kvm group (GID 36).
# chown 36:36 /data /data/images
# chmod 0755 /data /data/images
5. Click Edit next to the Data Center, Cluster, and Storage fields to configure and
name the local storage domain.
6. In the Set the path to your local storage text input field, specify the path to your
local storage domain.
For more information, refer to Preparing Local Storage for a KVM Host.
7. Click OK to add the local storage domain.
When the virtualization environment is finished adding the local storage, the new
data center, cluster, and storage created for the local storage appear on the Data
Center, Clusters, and Storage panes, respectively.
You can click Tasks to monitor the various processing steps that are completed to
add the local storage to the host.
You can also verify the successful addition of the local storage domain by viewing
the /var/log/ovirt-engine/engine.log file.
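For example, you can watch the log while the domain is being added:
# tail -f /var/log/ovirt-engine/engine.log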
2. Set the required permissions on the new directory to allow read-write access to the
vdsm user (UID 36) and kvm group (GID 36).
# chown -R 36:36 /nfs/olv_ovirt
# chmod -R 0755 /nfs/olv_ovirt
3. Add an entry for the newly created NFS share in the /etc/exports file on
the NFS file server, using the following format: full-path-of-share-created
*(rw,sync,no_subtree_check,all_squash,anonuid=36,anongid=36).
For example:
# vi /etc/exports
# added the following entry
/nfs/olv_ovirt/data
*(rw,sync,no_subtree_check,all_squash,anonuid=36,anongid=36)
If you do not want to export the domain share to all servers on the network (denoted by
the * before the left parenthesis), you can specify each individual host in your
virtualization environment by using the following format:
/nfs/olv_ovirt/data hostname-or-ip-address(rw,sync,no_subtree_check,all_squash,anonuid=36,anongid=36)
For example:
/nfs/olv_ovirt/data hostname(rw,sync,no_subtree_check,all_squash,anonuid=36,anongid=36)
5. Confirm that the added export is available to Oracle Linux Virtualization Manager hosts
by using the following showmount commands on the NFS File Server.
# showmount -e | grep pathname-to-domain-share-added
# showmount | grep ip-address-of-host
For information about uploading images to the data domain, see Uploading
Images to a Data Domain in the Oracle Linux Virtualization Manager: Getting
Started Guide.
You can click Tasks to monitor the various processing steps that are completed to
attach the FC data domain to the data center.
Note:
The OVF_STORE disks are images that contain the metadata of virtual
machines and disks that reside on the storage data domain.
6. Click OK.
The storage domain is deactivated and has an Inactive status in the results list. You can
now detach the inactive storage domain from the data center.
7. Click Detach.
8. Click OK to detach the storage domain.
Now that the storage domain is detached from the data center, it can be attached to another
data center.
4. In the Add iSCSI Bond window, enter a Name and optionally add a Description.
5. Select a logical network from Logical Networks and a storage domain from
Storage Targets. You must select all paths to the same target.
6. Click OK.
The hosts in the data center are connected to the iSCSI targets through the logical
networks in the iSCSI bond.
d. In the Add iSCSI Bond window, enter a Name, select the networks net-1 and net-2
and click OK.
Your data center has an iSCSI bond containing the old and new logical networks.
Virtual Machines
Oracle Linux Virtualization Manager lets you perform basic administration of your virtual
machines, including live editing, creating and using snapshots, and live migration.
Note:
For information on creating virtual machines, see the Oracle Linux Virtualization
Manager: Getting Started Guide.
Important:
Check this box with caution because you can expose the previous
user's session to the new user.
Note:
You are not able to check this box if on the Host tab you have
selected either Allow manual migration only or Do not allow
migration for the Migration mode. For a virtual machine to be highly-
available it must be possible for the engine to migrate the virtual
machine to another host when needed.
Select the priority level (Low, Medium, or High) for the virtual machine to live migrate
or restart on another host.
Select the Icon tab to upload a new icon.
5. Click OK when you are finished with all tabs to save your changes.
Changes to any other settings are applied when you shut down and restart your virtual
machine. Until then, an orange icon displays to indicate pending changes.
Note:
Live migrations are performed using the management network. The number of
concurrent migrations supported is limited by default. Even with these limits,
concurrent migrations can potentially saturate the management network. To
minimize the risk of network saturation, we recommend that you create separate
logical networks for storage, display, and virtual machine data.
Caution:
Assigning a virtual machine to one specific host and disabling migration are
mutually exclusive with Oracle Linux Virtualization Manager high availability (HA).
Virtual machines that are assigned to one specific host can only be made highly
available using third-party HA products. This restriction does not apply to virtual
machines that are assigned to a group of hosts.
5. From the Migration mode drop-down list, select Allow manual and automatic
migration, Allow manual migration only or Do not allow migration.
6. (Optional) Check Use custom migration downtime and specify a value in milliseconds.
7. Click OK.
3. Ensure that the kvm user has access to the OVA file's path, for example:
-rw-r--r-- 1 vdsm kvm 872344576 Jan 15 17:43 OLKVM_OL7U7_X86_64.ova
Note:
You can select more than one virtual appliance to import.
9. Click the right arrow to move the appliance(s) to the Virtual Machines to Import
list and then click Next.
10. Click the Clone field for the template you want to import and review its General,
Network Interfaces, and Disks configuration.
11. Click OK.
The import process can take several minutes. Once it completes, you can view the
template(s) by clicking Compute and then Templates.
To create a virtual machine from your imported template, see Creating an Oracle
Linux Virtual Machine from a Template in the Oracle Linux Virtualization Manager:
Getting Started Guide.
Note:
For best practices when using snapshots, see Considerations When Using
Snapshots in the Oracle Linux Virtualization Manager: Architecture and
Planning Guide.
Important:
Not selecting a disk results in the creation of a partial snapshot of the virtual
machine without a disk. Although a saved partial snapshot does not have a
disk, you can still preview a partial snapshot to view the configuration of the
virtual machine.
7. (Optional) Select the Save Memory check box to include the virtual machine's memory
in the snapshot. By default, this checkbox is selected.
8. Click OK to save the snapshot.
The virtual machine’s operating system and applications on the selected disks are stored
in a snapshot that can be previewed or restored.
On the Snapshots pane, the Lock icon appears next to the snapshot as it is being
created. Once complete, the icon changes to the Snapshot (camera) icon. You can then
display details about the snapshot by selecting the General, Disks, Network Interfaces,
and Installed Applications drop-down views.
Note:
The virtual machine must be in a Down state before performing this task.
On the Snapshots pane, the Preview (eye) icon appears next to the snapshot
when the preview of the snapshot is completed.
6. Click Run to start the virtual machine.
The virtual machine runs using the disk image of the snapshot. You can preview
the snapshot and verify the state of the virtual machine.
7. Click Shutdown to stop the virtual machine.
8. From the Snapshots pane, perform one of the following steps:
a. Click Commit to permanently restore the virtual machine to the condition of
the snapshot. Any subsequent snapshots are erased.
b. Alternatively, click Undo to deactivate the snapshot and return the virtual
machine to its previous state.
Note:
The Name field is the only required field on this dialog box.
After a short time, the cloned virtual machine appears on the Virtual Machines
pane with a status of Image Locked. The virtual machine remains in this state until
the Manager completes the creation of the virtual machine. When the virtual
machine is ready to use, its status changes from Image Locked to Down on the
Virtual Machines pane.
Deleting a Snapshot
You can delete a virtual machine snapshot and permanently remove it from your virtualization
environment. This operation is supported on a running virtual machine and does not require
the virtual machine to be in a Down state.
Important:
• When you delete a snapshot from an image chain, there must be enough free
space in the storage domain to temporarily accommodate both the original
volume and the newly merged volume; otherwise, the snapshot deletion fails.
This is due to the data from the two volumes being merged in the resized
volume and the resized volume growing to accommodate the total size of the
two merged images. In this scenario, you must export and reimport the volume
to remove the snapshot.
• If the snapshot being deleted is contained in a base image, the volume
subsequent to the volume containing the snapshot being deleted is extended to
include the base volume.
• If the snapshot being deleted is contained in a QCOW2 (thin-provisioned), non-
base image hosted on internal storage, the successor volume is extended to
include the volume containing the snapshot being deleted.
To delete a snapshot:
1. Click Compute and then select Virtual Machines.
The Virtual Machines pane opens with the list of virtual machines that have been
created.
2. Under the Name column, select the virtual machine with the snapshot that you want to
delete.
The General tab opens with details about the virtual machine.
3. Click the Snapshots tab.
4. On the Snapshots pane, select the snapshot to delete.
5. Click Delete.
6. Click OK.
On the Snapshots pane, a Lock icon appears next to the snapshot until the snapshot is
deleted.
Encrypted Communication
You can configure your organization’s third-party CA certificate to identify the Oracle Linux
Virtualization Manager to users connecting over HTTPS.
Using a third-party CA certificate for HTTPS connections does not affect the certificate
that is used for authentication between the engine host and KVM hosts. They continue
to use the self-signed certificate generated by the Manager.
Caution:
Do not change the permissions and ownership for the /etc/pki directory
or any subdirectories. The permission for the /etc/pki and
/etc/pki/ovirt-engine directories must remain as the default value of 755.
3. Copy the CA certificate into the PKI directory for the Manager.
# cp third-party-ca-cert.pem /etc/pki/ovirt-engine/apache-ca.pem
5. Copy the new Apache private key into the PKI directory for the Manager by
entering the following command and responding to the prompt.
# cp apache.key /etc/pki/ovirt-engine/keys/apache.key.nopass
cp: overwrite '/etc/pki/ovirt-engine/keys/apache.key.nopass'? y
6. Copy the new Apache certificate into the PKI directory for the Manager by entering
the following command and responding to the prompt.
# cp apache.cer /etc/pki/ovirt-engine/certs/apache.cer
cp: overwrite '/etc/pki/ovirt-engine/certs/apache.cer'? y
8. Create a new trust store configuration file (or edit the existing one) at /etc/ovirt-
engine/engine.conf.d/99-custom-truststore.conf by adding the following
parameters.
ENGINE_HTTPS_PKI_TRUST_STORE="/etc/pki/java/cacerts"
ENGINE_HTTPS_PKI_TRUST_STORE_PASSWORD=""
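The new trust store settings are read when the services restart. A typical sequence on the engine host is shown below (confirm the service names for your installation):
# systemctl restart httpd.service
# systemctl restart ovirt-engine.service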
Event Notifications
The following section explains how to set up event notifications to monitor events in your
virtualization environment. You can configure the Manager to send event notifications in email
to alert designated users when certain events occur or enable Simple Network Management
Protocol (SNMP) traps to monitor your virtualization environment.
Note:
If you plan to also configure SNMP traps in your virtualization
environment, you can also copy the values from the SNMP_TRAP
Notifications section of the ovirt-engine-notifier.conf file to a file
named 20-snmp.conf. For more information, see Configuring the
Engine to Send SNMP Traps.
4. Enter the correct email variables. This file overrides the values in the original
ovirt-engine-notifier.conf file.
---------------------
# EMAIL Notifications #
---------------------
# The SMTP port (usually 25 for plain SMTP, 465 for SMTP with SSL, 587 for SMTP with TLS)
MAIL_PORT=25
Note:
For information about the other parameters available for event notification in the
ovirt-engine-notifier.conf file, refer to oVirt Documentation.
Note:
A user does not appear in the Administration Portal until the user is created and
assigned appropriate permissions. For more information, refer to Creating a
New User Account.
Note:
Default SNMP configuration values exist on the Engine in the events notifications
configuration file (ovirt-engine-notifier.conf), which is available at
/usr/share/ovirt-engine/services/ovirt-engine-notifier/ovirt-engine-notifier.conf.
The values provided in this step are based on the default or example values provided in
that file. To ensure that your configuration settings persist across reboots, define an
override file for your SNMP configuration (20-snmp.conf) rather than editing the
ovirt-engine-notifier.conf file. For more information, see Configuring Event
Notification Services on the Engine.
3. Specify the SNMP manager, the SNMP community, and the OID in the following format:
SNMP_MANAGERS="manager1.example.com manager2.example.com:162"
SNMP_COMMUNITY=public
SNMP_OID=1.3.6.1.4.1.2312.13.1.1
# Default SNMP Version. SNMP version 2 and version 3 traps are supported
# 2 = SNMPv2
# 3 = SNMPv3
SNMP_VERSION=2
# The SNMPv3 auth protocol. Supported values are MD5 and SHA.
SNMP_AUTH_PROTOCOL=
SNMP_AUTH_PASSPHRASE=
# The SNMPv3 privacy protocol. Supported values are AES128, AES192 and AES256.
# Be aware that AES192 and AES256 are not defined in RFC3826, so please verify
# that your SNMP server supports those protocols before enabling them.
SNMP_PRIVACY_PROTOCOL=
• Send all events with the severity ERROR or ALERT to the default SNMP profile:
FILTER="include:\*:ERROR(snmp:) ${FILTER}"
FILTER="include:\*:ALERT(snmp:) ${FILTER}"
7. (Optional) Validate that traps are being sent to the SNMP Manager.
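One simple check, assuming tcpdump is available on the SNMP manager host, is to watch for traffic on the default trap port (UDP 162) while triggering a test event:
# tcpdump -n -i any udp port 162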
4 Deployment Optimization
You can configure Oracle Linux Virtualization Manager so that your cluster is optimized and
your hosts and virtual machines are highly available. You can also enable or disable devices
(hot plug) while a virtual machine is running.
Note:
If a virtual machine is running Oracle products, such as Oracle Database or other
Oracle applications, that require dedicated memory, configuring memory
overcommitment is not an available option.
Using the Resource Allocation tab when creating or editing a virtual machine, you can:
• set the maximum amount of processing capability a virtual machine can access on its
host.
• pin a virtual CPU to a specific physical CPU.
• guarantee an amount of memory for the virtual machine.
• enable the memory balloon device for the virtual machine. Enable Memory Balloon
Optimization must also be selected for the cluster.
• improve the speed of disks that have a VirtIO interface by pinning them to a thread
separate from the virtual machine's other functions.
For more information, refer to High Availability and Optimization in the Oracle Linux
Virtualization Manager: Architecture and Planning Guide.
Note:
Enable ballooning on virtual machines that have applications and loads
that slowly consume memory, occasionally release memory, or stay
dormant for long periods of time, such as virtual desktops.
5. Under KSM Control, check Enable KSM to enable MoM to run KSM when
necessary and when it can yield a memory saving benefit that outweighs its CPU
cost.
6. Click OK to save your changes.
# engine-config -s MaxVdsMemOverCommit=percentage
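For example, to set a 150 percent overcommit ratio (the value is illustrative; engine-config changes require an engine service restart to take effect):
# engine-config -s MaxVdsMemOverCommit=150
# systemctl restart ovirt-engine.service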
Important:
Since this check box is selected by default, make sure you have enabled
memory ballooning for the cluster where the virtual machine's host resides.
5. Under I/O Threads, check I/O Threads Enabled to improve the speed of disks that have
a VirtIO interface by pinning them to a thread separate from the virtual machine's other
functions.
This check box is selected by default.
6. Under Queues, check Multi Queues Enabled to create up to four queues per vNIC,
depending on how many vCPUs are available.
Important:
If a host runs virtual machines that are highly available, power management
must be enabled and configured.
For more information, refer to High Availability and Optimization in the Oracle Linux
Virtualization Manager: Architecture and Planning Guide.
Important:
If you enable or disable Kdump integration on an existing host, you must
reinstall the host.
6. (Optional) Check Disable policy control of power management if you do not want
your host’s power management to be controlled by the scheduling policy of the host's
cluster.
7. To configure a fence agent, click the plus sign (+) next to Add Fence Agent.
The Edit fence agent pane opens.
8. Enter the Address (IP Address or FQDN) to access the host's power management
device.
9. Enter the User Name and Password of the account used to access the power
management device.
10. Select the power management device Type from the drop-down list.
11. Enter the Port (SSH) number used by the power management device to communicate
with the host.
12. Enter the Slot number used to identify the blade of the power management device.
13. Enter the Options for the power management device. Use a comma-separated list of
key-value pairs.
• If you leave the Options field blank, you are able to use both IPv4 and IPv6
addresses
• To use only IPv4 addresses, enter inet4_only=1
• To use only IPv6 addresses, enter inet6_only=1
14. Check Secure to enable the power management device to connect securely to the host.
You can use ssh, ssl, or any other authentication protocol your power management
device supports.
15. Click Test to ensure the settings are correct and then click OK.
Caution:
Power management parameters (userid, password, options, and so on) are
tested by the Manager only during setup and manually after that. If you
choose to ignore alerts about incorrect parameters, or if the parameters
are changed on the power management hardware without changing them in
the Manager as well, fencing is likely to fail when most needed.
16. Fence agents are sequential by default. To change the sequence in which the
fence agents are used:
a. Review your fence agent order in the Agents by Sequential Order field.
b. To make two fence agents concurrent, next to one fence agent click the
Concurrent with drop-down list and select the other fence agent.
You can add additional fence agents to this concurrent fence agent group.
17. Expand the Advanced Parameters and use the up and down buttons to specify
the order in which the Manager searches the host’s cluster and dc (data center)
for a power management proxy.
18. To add an additional power management proxy:
a. Click the plus sign (+) next to Add Power Management Proxy.
The Select fence proxy preference type to add pane opens.
b. Select a power management proxy from the drop-down list and then click OK.
Your new proxy displays in the Power Management Proxy Preference list.
Note:
By default, the Manager searches for a fencing proxy within the same
cluster as the host. If the Manager cannot find a fencing proxy within the
cluster, it searches the data center.
From the list of hosts, the exclamation mark next to the host's name disappears,
signifying that you have successfully configured power management and fencing.
You can configure quiet time using the engine-config command option
DisableFenceAtStartupInSec:
# engine-config -s DisableFenceAtStartupInSec=number
# engine-config -s PMHealthCheckIntervalInSec=number
When set to true, PMHealthCheckEnabled checks all host agents at the interval specified by
PMHealthCheckIntervalInSec and raises warnings if it detects issues.
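For example, to allow a three-minute quiet time after boot and an hourly health check (the values are illustrative; restart the engine service for the settings to take effect):
# engine-config -s DisableFenceAtStartupInSec=180
# engine-config -s PMHealthCheckEnabled=true
# engine-config -s PMHealthCheckIntervalInSec=3600
# systemctl restart ovirt-engine.service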
Note:
A highly available virtual machine does not restart if you shut it down cleanly from
within the virtual machine or the Manager or if you shut down a host without first
putting it into maintenance mode.
• Check that the destination host has enough RAM and CPUs that are not in use to
support the virtual machine's requirements
Virtual machines can also be restarted on another host even if the original host loses
power if you have configured it to acquire a lease on a special volume on the storage
domain. Acquiring a lease prevents the virtual machine from being started on two
different hosts, which could result in virtual machine disk corruption.
If you configure high availability:
• there is minimal service interruption because virtual machines are restarted within
seconds and with no user intervention.
• your resources are balanced by restarting virtual machines on a host with low
current resource utilization.
• you are ensured that there is sufficient capacity to restart virtual machines at all
times.
You must configure high availability for each virtual machine using the following steps:
1. Click Compute and then Virtual Machines.
2. In the list of virtual machines, click to highlight a virtual machine and then click
Edit.
3. In the Edit Virtual Machine window, click the High Availability tab.
4. Check Highly Available to enable high availability for the virtual machine.
5. From the Target Storage Domain for VM Lease drop-down list, select No VM
Lease (default) to disable the functionality or select a storage domain to hold the
virtual machine lease.
Virtual machines are able to acquire a lease on a special volume on the storage
domain. This enables a virtual machine to start on another host even if the original
host loses power. For more information, see Storage Leases in the Oracle Linux
Virtualization Manager: Architecture and Planning Guide.
6. From the Resume Behavior drop-down list, select AUTO_RESUME,
LEAVE_PAUSED, or KILL. If you defined a VM lease, KILL is the only option
available.
7. From the Priority list, select Low, Medium, or High.
When virtual machine migration is triggered, a queue is created in which the high
priority virtual machines are migrated first. If a cluster is running low on resources,
only the high-priority virtual machines are migrated.
8. Click OK.
If you change the optimization mode of a running virtual machine to high performance, some
configuration changes require restarting the virtual machine. To change the optimization
mode of a new or existing virtual machine to high performance, you may need to make
manual changes to the cluster and to the pinned host configuration first.
A high performance virtual machine has certain limitations, because enhanced performance
has a trade-off in decreased flexibility:
• If pinning is set for CPU threads, IO threads, emulator threads, or NUMA nodes,
according to the recommended settings, only a subset of cluster hosts can be assigned
to the high performance virtual machine.
• Many devices are automatically disabled, which limits the virtual machine’s usability.
A high performance virtual machine is configured with a set of automatic and
recommended manual settings for maximum efficiency. By using huge pages, you
increase the page size, which reduces the page table, reduces the pressure on the
Translation Lookaside Buffer cache, and improves performance.
Huge pages are pre-allocated when a virtual machine starts to run (dynamic allocation
is disabled by default).
Note:
If you configure huge pages for a virtual machine, you cannot hot plug or hot
unplug memory.
Hot Plugging vCPUs
Note:
Hot unplugging a vCPU is only supported if the vCPU was previously hot plugged.
A virtual machine's vCPUs cannot be hot unplugged to fewer vCPUs than the
virtual machine was originally created with.
Before you can hot plug vCPUs, you must meet the following prerequisites:
• The virtual machine's operating system must be explicitly set and must support CPU hot
plug. For details, see oVirt Documentation.
• The virtual machine must have at least four vCPUs
• Windows virtual machines must have the guest agents installed. See Windows Virtual
Machines Lose Functionality Due To Deprecated Guest Agent in the Known Issues
section of the Oracle Linux Virtualization Manager: Release Notes for more information.
For example, if you create a virtual machine with four vCPUs and hot plug two more (for a
vCPU count of six), you can hot unplug only the two vCPUs that you added (returning the
count to four); the original four cannot be hot unplugged.
To hot plug a vCPU:
1. Click Compute and then select Virtual Machines.
2. Select a virtual machine that is running and click Edit.
3. Click the System tab.
4. Change the value of Virtual Sockets as required.
5. Click OK.
Note:
This feature is only available for the self-hosted engine Engine virtual machine,
which is currently a technology preview feature.
Hot Plugging Virtual Memory
4. Enter a new number for Memory Size. You can add memory in multiples of 256
MB. By default, the maximum memory allowed for the virtual machine is set to 4x
the memory size specified.
5. Click OK.
The Pending Virtual Machine changes window opens.
6. Click OK for the changes to take place immediately or check Apply later and then
OK to wait for the next virtual machine restart.
7. Click OK.
You can see the virtual machine's updated memory in the Defined Memory field
of the virtual machine's details page and you can see the added memory under
Vm Devices.
You can also hot unplug virtual memory, but consider:
• Only memory added with hot plugging can be hot unplugged.
• The virtual machine's operating system must support memory hot unplugging.
• The virtual machine must not have a memory balloon device enabled.
To hot unplug virtual memory:
1. Click Compute and then select Virtual Machines.
2. Click on the name of a virtual machine that is running.
The virtual machine's details page opens.
3. Click Vm Devices.
4. In the Hot Unplug column, click Hot Unplug beside any memory device you want
to remove.
The Memory Hot Unplug window opens with a warning.
5. Click OK.
Under General on the virtual machine details page, the Physical Memory
Guaranteed value for the virtual machine is decremented automatically.
5 Upgrading Your Environment to 4.4
You can upgrade Oracle Linux Virtualization Manager from 4.3.10 to 4.4 by upgrading your
engine or self-hosted engine and KVM hosts. Upgrading from 4.3 to 4.4 with Gluster 8
storage in your environment is supported. However, if your 4.3 installation uses Gluster 6
storage, you must upgrade to Gluster 8 before upgrading to 4.4.
Optionally, you can use the Leapp utility to upgrade the engine from 4.3 to 4.4. Refer to the
My Oracle Support (MOS) article Leapp Upgrade from OLVM 4.3.10 to 4.4.x (Doc ID
2900355.1) for instructions.
If you want to update your engine, KVM hosts, or self-hosted engine within versions, such as
from 4.4.8 to 4.4.10, see Updating Engine, Self-Hosted Engine and KVM Hosts.
Important:
Upgrading from 4.2 to 4.4 is not supported. You must first upgrade to 4.3.10.
Updating the Engine or Self-Hosted Engine
Important:
The update process might take some time. Do not stop the process
before it completes.
Note:
The engine-setup script displays configuration values supplied during the initial
engine installation process. These values may not be up to date if you used
engine-config after the initial installation. However, engine-setup will not
overwrite your updated values.
For example, if you used engine-config to update SANWipeAfterDelete to
true after installation, engine-setup displays "Default SAN wipe after delete:
False" in the configuration preview. However, it does not apply this value;
SANWipeAfterDelete remains set to true.
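If you are unsure which values you changed with engine-config, you can check a setting before running engine-setup, for example:
# engine-config -g SANWipeAfterDelete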
6. Update the base operating system and any optional packages installed on the engine:
# yum update
7. If any kernel packages were updated, reboot the machine to complete the update.
8. You can now upgrade the engine to 4.4.
Upgrading the Engine or Self-Hosted Engine
Note:
Connected hosts and virtual machines can continue to work while you upgrade the
engine.
Prerequisites
Caution:
Before you begin the upgrade process, ensure you have updated your engine or
self-hosted engine.
Some upgrade prerequisites are common to both a standard environment and a self-hosted
engine environment; a self-hosted engine environment has additional prerequisites.
For all environments:
• The engine must be updated to the latest version of 4.3.
• All data centers and clusters in the environment must have the cluster
compatibility level set to version 4.2 or 4.3. A quick way to check compatibility
levels through the REST API is sketched after these prerequisites.
• All virtual machines in the environment must have the cluster compatibility level
set to version 4.3.
• If you use an external CA to sign HTTPS certificates, follow the steps in Replacing
the Oracle Linux Virtualization Manager Apache SSL Certificate. The backup and
restore include the third-party certificate, so you should be able to log in to the
Administration Portal after the upgrade. Also ensure the CA certificate is added to
the system-wide trust stores of all clients so that the foreign menu of virt-viewer
works.
Additionally, for Self-Hosted Engine Environments:
• Make note of the MAC address of the self-hosted engine if you are using DHCP
and want to use the same IP address. The deploy script prompts you for this
information.
• During the deployment you need to provide a new storage domain for the engine
machine. The deployment script renames the 4.3 storage domain and retains its
data to enable disaster recovery.
• Set the cluster scheduling policy to cluster_maintenance in order to prevent
automatic virtual machine migration during the upgrade.
Caution:
In an environment with multiple highly available self-hosted engine
nodes, you need to detach the storage domain hosting the version 4.3
engine after upgrading the engine to 4.4. Use a dedicated storage
domain for the 4.4 self-hosted engine deployment.
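To check compatibility levels without clicking through the Administration Portal, you can query the REST API. This is a minimal sketch: the engine FQDN and credentials are placeholders, and -k skips certificate verification, so prefer passing the engine CA certificate in production.
# curl -sk -u admin@internal:password \
    https://engine.example.com/ovirt-engine/api/clusters \
    | grep -E '<name>|<major>|<minor>'
Each cluster's <major> and <minor> elements together give its compatibility version, for example 4 and 3 for level 4.3; depending on the response formatting you may prefer an XML-aware tool over grep.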
Important:
If you have a self-hosted engine environment, see Upgrading the Self-
Hosted Engine.
4. Copy the backup file to a storage device outside of the Oracle Linux Virtualization
Manager environment.
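The backup file referenced in this step is created with the engine-backup utility; a typical invocation that produces backup.bck is:
# engine-backup --scope=all --mode=backup --file=backup.bck --log=backuplog.log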
5. Install Oracle Linux 8.5 or later and complete the steps to install the 4.4 engine,
including running the command dnf install ovirt-engine, but do not run engine-setup.
See Installing the Engine in the Oracle Linux Virtualization Manager: Getting
Started Guide.
6. Copy the backup file to the 4.4 engine machine and restore it.
# engine-backup --mode=restore --file=backup.bck --provision-all-databases
If the backup contained grants for extra database users, this command creates the extra
users with random passwords. You must change these passwords manually if the extra
users require access to the restored system.
7. Install optional extension packages if they were installed on the 4.3 engine machine.
# dnf install ovirt-engine-extension-aaa-ldap ovirt-engine-extension-aaa-misc
The backup and restore process does not migrate configuration settings. You must
manually reapply the configuration for these package extensions.
8. Configure the engine by running the engine-setup command:
# engine-setup
9. Decommission the 4.3 engine machine if a different machine is used for the 4.4 engine.
Two different engines must not manage the same hosts or storage.
10. You can now update the KVM hosts. Proceed to Migrating Hosts and Virtual Machines.
Important:
If you do not have a self-hosted engine environment, see Upgrading the Engine.
4. Copy the backup file to a storage device outside of the Oracle Linux Virtualization
Manager environment.
5. Shut down the self-hosted engine.
# shutdown
If you want to reuse the self-hosted engine virtual machine to deploy the 4.4 engine, note
the MAC address of the self-hosted engine network interface before you shut it down.
6. Make sure that the self-hosted engine is shut down.
# hosted-engine --vm-status | grep -E 'Engine status|Hostname'
If any of the hosts report the detail field as Up, log in to that specific host and shut it
down with the hosted-engine --vm-shutdown command.
7. Install Oracle Linux 8.5 or later on the existing host currently running the engine
virtual machine to use it as the self-hosted engine deployment host. See
Deploying the Self-Hosted Engine in the Oracle Linux Virtualization Manager:
Getting Started Guide for more information.
Note:
Oracle recommends that you use one of the existing hosts. If you decide
to use a new host, you must assign a unique name to the new host and
then add it to the existing cluster before you begin the upgrade
procedure.
tmux enables the deployment script to continue if the connection to the server is
interrupted, so you can reconnect and attach to the deployment and continue.
Otherwise, if the connection is interrupted during deployment, the deployment
fails.
To run the deployment script using tmux, enter the tmux command before you run
the deployment script:
# tmux
# hosted-engine --deploy --restore-from-file=backup.bck
The deployment script automatically disables global maintenance mode and calls
the HA agent to start the self-hosted engine virtual machine. The upgraded host
with the 4.4 self-hosted engine reports that HA mode is active, but the other hosts
report that global maintenance mode is still enabled as they are still connected to
the old self-hosted engine storage.
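To watch the remaining hosts converge on the new storage after deployment, you can poll the HA status from any self-hosted engine node, for example:
# hosted-engine --vm-status | grep -E 'Engine status|Hostname'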
11. Detach the storage domain that hosts the 4.3 engine machine. For details, see
Detaching a Storage Domain from a Data Center.
12. Log in to the Engine virtual machine and shut down the engine service.
# systemctl stop ovirt-engine
13. Install optional extension packages if they were installed on the 4.3 engine
machine.
# dnf install ovirt-engine-extension-aaa-ldap ovirt-engine-extension-aaa-misc
Note:
The configuration for these package extensions must be manually
reapplied because they are not migrated as part of the backup and
restore process.
The 4.4 engine is now installed, with the cluster compatibility version set to 4.2 or 4.3,
whichever was the preexisting cluster compatibility version.
15. You can now update the self-hosted engine host and then any standard hosts. Proceed
to Migrating Hosts and Virtual Machines.
Migrating Hosts and Virtual Machines
Caution:
When installing or reinstalling the host's operating system, Oracle strongly
recommends that you first detach any existing non-OS storage from the host to
avoid potential data loss from accidental initialization of these disks.
Oracle Linux Virtualization Manager 4.3 and 4.4 are based on Oracle Linux 7 and
Oracle Linux 8, respectively, which have different kernel versions with different CPU
flags and microcode. As a result, CPU-passthrough virtual machines might not
migrate properly between 4.3 and 4.4 hosts.
Procedure
1. Verify the 4.4 engine is installed and running.
2. Verify the compatibility level of all data centers and clusters in the environment is 4.2 or
4.3.
3. Pick a host to upgrade and migrate that host’s virtual machines to another host in the
same cluster.
You can use Live Migration to minimize virtual machine downtime. See Migrating Virtual
Machines between Hosts.
4. Put the host into maintenance mode and remove the host from the engine.
See Moving a Host to Maintenance Mode and Removing a Host.
5. Install Oracle Linux 8.5 or later and install the appropriate packages to enable the host for
4.4. Even if you are using the same physical machine as for 4.3, your 4.4 hosts require a
clean installation of Oracle Linux 8.5 or later.
Caution:
Before you install, review the prerequisites and follow the instructions
in Configuring a KVM Host in the Oracle Linux Virtualization Manager:
Getting Started Guide. Ensure you select Minimal Install as the base
environment for the installation. If you do not, your hosts will have
incorrect qemu and libvirt versions, incorrect repositories configured, and
no access to virtual machine consoles.
6. Add this host to the engine, assigning it to the same cluster. You can now migrate
virtual machines onto this host.
See Adding a KVM Host in the Oracle Linux Virtualization Manager: Getting
Started Guide.
7. Repeat these steps to migrate virtual machines and upgrade hosts for the rest of
the hosts in the same cluster, one by one, until all are running 4.4.
8. Update the compatibility version to 4.6.
See Changing Data Center and Cluster Compatibility Versions After Upgrading.
Important:
Although the Oracle Linux Virtualization Manager version is 4.4, the
corresponding compatibility version is 4.6.
Changing Data Center and Cluster Compatibility Versions After Upgrading
• If you try to change the cluster compatibility version from 4.3 to 4.6 when you have 4.3
hosts running, you get the following error:
Error while executing action: Cannot change Cluster Compatibility Version to higher version when there are active Hosts with lower version.
-Please move Host [hostname] with lower version to maintenance first.
• When you put a 4.3 host in maintenance mode, you can change the cluster and then the
data center compatibility version to 4.6. However, the host shows non-operational with
the following event:
Host [hostname] is compatible with versions ([version levels]) and cannot join Cluster [clustername] which is set to version [version level].
To resolve this error, log in to the host as root, execute the following two commands,
and then attempt to add the host to the engine again.
# sed -i 's|enabled=1|enabled=0|g' /etc/yum/pluginconf.d/enabled_repos_upload.conf
# sed -i 's|enabled=1|enabled=0|g' /etc/yum/pluginconf.d/package_upload.conf
Note:
The preferred approach after upgrading your engine to 4.4 is to upgrade all hosts to
4.4 and then change the cluster compatibility to 4.6. You can then add new hosts as
4.4 hosts.
1. Verify all hosts are running a version level that supports your desired compatibility
level. See Compatibility Version Restrictions.
2. In the Administration Portal, go to Compute and click Clusters.
3. Select the cluster to change and click Edit.
4. From the Edit Cluster dialog box, select General.
5. For Compatibility Version, select the desired value and click OK.
6. On the Change Cluster Compatibility Version confirmation window, click OK.
Important:
You might get an error message warning that some virtual machines and
templates are incorrectly configured. To fix this error, edit each virtual
machine manually. The Edit Virtual Machine window provides additional
validations and warnings that show what to correct. Sometimes the issue
is automatically corrected and the virtual machine’s configuration just
needs to be saved again. After editing each virtual machine, you will be
able to change the cluster compatibility version.
Note:
Virtual machines continue to run in the previous cluster compatibility
level until you restart them. The Next-Run icon (triangle with an
exclamation mark) identifies virtual machines that require a restart.
However, the self-hosted engine virtual machine does not need to be
restarted.
You cannot change the cluster compatibility version of a virtual machine
snapshot that is in preview. You must first commit or undo the preview.
6 Updating Engine, Self-Hosted Engine and KVM Hosts
You can update your engine, KVM hosts, and self-hosted engine within versions, such as
from 4.4.8 to 4.4.10.
If you want to move from one version to another, such as 4.3.10 to 4.4, this is considered an
upgrade. See Upgrading Your Environment to 4.4.
Important:
If the update fails, the engine-setup command attempts to roll back your installation
to its previous state and displays detailed instructions explaining how to restore
your installation.
3. Run the engine-setup command. The update process may take some time, so allow it to
complete and do not stop the process once initiated.
# engine-setup
...
[ INFO ] Execution of setup completed successfully
The engine-setup script prompts you with some configuration questions, then stops the
ovirt-engine service, downloads and installs the updated packages, backs up and
updates the database, performs post-installation configuration, and starts the ovirt-
engine service. For more information, see Engine Configuration Options in the Oracle
Linux Virtualization Manager: Getting Started.
Note:
When you run the engine-setup script during the installation process
your configuration values are stored. During an upgrade, these stored
values display when previewing the configuration and they might not be
up-to-date if you ran engine-config after installation. For example, if you
ran engine-config to update SANWipeAfterDelete to true after
installation, engine-setup outputs Default SAN wipe after delete:
False in the configuration preview. However, your updated values are
not overwritten by engine-setup.
4. Update the base operating system and any optional packages installed.
# yum update
Note:
If the update upgraded any kernel packages, reboot the system to
complete the changes.
You should see the following message indicating that the cluster is in maintenance
mode.
!! Cluster is in GLOBAL MAINTENANCE mode !!
When you run the engine-setup script during the installation process your
configuration values are stored. During an upgrade, these stored values display when
previewing the configuration and they might not be up-to-date if you ran engine-config
after installation. For example, if you ran engine-config to update
SANWipeAfterDelete to true after installation, engine-setup outputs Default SAN
wipe after delete: False in the configuration preview. However, your updated value
of true is not overwritten by engine-setup.
1. Log in to the engine virtual machine and check to see if your engine is eligible to
update and if there are updates for any packages.
# engine-upgrade-check
...
Upgrade available.
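Before running engine-setup, first update the oVirt setup packages; a minimal sketch of this step:
# yum update ovirt\*setup\*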
The engine-setup script prompts you with some configuration questions, then stops the
ovirt-engine service, downloads and installs the updated packages, backs up and
updates the database, performs post-installation configuration, and starts the ovirt-
engine service. For more information, see Engine Configuration Options in the Oracle
Linux Virtualization Manager: Getting Started.
4. Update the base operating system and any optional packages installed on the engine.
# yum update
Important:
If any kernel packages were updated, disable global maintenance mode and
reboot the machine to complete the update.
After you update your self-hosted engine, you must disable global maintenance mode for the
self-hosted engine environment.
1. Log in to the engine virtual machine and shut it down.
2. Log in to the self-hosted engine host and disable global maintenance mode.
# hosted-engine --set-maintenance --mode=none
Note:
When you exit global maintenance mode, ovirt-ha-agent starts the engine
virtual machine, and then the engine automatically starts. This process can take
up to ten minutes.
The status information shows Engine Status and its value should be:
{"health": "good", "vm": "up", "detail": "Up"}
When the virtual machine is still booting and the engine hasn’t started yet, the Engine
status is:
{"reason": "bad vm status", "health": "bad", "vm": "up", "detail": "Powering up"}
Updating KVM Hosts
Note:
If the update fails, the host’s status changes to Install Failed and you
must click Installation and then Upgrade again.
6. (Optional) Repeat the previous steps for any KVM host in your environment that
you want to upgrade or update.
7 Disaster Recovery
Oracle Linux Virtualization Manager supports active-active and active-passive disaster
recovery solutions to ensure that environments can recover when a site outage occurs. Both
solutions support two sites and require replicated storage.
Active-Active Disaster Recovery
Active-active disaster recovery uses a stretch cluster configuration. This means that there is a
single Oracle Linux Virtualization Manager environment with a cluster that contains hosts
capable of running the required virtual machines in the primary and secondary site. The
virtual machines in the primary site automatically migrate to hosts in the secondary site if an
outage occurs. However, the environment must meet latency and networking requirements.
Active-Passive Disaster Recovery
Active-passive disaster recovery is a site-to-site failover solution. Two separate Oracle Linux
Virtualization Manager environments are configured: the active primary environment and the
passive secondary (backup) environment. With active-passive disaster recovery, you must
manually execute failover and failback (when needed), both of which are performed using
Ansible.
Important:
When using clustering applications, such as RAC or Pacemaker/Corosync, set virtual
machines to Kill for Resume Behaviour (which you can find in the Edit VM dialog
under High Availability). Otherwise, the clustering applications might try to fence a
suspended or paused virtual machine.
To ensure virtual machine failover and failback works, you must configure:
• virtual machines for high availability, and each virtual machine must have a lease
on a target storage domain to ensure the virtual machine can start even without
power management.
• soft-enforced virtual machine to host affinity to ensure the virtual machines only
start on the selected hosts.
Network Considerations
All hosts in the stretch cluster must be on the same broadcast domain over a Layer 2
(L2) network, which means that connectivity between the two sites needs to be L2.
The maximum latency requirement between the sites across the L2 network differs
for the standalone Engine environment and the self-hosted engine environment
(a quick latency check is sketched after this list):
• A maximum latency of 100ms is required for the standalone Engine environment.
• A maximum latency of 7ms is required for the self-hosted engine environment.
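A rough way to measure round-trip latency from a host in one site to a host in the other is a simple ping test; the host name is a placeholder, and this assumes ICMP is permitted between the sites:
# ping -c 10 host-in-other-site.example.com
The rtt summary line reports the minimum, average, and maximum round-trip times to compare against the limits above.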
Storage Considerations
The storage domain for Oracle Linux Virtualization Manager can be either block
devices (iSCSI or FCP) or a file system (NAS/NFS or GlusterFS).
Both sites require synchronously replicated storage that is writable with shared L2
network connectivity to allow virtual machines to migrate between sites and continue
running on the site’s storage. All storage replication options supported by Oracle Linux
8 and later can be used in the stretch cluster.
For more information, see the storage topics in the Administration Guide and the
Architecture and Planning Guide.
For more information, see Installation and Configuration in the Oracle Linux Virtualization
Manager: Getting Started Guide.
2. Install hosts in each site and add them to the cluster.
For more information, see Configuring a KVM Host in the Oracle Linux Virtualization
Manager: Getting Started Guide.
3. Configure the storage pool manager (SPM) priority to be higher on all hosts in the
primary site to ensure SPM failover to the secondary site occurs only when all hosts in
the primary site are unavailable.
For more information, see Storage Pool Manager in the Oracle Linux Virtualization
Manager: Architecture and Planning Guide.
4. Configure all virtual machines that need to fail over as highly available and ensure that
each virtual machine has a lease on the target storage domain.
For more information, see Optimizing Clusters, Hosts and Virtual Machines.
5. Configure virtual machine to host soft affinity and define the behavior you expect from the
affinity group.
For more information, see Affinity Groups in the oVirt Virtual Machine Management
Guide.
Important:
With VM Affinity Rule Enforcing enabled (shown as Hard in the list of Affinity
Groups), the system does not migrate a virtual machine to a host different from
where the other virtual machines in its affinity group are running. For more
information, see Virtual Machine Issues in the Oracle Linux Virtualization
Manager: Release Notes.
The active-active failover can be manually performed by placing the main site’s hosts into
maintenance mode.
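Besides the Administration Portal, hosts can be switched to maintenance through the REST API; this is a hedged sketch in which the engine FQDN, credentials, and host ID are placeholders:
# curl -sk -u admin@internal:password -X POST \
    -H 'Content-Type: application/xml' -d '<action/>' \
    https://engine.example.com/ovirt-engine/api/hosts/<host-id>/deactivate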
Configuring a Self-Hosted Engine Stretch Cluster Environment
3. Configure the storage pool manager (SPM) priority to be higher on all hosts in the
primary site to ensure SPM failover to the secondary site occurs only when all
hosts in the primary site are unavailable.
For more information, see Storage Pool Manager in the Oracle Linux Virtualization
Manager: Architecture and Planning Guide.
4. Configure all virtual machines that need to fail over as highly available and ensure
that each virtual machine has a lease on the target storage domain.
For more information, see Optimizing Clusters, Hosts and Virtual Machines.
5. Configure virtual machine to host soft affinity and define the affinity group's
behavior.
For more information, see Affinity Groups in the oVirt Virtual Machine Management
Guide.
Important:
With VM Affinity Rule Enforcing enabled (shown as Hard in the list of
Affinity Groups), the system does not migrate a virtual machine to a host
different from where the other virtual machines in its affinity group are
running. For more information, see Virtual Machine Issues in the Oracle
Linux Virtualization Manager: Release Notes.
The active-active failover can be manually performed by placing the main site’s hosts
into maintenance mode.
Important:
You must ensure that the secondary environment has enough resources to
run the failed over virtual machines, and that both the primary and secondary
environments have identical Engine versions, data center and cluster
compatibility levels, and PostgreSQL versions.
Storage domains that contain virtual machine disks and templates in the
primary site must be replicated. These replicated storage domains must not
be attached to the secondary site.
The failover and failback processes are executed manually using Ansible playbooks that map
entities between the sites and manage the failover and failback processes. The mapping file
instructs the Oracle Linux Virtualization Manager components where to failover or failback to.
Network Considerations
You must ensure that the same general connectivity exists in the primary and secondary
sites. If you have multiple networks or multiple data centers then you must use an empty
network mapping in the mapping file to ensure that all entities register on the target during
failover.
Storage Considerations
The storage domain for Oracle Linux Virtualization Manager can be made of either block
devices (iSCSI or FCP) or a file system (NAS/NFS or GlusterFS). Local storage domains are
unsupported for disaster recovery.
Your environment must have a primary and secondary storage replica. The primary storage
domain’s block devices or shares that contain virtual machine disks or templates must be
replicated. The secondary storage must not be attached to any data center and is added to
the backup site’s data center during failover.
If you are implementing disaster recovery using a self-hosted engine, ensure that the storage
domain used by the self-hosted engine's Engine virtual machine does not contain virtual
machine disks because the storage domain will not fail over.
You can use any storage solutions that have replication options supported by Oracle Linux 8
and later.
Important:
Metadata for all virtual machines and disks resides on the storage data domain as
OVF_STORE disk images. This metadata is used when the storage data domain is
moved by failover or failback to another data center in the same or different
environment.
By default, the metadata is automatically updated by the Engine at 60-minute
intervals. This means that you could potentially lose the data and processing
completed during an interval. To avoid such loss, you can manually update the
metadata from the Administration Portal by navigating to the storage domain
section and clicking Update OVFs. Or, you can modify the Engine parameters to
change the update frequency, for example:
# engine-config -s OvfUpdateIntervalInMinutes=30 && systemctl restart ovirt-engine
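You can confirm the currently configured interval with the engine-config get option:
# engine-config -g OvfUpdateIntervalInMinutes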
For more information, see the Storage topics in the Oracle Linux Virtualization Manager:
Administration Guide and the Oracle Linux Virtualization Manager: Architecture and Planning
Guide.
7-5
Chapter 7
Active-Passive Disaster Recovery
Note:
We recommend that you create the environment properties that exist in the
primary site, such as affinity groups, affinity labels, and users, on the secondary
site. The default behavior of the Ansible playbooks can be configured in the
/usr/share/ansible/collections/ansible_collections/ovirt/ovirt/roles/disaster_recovery/defaults/main.yml file.
After configuring active-passive disaster recovery, you should test and verify the
configuration. See Testing the Active-Passive Configuration.
The ovirt-dr script accepts one of the following actions, with optional arguments:
# ./ovirt-dr generate/validate/failover/failback
[--conf-file=dr.conf]
[--log-file=ovirt-dr-log_number.log]
[--log-level=DEBUG/INFO/WARNING/ERROR]
Important:
Generating the mapping file will fail if you have any virtual machine disks on the
self-hosted engine’s storage domain. Also, the generated mapping file will not
contain an attribute for this storage domain because it must not be failed over.
    site: https://manager1.mycompany.com/ovirt-engine/api
    username: admin@internal
    password: Mysecret1
    ca: /etc/pki/ovirt-engine/ca.pem
    var_file: disaster_recovery_vars.yml
  roles:
    - disaster_recovery
  collections:
    - ovirt.ovirt
For extra security you can encrypt your Engine password in a .yml file.
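For example, the variables file holding the password can be encrypted in place with ansible-vault (the file name is illustrative):
# ansible-vault encrypt passwords.yml
New Vault password:
Confirm New Vault password:
Encryption successful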
2. Run the Ansible command to generate the mapping file. The primary site’s
configuration will be prepopulated.
# ansible-playbook dr-olvm-setup.yml --tags "generate_mapping"
3. Configure the generated mapping .yml file with the backup site’s configuration. For
more information, see Mapping File Attributes.
If you have multiple Ansible machines that can perform failover and failback, then copy
the mapping file to all relevant machines.
For extra security you can encrypt the password file. However, you will need to
use the --ask-vault-pass parameter when running the playbook.
2. Create an Ansible playbook using a failover yaml file (such as dr-olvm-failover.yml) to
fail over the environment, for example:
---
- name: oVirt Failover
  hosts: localhost
  connection: local
  vars:
    dr_target_host: secondary
    dr_source_map: primary
  vars_files:
    - disaster_recovery_vars.yml
  roles:
    - disaster_recovery
  collections:
    - ovirt.ovirt
Executing a Failover
Before executing a failover, ensure you have read and understood the Network
Considerations and Storage Considerations. You must also ensure that:
• the Engine and hosts in the secondary site are running.
• replicated storage domains are in read/write mode.
• no replicated storage domains are attached to the secondary site.
• a machine running the Ansible Engine that can connect via SSH to the Engine in the
primary and secondary sites, with the required packages and files:
– The ovirt-ansible-collection package.
– The mapping file and failover playbook.
Sanlock must release all storage locks from the replicated storage domains before the
failover process starts. These locks should be released automatically approximately 80
seconds after the disaster occurs.
To execute a failover, run the failover playbook on the Engine host using the following
command:
# ansible-playbook dr-olvm-failover.yml --tags "fail_over"
When the primary site becomes active, ensure that you clean the environment before failing
back. For more information, see Cleaning the Primary Site.
• Synchronizes the replication from the secondary site’s storage domains to the
primary site’s storage domains.
• Cleans the primary site of all storage domains to be imported. This can be done
manually in the Engine. For more information, see Detaching a Storage Domain
from a Data Center.
For example, create a cleanup yml file (such as dr_cleanup_primary_site.yml):
---
- name: oVirt Cleanup Primary Site
  hosts: localhost
  connection: local
  vars:
    dr_source_map: primary
  vars_files:
    - disaster_recovery_vars.yml
  roles:
    - disaster_recovery
  collections:
    - ovirt.ovirt
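The cleanup playbook is then run with the role's cleanup tag; a sketch, assuming the clean_engine tag exposed by the disaster_recovery role:
# ansible-playbook dr_cleanup_primary_site.yml --tags "clean_engine"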
Once you have cleaned the primary site, you can fail back the environment to the
primary site. For more information, see Executing a Failback.
Executing a Failback
After failover, you can fail back to the primary site when it is active and you have
performed the necessary steps to clean the environment by ensuring:
• The primary site's environment is running and has been cleaned. For more
information, see Cleaning the Primary Site.
• The environment in the secondary site is running and has active storage domains.
• A machine running the Ansible Engine can connect via SSH to the Engine
in the primary and secondary sites, with the required packages and files:
– The ovirt-ansible-collection package.
– The mapping file and required failback playbook.
To execute a failback, complete the following steps.
1. Run the failback playbook on the Engine host using the following command:
# ansible-playbook dr-olvm-failback.yml --tags "fail_back"
2. Enable replication from the primary storage domains to the secondary storage
domains.
2. Test failover and failback using specific storage domains attached to the primary site
which allows the primary site to remain active. See Discreet Failover and Failback Tests.
3. Test failover and failback for an unplanned shutdown of the primary site or an impending
disaster where you have a grace period to fail over to the secondary site. See Full
Failover and Failback Tests.
Important:
Ensure that you have completed all the steps to configure your active-passive
disaster recovery before running any of these tests.
Important:
Ensure that no production tasks are performed after the failover. For example,
ensure that email systems are blocked from sending emails to real users or redirect
emails elsewhere. If systems are used to directly manage other systems, prohibit
access to the systems or ensure that they access parallel systems in the secondary
site.
3. Verify that all relevant storage domains, virtual machines, and templates are registered
and running successfully on the secondary site.
To restore the environment to its active-passive state, complete the following steps.
1. Detach the storage domains from the secondary site.
2. Enable storage replication between the primary and secondary storage domains.
replicated storage can be attached to the secondary site which allows you to test the
failover while users continue to work in the primary site.
Note:
You should define the testable storage domains on a separate storage server
that can be offline without affecting the primary storage domains used for
production in the primary site.
5. Verify that all relevant storage domains, virtual machines, and templates are
registered and running successfully on the secondary site.
To perform a discreet failback test, complete the following steps.
1. Clean the primary site and remove all inactive storage domains and related virtual
machines and templates. For more information, see Cleaning the Primary Site.
2. Run the command to failback to the primary site:
# ansible-playbook playbook --tags "fail_back"
3. Enable replication from the primary storage domains to the secondary storage
domains.
4. Verify that all relevant storage domains, virtual machines, and templates are
registered and running successfully in the primary site.
3. Verify that all relevant storage domains, virtual machines, and templates are
registered and running successfully in the secondary site.
4. Enable replication from the primary storage domains to the secondary storage domains.
5. Verify that all relevant storage domains, virtual machines, and templates are registered
and running successfully on the primary site.
• Cluster details
Attributes that map the cluster names between the primary and secondary site, for
example:
dr_cluster_mappings:
  - primary_name: cluster_prod
    secondary_name: cluster_recovery
  - primary_name: fc_cluster
    secondary_name: recovery_fc_cluster
• Role details
Attributes that provide mapping for specific roles, for example:
dr_role_mappings:
  - primary_name: admin
    secondary_name: newadmin
• Network details
Attributes that map the vNIC details between the primary and secondary site, for
example:
dr_network_mappings:
  - primary_network_name: ovirtmgmt
    primary_profile_name: ovirtmgmt
    primary_profile_id: 0000000a-000a-000a-000a-000000000398
    # Fill in the correlated vnic profile properties in the secondary site for profile 'ovirtmgmt'
    secondary_network_name: ovirtmgmt
    secondary_profile_name: ovirtmgmt
    secondary_profile_id: 0000000a-000a-000a-000a-000000000410
If you have multiple networks or multiple data centers then you must use an empty
network mapping in the mapping file to ensure that all entities register on the
target during failover, for example:
dr_network_mappings:
# No mapping should be here
dr_lun_mappings:
  - primary_logical_unit_id: 460014069b2be431c0fd46c4bdce29b66
    primary_logical_unit_alias: My_Disk
    primary_wipe_after_delete: False
    primary_shareable: False
    primary_logical_unit_description: 2b66
    primary_storage_type: iscsi
    primary_logical_unit_address: 10.35.xx.xxx
    primary_logical_unit_port: 3260
    primary_logical_unit_portal: 1
    primary_logical_unit_target: iqn.2017-12.com.prod.example:444
    secondary_storage_type: iscsi
    secondary_wipe_after_delete: False
    secondary_shareable: False
    secondary_logical_unit_id: 460014069b2be431c0fd46c4bdce29b66
    secondary_logical_unit_address: 10.35.x.xxx
    secondary_logical_unit_port: 3260
    secondary_logical_unit_portal: 1
    secondary_logical_unit_target: iqn.2017-12.com.recovery.example:444
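After completing the mapping file, you can sanity-check it before relying on it for failover; for example, with the ovirt-dr script described earlier, a validation run looks like:
# ./ovirt-dr validate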