Red Hat Enterprise Linux 9: Configuring and Managing Virtualization
Setting up your host, creating and administering virtual machines, and understanding
virtualization features
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons
Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is
available at
http://creativecommons.org/licenses/by-sa/3.0/
. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must
provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert,
Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, the Red Hat logo, JBoss, OpenShift,
Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States
and other countries.
Linux ® is the registered trademark of Linus Torvalds in the United States and other countries.
XFS ® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States
and/or other countries.
MySQL ® is a registered trademark of MySQL AB in the United States, the European Union and
other countries.
Node.js ® is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the
official Joyent Node.js open source or commercial project.
The OpenStack ® Word Mark and OpenStack logo are either registered trademarks/service marks
or trademarks/service marks of the OpenStack Foundation, in the United States and other
countries and are used with the OpenStack Foundation's permission. We are not affiliated with,
endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
Abstract
To use a Red Hat Enterprise Linux (RHEL) system as a virtualization host, follow the instructions in
this document. The information provided includes:
What the capabilities and use cases of virtualization are
How to manage your host and your virtual machines by using command-line utilities, as well as by using the web console
What the support limitations of virtualization are on various system architectures, such as Intel 64, AMD64, and IBM Z
Table of Contents
MAKING OPEN SOURCE MORE INCLUSIVE
PROVIDING FEEDBACK ON RED HAT DOCUMENTATION
CHAPTER 1. INTRODUCING VIRTUALIZATION IN RHEL
1.1. WHAT IS VIRTUALIZATION?
1.2. ADVANTAGES OF VIRTUALIZATION
1.3. VIRTUAL MACHINE COMPONENTS AND THEIR INTERACTION
1.4. TOOLS AND INTERFACES FOR VIRTUALIZATION MANAGEMENT
1.5. RED HAT VIRTUALIZATION SOLUTIONS
CHAPTER 2. ENABLING VIRTUALIZATION
2.1. ENABLING VIRTUALIZATION ON AMD64 AND INTEL 64
2.2. ENABLING VIRTUALIZATION ON IBM Z
2.3. ENABLING VIRTUALIZATION ON ARM 64
CHAPTER 3. CREATING VIRTUAL MACHINES
3.1. CREATING VIRTUAL MACHINES USING THE COMMAND-LINE INTERFACE
3.2. CREATING VIRTUAL MACHINES AND INSTALLING GUEST OPERATING SYSTEMS USING THE WEB CONSOLE
3.2.1. Creating virtual machines using the web console
3.2.2. Creating virtual machines by importing disk images using the web console
3.2.3. Installing guest operating systems using the web console
3.2.4. Creating virtual machines with cloud image authentication using the web console
CHAPTER 4. STARTING VIRTUAL MACHINES
4.1. STARTING A VIRTUAL MACHINE USING THE COMMAND-LINE INTERFACE
4.2. STARTING VIRTUAL MACHINES USING THE WEB CONSOLE
4.3. STARTING VIRTUAL MACHINES AUTOMATICALLY WHEN THE HOST STARTS
CHAPTER 5. CONNECTING TO VIRTUAL MACHINES
5.1. INTERACTING WITH VIRTUAL MACHINES USING THE WEB CONSOLE
5.1.1. Viewing the virtual machine graphical console in the web console
5.1.2. Viewing the graphical console in a remote viewer using the web console
5.1.3. Viewing the virtual machine serial console in the web console
5.2. OPENING A VIRTUAL MACHINE GRAPHICAL CONSOLE USING VIRT VIEWER
5.3. CONNECTING TO A VIRTUAL MACHINE USING SSH
5.4. OPENING A VIRTUAL MACHINE SERIAL CONSOLE
5.5. SETTING UP EASY ACCESS TO REMOTE VIRTUALIZATION HOSTS
CHAPTER 6. SHUTTING DOWN VIRTUAL MACHINES
6.1. SHUTTING DOWN A VIRTUAL MACHINE USING THE COMMAND-LINE INTERFACE
6.2. SHUTTING DOWN AND RESTARTING VIRTUAL MACHINES USING THE WEB CONSOLE
6.2.1. Shutting down virtual machines in the web console
6.2.2. Restarting virtual machines using the web console
6.2.3. Sending non-maskable interrupts to VMs using the web console
CHAPTER 7. DELETING VIRTUAL MACHINES
7.1. DELETING VIRTUAL MACHINES USING THE COMMAND LINE INTERFACE
7.2. DELETING VIRTUAL MACHINES USING THE WEB CONSOLE
CHAPTER 8. MANAGING VIRTUAL MACHINES IN THE WEB CONSOLE
8.1. OVERVIEW OF VIRTUAL MACHINE MANAGEMENT USING THE WEB CONSOLE
8.2. SETTING UP THE WEB CONSOLE TO MANAGE VIRTUAL MACHINES
CHAPTER 9. VIEWING INFORMATION ABOUT VIRTUAL MACHINES
9.1. VIEWING VIRTUAL MACHINE INFORMATION USING THE COMMAND-LINE INTERFACE
9.2. VIEWING VIRTUAL MACHINE INFORMATION USING THE WEB CONSOLE
9.2.1. Viewing a virtualization overview in the web console
9.2.2. Viewing storage pool information using the web console
9.2.3. Viewing basic virtual machine information in the web console
9.2.4. Viewing virtual machine resource usage in the web console
9.2.5. Viewing virtual machine disk information in the web console
9.2.6. Viewing and editing virtual network interface information in the web console
9.3. SAMPLE VIRTUAL MACHINE XML CONFIGURATION
CHAPTER 10. SAVING AND RESTORING VIRTUAL MACHINES
10.1. HOW SAVING AND RESTORING VIRTUAL MACHINES WORKS
10.2. SAVING A VIRTUAL MACHINE USING THE COMMAND LINE INTERFACE
10.3. STARTING A VIRTUAL MACHINE USING THE COMMAND-LINE INTERFACE
10.4. STARTING VIRTUAL MACHINES USING THE WEB CONSOLE
CHAPTER 11. CLONING VIRTUAL MACHINES
11.1. HOW CLONING VIRTUAL MACHINES WORKS
11.2. CREATING VIRTUAL MACHINE TEMPLATES
11.2.1. Creating a virtual machine template using virt-sysprep
11.2.2. Creating a virtual machine template manually
11.3. CLONING A VIRTUAL MACHINE USING THE COMMAND-LINE INTERFACE
11.4. CLONING A VIRTUAL MACHINE USING THE WEB CONSOLE
CHAPTER 12. MIGRATING VIRTUAL MACHINES
12.1. HOW MIGRATING VIRTUAL MACHINES WORKS
12.2. BENEFITS OF MIGRATING VIRTUAL MACHINES
12.3. LIMITATIONS FOR MIGRATING VIRTUAL MACHINES
12.4. VERIFYING HOST CPU COMPATIBILITY FOR VIRTUAL MACHINE MIGRATION
12.5. SHARING VIRTUAL MACHINE DISK IMAGES WITH OTHER HOSTS
12.6. MIGRATING A VIRTUAL MACHINE USING THE COMMAND-LINE INTERFACE
12.7. LIVE MIGRATING A VIRTUAL MACHINE USING THE WEB CONSOLE
12.8. SUPPORTED HOSTS FOR VIRTUAL MACHINE MIGRATION
CHAPTER 13. MANAGING VIRTUAL DEVICES
13.1. HOW VIRTUAL DEVICES WORK
13.2. TYPES OF VIRTUAL DEVICES
13.3. MANAGING DEVICES ATTACHED TO VIRTUAL MACHINES USING THE CLI
13.3.1. Attaching devices to virtual machines
13.3.2. Modifying devices attached to virtual machines
13.3.3. Removing devices from virtual machines
13.4. MANAGING HOST DEVICES USING THE WEB CONSOLE
13.4.1. Viewing devices attached to virtual machines using the web console
13.4.2. Attaching devices to virtual machines using the web console
13.4.3. Removing devices from virtual machines using the web console
13.5. MANAGING VIRTUAL USB DEVICES
13.5.1. Attaching USB devices to virtual machines
13.5.2. Removing USB devices from virtual machines
13.6. MANAGING VIRTUAL OPTICAL DRIVES
13.6.1. Attaching optical drives to virtual machines
CHAPTER 14. MANAGING STORAGE FOR VIRTUAL MACHINES
14.1. UNDERSTANDING VIRTUAL MACHINE STORAGE
14.1.1. Introduction to storage pools
14.1.2. Introduction to storage volumes
14.1.3. Storage management using libvirt
14.1.4. Overview of storage management
14.1.5. Supported and unsupported storage pool types
14.2. MANAGING VIRTUAL MACHINE STORAGE POOLS USING THE CLI
14.2.1. Viewing storage pool information using the CLI
14.2.2. Creating directory-based storage pools using the CLI
14.2.3. Creating disk-based storage pools using the CLI
14.2.4. Creating filesystem-based storage pools using the CLI
14.2.5. Creating iSCSI-based storage pools using the CLI
14.2.6. Creating LVM-based storage pools using the CLI
14.2.7. Creating NFS-based storage pools using the CLI
14.2.8. Creating SCSI-based storage pools with vHBA devices using the CLI
14.2.9. Deleting storage pools using the CLI
14.3. MANAGING VIRTUAL MACHINE STORAGE POOLS USING THE WEB CONSOLE
14.3.1. Viewing storage pool information using the web console
14.3.2. Creating directory-based storage pools using the web console
14.3.3. Creating NFS-based storage pools using the web console
14.3.4. Creating iSCSI-based storage pools using the web console
14.3.5. Creating disk-based storage pools using the web console
14.3.6. Creating LVM-based storage pools using the web console
14.3.7. Removing storage pools using the web console
14.3.8. Deactivating storage pools using the web console
14.4. PARAMETERS FOR CREATING STORAGE POOLS
14.4.1. Directory-based storage pool parameters
14.4.2. Disk-based storage pool parameters
14.4.3. Filesystem-based storage pool parameters
14.4.4. iSCSI-based storage pool parameters
14.4.5. LVM-based storage pool parameters
14.4.6. NFS-based storage pool parameters
14.4.7. Parameters for SCSI-based storage pools with vHBA devices
14.5. MANAGING VIRTUAL MACHINE STORAGE VOLUMES USING THE CLI
14.5.1. Viewing storage volume information using the CLI
14.5.2. Creating and assigning storage volumes using the CLI
14.5.3. Deleting storage volumes using the CLI
14.6. MANAGING VIRTUAL MACHINE STORAGE VOLUMES USING THE WEB CONSOLE
14.6.1. Creating storage volumes using the web console
14.6.2. Removing storage volumes using the web console
14.7. MANAGING VIRTUAL MACHINE STORAGE DISKS USING THE WEB CONSOLE
14.7.1. Viewing virtual machine disk information in the web console
14.7.2. Adding new disks to virtual machines using the web console
14.7.3. Attaching existing disks to virtual machines using the web console
14.7.4. Detaching disks from virtual machines using the web console
14.8. SECURING ISCSI STORAGE POOLS WITH LIBVIRT SECRETS
14.9. CREATING VHBAS
CHAPTER 15. MANAGING GPU DEVICES IN VIRTUAL MACHINES
15.1. ASSIGNING A GPU TO A VIRTUAL MACHINE
15.2. MANAGING NVIDIA VGPU DEVICES
15.2.1. Setting up NVIDIA vGPU devices
15.2.2. Removing NVIDIA vGPU devices
15.2.3. Obtaining NVIDIA vGPU information about your system
15.2.4. Remote desktop streaming services for NVIDIA vGPU
15.2.5. Additional resources
CHAPTER 16. CONFIGURING VIRTUAL MACHINE NETWORK CONNECTIONS
16.1. UNDERSTANDING VIRTUAL NETWORKING
16.1.1. How virtual networks work
16.1.2. Virtual networking default configuration
16.2. USING THE WEB CONSOLE FOR MANAGING VIRTUAL MACHINE NETWORK INTERFACES
16.2.1. Viewing and editing virtual network interface information in the web console
16.2.2. Adding and connecting virtual network interfaces in the web console
16.2.3. Disconnecting and removing virtual network interfaces in the web console
16.3. RECOMMENDED VIRTUAL MACHINE NETWORKING CONFIGURATIONS
16.3.1. Configuring externally visible virtual machines using the command-line interface
16.3.2. Configuring externally visible virtual machines using the web console
16.4. TYPES OF VIRTUAL MACHINE NETWORK CONNECTIONS
16.4.1. Virtual networking with network address translation
16.4.2. Virtual networking in routed mode
16.4.3. Virtual networking in bridged mode
16.4.4. Virtual networking in isolated mode
16.4.5. Virtual networking in open mode
16.4.6. Comparison of virtual machine connection types
16.5. BOOTING VIRTUAL MACHINES FROM A PXE SERVER
16.5.1. Setting up a PXE boot server on a virtual network
16.5.2. Booting virtual machines using PXE and a virtual network
16.5.3. Booting virtual machines using PXE and a bridged network
16.6. ADDITIONAL RESOURCES
CHAPTER 17. OPTIMIZING VIRTUAL MACHINE PERFORMANCE
17.1. WHAT INFLUENCES VIRTUAL MACHINE PERFORMANCE
The impact of virtualization on system performance
Reducing VM performance loss
17.2. OPTIMIZING VIRTUAL MACHINE PERFORMANCE USING TUNED
17.3. OPTIMIZING LIBVIRT DAEMONS
17.3.1. Types of libvirt daemons
17.3.2. Enabling modular libvirt daemons
17.4. CONFIGURING VIRTUAL MACHINE MEMORY
17.4.1. Adding and removing virtual machine memory using the web console
17.4.2. Adding and removing virtual machine memory using the command-line interface
17.4.3. Additional resources
17.5. OPTIMIZING VIRTUAL MACHINE I/O PERFORMANCE
17.5.1. Tuning block I/O in virtual machines
17.5.2. Disk I/O throttling in virtual machines
17.5.3. Enabling multi-queue virtio-scsi
CHAPTER 18. SECURING VIRTUAL MACHINES
18.1. HOW SECURITY WORKS IN VIRTUAL MACHINES
18.2. BEST PRACTICES FOR SECURING VIRTUAL MACHINES
18.3. CREATING A SECUREBOOT VIRTUAL MACHINE
18.4. LIMITING WHAT ACTIONS ARE AVAILABLE TO VIRTUAL MACHINE USERS
18.5. AUTOMATIC FEATURES FOR VIRTUAL MACHINE SECURITY
18.6. SELINUX BOOLEANS FOR VIRTUALIZATION
18.7. SETTING UP IBM SECURE EXECUTION ON IBM Z
18.8. ATTACHING CRYPTOGRAPHIC COPROCESSORS TO VIRTUAL MACHINES ON IBM Z
18.9. ENABLING STANDARD HARDWARE SECURITY ON WINDOWS VIRTUAL MACHINES
18.10. ENABLING ENHANCED HARDWARE SECURITY ON WINDOWS VIRTUAL MACHINES
CHAPTER 19. SHARING FILES BETWEEN THE HOST AND ITS VIRTUAL MACHINES
19.1. SHARING FILES BETWEEN THE HOST AND ITS VIRTUAL MACHINES USING NFS
19.2. SHARING FILES BETWEEN THE HOST AND ITS VIRTUAL MACHINES USING VIRTIOFS
19.3. USING THE WEB CONSOLE TO SHARE FILES BETWEEN THE HOST AND ITS VIRTUAL MACHINES USING VIRTIOFS
19.4. USING THE WEB CONSOLE TO REMOVE SHARED FILES BETWEEN THE HOST AND ITS VIRTUAL MACHINES USING VIRTIOFS
CHAPTER 20. INSTALLING AND MANAGING WINDOWS VIRTUAL MACHINES
20.1. INSTALLING WINDOWS VIRTUAL MACHINES
20.2. OPTIMIZING WINDOWS VIRTUAL MACHINES
20.2.1. Installing KVM paravirtualized drivers for Windows virtual machines
20.2.1.1. How Windows virtio drivers work
20.2.1.2. Preparing virtio driver installation media on a host machine
20.2.1.3. Installing virtio drivers on a Windows guest
20.2.1.4. Updating virtio drivers on a Windows guest
20.2.2. Enabling Hyper-V enlightenments
20.2.2.1. Enabling Hyper-V enlightenments on a Windows virtual machine
20.2.2.2. Configurable Hyper-V enlightenments
20.2.3. Configuring NetKVM driver parameters
20.2.4. NetKVM driver parameters
20.2.5. Optimizing background processes on Windows virtual machines
20.3. ENABLING STANDARD HARDWARE SECURITY ON WINDOWS VIRTUAL MACHINES
20.4. ENABLING ENHANCED HARDWARE SECURITY ON WINDOWS VIRTUAL MACHINES
20.5. NEXT STEPS
CHAPTER 21. DIAGNOSING VIRTUAL MACHINE PROBLEMS
21.1. GENERATING LIBVIRT DEBUG LOGS
21.1.1. Understanding libvirt debug logs
21.1.2. Enabling persistent settings for libvirt debug logs
21.1.3. Enabling libvirt debug logs during runtime
21.1.4. Attaching libvirt debug logs to support requests
CHAPTER 22. FEATURE SUPPORT AND LIMITATIONS IN RHEL 9 VIRTUALIZATION
22.1. HOW RHEL VIRTUALIZATION SUPPORT WORKS
22.2. RECOMMENDED FEATURES IN RHEL 9 VIRTUALIZATION
22.3. UNSUPPORTED FEATURES IN RHEL 9 VIRTUALIZATION
22.4. RESOURCE ALLOCATION LIMITS IN RHEL 9 VIRTUALIZATION
22.5. HOW VIRTUALIZATION ON IBM Z DIFFERS FROM AMD64 AND INTEL 64
22.6. HOW VIRTUALIZATION ON ARM 64 DIFFERS FROM AMD64 AND INTEL 64
22.7. AN OVERVIEW OF VIRTUALIZATION FEATURES SUPPORT IN RHEL 9
PROVIDING FEEDBACK ON RED HAT DOCUMENTATION
1. View the documentation in the Multi-page HTML format and ensure that you see the
Feedback button in the upper right corner after the page fully loads.
2. Use your cursor to highlight the part of the text that you want to comment on.
3. Click the Add Feedback button that appears near the highlighted text.
4. Enter your suggestion for improvement in the Description field. Include links to the relevant
parts of the documentation.
CHAPTER 1. INTRODUCING VIRTUALIZATION IN RHEL
In other words, virtualization makes it possible to have operating systems within operating systems.
VMs enable you to safely test software configurations and features, run legacy software, or optimize the
workload efficiency of your hardware. For more information on the benefits, see Advantages of
virtualization.
For more information on what virtualization is, see the Red Hat Customer Portal.
Next steps
To start using virtualization in Red Hat Enterprise Linux 9, see Enabling virtualization in Red Hat
Enterprise Linux 9.
In addition to Red Hat Enterprise Linux 9 virtualization, Red Hat offers a number of specialized
virtualization solutions, each with a different user focus and features. For more information, see
Red Hat virtualization solutions.
For example, what the guest OS sees as its disk can be represented as a file on the host file
system, and the size of that disk is less constrained than the available sizes for physical disks.
Software-controlled configurations
The entire configuration of a VM is saved as data on the host, and is under software control.
Therefore, a VM can easily be created, removed, cloned, migrated, operated remotely, or
connected to remote storage.
A single physical machine can host a large number of VMs. This avoids the need for
multiple physical machines to do the same tasks, and thus lowers the space, power, and
maintenance requirements associated with physical hardware.
Software compatibility
Because a VM can use a different OS than its host, virtualization makes it possible to run
applications that were not originally released for your host OS. For example, using a RHEL 7
guest OS, you can run applications released for RHEL 7 on a RHEL 9 host system.
NOTE
Not all operating systems are supported as a guest OS in a RHEL 9 host. For
details, see Recommended features in RHEL 9 virtualization.
1.3. VIRTUAL MACHINE COMPONENTS AND THEIR INTERACTION
Hypervisor
The basis of creating virtual machines (VMs) in RHEL 9 is the hypervisor, a software layer that controls
hardware and enables running multiple operating systems on a host machine.
The hypervisor includes the Kernel-based Virtual Machine (KVM) module and virtualization kernel
drivers. These components ensure that the Linux kernel on the host machine provides resources for
virtualization to user-space software.
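For a quick check that the KVM kernel modules are loaded on the host, you can, for example, run the following (a sketch; the module is kvm_intel on Intel hosts and kvm_amd on AMD hosts, and the sizes shown are illustrative):
# lsmod | grep kvm
kvm_intel             364544  0
kvm                  1056768  1 kvm_intel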
At the user-space level, the QEMU emulator simulates a complete virtualized hardware platform that
the guest operating system can run in, and manages how resources are allocated on the host and
presented to the guest.
In addition, the libvirt software suite serves as a management and communication layer, making QEMU
easier to interact with, enforcing security rules, and providing a number of additional tools for
configuring and running VMs.
XML configuration
A host-based XML configuration file (also known as a domain XML file) determines all settings and
devices in a specific VM. The configuration includes:
Metadata such as the name of the VM, time zone, and other information about the VM.
A description of the devices in the VM, including virtual CPUs (vCPUs), storage devices,
input/output devices, network interface cards, and other hardware, real and virtual.
VM settings such as the maximum amount of memory it can use, restart settings, and other
settings about the behavior of the VM.
For more information on the contents of an XML configuration, see Sample virtual machine XML
configuration.
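As an illustration, the following heavily abridged sketch shows what such a domain XML file can look like when displayed with the virsh dumpxml command (the VM name testguest1 and all values are placeholders, not taken from this document's examples):
# virsh dumpxml testguest1
<domain type='kvm'>
  <name>testguest1</name>
  <memory unit='KiB'>2097152</memory>
  <vcpu placement='static'>2</vcpu>
  <os>
    <type arch='x86_64' machine='q35'>hvm</type>
  </os>
  <devices>
    <disk type='file' device='disk'>
      <source file='/var/lib/libvirt/images/testguest1.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='default'/>
    </interface>
  </devices>
</domain>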
Component interaction
When a VM is started, the hypervisor uses the XML configuration to create an instance of the VM as a
user-space process on the host. The hypervisor also makes the VM process accessible to the host-
based interfaces, such as the virsh, virt-install, and guestfish utilities, or the web console GUI.
When these virtualization tools are used, libvirt translates their input into instructions for QEMU. QEMU
communicates the instructions to KVM, which ensures that the kernel appropriately assigns the
resources necessary to carry out the instructions. As a result, QEMU can execute the corresponding
user-space changes, such as creating or modifying a VM, or performing an action in the VM’s guest
operating system.
NOTE
For more information on the host-based interfaces, see Tools and interfaces for virtualization
management.
1.4. TOOLS AND INTERFACES FOR VIRTUALIZATION MANAGEMENT
Command-line interface
The CLI is the most powerful method of managing virtualization in RHEL 9. Prominent CLI commands
for virtual machine (VM) management include:
virsh - A versatile virtualization command-line utility and shell with a great variety of purposes,
depending on the provided arguments. For example:
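The following are illustrative invocations (testguest1 is a placeholder VM name):
# virsh list --all (lists all VMs defined on the host)
# virsh dominfo testguest1 (displays basic information about a VM)
# virsh shutdown testguest1 (gracefully shuts down a VM)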
virt-install - A CLI utility for creating new VMs. For more information, see the virt-install(1)
man page.
guestfish - A utility for examining and modifying VM disk images. For more information, see the
guestfish(1) man page.
Graphical interfaces
You can use the following GUIs to manage virtualization in RHEL 9:
The RHEL 9 web console, also known as Cockpit, provides a remotely accessible and easy to
use graphical user interface for managing VMs and virtualization hosts.
For instructions on basic virtualization management with the web console, see Managing virtual
machines in the web console.
1.5. RED HAT VIRTUALIZATION SOLUTIONS
OpenShift Virtualization
Based on the KubeVirt technology, OpenShift Virtualization is a part of the Red Hat OpenShift
Container Platform, and makes it possible to run virtual machines in containers.
For more information about OpenShift Virtualization, see the Red Hat Hybrid Cloud pages.
NOTE
For details on virtualization features not supported in RHEL but supported in other Red
Hat virtualization solutions, see Unsupported features in RHEL 9 virtualization.
CHAPTER 2. ENABLING VIRTUALIZATION
2.1. ENABLING VIRTUALIZATION ON AMD64 AND INTEL 64
Prerequisites
Red Hat Enterprise Linux 9 is installed and registered on your host machine.
Your system meets the following hardware requirements to work as a virtualization host:
6 GB free disk space for the host, plus another 6 GB for each intended VM.
2 GB of RAM for the host, plus another 2 GB for each intended VM.
Procedure
# for drv in qemu network nodedev nwfilter secret storage interface; do systemctl start virt${drv}d{,-ro,-admin}.socket; done
Verification
# virt-host-validate
[...]
QEMU: Checking for device assignment IOMMU support : PASS
QEMU: Checking if IOMMU is enabled by kernel : WARN (IOMMU appears to be
disabled in kernel. Add intel_iommu=on to kernel cmdline arguments)
LXC: Checking for Linux >= 2.6.26 : PASS
[...]
LXC: Checking for cgroup 'blkio' controller mount-point : PASS
LXC: Checking if device /sys/fs/fuse/connections exists : FAIL (Load the 'fuse' module to
enable /proc/ overrides)
2. If all virt-host-validate checks return a PASS value, your system is prepared for creating VMs.
If any of the checks return a FAIL value, follow the displayed instructions to fix the problem.
If any of the checks return a WARN value, consider following the displayed instructions to
improve virtualization capabilities.
Troubleshooting
If KVM virtualization is not supported by your host CPU, virt-host-validate generates the
following output:
QEMU: Checking for hardware virtualization: FAIL (Only emulated CPUs are available,
performance will be significantly limited)
However, VMs on such a host system will fail to boot, rather than have performance problems.
To work around this, you can change the <domain type> value in the XML configuration of the
VM to qemu. Note, however, that Red Hat does not support VMs that use the qemu domain
type, and setting this is highly discouraged in production environments.
Next steps
2.2. ENABLING VIRTUALIZATION ON IBM Z
Prerequisites
6 GB free disk space for the host, plus another 6 GB for each intended VM.
2 GB of RAM for the host, plus another 2 GB for each intended VM.
4 CPUs on the host. VMs can generally run with a single assigned vCPU, but Red Hat
recommends assigning 2 or more vCPUs per VM to avoid VMs becoming unresponsive
during high load.
RHEL 9 is installed on a logical partition (LPAR). In addition, the LPAR supports the start-
interpretive execution (SIE) virtualization functions.
To verify this, search for sie in your /proc/cpuinfo file.
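For example (the exact feature list varies by machine, and the check succeeds if sie appears among the features):
# grep sie /proc/cpuinfo
features : esan3 zarch stfle msa ldisp eimm dfp edat etf3eh highgprs te vx vxd vxe gs sie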
Procedure
# for drv in qemu network nodedev nwfilter secret storage interface; do systemctl start virt${drv}d{,-ro,-admin}.socket; done
Verification
# virt-host-validate
[...]
QEMU: Checking if device /dev/kvm is accessible : PASS
QEMU: Checking if device /dev/vhost-net exists : PASS
QEMU: Checking if device /dev/net/tun exists : PASS
QEMU: Checking for cgroup 'memory' controller support : PASS
QEMU: Checking for cgroup 'memory' controller mount-point : PASS
[...]
2. If all virt-host-validate checks return a PASS value, your system is prepared for creating VMs.
If any of the checks return a FAIL value, follow the displayed instructions to fix the problem.
If any of the checks return a WARN value, consider following the displayed instructions to
improve virtualization capabilities.
Troubleshooting
If KVM virtualization is not supported by your host CPU, virt-host-validate generates the
following output:
QEMU: Checking for hardware virtualization: FAIL (Only emulated CPUs are available,
performance will be significantly limited)
However, VMs on such a host system will fail to boot, rather than have performance problems.
To work around this, you can change the <domain type> value in the XML configuration of the
VM to qemu. Note, however, that Red Hat does not support VMs that use the qemu domain
type, and setting this is highly discouraged in production environments.
Additional resources
2.3. ENABLING VIRTUALIZATION ON ARM 64
IMPORTANT
Prerequisites
6 GB free disk space for the host, plus another 6 GB for each intended guest.
4 GB of RAM for the host, plus another 4 GB for each intended guest.
Procedure
# for drv in qemu network nodedev nwfilter secret storage interface; do systemctl start virt${drv}d{,-ro,-admin}.socket; done
Verification
# virt-host-validate
[...]
QEMU: Checking if device /dev/vhost-net exists : PASS
QEMU: Checking if device /dev/net/tun exists : PASS
QEMU: Checking for cgroup 'memory' controller support : PASS
QEMU: Checking for cgroup 'memory' controller mount-point : PASS
[...]
QEMU: Checking for cgroup 'blkio' controller support : PASS
QEMU: Checking for cgroup 'blkio' controller mount-point : PASS
QEMU: Checking if IOMMU is enabled by kernel : WARN (Unknown if this platform
has IOMMU support)
2. If all virt-host-validate checks return a PASS value, your system is prepared for creating virtual
machines.
If any of the checks return a FAIL value, follow the displayed instructions to fix the problem.
If any of the checks return a WARN value, consider following the displayed instructions to
improve virtualization capabilities.
Troubleshooting
If KVM virtualization is not supported by your host CPU, virt-host-validate generates the
following output:
QEMU: Checking for hardware virtualization: FAIL (Only emulated CPUs are available,
performance will be significantly limited)
However, VMs on such a host system will fail to boot, rather than have performance problems.
To work around this, you can change the <domain type> value in the XML configuration of the
VM to qemu. Note, however, that Red Hat does not support VMs that use the qemu domain
type, and setting this is highly discouraged in production environments.
Next steps
Additional resources
CHAPTER 3. CREATING VIRTUAL MACHINES
Prerequisites
You have a sufficient amount of system resources to allocate to your VMs, such as disk space,
RAM, or CPUs. The recommended values may vary significantly depending on the intended
tasks and workload of the VMs.
WARNING
3.1. CREATING VIRTUAL MACHINES USING THE COMMAND-LINE INTERFACE
Prerequisites
You have a sufficient amount of system resources to allocate to your VMs, such as disk space,
RAM, or CPUs. The recommended values may vary significantly depending on the intended
tasks and workload of the VMs.
An operating system (OS) installation source is available locally or on a network. This can be one
of the following:
An ISO image of an OS installation medium
A disk image of an existing VM installation
WARNING
Optional: A Kickstart file can be provided for faster and easier configuration of the installation.
Procedure
To create a VM and start its OS installation, use the virt-install command, along with the following
mandatory arguments:
--name: the name of the new machine
--memory: the amount of allocated memory
--vcpus: the number of allocated virtual CPUs
--disk: the type and size of the allocated storage
--cdrom or --location: the type and location of the OS installation source
Based on the chosen installation method, the necessary options and values can vary. See below for
examples:
The following creates a VM named demo-guest1 that installs the Windows 10 OS from an ISO
image locally stored in the /home/username/Downloads/Win10install.iso file. This VM is also
allocated with 2048 MiB of RAM and 2 vCPUs, and an 80 GiB qcow2 virtual disk is automatically
configured for the VM.
# virt-install \
--name demo-guest1 --memory 2048 \
--vcpus 2 --disk size=80 --os-variant win10 \
--cdrom /home/username/Downloads/Win10install.iso
The following creates a VM named demo-guest2 that uses the /home/username/Downloads/rhel9.iso image to run a RHEL 9 OS from a live CD. No disk space is assigned to this VM, so changes made during the session will not be preserved. In addition, the VM is allocated with 4096 MiB of RAM and 4 vCPUs.
# virt-install \
--name demo-guest2 --memory 4096 --vcpus 4 \
--disk none --livecd --os-variant rhel9.0 \
--cdrom /home/username/Downloads/rhel9.iso
The following creates a RHEL 9 VM named demo-guest3 that connects to an existing disk
image, /home/username/backup/disk.qcow2. This is similar to physically moving a hard drive
between machines, so the OS and data available to demo-guest3 are determined by how the
image was handled previously. In addition, this VM is allocated with 2048 MiB of RAM and 2
vCPUs.
# virt-install \
--name demo-guest3 --memory 2048 --vcpus 2 \
--os-variant rhel9.0 --import \
--disk /home/username/backup/disk.qcow2
Note that the --os-variant option is highly recommended when importing a disk image. If it is not
provided, the performance of the created VM will be negatively affected.
The following creates a VM named demo-guest4 that installs from the http://example.com/OS-install URL. For the installation to start successfully, the URL must contain a working OS installation tree. In addition, the OS is automatically configured by using the /home/username/ks.cfg kickstart file. This VM is also allocated with 2048 MiB of RAM, 2 vCPUs, and a 160 GiB virtual disk.
# virt-install \
--name demo-guest4 --memory 2048 --vcpus 2 --disk size=160 \
--os-variant rhel9.0 --location http://example.com/OS-install \
--initrd-inject /home/username/ks.cfg --extra-args="inst.ks=file:/ks.cfg console=tty0
console=ttyS0,115200n8"
The following creates a VM named demo-guest5 that installs from a RHEL9.iso image file in
text-only mode, without graphics. It connects the guest console to the serial console. The VM
has 16384 MiB of memory, 16 vCPUs, and a 280 GiB disk. This kind of installation is useful when
connecting to a host over a slow network link.
# virt-install \
--name demo-guest5 --memory 16384 --vcpus 16 --disk size=280 \
--os-variant rhel9.0 --location RHEL9.iso \
--graphics none --extra-args='console=ttyS0'
The following creates a VM named demo-guest6, which has the same configuration as demo-
guest5, but resides on the 10.0.0.1 remote host.
# virt-install \
--connect qemu+ssh://[email protected]/system --name demo-guest6 --memory 16384 \
--vcpus 16 --disk size=280 --os-variant rhel9.0 --location RHEL9.iso \
--graphics none --extra-args='console=ttyS0'
The following creates a VM named demo-guest7, which has the same configuration as demo-
guest5, but for its storage, it uses a DASD mediated device
mdev_30820a6f_b1a5_4503_91ca_0c10ba12345a_0_0_29a8, and assigns it device number
1111.
# virt-install \
--name demo-guest7 --memory 16384 --vcpus 16 --disk size=280 \
--os-variant rhel9.0 --location RHEL9.iso --graphics none \
--disk none --hostdev mdev_30820a6f_b1a5_4503_91ca_0c10ba12345a_0_0_29a8,address.type=ccw,address.cssid=0xfe,address.ssid=0x0,address.devno=0x1111,boot-order=1 \
--extra-args 'rd.dasd=0.0.1111'
Note that the name of the mediated device available for installation can be retrieved using the
virsh nodedev-list --cap mdev command.
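For example, on the host used above, the listing would include the device from the demo-guest7 command (output illustrative):
# virsh nodedev-list --cap mdev
mdev_30820a6f_b1a5_4503_91ca_0c10ba12345a_0_0_29a8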
Verification
If the VM is created successfully, a virt-viewer window opens with a graphical console of the VM
and starts the guest OS installation.
Troubleshooting
b. Verify that the libvirt default network is active and configured to start automatically:
i. If activating the default network fails with the following error, the libvirt-daemon-
config-network package has not been installed correctly.
ii. If activating the default network fails with an error similar to the following, a conflict has
occurred between the default network’s subnet and an existing interface on the host.
To fix this, use the virsh net-edit default command and change the 192.168.122.*
values in the configuration to a subnet not already in use on the host.
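As a sketch, the default network checks described above can look as follows (output illustrative):
# virsh net-list --all
 Name      State      Autostart   Persistent
 ----------------------------------------------
 default   inactive   no          yes
# virsh net-start default
# virsh net-autostart default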
Additional resources
3.2. CREATING VIRTUAL MACHINES AND INSTALLING GUEST OPERATING SYSTEMS USING THE WEB CONSOLE
3.2.1. Creating virtual machines using the web console
Prerequisites
You have a sufficient amount of system resources to allocate to your VMs, such as disk space,
RAM, or CPUs. The recommended values may vary significantly depending on the intended
tasks and workload of the VMs.
Procedure
1. In the Virtual Machines interface of the web console, click Create VM.
The Create new virtual machine dialog appears.
Installation type - The installation can use a local installation medium, a URL, a PXE
network boot, a cloud base image, or download an OS from a limited set of operating
systems.
Operating system - The VM’s operating system. Note that Red Hat provides support only
for a limited set of guest operating systems.
Storage Limit - The amount of storage space with which to configure the VM.
If you want the VM to automatically install the operating system, click Create and run.
If you want to edit the VM before the operating system is installed, click Create and edit.
Additional resources
3.2.2. Creating virtual machines by importing disk images using the web console
To create a virtual machine (VM) by importing a disk image of an existing VM installation, follow the
instructions below.
Prerequisites
You have a sufficient amount of system resources to allocate to your VMs, such as disk space,
RAM, or CPUs. The recommended values can vary significantly depending on the intended tasks
and workload of the VMs.
Procedure
1. In the Virtual Machines interface of the web console, click Import VM.
The Import a virtual machine dialog appears.
Disk image - The path to the existing disk image of a VM on the host system.
Operating system - The VM’s operating system. Note that Red Hat provides support only
for a limited set of guest operating systems.
If you want the VM to automatically install the operating system, click Import and run.
If you want to edit the VM before the operating system is installed, click Import and edit.
NOTE
If you click Create and run or Import and run when creating a new VM, the installation
routine of the operating system starts automatically when the VM is created.
3.2.3. Installing guest operating systems using the web console
Prerequisites
Procedure
1. In the Virtual Machines interface, click the VM on which you want to install a guest OS.
A new page opens with basic information about the selected VM and controls for managing
various aspects of the VM.
NOTE
You can change the firmware only if you select Create and edit or Import and
edit when creating a new VM, and the OS has not already been installed on the
VM.
c. Click Save.
3. Click Install.
The installation routine of the operating system runs in the VM console.
Troubleshooting
3.2.4. Creating virtual machines with cloud image authentication using the web
console
By default, distro cloud images have no login accounts. However, using the RHEL web console, you can
now create a virtual machine (VM) and specify the root and user account login credentials, which are
then passed to cloud-init.
Prerequisites
You have a sufficient amount of system resources to allocate to your VMs, such as disk space,
RAM, or CPUs. The recommended values may vary significantly depending on the intended
tasks and workload of the VMs.
Procedure
1. In the Virtual Machines interface of the web console, click Create VM.
The Create new virtual machine dialog appears.
3. On the Details tab, in the Installation type field, select Cloud base image.
4. In the Installation source field, set the path to the image file on your host system.
Operating system - The VM’s operating system. Note that Red Hat provides support only
for a limited set of guest operating systems.
Storage Limit - The amount of storage space with which to configure the VM.
Root password - Enter a root password for your VM. Leave the field blank if you do not wish to
set a root password.
User login - Enter a cloud-init user login. Leave this field blank if you do not wish to create a
user account.
User password - Enter a password. Leave this field blank if you do not wish to create a user
account.
Additional resources
CHAPTER 4. STARTING VIRTUAL MACHINES
Prerequisites
Before a VM can be started, it must be created and, ideally, also installed with an OS. For
instructions on how to do so, see Creating virtual machines.
4.1. STARTING A VIRTUAL MACHINE USING THE COMMAND-LINE INTERFACE
Prerequisites
Procedure
For a VM located on a remote host, use the virsh start utility along with the QEMU+SSH
connection to the host.
For example, the following command starts the demo-guest1 VM on the 192.168.123.123 host.
# virsh -c qemu+ssh://[email protected]/system start demo-guest1
[email protected]'s password:
Domain 'demo-guest1' started
Additional resources
4.2. STARTING VIRTUAL MACHINES USING THE WEB CONSOLE
Prerequisites
Procedure
1. In the Virtual Machines interface, find the row of the VM that you want to start.
2. Click Run.
The VM starts, and you can connect to its console or graphical output.
3. Optional: To configure the VM to start automatically when the host starts, toggle the Autostart
checkbox in the Overview section.
If you use network interfaces that are not managed by libvirt, you must also make additional
changes to the systemd configuration. Otherwise, the affected VMs might fail to start. For
details, see Starting virtual machines automatically when the host starts.
Additional resources
4.3. STARTING VIRTUAL MACHINES AUTOMATICALLY WHEN THE HOST STARTS
Prerequisites
Procedure
1. Use the virsh autostart utility to configure the VM to start automatically when the host starts.
For example, the following command configures the demo-guest1 VM to start automatically.
# virsh autostart demo-guest1
Domain 'demo-guest1' marked as autostarted
2. If you use network interfaces that are not managed by libvirt, you must also make additional
changes to the systemd configuration. Otherwise, the affected VMs might fail to start.
NOTE
# mkdir -p /etc/systemd/system/virtqemud.service.d/
# touch /etc/systemd/system/virtqemud.service.d/10-network-online.conf
c. Add the following lines to the 10-network-online.conf file. This configuration change
ensures systemd starts the virtqemud service only after the network on the host is ready.
[Unit]
After=network-online.target
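After creating or modifying a systemd drop-in file, systemd typically needs to reload its configuration for the change to take effect. This step is not shown in the excerpt above, but is a standard part of such a procedure:
# systemctl daemon-reload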
Verification
1. View the VM configuration, and check that the autostart option is enabled.
For example, the following command displays basic information about the demo-guest1 VM,
including the autostart option.
# virsh dominfo demo-guest1
[...]
Autostart: enable
[...]
2. If you use network interfaces that are not managed by libvirt, check if the content of the 10-
network-online.conf file matches the following output.
$ cat /etc/systemd/system/virtqemud.service.d/10-network-online.conf
[Unit]
After=network-online.target
Additional resources
CHAPTER 5. CONNECTING TO VIRTUAL MACHINES
When using the web console, use the Virtual Machines pane. For more information, see
Interacting with virtual machines using the web console.
If you need to interact with a VM graphical display without using the web console, use the Virt
Viewer application. For details, see Opening a virtual machine graphical console using Virt
Viewer.
When a graphical display is not possible or not necessary, use an SSH terminal connection.
When the virtual machine is not reachable from your system over a network, use the virsh
console utility.
If the VMs to which you are connecting are on a remote host rather than a local one, you can optionally
configure your system for more convenient access to remote hosts.
5.1. INTERACTING WITH VIRTUAL MACHINES USING THE WEB CONSOLE
Prerequisites
The VMs you want to interact with are installed and started.
To interact with the VM’s graphical interface in the web console, use the graphical console.
To interact with the VM’s graphical interface in a remote viewer, use the graphical console in
remote viewers.
To interact with the VM’s CLI in the web console, use the serial console.
5.1.1. Viewing the virtual machine graphical console in the web console
Using the virtual machine (VM) console interface, you can view the graphical output of a selected VM in
the RHEL 9 web console.
Prerequisites
Ensure that both the host and the VM support a graphical interface.
Procedure
1. In the Virtual Machines interface, click the VM whose graphical console you want to view.
A new page opens with an Overview and a Console section for the VM.
The VNC console appears below the menu in the web interface.
3. Click Expand.
You can now interact with the VM console using the mouse and keyboard in the same manner
you interact with a real machine. The display in the VM console reflects the activities being
performed on the VM.
NOTE
The host on which the web console is running may intercept specific key combinations,
such as Ctrl+Alt+Del, preventing them from being sent to the VM.
To send such key combinations, click the Send key menu and select the key sequence to
send.
For example, to send the Ctrl+Alt+Del combination to the VM, click the Send key menu and
select the Ctrl+Alt+Del menu entry.
Troubleshooting
If clicking in the graphical console does not have any effect, expand the console to full screen.
This is a known issue with the mouse cursor offset.
Additional resources
Viewing the graphical console in a remote viewer using the web console
5.1.2. Viewing the graphical console in a remote viewer using the web console
Using the web console interface, you can display the graphical console of a selected virtual machine
(VM) in a remote viewer, such as Virt Viewer.
NOTE
You can launch Virt Viewer from within the web console. Other VNC remote viewers can
be launched manually.
Prerequisites
Ensure that both the host and the VM support a graphical interface.
Before you can view the graphical console in Virt Viewer, you must install Virt Viewer on the
machine to which the web console is connected.
NOTE
Procedure
1. In the Virtual Machines interface, click the VM whose graphical console you want to view.
A new page opens with an Overview and a Console section for the VM.
You can interact with the VM console using the mouse and keyboard in the same manner in
which you interact with a real machine. The display in the VM console reflects the activities
being performed on the VM.
NOTE
The server on which the web console is running can intercept specific key combinations,
such as Ctrl+Alt+Del, preventing them from being sent to the VM.
To send such key combinations, click the Send key menu and select the key sequence to
send.
For example, to send the Ctrl+Alt+Del combination to the VM, click the Send key menu
and select the Ctrl+Alt+Del menu entry.
Troubleshooting
If clicking in the graphical console does not have any effect, expand the console to full screen.
This is a known issue with the mouse cursor offset.
If launching the Remote Viewer in the web console does not work or is not optimal, you can
manually connect with any viewer application using the following protocols:
Address - The default address is 127.0.0.1. You can modify the vnc_listen parameter in
/etc/libvirt/qemu.conf to change it to the host’s IP address.
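For example, to make QEMU listen for VNC connections on all host addresses, you could set the parameter in /etc/libvirt/qemu.conf as follows (a sketch; restarting the virtualization services is required for the change to apply to newly started VMs):
vnc_listen = "0.0.0.0"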
Additional resources
5.1.3. Viewing the virtual machine serial console in the web console
You can view the serial console of a selected virtual machine (VM) in the RHEL 9 web console. This is
useful when the host machine or the VM is not configured with a graphical interface.
For more information about the serial console, see Opening a virtual machine serial console.
Prerequisites
Procedure
1. In the Virtual Machines pane, click the VM whose serial console you want to view.
A new page opens with an Overview and a Console section for the VM.
You can disconnect and reconnect the serial console from the VM.
Additional resources
Viewing the graphical console in a remote viewer using the web console
5.2. OPENING A VIRTUAL MACHINE GRAPHICAL CONSOLE USING VIRT VIEWER
Prerequisites
Your system, as well as the VM you are connecting to, must support graphical displays.
If the target VM is located on a remote host, connection and root access privileges to the host
are needed.
Optional: If the target VM is located on a remote host, set up your libvirt and SSH for more
convenient access to remote hosts.
Procedure
To connect to a local VM, use the following command and replace guest-name with the name of
the VM you want to connect to:
# virt-viewer guest-name
To connect to a remote VM, use the virt-viewer command with the SSH protocol. For example,
the following command connects as root to a VM called guest-name, located on remote system
10.0.0.1. The connection also requires root authentication for 10.0.0.1.
# virt-viewer --direct --connect qemu+ssh://[email protected]/system guest-name
Verification
If the connection works correctly, the VM display is shown in the Virt Viewer window.
You can interact with the VM console using the mouse and keyboard in the same manner you interact
with a real machine. The display in the VM console reflects the activities being performed on the VM.
Troubleshooting
If clicking in the graphical console does not have any effect, expand the console to full screen.
This is a known issue with the mouse cursor offset.
Additional resources
5.3. CONNECTING TO A VIRTUAL MACHINE USING SSH
Prerequisites
You have network connection and root access privileges to the target VM.
If the target VM is located on a remote host, you also have connection and root access
privileges to that host.
Your VM network assigns IP addresses by using a dnsmasq instance generated by libvirt. This
is the case, for example, in libvirt NAT networks.
Notably, if your VM is using one of the following network configurations, you cannot connect to
the VM using SSH:
hostdev interfaces
direct interfaces
bridge interfaces
The libvirt-nss component is installed and enabled on the VM’s host. If it is not, do the
following:
a. Install the libvirt-nss package:
# dnf install libvirt-nss
b. Edit the /etc/nsswitch.conf file and add libvirt_guest to the hosts line:
[...]
passwd: compat
shadow: compat
group: compat
hosts: files libvirt_guest dns
[...]
Procedure
1. When connecting to a remote VM, SSH into its physical host first. The following example
demonstrates connecting to a host machine 10.0.0.1 using its root credentials:
# ssh [email protected]
[email protected]'s password:
Last login: Mon Sep 24 12:05:36 2021
root~#
2. Use the VM’s name and user access credentials to connect to it. For example, the following
connects to the testguest1 VM using its root credentials:
# ssh root@testguest1
root@testguest1's password:
Last login: Wed Sep 12 12:05:36 2018
root~]#
Troubleshooting
If you do not know the VM’s name, you can list all VMs available on the host by using the virsh
list --all command:
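Sample output (the VM names shown are illustrative):
# virsh list --all
 Id   Name         State
 ----------------------------
 1    testguest1   running
 -    testguest2   shut off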
Additional resources
5.4. OPENING A VIRTUAL MACHINE SERIAL CONSOLE
Does not provide VNC protocols, and thus does not offer video display for GUI tools.
Does not have a network connection, and thus cannot be interacted with using SSH.
Prerequisites
The VM must have a serial console device configured, such as console type='pty'. To verify, do
the following:
# virsh dumpxml vm-name | grep console
The VM must have the serial console configured in its kernel command line. To verify this, the
cat /proc/cmdline command output on the VM should include console=ttyS0. For example:
# cat /proc/cmdline
BOOT_IMAGE=/vmlinuz-3.10.0-948.el7.x86_64 root=/dev/mapper/rhel-root ro console=tty0
console=ttyS0,9600n8 rd.lvm.lv=rhel/root rd.lvm.lv=rhel/swap rhgb
If the serial console is not set up properly on a VM, using virsh console to connect to the VM
connects you to an unresponsive guest console. However, you can still exit the unresponsive
console by using the Ctrl+] shortcut.
ii. Clear the kernel options that might prevent your changes from taking effect.
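If console=ttyS0 is missing, one common way to add it from within the guest is the grubby utility (a sketch of one possible approach, not necessarily the exact step elided above):
# grubby --update-kernel=ALL --args="console=ttyS0"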
Procedure
1. On your host system, use the virsh console command. The following example connects to the
guest1 VM, if the libvirt driver supports safe console handling:
# virsh console guest1 --safe
Subscription-name
Kernel 3.10.0-948.el7.x86_64 on an x86_64
localhost login:
2. You can interact with the virsh console in the same way as with a standard command-line
interface.
Additional resources
5.5. SETTING UP EASY ACCESS TO REMOTE VIRTUALIZATION HOSTS
When managing VMs on a remote host system using libvirt utilities, it is recommended to use the -c
qemu+ssh://root@hostname/system syntax. For example, to use the virsh list command as root on
the 10.0.0.1 host:
$ virsh -c qemu+ssh://[email protected]/system list
[email protected]'s password:
Id Name State
---------------------------------
1 remote-guest running
However, for convenience, you can remove the need to specify the connection details in full by
modifying your SSH and libvirt configuration. For example, you will be able to do:
$ virsh -c qemu-host-alias list
[email protected]'s password:
Id Name State
---------------------------------
1 remote-guest running
Procedure
1. Edit or create the ~/.ssh/config file, and add the following to it, where host-alias is a shortened
name associated with a specific remote host, and hosturl is the URL address of the host.
Host host-alias
User root
Hostname hosturl
For example, the following sets up the tyrannosaurus alias for [email protected]:
Host tyrannosaurus
User root
Hostname 10.0.0.1
2. Edit or create the /etc/libvirt/libvirt.conf file, and add the following, where qemu-host-alias is a
host alias that QEMU and libvirt utilities will associate with the intended host:
uri_aliases = [
"qemu-host-alias=qemu+ssh://host-alias/system",
]
For example, the following uses the tyrannosaurus alias configured in the previous step to set up
the t-rex alias, which stands for qemu+ssh://10.0.0.1/system:
uri_aliases = [
"t-rex=qemu+ssh://tyrannosaurus/system",
]
Verification
1. Confirm that you can manage remote VMs by using libvirt-based utilities on the local system
with an added -c qemu-host-alias parameter. This automatically performs the commands over
SSH on the remote host.
For example, verify that the following lists VMs on the 10.0.0.1 remote host, the connection to
which was set up as t-rex in the previous steps:
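The command is elided here; it would be:
# virsh -c t-rex list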
root@10.0.0.1's password:
Id Name State
---------------------------------
1 velociraptor running
NOTE
In addition to virsh, the -c (or --connect) option and the remote host access
configuration described above can be used by the following utilities:
virt-install
virt-viewer
Next steps
If you want to use libvirt utilities exclusively on a single remote host, you can also set a specific
connection as the default target for libvirt-based utilities. To do so, edit the
/etc/libvirt/libvirt.conf file and set the value of the uri_default parameter to qemu-host-alias.
For example, the following uses the t-rex host alias set up in the previous steps as a default
libvirt target.
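A sketch of the resulting /etc/libvirt/libvirt.conf entry:
uri_default = "t-rex"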
As a result, all libvirt-based commands will automatically be performed on the specified remote
host.
$ virsh list
root@10.0.0.1's password:
Id Name State
---------------------------------
1 velociraptor running
However, this is not recommended if you also want to manage VMs on your local host or on
different remote hosts.
When connecting to a remote host, you can avoid having to provide the root password to the
remote system. To do so, use one or more of the following methods:
The -c (or --connect) option can be used to run the virt-install, virt-viewer, and virsh
commands on a remote host.
CHAPTER 6. SHUTTING DOWN VIRTUAL MACHINES
Use a shutdown command appropriate to the guest OS while connected to the guest.
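For example, a sketch of a remote shutdown over QEMU+SSH (the 10.0.0.1 address is carried over from earlier examples):
# virsh -c qemu+ssh://root@10.0.0.1/system shutdown demo-guest1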
root@10.0.0.1's password:
Domain 'demo-guest1' is being shutdown
To force a VM to shut down, for example if it has become unresponsive, use the virsh destroy command
on the host:
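For example (the demo-guest1 name is illustrative):
# virsh destroy demo-guest1
Domain 'demo-guest1' destroyed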
NOTE
The virsh destroy command does not actually delete or remove the VM configuration or
disk images. It only terminates the running instance of the VM, similarly to pulling the
power cord from a physical machine. As such, in rare cases, virsh destroy may cause
corruption of the VM’s file system, so using this command is only recommended if all
other shutdown methods have failed.
Prerequisites
Procedure
1. In the Virtual Machines interface, find the row of the VM you want to shut down.
Troubleshooting
If the VM does not shut down, click the Menu button ⋮ next to the Shut Down button and
select Force Shut Down.
To shut down an unresponsive VM, you can also send a non-maskable interrupt.
Additional resources
Prerequisites
Procedure
1. In the Virtual Machines interface, find the row of the VM you want to restart.
Troubleshooting
If the VM does not restart, click the Menu button ⋮ next to the Reboot button and select
Force Reboot.
To shut down an unresponsive VM, you can also send a non-maskable interrupt.
Additional resources
Sending a non-maskable interrupt (NMI) may cause an unresponsive running virtual machine (VM) to
respond or shut down. For example, you can send the Ctrl+Alt+Del NMI to a VM that is not responding
to standard input.
Prerequisites
Procedure
1. In the Virtual Machines interface, find the row of the VM to which you want to send an NMI.
Additional resources
CHAPTER 7. DELETING VIRTUAL MACHINES
Prerequisites
Procedure
Additional resources
Prerequisites
Procedure
1. In the Virtual Machines interface, click the Menu button ⋮ of the VM that you want to delete.
A drop down menu appears with controls for various VM operations.
2. Click Delete.
A confirmation dialog appears.
3. Optional: To delete all or some of the storage files associated with the VM, select the
checkboxes next to the storage files you want to delete.
4. Click Delete.
The VM and any selected storage files are deleted.
CHAPTER 8. MANAGING VIRTUAL MACHINES IN THE WEB CONSOLE
Note that to use the web console to manage your VMs on RHEL 9, you must first install a web console
plug-in for virtualization.
Next steps
For instructions on enabling VMs management in your web console, see Setting up the web
console to manage virtual machines.
For a comprehensive list of VM management actions that the web console provides, see Virtual
machine management features available in the web console.
Prerequisites
Ensure that the web console is installed and enabled on your machine.
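The check itself is elided here; a sketch using systemctl:
# systemctl status cockpit.socket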
If this command returns Unit cockpit.socket could not be found, follow the Installing the web
console document to enable the web console.
Procedure
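The installation step is elided here; the virtualization plug-in is provided by the cockpit-machines package, so the command would be:
# dnf install cockpit-machines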
Verification
1. Access the web console, for example by entering the https://localhost:9090 address in your
browser.
2. Log in.
3. If the installation was successful, Virtual Machines appears in the web console side menu.
Additional resources
Prerequisites
Procedure
1. In the Virtual Machines interface, click the Menu button ⋮ of the VM that you want to rename.
2. Click Rename.
The Rename a VM dialog appears.
4. Click Rename.
Verification
Table 8.1. VM management tasks that you can perform in the RHEL 9 web console
Create a VM and install it with a guest operating system - Creating virtual machines and installing guest operating systems using the web console
Start, shut down, and restart the VM - Starting virtual machines using the web console and Shutting down and restarting virtual machines using the web console
Connect to and interact with a VM using a variety of consoles - Interacting with virtual machines using the web console
View a variety of information about the VM - Viewing virtual machine information using the web console
Adjust the host memory allocated to a VM - Adding and removing virtual machine memory using the web console
Manage network connections for the VM - Using the web console for managing virtual machine network interfaces
Manage the VM storage available on the host and attach virtual disks to the VM - Managing storage for virtual machines using the web console
Configure the virtual CPU settings of the VM - Managing virtual CPUs using the web console
Manage host devices - Managing host devices using the web console
CHAPTER 9. VIEWING INFORMATION ABOUT VIRTUAL MACHINES
Procedure
<uuid>a973434f-2f6e-4e5a-8949-76a7a98569e1</uuid>
<metadata>
[...]
VCPU: 1
CPU: 0
State: running
CPU time: 88.6s
CPU Affinity: yyyy
To configure and optimize the vCPUs in your VM, see Optimizing virtual machine CPU
performance.
Persistent: yes
Autostart: yes
Bridge: virbr0
For details about network interfaces, VM networks, and instructions for configuring them, see
Configuring virtual machine network connections.
You can view information about a selected VM to which the web console session is connected. This
includes information about its disks, virtual network interfaces, and resource usage.
Prerequisites
Procedure
Storage Pools - The number of storage pools, active or inactive, that can be accessed by the
web console and their state.
Networks - The number of networks, active or inactive, that can be accessed by the web
console and their state.
Additional resources
Prerequisites
Procedure
Size - The current allocation and the total capacity of the storage pool.
2. Click the arrow next to the storage pool whose information you want to see.
The row expands to reveal the Overview pane with detailed information about the selected
storage pool.
Target path - The source for the types of storage pools backed by directories, such as dir
or netfs.
Persistent - Indicates whether or not the storage pool has a persistent configuration.
Autostart - Indicates whether or not the storage pool starts automatically when the system
boots up.
3. To view a list of storage volumes associated with the storage pool, click Storage Volumes.
The Storage Volumes pane appears, showing a list of configured storage volumes.
Additional resources
Prerequisites
Procedure
A new page opens with an Overview section with basic information about the selected VM and a
Console section to access the VM’s graphical interface.
CPU Type - The architecture of the virtual CPUs configured for the VM.
Additional resources
Prerequisites
Procedure
1. In the Virtual Machines interface, click the VM whose information you want to see.
A new page opens with an Overview section with basic information about the selected VM and a
Console section to access the VM’s graphical interface.
2. Scroll to Usage.
The Usage section displays information about the memory and virtual CPU usage of the VM.
Additional resources
Using the web console, you can view detailed information about disks assigned to a selected virtual
machine (VM).
Prerequisites
Procedure
2. Scroll to Disks.
The Disks section displays information about the disks assigned to the VM as well as options to
Add, Remove, or Edit disks.
Access - Whether the disk is Writeable or Read-only. For raw disks, you can also set the
access to Writeable and shared.
Additional resources
9.2.6. Viewing and editing virtual network interface information in the web console
Using the RHEL 9 web console, you can view and modify the virtual network interfaces on a selected
virtual machine (VM):
Prerequisites
Procedure
1. In the Virtual Machines interface, click the VM whose information you want to see.
A new page opens with an Overview section with basic information about the selected VM and a
Console section to access the VM’s graphical interface.
Type - The type of network interface for the VM. The types include virtual network, bridge
to LAN, and direct attachment.
Source - The source of the network interface. This is dependent on the network type.
3. To edit the virtual network interface settings, click Edit. The Virtual Network Interface Settings
dialog opens.
NOTE
Changes to the virtual network interface settings take effect only after restarting
the VM.
Additionally, the MAC address can be modified only when the VM is shut off.
Additional resources
To obtain the XML configuration of a VM, you can use the virsh dumpxml command followed by the
VM’s name.
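For example (the testguest1 name is illustrative):
# virsh dumpxml testguest1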
CHAPTER 10. SAVING AND RESTORING VIRTUAL MACHINES
This section provides information about saving VMs, as well as about restoring them to the same state
without a full VM boot-up.
This process frees up RAM and CPU resources on the host system in exchange for disk space, which
may improve the host system performance. When the VM is restored, because the guest OS does not
need to be booted, the long boot-up period is avoided as well.
To save a VM, you can use the command-line interface (CLI). For instructions, see Saving virtual
machines using the command line interface.
To restore a VM you can use the CLI or the web console GUI.
Prerequisites
Ensure you have sufficient disk space to save the VM and its configuration. Note that the space
occupied by the VM depends on the amount of RAM allocated to that VM.
Procedure
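The save command is elided here; a sketch using the virsh managedsave utility (the demo-guest1 name is illustrative; the save file is typically created under /var/lib/libvirt/qemu/save):
# virsh managedsave demo-guest1
Domain 'demo-guest1' state saved by libvirt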
The next time the VM is started, it will automatically restore the saved state from the above file.
Verification
List the VMs that have managed save enabled. In the following example, the VMs listed as saved have
their managed save enabled.
Note that to list the saved VMs that are in a shut off state, you must use the --all or --inactive options with the command.
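A sketch of the command and its output (the VM names are illustrative):
# virsh list --managed-save --all
 Id   Name          State
 ------------------------------
 -    demo-guest1   saved
 -    demo-guest2   shut off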
Troubleshooting
If the saved VM file becomes corrupted or unreadable, restoring the VM will initiate a standard
VM boot instead.
Additional resources
Prerequisites
Procedure
For a VM located on a remote host, use the virsh start utility along with the QEMU+SSH
connection to the host.
For example, the following command starts the demo-guest1 VM on the 192.168.123.123 host.
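The command is elided here; following the QEMU+SSH syntax used elsewhere in this document:
# virsh -c qemu+ssh://root@192.168.123.123/system start demo-guest1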
root@192.168.123.123's password:
Additional resources
Prerequisites
Procedure
2. Click Run.
The VM starts, and you can connect to its console or graphical output.
3. Optional: To configure the VM to start automatically when the host starts, toggle the Autostart
checkbox in the Overview section.
If you use network interfaces that are not managed by libvirt, you must also make additional
changes to the systemd configuration. Otherwise, the affected VMs might fail to start. See Starting virtual machines automatically when the host starts.
Additional resources
CHAPTER 11. CLONING VIRTUAL MACHINES
Cloning creates a new VM that uses its own disk image for storage, but most of the clone’s configuration
and stored data is identical to the source VM. This makes it possible to prepare multiple VMs optimized
for a certain task without the need to optimize each VM individually.
This process is faster than creating a new VM and installing it with a guest operating system, and can be
used to rapidly generate VMs with a specific configuration and content.
If you are planning to create multiple clones of a VM, first create a VM template that does not contain:
Unique settings, such as persistent network MAC configuration, which can prevent the clones
from working correctly.
Additional resources
You can create VM templates using the virt-sysprep utility or you can create them manually based on
your requirements.
Prerequisites
The guestfs-tools package, which contains the virt-sysprep utility, is installed on your host:
You know where the disk image for the source VM is located, and you are the owner of the VM’s
disk image file.
Note that disk images for VMs created in the system connection of libvirt are located in the
/var/lib/libvirt/images directory and owned by the root user by default:
# ls -la /var/lib/libvirt/images
-rw-------. 1 root root 9665380352 Jul 23 14:50 a-really-important-vm.qcow2
-rw-------. 1 root root 8591507456 Jul 26 2017 an-actual-vm-that-i-use.qcow2
-rw-------. 1 root root 8591507456 Jul 26 2017 totally-not-a-fake-vm.qcow2
-rw-------. 1 root root 10739318784 Sep 20 17:57 another-vm-example.qcow2
Optional: Any important data on the source VM’s disk has been backed up. If you want to
preserve the source VM intact, clone it first and turn the clone into a template.
Procedure
1. Ensure you are logged in as the owner of the VM’s disk image:
# whoami
root
# cp /var/lib/libvirt/images/a-really-important-vm.qcow2 /var/lib/libvirt/images/a-really-
important-vm-original.qcow2
This is used later to verify that the VM was successfully turned into a template.
# virt-sysprep -a /var/lib/libvirt/images/a-really-important-vm.qcow2
[ 0.0] Examining the guest ...
[ 7.3] Performing "abrt-data" ...
[ 7.3] Performing "backup-files" ...
[ 9.6] Performing "bash-history" ...
[ 9.6] Performing "blkid-tab" ...
[...]
Verification
To confirm that the process was successful, compare the modified disk image to the original
one. The following example shows a successful creation of a template:
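The comparison command is elided here; a sketch using the virt-diff utility from guestfs-tools:
# virt-diff -a /var/lib/libvirt/images/a-really-important-vm-original.qcow2 -A /var/lib/libvirt/images/a-really-important-vm.qcow2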
[...]
- - 0600 409 /home/username/.bash_history
- d 0700 6 /home/username/.ssh
- - 0600 868 /root/.bash_history
[...]
Additional resources
Prerequisites
Ensure that you know the location of the disk image for the source VM and are the owner of the
VM’s disk image file.
Note that disk images for VMs created in the system connection of libvirt are by default located
in the /var/lib/libvirt/images directory and owned by the root user:
# ls -la /var/lib/libvirt/images
-rw-------. 1 root root 9665380352 Jul 23 14:50 a-really-important-vm.qcow2
-rw-------. 1 root root 8591507456 Jul 26 2017 an-actual-vm-that-i-use.qcow2
-rw-------. 1 root root 8591507456 Jul 26 2017 totally-not-a-fake-vm.qcow2
-rw-------. 1 root root 10739318784 Sep 20 17:57 another-vm-example.qcow2
Optional: Any important data on the VM’s disk has been backed up. If you want to preserve the
source VM intact, clone it first and edit the clone to create a template.
Procedure
# rm -f /etc/udev/rules.d/70-persistent-net.rules
NOTE
If udev rules are not removed, the name of the first NIC might be eth1
instead of eth0.
NOTE
If the HWADDR does not match the new guest’s MAC address, the ifcfg
will be ignored.
ii. Configure DHCP but do not include HWADDR or any other unique information:
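A sketch of such an ifcfg configuration (values assumed):
BOOTPROTO="dhcp"
ONBOOT="yes"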
c. Ensure the following files also contain the same content, if they exist on your system:
/etc/sysconfig/networking/devices/ifcfg-eth[x]
/etc/sysconfig/networking/profiles/default/ifcfg-eth[x]
NOTE
If you had used NetworkManager or any special settings with the VM,
ensure that any additional unique information is removed from the ifcfg
scripts.
# rm /etc/sysconfig/rhn/systemid
# subscription-manager clean
NOTE
The original RHSM profile remains in the Portal along with your ID code.
Use the following command to reactivate your RHSM registration on the
VM after it is cloned:
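A sketch with placeholder credentials (user-name and password are placeholders):
# subscription-manager register --username user-name --password password --auto-attach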
# rm -rf /etc/ssh/ssh_host_example
# rm /etc/lvm/devices/system.devices
5. Remove the gnome-initial-setup-done file to configure the VM to run the configuration wizard
on the next boot:
# rm ~/.config/gnome-initial-setup-done
NOTE
The wizard that runs on the next boot depends on the configurations that have
been removed from the VM. In addition, on the first boot of the clone, it is
recommended that you change the hostname.
Prerequisites
Ensure that there is sufficient disk space to store the cloned disk images.
Optional: When creating multiple VM clones, remove unique data and settings from the source
VM to ensure the cloned VMs work properly. For instructions, see Creating virtual machine
templates.
Procedure
1. Use the virt-clone utility with options that are appropriate for your environment and use case.
The following command clones a local VM named doppelganger and creates the
doppelganger-clone VM. It also creates the doppelganger-clone.qcow2 disk image in the
same location as the disk image of the original VM, and with the same data:
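A sketch of this clone operation and its output (output shortened):
# virt-clone --original doppelganger --auto-clone
Allocating 'doppelganger-clone.qcow2' ...
Clone 'doppelganger-clone' created successfully.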
The following command clones a VM named geminus1, and creates a local VM named
geminus2, which uses only two of geminus1's multiple disks.
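A sketch, assuming the clone disks are named as in the scp example further below:
# virt-clone --original geminus1 --name geminus2 --file /var/lib/libvirt/images/disk1-clone.qcow2 --file /var/lib/libvirt/images/disk2-clone.qcow2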
To clone your VM to a different host, migrate the VM without undefining it on the local host.
For example, the following commands clone the previously created geminus2 VM to the
10.0.0.1 remote system, including its local disks. Note that using these commands also
requires root privileges for 10.0.0.1.
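A sketch of the migration step; the scp commands that follow copy the VM’s local disks:
# virsh migrate --offline --persistent geminus2 qemu+ssh://root@10.0.0.1/system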
# scp /var/lib/libvirt/images/disk1-clone.qcow2 root@10.0.0.1:/var/lib/libvirt/images/
# scp /var/lib/libvirt/images/disk2-clone.qcow2 root@10.0.0.1:/var/lib/libvirt/images/
Verification
To verify the VM has been successfully cloned and is working correctly:
1. Confirm the clone has been added to the list of VMs on your host.
Additional resources
NOTE
Prerequisites
Procedure
1. In the Virtual Machines interface of the web console, click the Menu button ⋮ of the VM that
you want to clone.
A drop down menu appears with controls for various VM operations.
2. Click Clone.
The Create a clone VM dialog appears.
4. Click Clone.
A new VM is created based on the source VM.
Verification
Confirm whether the cloned VM appears in the list of VMs available on your host.
CHAPTER 12. MIGRATING VIRTUAL MACHINES
By default, the migrated VM is transient on the destination host, and also remains defined on the source host.
You can migrate a running VM using live or non-live migrations. To migrate a shut-off VM, you must use
an offline migration. For details, see the following table.
Live migration
Description: The VM continues to run on the source host machine while KVM is transferring the VM’s memory pages to the destination host. When the migration is nearly complete, KVM very briefly suspends the VM, and resumes it on the destination host.
Use case: Useful for VMs that require constant uptime. However, VMs that modify memory pages faster than KVM can transfer them, such as VMs under heavy I/O load, cannot be live-migrated, and non-live migration must be used instead.
Requirements: The VM’s disk images must be located on a shared network, accessible both to the source host and the destination host.

Non-live migration
Description: Suspends the VM, copies its configuration and its memory to the destination host, and resumes the VM.
Use case: Creates downtime for the VM, but is generally more reliable than live migration. Recommended for VMs under heavy I/O load.
Requirements: The VM’s disk images must be located on a shared network, accessible both to the source host and the destination host.

Offline migration
Description: Moves the VM’s configuration to the destination host.
Use case: Recommended for shut-off VMs.
Requirements: The VM’s disk images do not have to be available on a shared network, and can be copied or moved manually to the destination host instead.
Additional resources
Load balancing
VMs can be moved to host machines with lower usage if their host becomes overloaded, or if another
host is under-utilized.
Hardware independence
When you need to upgrade, add, or remove hardware devices on the host machine, you can safely
relocate VMs to other hosts. This means that VMs do not experience any downtime for hardware
improvements.
Energy saving
VMs can be redistributed to other hosts, and the unloaded host systems can thus be powered off to
save energy and cut costs during low usage periods.
Geographic migration
VMs can be moved to another physical location for lower latency or when required for other reasons.
Migrating VMs from or to a session connection of libvirt is unreliable and therefore not
recommended.
VMs that use certain features and configurations will not work correctly if migrated, or the
migration will fail. Such features include:
Device passthrough
A migration between hosts that use Non-Uniform Memory Access (NUMA) pinning works only if
the hosts have similar topology. However, the performance on running workloads might be
negatively affected by the migration.
The emulated CPUs, both on the source VM and the destination VM, must be identical,
otherwise the migration might fail. Any differences between the VMs in the following CPU
related areas can cause problems with the migration:
CPU model
Migrating between an Intel 64 host and an AMD64 host is unsupported, even though
they share the x86-64 instruction set.
For steps to ensure that a VM will work correctly after migrating to a host with a
different CPU model, see Verifying host CPU compatibility for virtual machine
migration.
Firmware settings
Microcode version
BIOS version
BIOS settings
QEMU version
Kernel version
Live migrating a VM that uses more than 1 TB of memory may in some cases not work reliably.
The stability of such a migration depends on the following:
The network bandwidth that the host can use for migration
For live migration scenarios that involve VMs with more than 1 TB of memory, customers should
consult Red Hat.
NOTE
The instructions in this section use an example migration scenario with the following host
CPUs:
Prerequisites
You have administrator access to the source host and the destination host for the migration.
Procedure
1. On the source host, obtain its CPU features and paste them into a new XML file, such as
domCaps-CPUs.xml.
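The command is elided here; a sketch using virsh domcapabilities filtered with xmllint:
# virsh domcapabilities | xmllint --xpath "//cpu/mode[@name='host-model']" - > domCaps-CPUs.xml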
2. In the XML file, replace the <mode> </mode> tags with <cpu> </cpu>.
3. Optional: Verify that the content of the domCaps-CPUs.xml file looks similar to the following:
# cat domCaps-CPUs.xml
<cpu>
<model fallback="forbid">Skylake-Client-IBRS</model>
<vendor>Intel</vendor>
<feature policy="require" name="ss"/>
<feature policy="require" name="pdcm"/>
<feature policy="require" name="hypervisor"/>
<feature policy="require" name="tsc_adjust"/>
<feature policy="require" name="clflushopt"/>
<feature policy="require" name="umip"/>
<feature policy="require" name="md-clear"/>
<feature policy="require" name="stibp"/>
<feature policy="require" name="arch-capabilities"/>
<feature policy="require" name="ssbd"/>
<feature policy="require" name="xsaves"/>
<feature policy="require" name="pdpe1gb"/>
<feature policy="require" name="invtsc"/>
<feature policy="require" name="ibpb"/>
<feature policy="require" name="ibrs"/>
<feature policy="require" name="amd-stibp"/>
<feature policy="require" name="amd-ssbd"/>
<feature policy="require" name="rsba"/>
<feature policy="require" name="skip-l1dfl-vmentry"/>
<feature policy="require" name="pschange-mc-no"/>
<feature policy="disable" name="hle"/>
<feature policy="disable" name="rtm"/>
</cpu>
4. On the destination host, use the following command to obtain its CPU features:
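A sketch, mirroring the command used on the source host:
# virsh domcapabilities | xmllint --xpath "//cpu/mode[@name='host-model']" -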
5. Add the obtained CPU features from the destination host to the domCaps-CPUs.xml file on
the source host. Again, replace the <mode> </mode> tags with <cpu> </cpu> and save the file.
6. Optional: Verify that the XML file now contains the CPU features from both hosts.
# cat domCaps-CPUs.xml
<cpu>
<model fallback="forbid">Skylake-Client-IBRS</model>
<vendor>Intel</vendor>
<feature policy="require" name="ss"/>
<feature policy="require" name="vmx"/>
<feature policy="require" name="pdcm"/>
<feature policy="require" name="hypervisor"/>
<feature policy="require" name="tsc_adjust"/>
<feature policy="require" name="clflushopt"/>
<feature policy="require" name="umip"/>
<feature policy="require" name="md-clear"/>
<feature policy="require" name="stibp"/>
<feature policy="require" name="arch-capabilities"/>
<feature policy="require" name="ssbd"/>
<feature policy="require" name="xsaves"/>
<feature policy="require" name="pdpe1gb"/>
<feature policy="require" name="invtsc"/>
<feature policy="require" name="ibpb"/>
<feature policy="require" name="ibrs"/>
<feature policy="require" name="amd-stibp"/>
<feature policy="require" name="amd-ssbd"/>
<feature policy="require" name="rsba"/>
<feature policy="require" name="skip-l1dfl-vmentry"/>
<feature policy="require" name="pschange-mc-no"/>
<feature policy="disable" name="hle"/>
<feature policy="disable" name="rtm"/>
</cpu>
<cpu>
<model fallback="forbid">IvyBridge-IBRS</model>
<vendor>Intel</vendor>
<feature policy="require" name="ss"/>
<feature policy="require" name="vmx"/>
<feature policy="require" name="pdcm"/>
<feature policy="require" name="pcid"/>
<feature policy="require" name="hypervisor"/>
<feature policy="require" name="arat"/>
<feature policy="require" name="tsc_adjust"/>
<feature policy="require" name="umip"/>
<feature policy="require" name="md-clear"/>
<feature policy="require" name="stibp"/>
<feature policy="require" name="arch-capabilities"/>
<feature policy="require" name="ssbd"/>
<feature policy="require" name="xsaveopt"/>
<feature policy="require" name="pdpe1gb"/>
<feature policy="require" name="invtsc"/>
82
CHAPTER 12. MIGRATING VIRTUAL MACHINES
7. Use the XML file to calculate the CPU feature baseline for the VM you intend to migrate.
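The command is elided here; a sketch using the virsh hypervisor-cpu-baseline utility:
# virsh hypervisor-cpu-baseline domCaps-CPUs.xml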
8. Open the XML configuration of the VM you intend to migrate, and replace the contents of the
<cpu> section with the settings obtained in the previous step.
Next steps
To perform a live migration of a virtual machine (VM) between supported KVM hosts, shared VM storage
is required. The following procedure provides instructions for sharing a locally stored VM image with the
source host and the destination host using the NFS protocol.
Prerequisites
Optional: A host system that is neither the source nor the destination host is available for hosting the storage, and both the source and the destination host can reach it through the network. This is the optimal solution for shared storage and is recommended by Red Hat.
Make sure that NFS file locking is not used as it is not supported in KVM.
NFS is installed and enabled on the source and destination hosts. If it is not:
b. Make sure that the ports for NFS, such as 2049, are open in the firewall.
Procedure
1. Connect to the host that will provide shared storage. In this example, it is the cargo-bay host:
# ssh root@cargo-bay
root@cargo-bay's password:
Last login: Mon Sep 24 12:05:36 2019
root~#
2. Create a directory that will hold the disk image and will be shared with the migration hosts.
# mkdir /var/lib/libvirt/shared-images
3. Copy the disk image of the VM from the source host to the newly created directory. For
example, the following copies the disk image of the wanderer1 VM to the
/var/lib/libvirt/shared-images/ directory on the cargo-bay host:
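The copy command is elided; a sketch using scp (the wanderer1.qcow2 file name is assumed):
# scp /var/lib/libvirt/images/wanderer1.qcow2 root@cargo-bay:/var/lib/libvirt/shared-images/wanderer1.qcow2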
4. On the host that you want to use for sharing the storage, add the sharing directory to the
/etc/exports file. The following example shares the /var/lib/libvirt/shared-images directory
with the source-example and dest-example hosts:
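A sketch of the /etc/exports entry (the export options are assumptions):
/var/lib/libvirt/shared-images source-example(rw,no_root_squash) dest-example(rw,no_root_squash)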
5. On both the source and destination host, mount the shared directory in the
/var/lib/libvirt/images directory:
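A sketch of the mount command, run on both the source and destination host:
# mount cargo-bay:/var/lib/libvirt/shared-images /var/lib/libvirt/images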
Verification
To verify the process was successful, start the VM on the source host and observe if it boots
correctly.
Prerequisites
The source host and the destination host both use the KVM hypervisor.
The source host and the destination host are able to reach each other over the network. Use the
ping utility to verify this.
Port 16514 is needed for connecting to the destination host by using TLS.
Port 16509 is needed for connecting to the destination host by using TCP.
Ports 49152-49215 are needed by QEMU for transferring the memory and disk migration data.
For the migration to be supportable by Red Hat, the source host and destination host must be
using specific operating systems and machine types. To ensure this is the case, see Supported
hosts for virtual machine migration.
The VM must be compatible with the CPU features of the destination host. To ensure this is the
case, see Verifying host CPU compatibility for virtual machine migration.
The disk images of VMs that will be migrated are located on a separate networked location
accessible to both the source host and the destination host. This is optional for offline
migration, but required for migrating a running VM.
For instructions to set up such shared VM storage, see Sharing virtual machine disk images with
other hosts.
When migrating a running VM, your network bandwidth must be higher than the rate at which the
VM generates dirty memory pages.
To obtain the dirty page rate of your VM before you start the live migration, do the following:
a. Monitor the rate of dirty page generation of the VM for a short period of time.
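The monitoring commands are elided here; a sketch using the virsh dirty page rate utilities (vm-name is a placeholder; a 30-second calculation period is assumed):
# virsh domdirtyrate-calc vm-name 30
# virsh domstats vm-name --dirtyrate
Domain: 'vm-name'
  dirtyrate.calc_status=2
  dirtyrate.calc_period=30
  dirtyrate.megabytes_per_second=2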
In this example, the VM is generating 2 MB of dirty memory pages per second. Attempting to live-migrate such a VM on a network with a bandwidth of 2 MB/s or less will cause the live migration not to progress unless you pause the VM or reduce its workload.
To ensure that the live migration finishes successfully, Red Hat recommends that your
network bandwidth is significantly greater than the VM’s dirty page generation rate.
When migrating an existing VM in a public bridge tap network, the source and destination hosts
must be located on the same network. Otherwise, the VM network will not operate after
migration.
When performing a VM migration, the virsh client on the source host can use one of several
protocols to connect to the libvirt daemon on the destination host. Examples in the following
procedure use an SSH connection, but you can choose a different one.
If you want libvirt to use an SSH connection, ensure that the virtqemud socket is enabled
and running on the destination host.
If you want libvirt to use a TLS connection, ensure that the virtproxyd-tls socket is enabled
and running on the destination host.
If you want libvirt to use a TCP connection, ensure that the virtproxyd-tcp socket is
enabled and running on the destination host.
Procedure
1. Use the virsh migrate command with options appropriate for your migration requirements.
The following migrates the wanderer1 VM from your local host to the system connection of
the dest-example host using an SSH tunnel. The VM will remain running during the
migration.
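A sketch of this live migration command (the --persistent option is an assumption; without it, the VM stays transient on the destination host):
# virsh migrate --live --persistent wanderer1 qemu+ssh://dest-example/system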
The following enables you to make manual adjustments to the configuration of the
wanderer2 VM running on your local host, and then migrates the VM to the dest-example
host. The migrated VM will automatically use the updated configuration.
This procedure can be useful for example when the destination host needs to use a different
path to access the shared VM storage or when configuring a feature specific to the
destination host.
The following suspends the wanderer3 VM from the source-example host, migrates it to
the dest-example host, and instructs it to use the adjusted XML configuration, provided by
the wanderer3-alt.xml file. When the migration is completed, libvirt resumes the VM on the
destination host.
After the migration, the VM is in the shut off state on the source host, and the migrated
copy is deleted after it is shut down.
The following deletes the shut-down wanderer4 VM from the source-example host, and
moves its configuration to the dest-example host.
Note that this type of migration does not require moving the VM’s disk image to shared
storage. However, for the VM to be usable on the destination host, you also need to migrate
the VM’s disk image. For example:
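A sketch of the disk copy (paths assumed):
# scp /var/lib/libvirt/images/wanderer4.qcow2 root@dest-example:/var/lib/libvirt/images/wanderer4.qcow2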
2. Wait for the migration to complete. The process may take some time depending on network
bandwidth, system load, and the size of the VM. If the --verbose option is not used for virsh
migrate, the CLI does not display any progress indicators except errors.
When the migration is in progress, you can use the virsh domjobinfo utility to display the
migration statistics.
Verification
On the destination host, list the available VMs to verify if the VM has been migrated:
# virsh list
Id Name State
----------------------------------
10 wanderer1 running
If the migration is still running, this command will list the VM state as paused.
Troubleshooting
In some cases, the target host will not be compatible with certain values of the migrated VM’s
XML configuration, such as the network name or CPU type. As a result, the VM will fail to boot
on the target host. To fix these problems, you can update the problematic values by using the
virsh edit command. After updating the values, you must restart the VM for the changes to be
applied.
If a live migration is taking a long time to complete, this may be because the VM is under heavy
load and too many memory pages are changing for live migration to be possible. To fix this
problem, change the migration to a non-live one by suspending the VM.
Additional resources
WARNING
For tasks that modify memory pages faster than KVM can transfer them, such as
heavy I/O load tasks, it is recommended that you do not live migrate the VM.
Prerequisites
Port 16514 is needed for connecting to the destination host by using TLS.
Port 16509 is needed for connecting to the destination host by using TCP.
Ports 49152-49215 are needed by QEMU for transferring the memory and disk migration data.
The VM must be compatible with the CPU features of the destination host. To ensure this is the
case, see Verifying host CPU compatibility for virtual machine migration.
The VM’s disk images are located on a shared storage that is accessible to the source host as
well as the destination host.
When migrating a running VM, your network bandwidth must be higher than the rate in which the
VM generates dirty memory pages.
To obtain the dirty page rate of your VM before you start the live migration, do the following in
your command-line interface:
a. Monitor the rate of dirty page generation of the VM for a short period of time.
In this example, the VM is generating 2 MB of dirty memory pages per second. Attempting to live-migrate such a VM on a network with a bandwidth of 2 MB/s or less will cause the live migration not to progress unless you pause the VM or reduce its workload.
To ensure that the live migration finishes successfully, Red Hat recommends that your
network bandwidth is significantly greater than the VM’s dirty page generation rate.
Procedure
1. In the Virtual Machines interface of the web console, click the Menu button ⋮ of the VM that
you want to migrate.
A drop down menu appears with controls for various VM operations.
2. Click Migrate.
The Migrate VM to another host dialog appears.
Permanent - Do not check the box if you wish to migrate the VM permanently. Permanent
migration completely removes the VM configuration from the source host.
Temporary - Temporary migration migrates a copy of the VM to the destination host. This
copy is deleted from the destination host when the VM is shut down. The original VM
remains on the source host.
5. Click Migrate.
Your VM is migrated to the destination host.
Verification
To verify whether the VM has been successfully migrated and is working correctly:
Confirm whether the VM appears in the list of VMs available on the destination host.
NOTE
Support level is different for other virtualization solutions provided by Red Hat, including
RHOSP and OpenShift Virtualization.
CHAPTER 13. MANAGING VIRTUAL DEVICES
The following sections provide a general overview of what virtual devices are, and instructions on how to
manage them using the CLI or the web console.
The basics
Virtual devices attached to a VM can be configured when creating the VM, and can also be managed on
an existing VM. Generally, virtual devices can be attached to or detached from a VM only when the VM is shut off, but some can be added or removed while the VM is running. This feature is referred to as
device hot plug and hot unplug.
When creating a new VM, libvirt automatically creates and configures a default set of essential virtual
devices, unless specified otherwise by the user. These are based on the host system architecture and
machine type, and usually include:
the CPU
memory
a keyboard
a video card
a sound card
To manage virtual devices after the VM is created, use the command-line interface (CLI). However, to
manage virtual storage devices and NICs, you can also use the RHEL 9 web console.
Performance or flexibility
For some types of devices, RHEL 9 supports multiple implementations, often with a trade-off between
performance and flexibility.
For example, the physical storage used for virtual disks can be represented by files in various formats,
such as qcow2 or raw, and presented to the VM using a variety of controllers:
an emulated controller
virtio-scsi
virtio-blk
An emulated controller is slower than a virtio controller, because virtio devices are designed specifically
for virtualization purposes. On the other hand, emulated controllers make it possible to run operating
systems that have no drivers for virtio devices. Similarly, virtio-scsi offers a more complete support for
SCSI commands, and makes it possible to attach a larger number of disks to the VM. Finally, virtio-blk
provides better performance than both virtio-scsi and emulated controllers, but a more limited range of
use-cases. For example, attaching a physical disk as a LUN device to a VM is not possible when using
virtio-blk.
For more information on types of virtual devices, see Types of virtual devices.
Emulated devices
Emulated devices are software implementations of widely used physical devices. Drivers designed for
physical devices are also compatible with emulated devices. Therefore, emulated devices can be
used very flexibly.
However, since they need to faithfully emulate a particular type of hardware, emulated devices may
suffer a significant performance loss compared with the corresponding physical devices or more
optimized virtual devices.
Virtual CPUs (vCPUs), with a large choice of CPU models available. The performance impact
of emulation depends significantly on the differences between the host CPU and the
emulated vCPU.
Paravirtualized devices
Paravirtualization provides a fast and efficient method for exposing virtual devices to VMs.
Paravirtualized devices expose interfaces that are designed specifically for use in VMs, and thus
significantly increase device performance. RHEL 9 provides paravirtualized devices to VMs using the
virtio API as a layer between the hypervisor and the VM. The drawback of this approach is that it
requires a specific device driver in the guest operating system.
It is recommended to use paravirtualized devices instead of emulated devices for VMs whenever
possible, notably if they are running I/O intensive applications. Paravirtualized devices decrease I/O
latency and increase I/O throughput, in some cases bringing them very close to bare-metal
performance. Other paravirtualized devices also add functionality to VMs that is not otherwise
available.
Nevertheless, some devices can be shared across multiple VMs. For example, a single physical device
can in certain cases provide multiple mediated devices, which can then be assigned to distinct VMs.
USB, PCI, and SCSI passthrough - expose common industry standard buses directly to VMs in
order to make their specific features available to guest software.
N_Port ID virtualization (NPIV) - a Fibre Channel technology to share a single physical host bus
adapter (HBA) with multiple virtual ports.
GPUs and vGPUs - accelerators for specific kinds of graphic or compute workloads. Some
GPUs can be attached directly to a VM, while certain types also offer the ability to create virtual
GPUs (vGPUs) that share the underlying physical hardware.
Attach devices
Modify devices
Remove devices
The following procedure demonstrates how to create and attach virtual devices to your virtual machines
(VMs) using the command-line interface (CLI). Some devices can also be attached to VMs using the
RHEL web console.
For example, you can increase the storage capacity of a VM by attaching a new virtual disk device to it. This is also referred to as device hot plug.
WARNING
Removing a memory device from a VM, also known as memory hot unplug, is not
supported in RHEL 9, and Red Hat highly discourages its use.
Prerequisites
Obtain the required options for the device you intend to attach to a VM. To see the available
options for a specific device, use the virt-xml --device=? command. For example:
# virt-xml --network=?
--network options:
[...]
address.unit
boot_order
clearxml
driver_name
[...]
Procedure
1. To attach a device to a VM, use the virt-xml --add-device command, including the definition of
the device and the required options:
For example, the following command creates a 20GB newdisk qcow2 disk image in the
/var/lib/libvirt/images/ directory, and attaches it as a virtual disk to the running testguest
VM on the next start-up of the VM:
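A sketch of this command using virt-xml:
# virt-xml testguest --add-device --disk /var/lib/libvirt/images/newdisk.qcow2,format=qcow2,size=20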
The following attaches a USB flash drive, attached as device 004 on bus 002 on the host,
to the testguest2 VM while the VM is running:
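A sketch of this command; the --update option applies the change to the running VM:
# virt-xml testguest2 --add-device --update --hostdev 002.004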
The bus-device combination for defining the USB device can be obtained by using the lsusb command.
Verification
To verify the device has been added, do any of the following:
Use the virsh dumpxml command and see if the device’s XML definition has been added to the
<devices> section in the VM’s XML configuration.
For example, the following output shows the configuration of the testguest VM and confirms
that the 002.004 USB flash disk device has been added.
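The output is elided here; the relevant part of the configuration would resemble this sketch (the vendor and product IDs are placeholders):
# virsh dumpxml testguest
[...]
<hostdev mode='subsystem' type='usb' managed='yes'>
  <source>
    <vendor id='0x4146'/>
    <product id='0x902e'/>
    <address bus='2' device='4'/>
  </source>
</hostdev>
[...]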
Run the VM and test if the device is present and works properly.
Additional resources
The following procedure provides general instructions for modifying virtual devices using the command-
line interface (CLI). Some devices attached to your VM, such as disks and NICs, can also be modified
using the RHEL 9 web console.
Prerequisites
Obtain the required options for the device you intend to attach to a VM. To see the available
options for a specific device, use the virt-xml --device=? command. For example:
# virt-xml --network=?
--network options:
[...]
address.unit
boot_order
clearxml
driver_name
[...]
Optional: Back up the XML configuration of your VM by using virsh dumpxml vm-name and
sending the output to a file. For example, the following backs up the configuration of your
Motoko VM as the motoko.xml file:
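The command is elided here; following the syntax named above:
# virsh dumpxml Motoko > motoko.xml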
Procedure
1. Use the virt-xml --edit command, including the definition of the device and the required
options:
For example, the following clears the <cpu> configuration of the shut-off testguest VM and sets
it to host-model:
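A sketch of this command; clearxml=yes removes the existing <cpu> XML before the new value is applied:
# virt-xml testguest --edit --cpu host-model,clearxml=yes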
Verification
To verify the device has been modified, do any of the following:
Run the VM and test if the device is present and reflects the modifications.
Use the virsh dumpxml command and see if the device’s XML definition has been modified in
the VM’s XML configuration.
For example, the following output shows the configuration of the testguest VM and confirms
that the CPU mode has been configured as host-model.
Troubleshooting
If modifying a device causes your VM to become unbootable, use the virsh define utility to
restore the XML configuration by reloading the XML configuration file you backed up previously.
NOTE
For small changes to the XML configuration of your VM, you can use the virsh edit
command - for example virsh edit testguest. However, do not use this method for more
extensive changes, as it is more likely to break the configuration in ways that could
prevent the VM from booting.
Additional resources
The following procedure demonstrates how to remove virtual devices from your virtual machines (VMs)
using the command-line interface (CLI). Some devices, such as disks or NICs, can also be removed from
VMs using the RHEL 9 web console.
Prerequisites
Optional: Back up the XML configuration of your VM by using virsh dumpxml vm-name and
sending the output to a file. For example, the following backs up the configuration of your
Motoko VM as the motoko.xml file:
Procedure
1. Use the virt-xml --remove-device command, including a definition of the device. For example:
The following removes the storage device marked as vdb from the running testguest VM
after it shuts down:
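A sketch of this command:
# virt-xml testguest --remove-device --disk target=vdb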
The following immediately removes a USB flash drive device from the running testguest2
VM:
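A sketch; the --update option applies the removal to the running VM:
# virt-xml testguest2 --remove-device --update --hostdev type=usb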
Troubleshooting
If removing a device causes your VM to become unbootable, use the virsh define utility to
restore the XML configuration by reloading the XML configuration file you backed up previously.
Additional resources
Host devices are physical devices that are attached to the host system. Based on your requirements,
you can enable your VMs to directly access these hardware devices and components.
View devices
Attach devices
Remove devices
13.4.1. Viewing devices attached to virtual machines using the web console
Before adding or modifying the devices attached to your virtual machine (VM), you may want to view
the devices that are already attached to your VM. The following procedure provides instructions for
viewing such devices using the web console.
Prerequisites
Procedure
1. In the Virtual Machines interface, click the VM whose information you want to see.
A new page opens with detailed information about the VM.
Additional resources
NOTE
Attaching multiple host devices at the same time does not work. You can attach only one
device at a time.
Prerequisites
If you are attaching PCI devices, ensure that the status of the managed attribute of the
hostdev element is set to yes.
NOTE
When attaching PCI devices to your VM, do not omit the managed attribute of
the hostdev element, or set it to no. If you do so, PCI devices cannot
automatically detach from the host when you pass them to the VM. They also
cannot automatically reattach to the host when you turn off the VM.
You can find the status of the managed attribute in your VM’s XML configuration. The
following example opens the XML configuration of the Ag47 VM:
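The command is elided here; as a sketch:
# virsh edit Ag47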
Optional: Back up the XML configuration of your VM. For example, to back up the Centurion
VM:
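A sketch of this backup:
# virsh dumpxml Centurion > centurion.xml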
Procedure
1. In the Virtual Machines interface, click the VM to which you want to attach a host device.
A new page opens with an Overview section with basic information about the selected VM and a
Console section to access the VM’s graphical interface.
5. Click Add
The selected device is attached to the VM.
Verification
Run the VM and check if the device appears in the Host devices section.
13.4.3. Removing devices from virtual machines using the web console
To free up resources, modify the functionalities of your VM, or both, you can use the web console to
modify the VM and remove host devices that are no longer required.
WARNING
Removing attached USB host devices using the web console may fail because of
incorrect correlation between the device and bus numbers of the USB device.
As a workaround, remove the <hostdev> part of the USB device from the VM’s XML configuration by using the virsh utility. The following example opens the XML configuration of the Ag47 VM:
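As a sketch:
# virsh edit Ag47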
Prerequisites
Optional: Back up the XML configuration of your VM by using virsh dumpxml vm-name and
sending the output to a file. For example, the following backs up the configuration of your
Motoko VM as the motoko.xml file:
Procedure
1. In the Virtual Machines interface, click the VM from which you want to remove a host device.
A new page opens with an Overview section with basic information about the selected VM and a
Console section to access the VM’s graphical interface.
3. Click the Remove button next to the device you want to remove from the VM.
A remove device confirmation dialog appears.
4. Click Remove.
The device is removed from the VM.
Troubleshooting
If removing a host device causes your VM to become unbootable, use the virsh define utility to
restore the XML configuration by reloading the XML configuration file you backed up previously.
The following sections provide information about using the command line to:
Prerequisites
Ensure the device you want to pass through to the VM is attached to the host.
Procedure
1. Locate the bus and device values of the USB that you want to attach to the VM.
For example, the following command displays a list of USB devices attached to the host. The
device we will use in this example is attached on bus 001 as device 005.
# lsusb
[...]
Bus 001 Device 003: ID 2567:0a2b Intel Corp.
Bus 001 Device 005: ID 0407:6252 Kingston River 2.0
[...]
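The attach command is elided here; a sketch using virt-xml with the bus and device values from step 1 (the testguest name is illustrative):
# virt-xml testguest --add-device --hostdev 001.005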
NOTE
To attach a USB device to a running VM, add the --update argument to the previous
command.
Verification
Run the VM and test if the device is present and works as expected.
Use the virsh dumpxml command to see if the device’s XML definition has been added to the
<devices> section in the VM’s XML configuration file.
Additional resources
Procedure
1. Locate the bus and device values of the USB that you want to remove from the VM.
For example, the following command displays a list of USB devices attached to the host. The
device we will use in this example is attached on bus 001 as device 005.
# lsusb
[...]
Bus 001 Device 003: ID 2567:0a2b Intel Corp.
Bus 001 Device 005: ID 0407:6252 Kingston River 2.0
[...]
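The removal command is elided here; a sketch mirroring the attach syntax (the testguest name is illustrative):
# virt-xml testguest --remove-device --hostdev 001.005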
NOTE
To remove a USB device from a running VM, add the --update argument to the previous
command.
Verification
Run the VM and check if the device has been removed from the list of devices.
Additional resources
The following sections provide information about using the command line to:
Prerequisites
Procedure
Verification
Run the VM and test if the device is present and works as expected.
Additional resources
Prerequisites
Procedure
1. Locate the target device where the CD-ROM is attached to the VM. You can find this
information in the VM’s XML configuration file.
For example, the following command displays the DN1 VM’s XML configuration file, where the
target device for CD-ROM is sda.
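The configuration output is elided here; the relevant <disk> element would resemble this sketch (the source path is a placeholder):
# virsh dumpxml DN1
[...]
<disk type='file' device='cdrom'>
  <driver name='qemu' type='raw'/>
  <source file='/home/username/Doc10.iso'/>
  <target dev='sda' bus='sata'/>
</disk>
[...]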
For example, the following command replaces the Doc10 ISO image, attached to the DN1 VM
at target sda, with the DrDN ISO image stored in the /Dvrs/current/ directory.
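A sketch of this command:
# virt-xml DN1 --edit target=sda --disk /Dvrs/current/DrDN.iso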
Verification
Run the VM and test if the device is replaced and works as expected.
Additional resources
Procedure
1. Locate the target device where the CD-ROM is attached to the VM. You can find this
information in the VM’s XML configuration file.
For example, the following command displays the DN1 VM’s XML configuration file, where the
target device for CD-ROM is sda.
Verification
Additional resources
To remove an optical drive attached to a virtual machine (VM), edit the XML configuration file of the
VM.
Procedure
1. Locate the target device where the CD-ROM is attached to the VM. You can find this
information in the VM’s XML configuration file.
For example, the following command displays the DN1 VM’s XML configuration file, where the
target device for CD-ROM is sda.
Verification
Confirm that the device is no longer listed in the XML configuration file of the VM.
Additional resources
Is able to provide the same or similar service as the original PCIe device.
For example, a single SR-IOV capable network device can present VFs to multiple VMs. While all of the
VFs use the same physical card, the same network connection, and the same network cable, each of the
VMs directly controls its own hardware network device, and uses no extra resources from the host.
Physical functions (PFs) - A PCIe function that provides the functionality of its device (for
example networking) to the host, but can also create and manage a set of VFs. Each SR-IOV
capable device has one or more PFs.
Virtual functions (VFs) - Lightweight PCIe functions that behave as independent devices. Each
VF is derived from a PF. The maximum number of VFs a device can have depends on the device
hardware. Each VF can be assigned only to a single VM at a time, but a VM can have multiple
VFs assigned to it.
VMs recognize VFs as virtual devices. For example, a VF created by an SR-IOV network device appears
as a network card to a VM to which it is assigned, in the same way as a physical network card appears to
the host system.
Benefits
The primary advantages of using SR-IOV VFs rather than emulated devices are:
Improved performance
For example, a VF attached to a VM as a vNIC performs at almost the same level as a physical NIC, and
much better than paravirtualized or emulated NICs. In particular, when multiple VFs are used
simultaneously on a single host, the performance benefits can be significant.
Disadvantages
To modify the configuration of a PF, you must first change the number of VFs exposed by the
PF to zero. Therefore, you also need to remove the devices provided by these VFs from the VM
to which they are assigned.
In addition, VFIO-assigned devices require pinning of VM memory, which increases the memory
consumption of the VM and prevents the use of memory ballooning on the VM.
Additional resources
Prerequisites
The CPU and the firmware of your host support the I/O Memory Management Unit (IOMMU).
If using an Intel CPU, it must support the Intel Virtualization Technology for Directed I/O
(VT-d).
The host system uses Access Control Service (ACS) to provide direct memory access (DMA)
isolation for PCIe topology. Verify this with the system vendor.
For additional information, see Hardware Considerations for Implementing SR-IOV .
The physical network device supports SR-IOV. To verify if any network devices on your system
support SR-IOV, use the lspci -v command and look for Single Root I/O Virtualization (SR-IOV)
in the output.
# lspci -v
[...]
02:00.0 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
Subsystem: Intel Corporation Gigabit ET Dual Port Server Adapter
Flags: bus master, fast devsel, latency 0, IRQ 16, NUMA node 0
Memory at fcba0000 (32-bit, non-prefetchable) [size=128K]
[...]
Capabilities: [150] Alternative Routing-ID Interpretation (ARI)
Capabilities: [160] Single Root I/O Virtualization (SR-IOV)
The host network interface you want to use for creating VFs is running. For example, to activate
the eth1 interface and verify it is running:
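# ip link set eth1 up
# ip link show eth1
8: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 ... state UP mode DEFAULT qlen 1000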
For SR-IOV device assignment to work, the IOMMU feature must be enabled in the host BIOS
and kernel. To do so:
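For example, on an Intel host, enable VT-d in the firmware setup and add the IOMMU parameters to the kernel command line; a sketch using grubby (on AMD hosts, AMD-Vi is typically enabled by default):
# grubby --args="intel_iommu=on iommu=pt" --update-kernel DEFAULT
# reboot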
Procedure
1. Optional: Confirm the maximum number of VFs your network device can use. To do so, use the
following command and replace eth1 with your SR-IOV compatible network device.
# cat /sys/class/net/eth1/device/sriov_totalvfs
7
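2. Create the VFs by writing their count to the sriov_numvfs file of the device (a sketch of the standard sysfs interface):
# echo VF-number > /sys/class/net/network-interface/device/sriov_numvfs
In the command, replace: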
VF-number with the number of VFs you want to create on the PF.
network-interface with the name of the network interface for which the VFs will be created.
The following example creates 2 VFs from the eth1 network interface:
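# echo 2 > /sys/class/net/eth1/device/sriov_numvfs
3. Optional: Verify that the VFs were created; for example, new Ethernet functions appear in the lspci output:
# lspci | grep Ethernet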
4. Make the created VFs persistent by creating a udev rule for the network interface you used to
create the VFs. For example, for the eth1 interface, create the /etc/udev/rules.d/eth1.rules file,
and add the following line:
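A sketch of the rule, matching the driver and VF count described below:
ACTION=="add", SUBSYSTEM=="net", ENV{ID_NET_DRIVER}=="ixgbe", ATTR{device/sriov_numvfs}="2"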
This ensures that the two VFs that use the ixgbe driver will automatically be available for the
eth1 interface when the host starts. If you do not require persistent SR-IOV devices, skip this
step.
WARNING
Currently, the setting described above does not work correctly when
attempting to make VFs persistent on Broadcom NetXtreme II BCM57810
adapters. In addition, attaching VFs based on these adapters to Windows
VMs is currently not reliable.
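5. Attach a VF to a VM; a sketch using virsh attach-interface, with a hypothetical VF PCI address and VM name:
# virsh attach-interface testguest1 hostdev 0000:03:10.2 --managed --live --config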
Verification
If the procedure is successful, the guest operating system detects a new network interface card.
Networking devices
Prerequisites
Your host system is using the IBM Z hardware architecture and supports the FICON protocol.
The necessary kernel modules have been loaded on the host. To verify, use the lsmod | grep vfio command and confirm that the output contains the following modules:
vfio_ccw
vfio_mdev
vfio_iommu_type1
You have a spare DASD device for exclusive use by the VM, and you know the device’s
identifier.
This procedure uses 0.0.002c as an example. When performing the commands, replace 0.0.002c
with the identifier of your DASD device.
Procedure
1. Obtain the subchannel identifier of the DASD device:
# lscss -d 0.0.002c
Device Subchan. DevType CU Type Use PIM PAM POM CHPIDs
----------------------------------------------------------------------
0.0.002c 0.0.29a8 3390/0c 3990/e9 yes f0 f0 ff 02111221 00000000
In this example, the subchannel identifier is detected as 0.0.29a8. In the following commands of
this procedure, replace 0.0.29a8 with the detected subchannel identifier of your device.
2. If the lscss command in the previous step only displayed the header output and no device
information, perform the following steps:
a. Remove the device identifier from the cio_ignore list:
# cio_ignore -r 0.0.002c
b. In the guest OS, edit the kernel command line of the VM and add the device identifier with a
! mark to the line that starts with cio_ignore=, if it is not present already.
cio_ignore=all,!condev,!0.0.002c
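3. Bind the subchannel to the vfio_ccw passthrough driver; a sketch using driverctl (the css bus argument follows the NOTE below):
# driverctl -b css set-override 0.0.29a8 vfio_ccw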
NOTE
This binds the 0.0.29a8 subchannel to vfio_ccw persistently, which means the
DASD will not be usable on the host. If you need to use the device on the host,
you must first remove the automatic binding to 'vfio_ccw' and rebind the
subchannel to the default driver:
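# driverctl -b css unset-override 0.0.29a8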
4. Generate a UUID.
# uuidgen
30820a6f-b1a5-4503-91ca-0c10ba12345a
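A hedged sketch of the intermediate steps, mirroring the mediated-device workflow used for vGPUs later in this document (the file name nodedev.xml and the derived device name are assumptions):
5. Create an XML file, for example nodedev.xml, that defines a vfio_ccw-io mediated device on the subchannel and uses the generated UUID:
<device>
  <parent>css_0_0_29a8</parent>
  <capability type="mdev">
    <type id="vfio_ccw-io"/>
    <uuid>30820a6f-b1a5-4503-91ca-0c10ba12345a</uuid>
  </capability>
</device>
6. Define the device based on the file:
# virsh nodedev-define nodedev.xml
7. Start the device:
# virsh nodedev-start mdev_30820a6f_b1a5_4503_91ca_0c10ba12345a_0_0_29a8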
8. Attach the mediated device to the VM. To do so, use the virsh edit utility to edit the XML
configuration of the VM, add the following section to the XML, and replace the uuid value with
the UUID you generated in the previous step.
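A sketch of the section, with the UUID from the example above:
<hostdev mode='subsystem' type='mdev' model='vfio-ccw'>
  <source>
    <address uuid='30820a6f-b1a5-4503-91ca-0c10ba12345a'/>
  </source>
</hostdev>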
Verification
1. Obtain the identifier that libvirt assigned to the mediated DASD device. To do so, display the
XML configuration of the VM and look for a vfio-ccw device.
<domain>
[...]
<hostdev mode='subsystem' type='mdev' managed='no' model='vfio-ccw'>
<source>
<address uuid='10620d2f-ed4d-437b-8aff-beda461541f9'/>
</source>
<alias name='hostdev0'/>
<address type='ccw' cssid='0xfe' ssid='0x0' devno='0x0009'/>
</hostdev>
[...]
</domain>
2. Start the VM and log in to the guest operating system.
3. In the guest OS, confirm that the DASD device is listed. For example:
# chccwdev -e 0.0009
Setting device 0.0.0009 online
Done
Additional resources
The following sections provide information about the different types of VM storage, how they work, and
how you can manage them using the CLI or the web console.
Storage pools
Storage volumes
Overview of VM storage
Furthermore, multiple VMs can share the same storage pool, allowing for better allocation of storage
resources.
A persistent storage pool survives a system restart of the host machine. You can use the
virsh pool-define command to create a persistent storage pool.
A transient storage pool only exists until the host reboots. You can use the virsh pool-create
command to create a transient storage pool.
Local storage pools are useful for development, testing, and small deployments that do not
require migration or have a large number of VMs.
On the host machine, a storage volume is referred to by its name and an identifier for the storage pool
from which it derives. On the virsh command line, this takes the form --pool storage_pool
volume_name.
For example, to display information about a volume named firstimage in the guest_images pool:
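A sketch of the command and its output (the sizes shown are illustrative):
# virsh vol-info --pool guest_images firstimage
Name:           firstimage
Type:           block
Capacity:       20.00 GiB
Allocation:     20.00 GiB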
You can use the libvirt API to query the list of volumes in a storage pool or to get information regarding
the capacity, allocation, and available storage in that storage pool. For storage pools that support it, you
can also use the libvirt API to create, clone, resize, and delete storage volumes. Furthermore, you can
use the libvirt API to upload data to storage volumes, download data from storage volumes, or wipe
data from storage volumes.
As a storage administrator:
You can define an NFS storage pool on the virtualization host to describe the exported server
path and the client target path. Consequently, libvirt can mount the storage either
automatically when libvirt is started or as needed while libvirt is running.
You can simply add the storage pool and storage volume to a VM by name. You do not need to
add the target path to the volume. Therefore, even if the target client path changes, it does not
affect the VM.
You can configure storage pools to autostart. When you do so, libvirt automatically mounts the
NFS shared disk on the directory which is specified when libvirt is started. libvirt mounts the
share on the specified directory, similar to the command mount
nfs.example.com:/path/to/share /vmdata.
You can query the storage volume paths using the libvirt API. These storage volumes are
basically the files present in the NFS shared disk. You can then copy these paths into the section
of a VM’s XML definition that describes the source storage for the VM’s block devices.
In the case of NFS, you can use an application that uses the libvirt API to create and delete
storage volumes in the storage pool (files in the NFS share) up to the limit of the size of the pool
(the storage capacity of the share).
Note that not all storage pool types support creating and deleting volumes.
You can stop a storage pool when no longer required. Stopping a storage pool (pool-destroy)
undoes the start operation, in this case, unmounting the NFS share. The data on the share is not
modified by the destroy operation, despite what the name of the command suggests. For more
information, see man virsh.
The following is a list of libvirt storage pool types not supported by RHEL:
Create SCSI-based storage pools with vHBA devices using the CLI
Procedure
Additional resources
Prerequisites
Procedure
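A minimal sketch of the define and build steps, assuming a pool named guest_images backed by the /guest_images directory:
# virsh pool-define-as guest_images dir --target "/guest_images"
Pool guest_images defined
# virsh pool-build guest_images
Pool guest_images built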
If you already have an XML configuration of the storage pool you want to create, you can also
define the pool based on the XML. For details, see Directory-based storage pool parameters .
# ls -la /guest_images
total 8
drwx------. 2 root root 4096 May 31 19:38 .
dr-xr-xr-x. 25 root root 4096 May 31 19:38 ..
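Then start the pool and, optionally, configure it to start automatically:
# virsh pool-start guest_images
# virsh pool-autostart guest_images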
NOTE
The virsh pool-start command is only necessary for persistent storage pools.
Transient storage pools are automatically started when they are created.
Verification
Use the virsh pool-info command to verify that the storage pool is in the running state. Check
if the sizes reported are as expected and if autostart is configured correctly.
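For example (names and sizes are illustrative):
# virsh pool-info guest_images
Name:           guest_images
State:          running
Persistent:     yes
Autostart:      yes
Capacity:       458.39 GiB
Allocation:     197.91 MiB
Available:      458.20 GiB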
Prerequisites
Prepare a device on which you will base the storage pool. For this purpose, prefer partitions (for
example, /dev/sdb1) or LVM volumes. If you provide a VM with write access to an entire disk or
block device (for example, /dev/sdb), the VM will likely partition it or create its own LVM groups
on it. This can result in system errors on the host.
However, if you require using an entire block device for the storage pool, Red Hat recommends
protecting any important partitions on the device from GRUB’s os-prober function. To do so,
edit the /etc/default/grub file and apply one of the following configurations:
Disable os-prober:
GRUB_DISABLE_OS_PROBER=true
Alternatively, prevent os-prober from discovering the partition you want to protect, specified by its partition UUID and device:
GRUB_OS_PROBER_SKIP_LIST="5ef6313a-257c-4d43@/dev/sdb1"
Back up any data on the selected storage device before creating a storage pool. Depending on
the version of libvirt being used, dedicating a disk to a storage pool may reformat and erase all
data currently stored on the disk device.
Procedure
Use the virsh pool-define-as command to define and create a disk-type storage pool. The
following example creates a storage pool named guest_images_disk that uses the /dev/sdb
device and is mounted on the /dev directory.
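A sketch of the command (the GPT source format is an assumption that matches the XML example later in this chapter):
# virsh pool-define-as guest_images_disk disk --source-format=gpt --source-dev=/dev/sdb --target /dev
Pool guest_images_disk defined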
If you already have an XML configuration of the storage pool you want to create, you can also
define the pool based on the XML. For details, see Disk-based storage pool parameters .
NOTE
Building the target path is only necessary for disk-based, file system-based, and
logical storage pools. If libvirt detects that the source storage device’s data
format differs from the selected storage pool type, the build fails, unless the
overwrite option is specified.
NOTE
The virsh pool-start command is only necessary for persistent storage pools.
Transient storage pools are automatically started when they are created.
Verification
Use the virsh pool-info command to verify that the storage pool is in the running state. Check
if the sizes reported are as expected and if autostart is configured correctly.
Prerequisites
Prepare a device on which you will base the storage pool. For this purpose, prefer partitions (for
example, /dev/sdb1) or LVM volumes. If you provide a VM with write access to an entire disk or
block device (for example, /dev/sdb), the VM will likely partition it or create its own LVM groups
on it. This can result in system errors on the host.
However, if you require using an entire block device for the storage pool, Red Hat recommends
protecting any important partitions on the device from GRUB’s os-prober function. To do so,
edit the /etc/default/grub file and apply one of the following configurations:
Disable os-prober:
GRUB_DISABLE_OS_PROBER=true
Alternatively, prevent os-prober from discovering the partition you want to protect, specified by its partition UUID and device:
GRUB_OS_PROBER_SKIP_LIST="5ef6313a-257c-4d43@/dev/sdb1"
Procedure
Use the virsh pool-define-as command to define and create a filesystem-type storage pool.
For example, to create a storage pool named guest_images_fs that uses the /dev/sdc1
partition, and is mounted on the /guest_images directory:
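# virsh pool-define-as guest_images_fs fs --source-dev /dev/sdc1 --target /guest_images
Pool guest_images_fs defined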
If you already have an XML configuration of the storage pool you want to create, you can also
define the pool based on the XML. For details, see Filesystem-based storage pool parameters .
# ls -la /guest_images
total 8
drwx------. 2 root root 4096 May 31 19:38 .
dr-xr-xr-x. 25 root root 4096 May 31 19:38 ..
NOTE
The virsh pool-start command is only necessary for persistent storage pools.
Transient storage pools are automatically started when they are created.
Verification
1. Use the virsh pool-info command to verify that the storage pool is in the running state. Check
if the sizes reported are as expected and if autostart is configured correctly.
2. Verify there is a lost+found directory in the target path on the file system, indicating that the
device is mounted.
# ls -la /guest_images
total 24
drwxr-xr-x. 3 root root 4096 May 31 19:47 .
dr-xr-xr-x. 25 root root 4096 May 31 19:38 ..
drwx------. 2 root root 16384 May 31 14:18 lost+found
Prerequisites
Procedure
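A sketch of the define step, reusing the values from the XML example later in this section:
# virsh pool-define-as iSCSI_pool iscsi --source-host server1.example.com --source-dev iqn.2010-05.com.example.server1:iscsirhel7guest --target /dev/disk/by-path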
If you already have an XML configuration of the storage pool you want to create, you can also
define the pool based on the XML. For details, see iSCSI-based storage pool parameters .
NOTE
The virsh pool-start command is only necessary for persistent storage pools.
Transient storage pools are automatically started when they are created.
Verification
Use the virsh pool-info command to verify that the storage pool is in the running state. Check
if the sizes reported are as expected and if autostart is configured correctly.
Recommendations
Be aware of the following before creating an LVM-based storage pool:
libvirt supports thin logical volumes, but does not provide the features of thin storage pools.
LVM-based storage pools are volume groups. You can create volume groups using the virsh
utility, but this way you can only have one device in the created volume group. To create a
volume group with multiple devices, use the LVM utility instead, see How to create a volume
group in Linux with LVM.
For more detailed information about volume groups, refer to the Red Hat Enterprise Linux
Logical Volume Manager Administration Guide.
LVM-based storage pools require a full disk partition. If you activate a new partition or device
using virsh commands, the partition will be formatted and all data will be erased. If you are using
a host’s existing volume group, as in these procedures, nothing will be erased.
Prerequisites
Procedure
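A sketch of the define step, reusing the values from the XML example later in this section:
# virsh pool-define-as guest_images_lvm logical --source-dev /dev/sdc --source-name libvirt_lvm --target /dev/libvirt_lvm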
If you already have an XML configuration of the storage pool you want to create, you can also
define the pool based on the XML. For details, see LVM-based storage pool parameters .
NOTE
The virsh pool-start command is only necessary for persistent storage pools.
Transient storage pools are automatically started when they are created.
Verification
Use the virsh pool-info command to verify that the storage pool is in the running state. Check
if the sizes reported are as expected and if autostart is configured correctly.
Prerequisites
Procedure
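A sketch of the define step, reusing the values from the XML example later in this section:
# virsh pool-define-as nfspool netfs --source-host file_server --source-path /home/net_mount --target /var/lib/libvirt/images/nfspool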
If you already have an XML configuration of the storage pool you want to create, you can also
define the pool based on the XML. For details, see NFS-based storage pool parameters .
NOTE
The virsh pool-start command is only necessary for persistent storage pools.
Transient storage pools are automatically started when they are created.
Verification
Use the virsh pool-info command to verify that the storage pool is in the running state. Check
if the sizes reported are as expected and if autostart is configured correctly.
14.2.8. Creating SCSI-based storage pools with vHBA devices using the CLI
If you want to have a storage pool on a Small Computer System Interface (SCSI) device, your host must
be able to connect to the SCSI device using a virtual host bus adapter (vHBA). You can then use the
virsh utility to create SCSI-based storage pools.
Prerequisites
Before creating SCSI-based storage pools with vHBA devices, create a vHBA. For more
information, see Creating vHBAs.
Procedure
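A sketch of the define step, reusing the adapter values from the XML examples later in this section:
# virsh pool-define-as vhbapool_host3 scsi --adapter-parent scsi_host3 --adapter-wwnn 5001a4a93526d0a1 --adapter-wwpn 5001a4ace3ee047d --target /dev/disk/by-path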
If you already have an XML configuration of the storage pool you want to create, you can also
define the pool based on the XML. For details, see Parameters for SCSI-based storage pools
with vHBA devices.
NOTE
The virsh pool-start command is only necessary for persistent storage pools.
Transient storage pools are automatically started when they are created.
Verification
Use the virsh pool-info command to verify that the storage pool is in the running state. Check
if the sizes reported are as expected and if autostart is configured correctly.
Procedure
1. List the defined storage pools using the virsh pool-list command.
2. Stop the storage pool you want to delete using the virsh pool-destroy command.
3. Optional: For some types of storage pools, you can remove the directory where the storage
pool resides using the virsh pool-delete command. Note that to do so, the directory must be
empty.
4. Delete the definition of the storage pool using the virsh pool-undefine command.
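For example, for a pool named Downloads (the name is hypothetical):
# virsh pool-destroy Downloads
# virsh pool-delete Downloads
# virsh pool-undefine Downloads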
Verification
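Confirm that the deleted storage pool is no longer listed:
# virsh pool-list --all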
Prerequisites
Procedure
Size - The current allocation and the total capacity of the storage pool.
2. Click the arrow next to the storage pool whose information you want to see.
The row expands to reveal the Overview pane with detailed information about the selected
storage pool.
Target path - The source for the types of storage pools backed by directories, such as dir
or netfs.
Persistent - Indicates whether or not the storage pool has a persistent configuration.
Autostart - Indicates whether or not the storage pool starts automatically when the system
boots up.
3. To view a list of storage volumes associated with the storage pool, click Storage Volumes.
The Storage Volumes pane appears, showing a list of configured storage volumes.
Additional resources
Prerequisites
Procedure
1. In the RHEL web console, click Storage pools in the Virtual Machines tab.
The Storage pools window appears, showing a list of configured storage pools, if any.
NOTE
If you do not see the Filesystem directory option in the drop down menu, then
your hypervisor does not support directory-based storage pools.
Target path - The source for the types of storage pools backed by directories, such as dir
or netfs.
Startup - Whether or not the storage pool starts when the host boots.
6. Click Create.
The storage pool is created, the Create Storage Pool dialog closes, and the new storage pool
appears in the list of storage pools.
Additional resources
Prerequisites
Procedure
1. In the RHEL web console, click Storage pools in the Virtual Machines tab.
The Storage pools window appears, showing a list of configured storage pools, if any.
NOTE
If you do not see the Network file system option in the drop down menu, then
your hypervisor does not support nfs-based storage pools.
Target path - The path specifying the target. This will be the path used for the storage
pool.
Host - The hostname of the network server where the mount point is located. This can be a
hostname or an IP address.
Startup - Whether or not the storage pool starts when the host boots.
6. Click Create.
The storage pool is created. The Create storage pool dialog closes, and the new storage pool
appears in the list of storage pools.
Additional resources
Prerequisites
Procedure
1. In the RHEL web console, click Storage pools in the Virtual Machines tab.
The Storage pools window appears, showing a list of configured storage pools, if any.
Target Path - The path specifying the target. This will be the path used for the storage
pool.
Source path - The unique iSCSI Qualified Name (IQN) of the iSCSI target.
Startup - Whether or not the storage pool starts when the host boots.
6. Click Create.
The storage pool is created. The Create storage pool dialog closes, and the new storage pool
appears in the list of storage pools.
Additional resources
WARNING
When whole disks or block devices are passed to the VM, the VM will likely
partition it or create its own LVM groups on it. This can cause the host
machine to detect these partitions or LVM groups and cause errors.
These errors can also occur when you manually create partitions or LVM
groups and pass them to the VM.
Prerequisites
Procedure
1. In the RHEL web console, click Storage pools in the Virtual Machines tab.
The Storage pools window appears, showing a list of configured storage pools, if any.
NOTE
If you do not see the Physical disk device option in the drop down menu, then
your hypervisor does not support disk-based storage pools.
Target Path - The path specifying the target device. This will be the path used for the
storage pool.
Source path - The path specifying the storage device. For example, /dev/sdb.
Startup - Whether or not the storage pool starts when the host boots.
6. Click Create.
The storage pool is created. The Create storage pool dialog closes, and the new storage pool
appears in the list of storage pools.
Additional resources
NOTE
libvirt supports thin logical volumes, but does not provide the features of thin
storage pools.
LVM-based storage pools require a full disk partition. If you activate a new
partition or device using virsh commands, the partition will be formatted and all
data will be erased. If you are using a host’s existing volume group, as in these
procedures, nothing will be erased.
To create a volume group with multiple devices, use the LVM utility instead, see
How to create a volume group in Linux with LVM .
For more detailed information about volume groups, refer to the Red Hat
Enterprise Linux Logical Volume Manager Administration Guide.
Prerequisites
Procedure
1. In the RHEL web console, click Storage pools in the Virtual Machines tab.
The Storage pools window appears, showing a list of configured storage pools, if any.
NOTE
If you do not see the LVM volume group option in the drop down menu, then
your hypervisor does not support LVM-based storage pools.
Source volume group - The name of the LVM volume group that you wish to use.
Startup - Whether or not the storage pool starts when the host boots.
6. Click Create.
The storage pool is created. The Create storage pool dialog closes, and the new storage pool
appears in the list of storage pools.
Additional resources
IMPORTANT
Unless explicitly specified, deleting a storage pool does not simultaneously delete the
storage volumes inside that pool.
To temporarily deactivate a storage pool instead of deleting it, see Deactivating storage pools using the
web console
Prerequisites
If you want to delete the associated storage volumes along with the pool, activate the pool.
Procedure
1. Click Storage Pools at the top of the Virtual Machines tab. The Storage Pools window appears, showing a list of configured storage pools.
2. Click the Menu button ⋮ of the storage pool you want to delete and click Delete.
A confirmation dialog appears.
3. Optional: To delete the storage volumes inside the pool, select the corresponding check boxes
in the dialog.
4. Click Delete.
The storage pool is deleted. If you had selected the checkbox in the previous step, the
associated storage volumes are deleted as well.
Additional resources
When you deactivate a storage pool, no new volumes can be created in that pool. However, any virtual
machines (VMs) that have volumes in that pool will continue to run. This is useful for a number of
reasons, for example, you can limit the number of volumes that can be created in a pool to increase
system performance.
To deactivate a storage pool using the RHEL web console, see the following procedure.
Prerequisites
Procedure
1. Click Storage Pools at the top of the Virtual Machines tab. The Storage Pools window appears,
showing a list of configured storage pools.
Additional resources
You can use the virsh pool-define command to create a storage pool based on the XML configuration
in a specified file. For example:
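Assuming the configuration is saved in ~/guest_images.xml (the file and pool names are placeholders):
# virsh pool-define ~/guest_images.xml
Pool guest_images defined from guest_images.xml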
Parameters
The following table provides a list of required parameters for the XML file for a directory-based storage
pool.
Description: The path specifying the target. This will be the path used for the storage pool.
XML:
<target>
  <path>target_path</path>
</target>
Example
The following is an example of an XML file for a storage pool based on the /guest_images directory:
<pool type='dir'>
<name>dirpool</name>
<target>
<path>/guest_images</path>
</target>
</pool>
Additional resources
You can use the virsh pool-define command to create a storage pool based on the XML configuration
in a specified file. For example:
Parameters
The following table provides a list of required parameters for the XML file for a disk-based storage pool.
Description: The path specifying the target device. This will be the path used for the storage pool.
XML:
<target>
  <path>target_path</path>
</target>
Example
The following is an example of an XML file for a disk-based storage pool:
<pool type='disk'>
<name>phy_disk</name>
<source>
<device path='/dev/sdb'/>
<format type='gpt'/>
</source>
<target>
<path>/dev</path>
</target>
</pool>
Additional resources
You can use the virsh pool-define command to create a storage pool based on the XML configuration
in a specified file. For example:
Parameters
The following table provides a list of required parameters for the XML file for a filesystem-based
storage pool.
Description: The file system type, for example ext4.
XML:
<format type='fs_type'/>
</source>
Description: The path specifying the target. This will be the path used for the storage pool.
XML:
<target>
  <path>path-to-pool</path>
</target>
Example
The following is an example of an XML file for a storage pool based on the /dev/sdc1 partition:
<pool type='fs'>
<name>guest_images_fs</name>
<source>
<device path='/dev/sdc1'/>
<format type='auto'/>
</source>
<target>
<path>/guest_images</path>
</target>
</pool>
Additional resources
You can use the virsh pool-define command to create a storage pool based on the XML configuration
in a specified file. For example:
Parameters
The following table provides a list of required parameters for the XML file for an iSCSI-based storage
pool.
Description: The path specifying the target. This will be the path used for the storage pool.
XML:
<target>
  <path>/dev/disk/by-path</path>
</target>
NOTE
The IQN of the iSCSI initiator can be determined using the virsh find-storage-pool-sources-as
iscsi command.
Example
The following is an example of an XML file for a storage pool based on the specified iSCSI device:
<pool type='iscsi'>
<name>iSCSI_pool</name>
<source>
<host name='server1.example.com'/>
<device path='iqn.2010-05.com.example.server1:iscsirhel7guest'/>
</source>
<target>
<path>/dev/disk/by-path</path>
</target>
</pool>
Additional resources
You can use the virsh pool-define command to create a storage pool based on the XML configuration
in a specified file. For example:
Parameters
The following table provides a list of required parameters for the XML file for an LVM-based
pool.
NOTE
If the logical volume group is made of multiple disk partitions, there may be multiple
source devices listed. For example:
<source>
<device path='/dev/sda1'/>
<device path='/dev/sdb3'/>
<device path='/dev/sdc2'/>
...
</source>
Example
The following is an example of an XML file for a storage pool based on the specified LVM:
<pool type='logical'>
<name>guest_images_lvm</name>
<source>
<device path='/dev/sdc'/>
<name>libvirt_lvm</name>
<format type='lvm2'/>
</source>
<target>
<path>/dev/libvirt_lvm</path>
</target>
</pool>
Additional resources
You can use the virsh pool-define command to create a storage pool based on the XML configuration
in a specified file. For example:
Parameters
The following table provides a list of required parameters for the XML file for an NFS-based storage
pool.
Description: The path specifying the target. This will be the path used for the storage pool.
XML:
<target>
  <path>target_path</path>
</target>
Example
The following is an example of an XML file for a storage pool based on the /home/net_mount directory
of the file_server NFS server:
<pool type='netfs'>
<name>nfspool</name>
<source>
<host name='file_server'/>
<format type='nfs'/>
<dir path='/home/net_mount'/>
</source>
<target>
<path>/var/lib/libvirt/images/nfspool</path>
</target>
</pool>
Additional resources
You can use the virsh pool-define command to create a storage pool based on the XML configuration
in a specified file. For example:
Parameters
The following table provides a list of required parameters for the XML file for a SCSI-based storage
pool with vHBA.
Table 14.7. Parameters for SCSI-based storage pools with vHBA devices
Description: The target path. This will be the path used for the storage pool.
XML:
<target>
  <path>target_path</path>
</target>
IMPORTANT
When the <path> field is /dev/, libvirt generates a unique short device path for the
volume device path. For example, /dev/sdc. Otherwise, the physical host path is used. For
example, /dev/disk/by-path/pci-0000:10:00.0-fc-0x5006016044602198-lun-0. The
unique short device path allows the same volume to be listed in multiple virtual machines
(VMs) by multiple storage pools. If the physical host path is used by multiple VMs,
duplicate device type warnings may occur.
NOTE
The parent attribute can be used in the <adapter> field to identify the physical HBA
parent from which the NPIV LUNs by varying paths can be used. This field, scsi_hostN, is
combined with the vports and max_vports attributes to complete the parent
identification. The parent, parent_wwnn, parent_wwpn, or parent_fabric_wwn
attributes provide varying degrees of assurance that after the host reboots the same
HBA is used.
If no parent is specified, libvirt uses the first scsi_hostN adapter that supports
NPIV.
If only the parent is specified, problems can arise if additional SCSI host adapters
are added to the configuration.
If parent_fabric_wwn is used, after the host reboots an HBA on the same fabric
is selected, regardless of the scsi_hostN used.
Examples
The following are examples of XML files for SCSI-based storage pools with vHBA.
<pool type='scsi'>
<name>vhbapool_host3</name>
<source>
<adapter type='fc_host' wwnn='5001a4a93526d0a1' wwpn='5001a4ace3ee047d'/>
</source>
<target>
<path>/dev/disk/by-path</path>
</target>
</pool>
A storage pool that is one of several storage pools that use a single vHBA and uses the parent
attribute to identify the SCSI host device:
<pool type='scsi'>
<name>vhbapool_host3</name>
<source>
<adapter type='fc_host' parent='scsi_host3' wwnn='5001a4a93526d0a1'
wwpn='5001a4ace3ee047d'/>
</source>
<target>
<path>/dev/disk/by-path</path>
</target>
</pool>
Additional resources
Creating SCSI-based storage pools with vHBA devices using the CLI
Procedure
1. Use the virsh vol-list command to list the storage volumes in a specified storage pool.
2. Use the virsh vol-info command to display information about a specific storage volume. For example:
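A sketch, assuming the RHEL-SP pool used in the later examples:
# virsh vol-info --pool RHEL-SP RHEL_Volume.qcow2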
Name: RHEL_Volume.qcow2
Type: file
Capacity: 60.00 GiB
Allocation: 13.93 GiB
Prerequisites
If you do not have an existing storage pool, create one. For more information, see Managing
storage for virtual machines.
Procedure
1. Create a storage volume using the virsh vol-create-as command. For example, to create a 20
GB qcow2 volume based on the guest-images-fs storage pool:
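A sketch, using the volume name from the next step:
# virsh vol-create-as guest-images-fs vm-disk1 20GB --format qcow2
Vol vm-disk1 created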
Important: Specific storage pool types do not support the virsh vol-create-as command and
instead require specific processes to create storage volumes.
2. Create an XML file, and add the following lines in it. This file will be used to add the storage
volume as a disk to a VM.
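A sketch of such a file, matching the description below:
<disk type='volume' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source pool='guest-images-fs' volume='vm-disk1'/>
  <target dev='hdk' bus='ide'/>
</disk>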
This example specifies a virtual disk that uses the vm-disk1 volume, created in the previous
step, and sets the volume to be set up as disk hdk on an ide bus. Modify the respective
parameters as appropriate for your environment.
Important: With specific storage pool types, you must use different XML formats to describe a
storage volume disk.
3. Use the XML file to assign the storage volume as a disk to a VM. For example, to assign a disk
defined in ~/vm-disk1.xml to the testguest1 VM:
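# virsh attach-device --config testguest1 ~/vm-disk1.xml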
Verification
In the guest operating system of the VM, confirm that the disk image has become available as
an unformatted and unallocated disk.
Prerequisites
Any virtual machine that uses the storage volume you want to delete is shut down.
Procedure
1. Use the virsh vol-list command to list the storage volumes in a specified storage pool.
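For example, assuming the RHEL-SP pool from the following steps:
# virsh vol-list --pool RHEL-SP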
.bashrc /home/VirtualMachines/.bashrc
.git-prompt.sh /home/VirtualMachines/.git-prompt.sh
.gitconfig /home/VirtualMachines/.gitconfig
vm-disk1 /home/VirtualMachines/vm-disk1
2. Optional: Use the virsh vol-wipe command to wipe a storage volume. For example, to wipe a
storage volume named vm-disk1 associated with the storage pool RHEL-SP:
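# virsh vol-wipe --pool RHEL-SP vm-disk1
Vol vm-disk1 wiped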
3. Use the virsh vol-delete command to delete a storage volume. For example, to delete a
storage volume named vm-disk1 associated with the storage pool RHEL-SP:
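# virsh vol-delete --pool RHEL-SP vm-disk1
Vol vm-disk1 deleted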
Verification
Use the virsh vol-list command again to verify that the storage volume was deleted.
To create storage volumes using the web console, see the following procedure.
Prerequisites
Procedure
1. Click Storage Pools at the top of the Virtual Machines tab. The Storage Pools window appears,
showing a list of configured storage pools.
2. In the Storage Pools window, click the storage pool from which you want to create a storage
volume.
The row expands to reveal the Overview pane with basic information about the selected storage
pool.
3. Click Storage Volumes next to the Overview tab in the expanded row.
The Storage Volume tab appears with basic information about existing storage volumes, if any.
Format - The format of the storage volume. The supported types are qcow2 and raw.
6. Click Create.
The storage volume is created, the Create Storage Volume dialog closes, and the new storage
volume appears in the list of storage volumes.
Additional resources
To remove storage volumes using the RHEL web console, see the following procedure.
Prerequisites
Any virtual machine that uses the storage volume you want to delete is shut down.
Procedure
1. Click Storage Pools at the top of the Virtual Machines tab. The Storage Pools window appears,
showing a list of configured storage pools.
2. In the Storage Pools window, click the storage pool from which you want to remove a storage
volume.
The row expands to reveal the Overview pane with basic information about the selected storage
pool.
3. Click Storage Volumes next to the Overview tab in the expanded row.
The Storage Volume tab appears with basic information about existing storage volumes, if any.
Additional resources
Attach disks to a VM .
Using the web console, you can view detailed information about disks assigned to a selected virtual
machine (VM).
Prerequisites
Procedure
2. Scroll to Disks.
The Disks section displays information about the disks assigned to the VM as well as options to
Add, Remove, or Edit disks.
Access - Whether the disk is Writeable or Read-only. For raw disks, you can also set the
access to Writeable and shared.
Additional resources
14.7.2. Adding new disks to virtual machines using the web console
You can add new disks to virtual machines (VMs) by creating a new storage volume and attaching it to a
VM using the RHEL 9 web console.
Prerequisites
Procedure
1. In the Virtual Machines interface, click the VM for which you want to create and attach a new
disk.
A new page opens with an Overview section with basic information about the selected VM and a
Console section to access the VM’s graphical interface.
2. Scroll to Disks.
The Disks section displays information about the disks assigned to the VM as well as options to
Add, Remove, or Edit disks.
Pool - Select the storage pool from which the virtual disk will be created.
Name - Enter a name for the virtual disk that will be created.
Size - Enter the size and select the unit (MiB or GiB) of the virtual disk that will be created.
Format - Select the format for the virtual disk that will be created. The supported types are
qcow2 and raw.
Persistence - If checked, the virtual disk is persistent. If not checked, the virtual disk is
transient.
NOTE
6. Click Add.
The virtual disk is created and connected to the VM.
Additional resources
14.7.3. Attaching existing disks to virtual machines using the web console
Using the web console, you can attach existing storage volumes as disks to a virtual machine (VM).
Prerequisites
Procedure
1. In the Virtual Machines interface, click the VM for which you want to create and attach a new
disk.
A new page opens with an Overview section with basic information about the selected VM and a
Console section to access the VM’s graphical interface.
2. Scroll to Disks.
The Disks section displays information about the disks assigned to the VM as well as options to
Add, Remove, or Edit disks.
Pool - Select the storage pool from which the virtual disk will be attached.
Persistence - Available when the VM is running. Select the Always attach checkbox to
make the virtual disk persistent. Clear the checkbox to make the virtual disk transient.
6. Click Add.
The selected virtual disk is attached to the VM.
Additional resources
14.7.4. Detaching disks from virtual machines using the web console
Using the web console, you can detach disks from virtual machines (VMs).
Prerequisites
Procedure
1. In the Virtual Machines interface, click the VM from which you want to detach a disk.
A new page opens with an Overview section with basic information about the selected VM and a
Console section to access the VM’s graphical interface.
2. Scroll to Disks.
The Disks section displays information about the disks assigned to the VM as well as options to
Add, Remove, or Edit disks.
3. Click the Remove button next to the disk you want to detach from the VM. A Remove Disk
confirmation dialog box appears.
Additional resources
The following provides instructions for securing iSCSI-based storage pools with libvirt secrets.
NOTE
This procedure is required if a user_ID and password were defined when creating the
iSCSI target.
Prerequisites
Ensure that you have created an iSCSI-based storage pool. For more information, see Creating
iSCSI-based storage pools using the CLI.
Procedure
1. Create a libvirt secret file with a challenge-handshake authentication protocol (CHAP) user
name. For example:
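A sketch of the secret file and the define step, assuming the file is named secret.xml and a usage name that matches the later examples:
<secret ephemeral='no' private='yes'>
  <description>Passphrase for the iSCSI example.com server</description>
  <usage type='iscsi'>
    <target>iscsirhel7secret</target>
  </usage>
</secret>
2. Define the libvirt secret:
# virsh secret-define secret.xml
3. Verify that a UUID was assigned to the secret: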
# virsh secret-list
UUID Usage
-------------------------------------------------------------------
2d7891af-20be-4e5e-af83-190e8a922360 iscsi iscsirhel7secret
4. Assign a secret to the UUID in the output of the previous step using the virsh secret-set-value
command. This ensures that the CHAP username and password are in a libvirt-controlled secret
list. For example:
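A sketch, with a hypothetical password:
# MYSECRET=$(printf %s "password123" | base64)
# virsh secret-set-value 2d7891af-20be-4e5e-af83-190e8a922360 $MYSECRET
Secret value set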
5. Add an authentication entry in the storage pool’s XML file using the virsh edit command, and
add an <auth> element, specifying authentication type, username, and secret usage.
For example:
<pool type='iscsi'>
<name>iscsirhel7pool</name>
<source>
<host name='192.168.122.1'/>
<device path='iqn.2010-05.com.example.server1:iscsirhel7guest'/>
<auth type='chap' username='redhat'>
<secret usage='iscsirhel7secret'/>
</auth>
</source>
<target>
<path>/dev/disk/by-path</path>
</target>
</pool>
NOTE
The <auth> sub-element exists in different locations within the virtual machine’s
<pool> and <disk> XML elements. For a <pool>, <auth> is specified within the
<source> element, as this describes where to find the pool sources, since
authentication is a property of some pool sources (iSCSI and RBD). For a <disk>,
which is a sub-element of a domain, the authentication to the iSCSI or RBD disk is
a property of the disk. In addition, the <auth> sub-element for a disk differs from
that of a storage pool.
<auth username='redhat'>
<secret type='iscsi' usage='iscsirhel7secret'/>
</auth>
6. To activate the changes, activate the storage pool. If the pool has already been started, stop and
restart the storage pool:
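# virsh pool-destroy iscsirhel7pool
# virsh pool-start iscsirhel7pool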
Procedure
1. Locate the HBAs on your host system, using the virsh nodedev-list --cap vports command.
The following example shows a host that has two HBAs that support vHBA:
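A sketch of the output (the second adapter name is illustrative):
# virsh nodedev-list --cap vports
scsi_host3
scsi_host4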
2. View the HBA’s details, using the virsh nodedev-dumpxml HBA_device command.
The output from the command lists the <name>, <wwnn>, and <wwpn> fields, which are used
to create a vHBA. <max_vports> shows the maximum number of supported vHBAs. For
example:
<device>
<name>scsi_host3</name>
<path>/sys/devices/pci0000:00/0000:00:04.0/0000:10:00.0/host3</path>
<parent>pci_0000_10_00_0</parent>
<capability type='scsi_host'>
<host>3</host>
<unique_id>0</unique_id>
<capability type='fc_host'>
<wwnn>20000000c9848140</wwnn>
<wwpn>10000000c9848140</wwpn>
<fabric_wwn>2002000573de9a81</fabric_wwn>
</capability>
<capability type='vport_ops'>
<max_vports>127</max_vports>
<vports>0</vports>
</capability>
</capability>
</device>
In this example, the <max_vports> value shows there are a total 127 virtual ports available for
use in the HBA configuration. The <vports> value shows the number of virtual ports currently
being used. These values update after creating a vHBA.
3. Create an XML file similar to one of the following for the vHBA host. In these examples, the file
is named vhba_host3.xml.
This example uses scsi_host3 to describe the parent vHBA.
<device>
<parent>scsi_host3</parent>
<capability type='scsi_host'>
<capability type='fc_host'>
</capability>
</capability>
</device>
<device>
<name>vhba</name>
<parent wwnn='20000000c9848140' wwpn='10000000c9848140'/>
<capability type='scsi_host'>
<capability type='fc_host'>
</capability>
</capability>
</device>
NOTE
The WWNN and WWPN values must match those in the HBA details seen in the
previous step.
The <parent> field specifies the HBA device to associate with this vHBA device. The details in
the <device> tag are used in the next step to create a new vHBA device for the host. For more
information on the nodedev XML format, see the libvirt upstream pages .
NOTE
The virsh command does not provide a way to define the parent_wwnn,
parent_wwpn, or parent_fabric_wwn attributes.
4. Create a vHBA based on the XML file created in the previous step using the virsh
nodedev-create command.
Verification
Verify the new vHBA’s details (scsi_host5) using the virsh nodedev-dumpxml command:
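# virsh nodedev-dumpxml scsi_host5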
Additional resources
Creating SCSI-based storage pools with vHBA devices using the CLI
CHAPTER 15. MANAGING GPU DEVICES IN VIRTUAL MACHINES
You can detach the GPU from the host and pass full control of the GPU directly to the VM.
You can create multiple mediated devices from a physical GPU, and assign these devices as
virtual GPUs (vGPUs) to multiple guests. This is currently only supported on selected NVIDIA
GPUs, and only one mediated device can be assigned to a single guest.
NOTE
If you are looking for information about assigning a virtual GPU, see Managing NVIDIA
vGPU devices.
Prerequisites
NOTE
Procedure
b. Prevent the host’s graphics driver from using the GPU. To do so, use the GPU’s PCI ID with
the pci-stub driver.
For example, the following command prevents the host graphics driver from binding to the GPU
with the PCI ID 10de:11fa:
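A sketch using grubby, with the PCI ID from the example:
# grubby --args="pci-stub.ids=10de:11fa" --update-kernel DEFAULT
# reboot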
2. Optional: If certain GPU functions, such as audio, cannot be passed through to the VM due to
support limitations, you can modify the driver bindings of the endpoints within an IOMMU group
to pass through only the necessary GPU functions.
a. Convert the GPU settings to XML and note the PCI address of the endpoints that you want
to prevent from attaching to the host drivers.
To do so, convert the GPU’s PCI bus address to a libvirt-compatible format by adding the
pci_ prefix to the address, and converting the delimiters to underscores.
For example, the following command displays the XML configuration of the GPU attached
at the 0000:02:00.0 bus address.
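# virsh nodedev-dumpxml pci_0000_02_00_0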
<device>
<name>pci_0000_02_00_0</name>
<path>/sys/devices/pci0000:00/0000:00:03.0/0000:02:00.0</path>
<parent>pci_0000_00_03_0</parent>
<driver>
<name>pci-stub</name>
</driver>
<capability type='pci'>
<domain>0</domain>
<bus>2</bus>
<slot>0</slot>
<function>0</function>
<product id='0x11fa'>GK106GL [Quadro K4000]</product>
<vendor id='0x10de'>NVIDIA Corporation</vendor>
<iommuGroup number='13'>
<address domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
<address domain='0x0000' bus='0x02' slot='0x00' function='0x1'/>
</iommuGroup>
<pci-express>
<link validity='cap' port='0' speed='8' width='16'/>
<link validity='sta' speed='2.5' width='16'/>
</pci-express>
</capability>
</device>
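b. Override the driver binding of the endpoint you want to keep away from the host drivers; a sketch using driverctl, assuming the function at 0000:02:00.1 from the IOMMU group listed above:
# driverctl set-override 0000:02:00.1 vfio-pci
3. Attach the GPU to the VM: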
a. Create an XML configuration file for the GPU by using the PCI bus address.
For example, you can create the following XML file, GPU-Assign.xml, by using parameters
from the GPU’s bus address.
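A sketch of the file contents, with the address values from the example above:
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
  </source>
</hostdev>
b. Attach the file to the VM; for example, for a hypothetical VM named System1:
# virsh attach-device System1 --file GPU-Assign.xml --persistent
Device attached successfully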
NOTE
Verification
The device appears under the <devices> section in VM’s XML configuration. For more
information, see Sample virtual machine XML configuration .
Known Issues
The number of GPUs that can be attached to a VM is limited by the maximum number of
assigned PCI devices, which in RHEL 9 is currently 64. However, attaching multiple GPUs to a
VM is likely to cause problems with memory-mapped I/O (MMIO) on the guest, which may result
in the GPUs not being available to the VM.
To work around these problems, set a larger 64-bit MMIO space and configure the vCPU
physical address bits to make the extended 64-bit MMIO space addressable.
Attaching an NVIDIA GPU device to a VM that uses a RHEL 9 guest operating system currently
disables the Wayland session on that VM, and loads an Xorg session instead. This is because of
incompatibilities between NVIDIA drivers and Wayland.
IMPORTANT
Assigning a physical GPU to VMs, with or without using mediated devices, makes it
impossible for the host to use the GPU.
Prerequisites
Your GPU supports vGPU mediated devices. For an up-to-date list of NVIDIA GPUs that
support creating vGPUs, see the NVIDIA vGPU software documentation.
If you do not know which GPU your host is using, install the lshw package and use the
lshw -C display command. The following example shows the system is using an NVIDIA Tesla P4
GPU, compatible with vGPU.
# lshw -C display
*-display
description: 3D controller
product: GP104GL [Tesla P4]
vendor: NVIDIA Corporation
physical id: 0
bus info: pci@0000:01:00.0
version: a1
width: 64 bits
clock: 33MHz
capabilities: pm msi pciexpress cap_list
configuration: driver=vfio-pci latency=0
resources: irq:16 memory:f6000000-f6ffffff memory:e0000000-efffffff
memory:f0000000-f1ffffff
Procedure
1. Download the NVIDIA vGPU drivers and install them on your system. For instructions, see the
NVIDIA documentation.
2. Block the nouveau kernel driver by adding the following lines to a modprobe configuration file on the host, for example /etc/modprobe.d/blacklist-nouveau.conf (the file name is an assumption; any .conf file in /etc/modprobe.d/ works):
blacklist nouveau
options nouveau modeset=0
3. Regenerate the initial ramdisk for the current kernel, then reboot.
# dracut --force
# reboot
4. Check that the kernel has loaded the nvidia_vgpu_vfio module and that the nvidia-vgpu-
mgr.service service is running.
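For example:
# lsmod | grep nvidia_vgpu_vfio
# systemctl status nvidia-vgpu-mgr.service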
In addition, if creating vGPU based on an NVIDIA Ampere GPU device, ensure that virtual
functions are enabled for the physical GPU. For instructions, see the NVIDIA documentation.
5. Generate a UUID for the device:
# uuidgen
30820a6f-b1a5-4503-91ca-0c10ba58692a
6. Prepare an XML file with a configuration of the mediated device, based on the detected GPU
hardware. For example, the following configures a mediated device of the nvidia-63 vGPU type
on an NVIDIA Tesla P4 card that runs on the 0000:01:00.0 PCI bus and uses the UUID
generated in the previous step.
<device>
<parent>pci_0000_01_00_0</parent>
<capability type="mdev">
<type id="nvidia-63"/>
<uuid>30820a6f-b1a5-4503-91ca-0c10ba58692a</uuid>
</capability>
</device>
7. Define a vGPU mediated device based on the XML file you prepared. For example:
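A sketch, assuming the XML file is named vgpu-test.xml:
# virsh nodedev-define vgpu-test.xml
Node device mdev_30820a6f_b1a5_4503_91ca_0c10ba58692a_0000_01_00_0 created from vgpu-test.xml
Then start the mediated device before attaching it:
# virsh nodedev-start mdev_30820a6f_b1a5_4503_91ca_0c10ba58692a_0000_01_00_0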
11. Set the vGPU device to start automatically after the host reboots:
# virsh nodedev-autostart
mdev_30820a6f_b1a5_4503_91ca_0c10ba58692a_0000_01_00_0
Device mdev_30820a6f_b1a5_4503_91ca_0c10ba58692a_0000_01_00_0 marked as
autostarted
12. Attach the mediated device to a VM with which you want to share the vGPU resources. To do so, add
the following lines, along with the previously generated UUID, to the <devices/> sections in the
XML configuration of the VM.
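A sketch of the section (the model and display attributes are assumptions based on common vGPU usage):
<hostdev mode='subsystem' type='mdev' managed='no' model='vfio-pci' display='on'>
  <source>
    <address uuid='30820a6f-b1a5-4503-91ca-0c10ba58692a'/>
  </source>
</hostdev>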
Note that each UUID can only be assigned to one VM at a time. In addition, if the VM does not
have QEMU video devices, such as virtio-vga, add also the ramfb='on' parameter on the
<hostdev> line.
13. For full functionality of the vGPU mediated devices to be available on the assigned VMs, set up
NVIDIA vGPU guest software licensing on the VMs. For further information and instructions, see
the NVIDIA Virtual GPU Software License Server User Guide .
Verification
1. Query the capabilities of the vGPU you created, and ensure it is listed as active and persistent.
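A sketch of the query, using the device name from the earlier steps:
# virsh nodedev-info mdev_30820a6f_b1a5_4503_91ca_0c10ba58692a_0000_01_00_0
Name:           mdev_30820a6f_b1a5_4503_91ca_0c10ba58692a_0000_01_00_0
Active:         yes
Persistent:     yes
Autostart:      yes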
2. Start the VM and verify that the guest operating system detects the mediated device as an
NVIDIA GPU. For example, if the VM uses Linux:
# lspci -d 10de: -k
07:00.0 VGA compatible controller: NVIDIA Corporation GV100GL [Tesla V100 SXM2 32GB]
(rev a1)
Known Issues
Assigning an NVIDIA vGPU mediated device to a VM that uses a RHEL 9 guest operating
system currently disables the Wayland session on that VM, and loads an Xorg session instead.
This is because of incompatibilities between NVIDIA drivers and Wayland.
Additional resources
Prerequisites
The VM from which you want to remove the device is shut down.
Procedure
Stop the mediated device:
# virsh nodedev-destroy
mdev_30820a6f_b1a5_4503_91ca_0c10ba58692a_0000_01_00_0
Destroyed node device 'mdev_30820a6f_b1a5_4503_91ca_0c10ba58692a_0000_01_00_0'
4. Remove the device from the XML configuration of the VM. To do so, use the virsh edit utility to
edit the XML configuration of the VM, and remove the mdev’s configuration segment. The
segment will look similar to the following:
<hostdev mode='subsystem' type='mdev' managed='no' model='vfio-pci' display='on'>
<source>
<address uuid='30820a6f-b1a5-4503-91ca-0c10ba58692a'/>
</source>
</hostdev>
Note that stopping and detaching the mediated device does not delete it, but rather keeps it as
defined. As such, you can restart and attach the device to a different VM.
Optional: To permanently delete the mediated device, undefine it:
# virsh nodedev-undefine
mdev_30820a6f_b1a5_4503_91ca_0c10ba58692a_0000_01_00_0
Undefined node device 'mdev_30820a6f_b1a5_4503_91ca_0c10ba58692a_0000_01_00_0'
Verification
If you only stopped and detached the device, ensure the mediated device is listed as inactive.
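# virsh nodedev-list --cap mdev --inactive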
If you also deleted the device, ensure the following command does not display it.
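# virsh nodedev-list --cap mdev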
Additional resources
Procedure
To see the available GPU devices on your host that can support vGPU mediated devices, use
the virsh nodedev-list --cap mdev_types command. For example, the following shows a
system with two NVIDIA Quadro RTX6000 devices.
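A sketch of the output; the first address is illustrative, the second matches the dumpxml output below:
# virsh nodedev-list --cap mdev_types
pci_0000_5b_00_0
pci_0000_9b_00_0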
To display vGPU types supported by a specific GPU device, as well as additional metadata, use
the virsh nodedev-dumpxml command.
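# virsh nodedev-dumpxml pci_0000_9b_00_0
<device>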
<name>pci_0000_9b_00_0</name>
<path>/sys/devices/pci0000:9a/0000:9a:00.0/0000:9b:00.0</path>
<parent>pci_0000_9a_00_0</parent>
<driver>
<name>nvidia</name>
</driver>
<capability type='pci'>
<class>0x030000</class>
<domain>0</domain>
<bus>155</bus>
<slot>0</slot>
<function>0</function>
<product id='0x1e30'>TU102GL [Quadro RTX 6000/8000]</product>
<vendor id='0x10de'>NVIDIA Corporation</vendor>
<capability type='mdev_types'>
<type id='nvidia-346'>
<name>GRID RTX6000-12C</name>
<deviceAPI>vfio-pci</deviceAPI>
<availableInstances>2</availableInstances>
</type>
<type id='nvidia-439'>
<name>GRID RTX6000-3A</name>
<deviceAPI>vfio-pci</deviceAPI>
<availableInstances>8</availableInstances>
</type>
[...]
<type id='nvidia-440'>
<name>GRID RTX6000-4A</name>
<deviceAPI>vfio-pci</deviceAPI>
<availableInstances>6</availableInstances>
</type>
<type id='nvidia-261'>
<name>GRID RTX6000-8Q</name>
<deviceAPI>vfio-pci</deviceAPI>
<availableInstances>3</availableInstances>
</type>
</capability>
<iommuGroup number='216'>
<address domain='0x0000' bus='0x9b' slot='0x00' function='0x3'/>
<address domain='0x0000' bus='0x9b' slot='0x00' function='0x1'/>
<address domain='0x0000' bus='0x9b' slot='0x00' function='0x2'/>
<address domain='0x0000' bus='0x9b' slot='0x00' function='0x0'/>
</iommuGroup>
<numa node='2'/>
<pci-express>
<link validity='cap' port='0' speed='8' width='16'/>
<link validity='sta' speed='2.5' width='8'/>
</pci-express>
</capability>
</device>
Additional resources
NICE DCV
Mechdyne TGX
CHAPTER 16. CONFIGURING VIRTUAL MACHINE NETWORK CONNECTIONS
You can enable the VMs on your host to be discovered and connected to by locations outside
the host, as if the VMs were on the same network as the host.
You can partially or completely isolate a VM from inbound network traffic to increase its security
and minimize the risk of any problems with the VM impacting the host.
The following sections explain the various types of VM network configuration and provide instructions
for setting up selected VM network configurations.
The following figure shows a virtual network switch connecting two VMs to the network:
From the perspective of a guest operating system, a virtual network connection is the same as a physical
network connection. Host machines view virtual network switches as network interfaces. When the
virtnetworkd service is first installed and started, it creates virbr0, the default network interface for
VMs.
To view information about this interface, use the ip utility on the host.
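For example (the addresses shown are illustrative):
# ip addr show virbr0
3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:00:32:ff:a5 brd ff:ff:ff:ff:ff:ff
    inet 192.168.124.1/24 brd 192.168.124.255 scope global virbr0
       valid_lft forever preferred_lft forever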
By default, all VMs on a single host are connected to the same NAT-type virtual network, named
default, which uses the virbr0 interface. For details, see Virtual networking default configuration.
For basic outbound-only network access from VMs, no additional network setup is usually needed,
because the default network is installed along with the libvirt-daemon-config-network package, and is
automatically started when the virtnetworkd service is started.
If a different VM network functionality is needed, you can create additional virtual networks and network
interfaces and configure your VMs to use them. In addition to the default NAT, these networks and
interfaces can be configured to use one of the following modes:
Routed mode
Bridged mode
Isolated mode
Open mode
VMs on the network are visible to the host and other VMs on the host, but the network traffic is
affected by the firewalls in the guest operating system’s network stack and by the libvirt
network filtering rules attached to the guest interface.
VMs on the network can connect to locations outside the host but are not visible to them.
Outbound traffic is affected by the NAT rules, as well as the host system’s firewall.
Add network interfaces to virtual machines, and disconnect or delete the interfaces.
16.2.1. Viewing and editing virtual network interface information in the web console
Using the RHEL 9 web console, you can view and modify the virtual network interfaces on a selected
virtual machine (VM):
Prerequisites
Procedure
1. In the Virtual Machines interface, click the VM whose information you want to see.
A new page opens with an Overview section with basic information about the selected VM and a
Console section to access the VM’s graphical interface.
Type - The type of network interface for the VM. The types include virtual network, bridge
to LAN, and direct attachment.
Source - The source of the network interface. This is dependent on the network type.
3. To edit the virtual network interface settings, click Edit. The Virtual Network Interface Settings
dialog opens.
NOTE
Changes to the virtual network interface settings take effect only after restarting
the VM.
Additionally, the MAC address can only be modified when the VM is shut off.
Additional resources
16.2.2. Adding and connecting virtual network interfaces in the web console
Using the RHEL 9 web console, you can create a virtual network interface and connect a virtual machine
(VM) to it.
Prerequisites
Procedure
1. In the Virtual Machines interface, click the VM whose information you want to see.
A new page opens with an Overview section with basic information about the selected VM and a
Console section to access the VM’s graphical interface.
3. Click Plug in the row of the virtual network interface you want to connect.
The selected virtual network interface connects to the VM.
16.2.3. Disconnecting and removing virtual network interfaces in the web console
Using the RHEL 9 web console, you can disconnect the virtual network interfaces connected to a
selected virtual machine (VM).
Prerequisites
Procedure
1. In the Virtual Machines interface, click the VM whose information you want to see.
A new page opens with an Overview section with basic information about the selected VM and a
Console section to access the VM’s graphical interface.
3. Click Unplug in the row of the virtual network interface you want to disconnect.
The selected virtual network interface disconnects from the VM.
If you require a VM to appear on the same external network as the hypervisor, you must use bridged
mode instead. To do so, attach the VM to a bridge device connected to the hypervisor’s physical
network device. To use the command-line interface for this, follow the instructions below.
Prerequisites
The IP configuration of the hypervisor. This varies depending on the network connection of the
host. As an example, this procedure uses a scenario where the host is connected to the network
using an ethernet cable, and the host's physical NIC MAC address is assigned a static IP by a
DHCP server. Therefore, the ethernet interface's address is treated as the hypervisor IP.
To obtain the IP configuration of the ethernet interface, use the ip addr utility:
# ip addr
[...]
enp0s25: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP
group default qlen 1000
link/ether 54:ee:75:49:dc:46 brd ff:ff:ff:ff:ff:ff
inet 10.0.0.148/24 brd 10.0.0.255 scope global dynamic noprefixroute enp0s25
Procedure
1. Create and set up a bridge connection for the physical interface on the host. For instructions,
see Configuring a network bridge.
Note that in a scenario where static IP assignment is used, you must move the IPv4 setting of
the physical ethernet interface to the bridge interface.
2. Modify the VM’s network to use the created bridged interface. For example, the following sets
testguest to use bridge0.
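One way to do this is with the virt-xml utility:
# virt-xml testguest --edit --network bridge=bridge0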
4. In the guest operating system, adjust the IP and DHCP settings of the system’s network
interface as if the VM was another physical system in the same network as the hypervisor.
The specific steps for this will differ depending on the guest OS used by the VM. For example, if
the guest OS is RHEL 9, see Configuring an Ethernet connection .
Verification
1. Ensure the newly created bridge is running and contains both the host’s physical interface and
the interface of the VM.
a. In the guest operating system, obtain the network ID of the system. For example, if it is a
Linux guest:
# ip addr
[...]
enp0s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state
UP group default qlen 1000
link/ether 52:54:00:09:15:46 brd ff:ff:ff:ff:ff:ff
inet 10.0.0.150/24 brd 10.0.0.255 scope global dynamic noprefixroute enp0s0
b. From an external system connected to the local network, connect to the VM using the
obtained ID.
# ssh [email protected]
[email protected]'s password:
Last login: Mon Sep 24 12:05:36 2019
root~#
Troubleshooting
In certain situations, such as when using a client-to-site VPN while the VM is hosted on the
client, using bridged mode for making your VMs available to external locations is not possible.
To work around this problem, you can set destination NAT using nftables for the VM.
Additional resources
16.3.2. Configuring externally visible virtual machines using the web console
By default, a newly created VM connects to a NAT-type network that uses virbr0, the default virtual
bridge on the host. This ensures that the VM can use the host’s network interface controller (NIC) for
connecting to outside networks, but the VM is not reachable from external systems.
If you require a VM to appear on the same external network as the hypervisor, you must use bridged
mode instead. To do so, attach the VM to a bridge device connected to the hypervisor’s physical
network device. To use the RHEL 9 web console for this, follow the instructions below.
Prerequisites
The IP configuration of the hypervisor. This varies depending on the network connection of the
host. As an example, this procedure uses a scenario where the host is connected to the network
using an ethernet cable, and the host's physical NIC MAC address is assigned a static IP by a
DHCP server. Therefore, the ethernet interface's address is treated as the hypervisor IP.
To obtain the IP configuration of the ethernet interface, go to the Networking tab in the web
console, and see the Interfaces section.
Procedure
1. Create and set up a bridge connection for the physical interface on the host. For
instructions, see Configuring network bridges in the web console .
Note that in a scenario where static IP assignment is used, you must move the IPv4 setting
of the physical ethernet interface to the bridge interface.
2. Modify the VM’s network to use the bridged interface. In the Network Interfaces tab of the
VM:
c. Click Add
d. Optional: Click Unplug for all the other interfaces connected to the VM.
4. In the guest operating system, adjust the IP and DHCP settings of the system’s network
interface as if the VM was another physical system in the same network as the hypervisor.
The specific steps for this will differ depending on the guest OS used by the VM. For
example, if the guest OS is RHEL 9, see Configuring an Ethernet connection .
Verification
1. In the Networking tab of the host’s web console, click the row with the newly created bridge to
ensure it is running and contains both the host’s physical interface and the interface of the VM.
a. In the guest operating system, obtain the network ID of the system. For example, if it is a
Linux guest:
# ip addr
[...]
enp0s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state
UP group default qlen 1000
link/ether 52:54:00:09:15:46 brd ff:ff:ff:ff:ff:ff
inet 10.0.0.150/24 brd 10.0.0.255 scope global dynamic noprefixroute enp0s0
b. From an external system connected to the local network, connect to the VM using the
obtained ID.
# ssh [email protected]
[email protected]'s password:
Last login: Mon Sep 24 12:05:36 2019
root~#
Troubleshooting
In certain situations, such as when using a client-to-site VPN while the VM is hosted on the
client, using bridged mode for making your VMs available to external locations is not possible.
Additional resources
WARNING
Virtual network switches use NAT configured by firewall rules. Editing these rules
while the switch is running is not recommended, because incorrect rules may result
in the switch being unable to communicate.
A common topology that uses routed mode is virtual server hosting (VSH). A VSH provider may have
several host machines, each with two physical network connections. One interface is used for
management and accounting, the other for the VMs to connect through. Each VM has its own public IP
address, but the host machines use private IP addresses so that only internal administrators can manage
the VMs.
In bridged mode, VMs appear within the same subnet as the host machine. All other physical
machines on the same physical network can detect the VMs and access them.
Note that only the following bonding modes can be used for a VM network bridge:
mode 1
mode 2
mode 4
In contrast, using modes 0, 3, 5, or 6 is likely to cause the connection to fail. Also note that media-
independent interface (MII) monitoring should be used to monitor bonding modes, as Address
Resolution Protocol (ARP) monitoring does not work correctly.
For more information on bonding modes, refer to the Red Hat Knowledgebase.
Common scenarios
The most common use cases for bridged mode include:
Deploying VMs in an existing network alongside host machines, making the difference between
virtual and physical machines invisible to the end user.
Deploying VMs without making any changes to existing physical network configuration settings.
Deploying VMs that must be easily accessible to an existing physical network. Placing VMs on a
physical network where they must access DHCP services.
Connecting VMs to an existing network where virtual LANs (VLANs) are used.
A demilitarized zone (DMZ) network. For a DMZ deployment with VMs, Red Hat recommends
setting up the DMZ at the physical network router and switches, and connecting the VMs to the
physical network using bridged mode.
Additional resources
WARNING
These procedures are provided only as an example. Ensure that you have sufficient
backups before proceeding.
Prerequisites
dnsmasq
Cobbler server
Procedure
7. Edit the <ip> element to include the appropriate address, network mask, DHCP address range,
and boot file, where BOOT_FILENAME is the name of the boot image file.
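The edited element can look similar to the following (the addresses and range shown are illustrative):
<ip address='192.168.122.1' netmask='255.255.255.0'>
  <dhcp>
    <range start='192.168.122.2' end='192.168.122.254'/>
    <bootp file='BOOT_FILENAME'/>
  </dhcp>
</ip>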
Verification
# virsh net-list
Name State Autostart Persistent
---------------------------------------------------
default active no no
Additional resources
Prerequisites
A PXE boot server is set up on the virtual network as described in Setting up a PXE boot server
on a virtual network.
Procedure
Create a new VM with PXE booting enabled. For example, to install from a PXE server available
on the default virtual network, into a new 10 GB qcow2 image file:
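A sketch of such a command, using virt-install:
# virt-install --pxe --network network=default --memory 2048 --vcpus 2 --disk size=10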
Alternatively, you can manually edit the XML configuration file of an existing VM:
i. Ensure the boot order in the <os> element lists the network device first:
<os>
<type arch='x86_64' machine='pc-i440fx-rhel7.0.0'>hvm</type>
<boot dev='network'/>
<boot dev='hd'/>
</os>
ii. Ensure the guest network is configured to use your virtual network:
<interface type='network'>
<mac address='52:54:00:66:79:14'/>
<source network='default'/>
<target dev='vnet0'/>
<alias name='net0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
Verification
Start the VM using the virsh start command. If PXE is configured correctly, the VM boots from
a boot image available on the PXE server.
Prerequisites
Procedure
Create a new VM with PXE booting enabled. For example, to install from a PXE server available
on the breth0 bridged network, into a new 10 GB qcow2 image file:
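A sketch of such a command, using virt-install:
# virt-install --pxe --network bridge=breth0 --memory 2048 --vcpus 2 --disk size=10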
Alternatively, you can manually edit the XML configuration file of an existing VM:
<os>
<type arch='x86_64' machine='pc-i440fx-rhel7.0.0'>hvm</type>
<boot dev='network'/>
<boot dev='hd'/>
</os>
<interface type='bridge'>
<mac address='52:54:00:5a:ad:cb'/>
<source bridge='breth0'/>
<target dev='vnet0'/>
<alias name='net0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
Verification
Start the VM using the virsh start command. If PXE is configured correctly, the VM boots from
a boot image available on the PXE server.
Additional resources
CHAPTER 17. OPTIMIZING VIRTUAL MACHINE PERFORMANCE
Virtual CPUs (vCPUs) are implemented as threads on the host, handled by the Linux scheduler.
VMs do not automatically inherit optimization features, such as NUMA or huge pages, from the
host kernel.
Disk and network I/O settings of the host might have a significant performance impact on the
VM.
Depending on the host devices and their models, there might be significant overhead due to
emulation of particular hardware.
The severity of the virtualization impact on the VM performance is influenced by a variety of factors.
To reduce this impact, you can, for example, use the following features:
The TuneD service can automatically optimize the resource distribution and performance of
your VMs.
Block I/O tuning can improve the performance of the VM's block devices, such as disks.
IMPORTANT
Tuning VM performance can have adverse effects on other virtualization functions. For
example, it can make migrating the modified VM more difficult.
For RHEL 9 virtual machines, use the virtual-guest profile. It is based on the generally
applicable throughput-performance profile, but also decreases the swappiness of virtual
memory.
For RHEL 9 virtualization hosts, use the virtual-host profile. This enables more aggressive
writeback of dirty memory pages, which benefits the host performance.
Prerequisites
Procedure
To enable a specific TuneD profile:
# tuned-adm list
Available profiles:
- balanced - General non-specialized TuneD profile
- desktop - Optimize for the desktop use-case
[...]
- virtual-guest - Optimize for running inside a virtual guest
- virtual-host - Optimize for running KVM guests
Current active profile: balanced
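Then activate the profile of your choice, for example:
# tuned-adm profile virtual-host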
Additional resources
Monolithic libvirt
The traditional libvirt daemon, libvirtd, controls a wide variety of virtualization drivers, using a single
configuration file - /etc/libvirt/libvirtd.conf.
As such, libvirtd allows for centralized hypervisor configuration, but may use system resources
inefficiently. Therefore, libvirtd will become unsupported in a future major release of RHEL.
However, if you updated to RHEL 9 from RHEL 8, your host still uses libvirtd by default.
Modular libvirt
Newly introduced in RHEL 9, modular libvirt provides a specific daemon for each virtualization driver.
These include the following:
virtqemud - A primary hypervisor daemon for QEMU management
virtinterfaced - Host NIC management
virtnetworkd - Virtual network management
virtnodedevd - Host physical device management
virtnwfilterd - Host firewall management
virtsecretd - Host secret management
virtstoraged - Storage management
Each of the daemons has a separate configuration file - for example /etc/libvirt/virtqemud.conf. As
such, modular libvirt daemons provide better options for fine-tuning libvirt resource management.
Next steps
If your RHEL 9 host uses libvirtd, Red Hat recommends switching to modular daemons. For
instructions, see Enabling modular libvirt daemons.
If you performed a fresh install of a RHEL 9 host, your hypervisor uses modular libvirt daemons by
default. However, if you upgraded your host from RHEL 8 to RHEL 9, your hypervisor uses the
monolithic libvirtd daemon, which is the default in RHEL 8.
If that is the case, Red Hat recommends enabling the modular libvirt daemons instead, because they
provide better options for fine-tuning libvirt resource management. In addition, libvirtd will become
unsupported in a future major release of RHEL.
Prerequisites
Your hypervisor is using the monolithic libvirtd service. To learn whether this is the case:
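For example:
# systemctl is-active libvirtd.service
active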
Procedure
# for drv in qemu interface network nodedev nwfilter secret storage; do systemctl
unmask virt${drv}d.service; systemctl unmask virt${drv}d{,-ro,-admin}.socket;
systemctl enable virt${drv}d.service; systemctl enable virt${drv}d{,-ro,-admin}.socket;
done
# for drv in qemu network nodedev nwfilter secret storage; do systemctl start
virt${drv}d{,-ro,-admin}.socket; done
5. Optional: If you require connecting to your host from remote hosts, enable and start the
virtualization proxy daemon.
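A sketch of these commands, following the pattern of the loops above:
# systemctl unmask virtproxyd.service virtproxyd{,-ro,-admin}.socket
# systemctl enable virtproxyd.service virtproxyd{,-ro,-admin}.socket
# systemctl start virtproxyd{,-ro,-admin}.socket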
Verification
# virsh uri
qemu:///system
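In addition, check that a modular daemon socket is active:
# systemctl is-active virtqemud.socket
active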
If this command displays active, you have successfully enabled the modular libvirt daemons.
To perform these actions, you can use the web console or the command-line interface.
17.4.1. Adding and removing virtual machine memory using the web console
To improve the performance of a virtual machine (VM) or to free up the host resources it is using, you
can use the web console to adjust the amount of memory allocated to the VM.
Prerequisites
The guest OS is running the memory balloon drivers. To verify this is the case:
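For example, by checking the configuration of a VM named testguest for a memballoon device:
# virsh dumpxml testguest | grep memballoon
<memballoon model='virtio'>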
If this command displays any output and the model is not set to none, the memballoon
device is present.
In Windows guests, the drivers are installed as a part of the virtio-win driver package.
For instructions, see Installing KVM paravirtualized drivers for Windows virtual
machines.
In Linux guests, the drivers are generally included by default and activate when the
memballoon device is present.
Procedure
1. Optional: Obtain the information about the maximum memory and currently used memory for a
VM. This will serve as a baseline for your changes, and also for verification.
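For example, using the virsh dominfo utility (the values shown are illustrative):
# virsh dominfo testguest | grep memory
Max memory:     2097152 KiB
Used memory:    2097152 KiB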
2. In the Virtual Machines interface, click the VM whose information you want to see.
A new page opens with an Overview section with basic information about the selected VM and a
Console section to access the VM’s graphical interface.
Maximum allocation - Sets the maximum amount of host memory that the VM can use for
its processes. You can specify the maximum memory when creating the VM or increase it
later. You can specify memory as multiples of MiB or GiB.
Adjusting maximum memory allocation is only possible on a shut-off VM.
Current allocation - Sets the actual amount of memory allocated to the VM. This value can
be less than the Maximum allocation but cannot exceed it. You can adjust the value to
regulate the memory available to the VM for its processes. You can specify memory as
multiples of MiB or GiB.
If you do not specify this value, the default allocation is the Maximum allocation value.
5. Click Save.
The memory allocation of the VM is adjusted.
Additional resources
Adding and removing virtual machine memory using the command-line interface
17.4.2. Adding and removing virtual machine memory using the command-line
interface
To improve the performance of a virtual machine (VM) or to free up the host resources it is using, you
can use the CLI to adjust the amount of memory allocated to the VM.
Prerequisites
The guest OS is running the memory balloon drivers. To verify this is the case:
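For example, by checking the configuration of a VM named testguest for a memballoon device:
# virsh dumpxml testguest | grep memballoon
<memballoon model='virtio'>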
If this command displays any output and the model is not set to none, the memballoon
device is present.
In Windows guests, the drivers are installed as a part of the virtio-win driver package.
For instructions, see Installing KVM paravirtualized drivers for Windows virtual
machines.
In Linux guests, the drivers are generally included by default and activate when the
memballoon device is present.
Procedure
1. Optional: Obtain the information about the maximum memory and currently used memory for a
VM. This will serve as a baseline for your changes, and also for verification.
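For example, using the virsh dominfo utility (the values shown are illustrative):
# virsh dominfo testguest | grep memory
Max memory:     2097152 KiB
Used memory:    2097152 KiB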
2. Adjust the maximum memory allocated to a VM. Increasing this value improves the performance
potential of the VM, and reducing the value lowers the performance footprint the VM has on
your host. Note that this change can only be performed on a shut-off VM, so adjusting a running
VM requires a reboot to take effect.
For example, to change the maximum memory that the testguest VM can use to 4096 MiB:
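One way to do this is with the virsh setmaxmem utility:
# virsh setmaxmem testguest 4096M --config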
To increase the maximum memory of a running VM, you can attach a memory device to the VM.
This is also referred to as memory hot plug. For details, see Attaching memory devices to virtual
machines.
3. Optional: You can also adjust the memory currently used by the VM, up to the maximum
allocation. This regulates the memory load that the VM has on the host until the next reboot,
without changing the maximum VM allocation.
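For example, to set the testguest VM to use 2048 MiB until its next reboot, one option is the virsh
setmem utility:
# virsh setmem testguest 2048M --current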
Verification
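1. Confirm that the memory available to the VM has been updated, for example:
# virsh dominfo testguest | grep memory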
2. Optional: If you adjusted the current VM memory, you can obtain the memory balloon statistics
of the VM to evaluate how effectively it regulates its memory use.
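One way to do this is the virsh domstats utility:
# virsh domstats --balloon testguest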
Additional resources
Adding and removing virtual machine memory using the web console
Increasing the I/O weight of a device increases its priority for I/O bandwidth, and therefore provides it
with more host resources. Similarly, reducing a device’s weight makes it consume less host resources.
NOTE
Each device’s weight value must be within the 100 to 1000 range. Alternatively, the value
can be 0, which removes that device from per-device listings.
Procedure
To display and set a VM’s block I/O parameters:
<domain>
[...]
<blkiotune>
<weight>800</weight>
<device>
<path>/dev/sda</path>
<weight>1000</weight>
</device>
<device>
<path>/dev/sdb</path>
<weight>500</weight>
</device>
</blkiotune>
[...]
</domain>
For example, the following changes the weight of the /dev/sda device in the liftrul VM to 500.
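A command similar to the following can apply this change:
# virsh blkiotune liftrul --device-weights /dev/sda,500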
To enable disk I/O throttling, set a limit on disk I/O requests sent from each block device attached to
VMs to the host machine.
Procedure
1. Use the virsh domblklist command to list the names of all the disk devices on a specified VM.
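# virsh domblklist rollin-coal
 Target   Source
 ------------------------------------------------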
vda /var/lib/libvirt/images/rollin-coal.qcow2
sda -
sdb /home/horridly-demanding-processes.iso
2. Find the host block device where the virtual disk that you want to throttle is mounted.
For example, if you want to throttle the sdb virtual disk from the previous step, the following
output shows that the disk is mounted on the /dev/nvme0n1p3 partition.
$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
zram0 252:0 0 4G 0 disk [SWAP]
nvme0n1 259:0 0 238.5G 0 disk
├─nvme0n1p1 259:1 0 600M 0 part /boot/efi
├─nvme0n1p2 259:2 0 1G 0 part /boot
└─nvme0n1p3 259:3 0 236.9G 0 part
└─luks-a1123911-6f37-463c-b4eb-fxzy1ac12fea 253:0 0 236.9G 0 crypt /home
3. Set I/O limits for the block device using the virsh blkiotune command.
The following example throttles the sdb disk on the rollin-coal VM to 1000 read and write I/O
operations per second and to 50 MB per second read and write throughput.
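A sketch of the command, applying the limits to the host device found in the previous step
(52428800 bytes = 50 MB):
# virsh blkiotune rollin-coal --device-read-iops-sec /dev/nvme0n1p3,1000 --device-write-iops-sec /dev/nvme0n1p3,1000 --device-read-bytes-sec /dev/nvme0n1p3,52428800 --device-write-bytes-sec /dev/nvme0n1p3,52428800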
Additional information
Disk I/O throttling can be useful in various situations, for example when VMs belonging to
different customers are running on the same host, or when quality of service guarantees are
given for different VMs. Disk I/O throttling can also be used to simulate slower disks.
I/O throttling can be applied independently to each block device attached to a VM and
supports limits on throughput and I/O operations.
Red Hat does not support using the virsh blkdeviotune command to configure I/O throttling in
VMs. For more information on unsupported features when using RHEL 9 as a VM host, see
Unsupported features in RHEL 9 virtualization.
Procedure
To enable multi-queue virtio-scsi support for a specific VM, add the following to the VM’s XML
configuration, where N is the total number of vCPU queues:
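Based on the standard libvirt syntax for virtio-scsi controllers, the segment looks similar to this:
<controller type='scsi' model='virtio-scsi'>
  <driver queues='N'/>
</controller>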
1. Adjust how many host CPUs are assigned to the VM. You can do this using the CLI or the web
console.
2. Ensure that the vCPU model is aligned with the CPU model of the host. For example, to set the
testguest1 VM to use the CPU model of the host:
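One way to do this is with the virt-xml utility:
# virt-xml testguest1 --edit --cpu host-model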
4. If your host machine uses Non-Uniform Memory Access (NUMA), you can also configure NUMA
for its VMs. This maps the host’s CPU and memory processes onto the CPU and memory
processes of the VM as closely as possible. In effect, NUMA tuning provides the vCPU with a
more streamlined access to the system memory allocated to the VM, which can improve the
vCPU processing effectiveness.
For details, see Configuring NUMA in a virtual machine and Sample vCPU performance tuning
scenario.
17.6.1. Adding and removing virtual CPUs using the command-line interface
To increase or optimize the CPU performance of a virtual machine (VM), you can add or remove virtual
CPUs (vCPUs) assigned to the VM.
When performed on a running VM, this is also referred to as vCPU hot plugging and hot unplugging.
However, note that vCPU hot unplug is not supported in RHEL 9, and Red Hat highly discourages its use.
Prerequisites
Optional: View the current state of the vCPUs in the targeted VM. For example, to display the
number of vCPUs on the testguest VM:
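For example:
# virsh vcpucount testguest
maximum      config         4
maximum      live           2
current      config         2
current      live           1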
This output indicates that testguest is currently using 1 vCPU, and 1 more vCPU can be hot
plugged to it to increase the VM's performance. However, after reboot, the number of vCPUs
testguest uses will change to 2, and it will be possible to hot plug 2 more vCPUs.
Procedure
1. Adjust the maximum number of vCPUs that can be attached to a VM, which takes effect on the
VM’s next boot.
For example, to increase the maximum vCPU count for the testguest VM to 8:
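One way to do this is with the virsh setvcpus utility:
# virsh setvcpus testguest 8 --maximum --config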
Note that the maximum may be limited by the CPU topology, host hardware, the hypervisor,
and other factors.
2. Adjust the current number of vCPUs attached to a VM, up to the maximum configured in the
previous step. For example:
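For instance, to hot plug vCPUs so that the running testguest VM uses 4 of them:
# virsh setvcpus testguest 4 --live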
This increases the VM’s performance and host load footprint of testguest until the VM’s
next boot.
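In contrast, to permanently set the testguest VM to use 1 vCPU from its next boot:
# virsh setvcpus testguest 1 --config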
This decreases the VM’s performance and host load footprint of testguest after the VM’s
next boot. However, if needed, additional vCPUs can be hot plugged to the VM to
temporarily increase its performance.
Verification
Confirm that the current state of vCPU for the VM reflects your changes.
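For example:
# virsh vcpucount testguest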
Additional resources
Prerequisites
Procedure
1. In the Virtual Machines interface, click the VM whose information you want to see.
A new page opens with an Overview section with basic information about the selected VM and a
Console section to access the VM’s graphical interface.
vCPU Maximum - The maximum number of virtual CPUs that can be configured for the
VM. If this value is higher than the vCPU Count, additional vCPUs can be attached to the
VM.
Sockets - The number of sockets to expose to the VM.
Cores per socket - The number of cores for each socket to expose to the VM.
Threads per core - The number of threads for each core to expose to the VM.
Note that the Sockets, Cores per socket, and Threads per core options adjust the CPU
topology of the VM. This may be beneficial for vCPU performance and may impact the
functionality of certain software in the guest OS. If a different setting is not required by your
deployment, keep the default values.
2. Click Apply.
The virtual CPUs for the VM are configured.
NOTE
Changes to virtual CPU settings only take effect after the VM is restarted.
Additional resources
The following methods can be used to configure Non-Uniform Memory Access (NUMA) settings of a
virtual machine (VM) on a RHEL 9 host.
Prerequisites
The host is a NUMA-compatible machine. To detect whether this is the case, use the virsh
nodeinfo command and see the NUMA cell(s) line:
# virsh nodeinfo
CPU model: x86_64
CPU(s): 48
CPU frequency: 1200 MHz
CPU socket(s): 1
Core(s) per socket: 12
Thread(s) per core: 2
NUMA cell(s): 2
Memory size: 67012964 KiB
If the NUMA cell(s) value is 2 or greater, the host is NUMA-compatible.
Procedure
For ease of use, you can set up a VM’s NUMA configuration using automated utilities and services.
However, manual NUMA setup is more likely to yield a significant performance improvement.
Automatic methods
Set the VM’s NUMA policy to Preferred. For example, to do so for the testguest5 VM:
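A sketch of the corresponding commands, using the virt-xml utility (sub-option names are per
that utility):
# virt-xml testguest5 --edit --vcpus placement=auto
# virt-xml testguest5 --edit --numatune mode=preferred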
Use the numad command to automatically align the VM CPU with memory resources.
# numad
Manual methods
1. Pin specific vCPU threads to a specific host CPU or range of CPUs. This is also possible on non-
NUMA hosts and VMs, and is recommended as a safe method of vCPU performance
improvement.
For example, the following commands pin vCPU threads 0 to 5 of the testguest6 VM to host
CPUs 1, 3, 5, 7, 9, and 11, respectively:
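The commands would look similar to the following, using the virsh vcpupin utility:
# virsh vcpupin testguest6 0 1
# virsh vcpupin testguest6 1 3
# virsh vcpupin testguest6 2 5
# virsh vcpupin testguest6 3 7
# virsh vcpupin testguest6 4 9
# virsh vcpupin testguest6 5 11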
2. After pinning vCPU threads, you can also pin QEMU process threads associated with a specified
VM to a specific host CPU or range of CPUs. For example, the following commands pin the
QEMU process thread of testguest6 to CPUs 13 and 15, and verify this was successful:
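For example, using the virsh emulatorpin utility:
# virsh emulatorpin testguest6 13,15
# virsh emulatorpin testguest6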
3. Finally, you can also specify which host NUMA nodes will be assigned specifically to a certain
VM. This can improve the host memory usage by the VM’s vCPU. For example, the following
commands set testguest6 to use host NUMA nodes 3 to 5, and verify this was successful:
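For example, using the virsh numatune utility:
# virsh numatune testguest6 --nodeset 3-5
# virsh numatune testguest6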
NOTE
For best performance results, it is recommended to use all of the manual tuning methods
listed above.
Additional resources
View the current NUMA configuration of your system using the numastat utility
Starting scenario
You have a host with the following hardware (matching the virsh nodeinfo output below):
2 NUMA nodes
3 CPU cores on each NUMA node
2 threads on each core
The output of virsh nodeinfo of such a machine would look similar to:
# virsh nodeinfo
CPU model: x86_64
CPU(s): 12
CPU frequency: 3661 MHz
CPU socket(s): 2
Core(s) per socket: 3
Thread(s) per core: 2
NUMA cell(s): 2
Memory size: 31248692 KiB
You intend to modify an existing VM to have 8 vCPUs, which means that it will not fit in a single
NUMA node.
Therefore, you should distribute 4 vCPUs on each NUMA node and make the vCPU topology
resemble the host topology as closely as possible. This means that vCPUs that run as sibling
threads of a given physical CPU should be pinned to host threads on the same core. For details,
see the Solution below:
Solution
1. Obtain the information about the host machine's topology:
# virsh capabilities
The output should include a section that looks similar to the following:
<topology>
<cells num="2">
<cell id="0">
<memory unit="KiB">15624346</memory>
<pages unit="KiB" size="4">3906086</pages>
<pages unit="KiB" size="2048">0</pages>
<pages unit="KiB" size="1048576">0</pages>
<distances>
<sibling id="0" value="10" />
<sibling id="1" value="21" />
</distances>
<cpus num="6">
<cpu id="0" socket_id="0" core_id="0" siblings="0,3" />
<cpu id="1" socket_id="0" core_id="1" siblings="1,4" />
<cpu id="2" socket_id="0" core_id="2" siblings="2,5" />
<cpu id="3" socket_id="0" core_id="0" siblings="0,3" />
<cpu id="4" socket_id="0" core_id="1" siblings="1,4" />
<cpu id="5" socket_id="0" core_id="2" siblings="2,5" />
</cpus>
</cell>
<cell id="1">
<memory unit="KiB">15624346</memory>
<pages unit="KiB" size="4">3906086</pages>
<pages unit="KiB" size="2048">0</pages>
2. Optional: Test the performance of the VM using the applicable tools and utilities.
3. Set up the host to reserve 1 GiB huge pages at boot:
a. Add the following parameters to the host's kernel command line:
default_hugepagesz=1G hugepagesz=1G
b. Create a systemd unit file on the host, for example named hugetlb-gigantic-pages.service,
with the following content:
[Unit]
Description=HugeTLB Gigantic Pages Reservation
DefaultDependencies=no
Before=dev-hugepages.mount
ConditionPathExists=/sys/devices/system/node
ConditionKernelCommandLine=hugepagesz=1G
[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/etc/systemd/hugetlb-reserve-pages.sh
[Install]
WantedBy=sysinit.target
c. Create the /etc/systemd/hugetlb-reserve-pages.sh script on the host with the following
content:
#!/bin/sh
nodes_path=/sys/devices/system/node/
if [ ! -d $nodes_path ]; then
echo "ERROR: $nodes_path does not exist"
exit 1
fi
reserve_pages()
{
    echo $1 > $nodes_path/$2/hugepages/hugepages-1048576kB/nr_hugepages
}
reserve_pages 4 node1
reserve_pages 4 node2
This reserves four 1GiB huge pages from node1 and four 1GiB huge pages from node2.
# chmod +x /etc/systemd/hugetlb-reserve-pages.sh
4. Use the virsh edit command to edit the XML configuration of the VM you wish to optimize, in
this example super-VM:
a. Set the VM to use 8 static vCPUs. Use the <vcpu/> element to do this.
b. Pin each of the vCPU threads to the corresponding host CPU threads that it mirrors in the
topology. To do so, use the <vcpupin/> elements in the <cputune> section.
Note that, as shown by the virsh capabilities utility above, host CPU threads are not
ordered sequentially in their respective cores. In addition, the vCPU threads should be
pinned to the highest available set of host cores on the same NUMA node. For a table
illustration, see the Sample topology section below.
The XML configuration for steps a. and b. can look similar to:
<cputune>
<vcpupin vcpu='0' cpuset='1'/>
<vcpupin vcpu='1' cpuset='4'/>
<vcpupin vcpu='2' cpuset='2'/>
<vcpupin vcpu='3' cpuset='5'/>
<vcpupin vcpu='4' cpuset='7'/>
<vcpupin vcpu='5' cpuset='10'/>
<vcpupin vcpu='6' cpuset='8'/>
<vcpupin vcpu='7' cpuset='11'/>
<emulatorpin cpuset='6,9'/>
</cputune>
c. Set the VM to use 1 GiB huge pages:
<memoryBacking>
<hugepages>
<page size='1' unit='GiB'/>
</hugepages>
</memoryBacking>
d. Configure the VM’s NUMA nodes to use memory from the corresponding NUMA nodes on
the host. To do so, use the <memnode/> elements in the <numatune/> section:
<numatune>
<memory mode="preferred" nodeset="1"/>
<memnode cellid="0" mode="strict" nodeset="0"/>
<memnode cellid="1" mode="strict" nodeset="1"/>
</numatune>
e. Ensure the CPU mode is set to host-passthrough, and that the CPU uses cache in
passthrough mode:
<cpu mode="host-passthrough">
<topology sockets="2" cores="2" threads="2"/>
<cache mode="passthrough"/>
Verification
1. Confirm that the resulting XML configuration of the VM includes a section similar to the
following:
[...]
<memoryBacking>
<hugepages>
<page size='1' unit='GiB'/>
</hugepages>
</memoryBacking>
<vcpu placement='static'>8</vcpu>
<cputune>
<vcpupin vcpu='0' cpuset='1'/>
<vcpupin vcpu='1' cpuset='4'/>
<vcpupin vcpu='2' cpuset='2'/>
<vcpupin vcpu='3' cpuset='5'/>
<vcpupin vcpu='4' cpuset='7'/>
<vcpupin vcpu='5' cpuset='10'/>
<vcpupin vcpu='6' cpuset='8'/>
<vcpupin vcpu='7' cpuset='11'/>
<emulatorpin cpuset='6,9'/>
</cputune>
<numatune>
<memory mode="preferred" nodeset="1"/>
<memnode cellid="0" mode="strict" nodeset="0"/>
<memnode cellid="1" mode="strict" nodeset="1"/>
</numatune>
<cpu mode="host-passthrough">
<topology sockets="2" cores="2" threads="2"/>
<cache mode="passthrough"/>
<numa>
<cell id="0" cpus="0-3" memory="2" unit="GiB">
<distances>
<sibling id="0" value="10"/>
<sibling id="1" value="21"/>
</distances>
</cell>
<cell id="1" cpus="4-7" memory="2" unit="GiB">
<distances>
<sibling id="0" value="21"/>
<sibling id="1" value="10"/>
</distances>
</cell>
</numa>
</cpu>
</domain>
2. Optional: Test the performance of the VM using the applicable tools and utilities to evaluate
the impact of the VM’s optimization.
Sample topology
The following tables illustrate the connections between the vCPUs and the host CPUs they
should be pinned to:
Host topology:
CPU threads    0  3 | 1  4 | 2  5 | 6  9 | 7 10 | 8 11
Cores            0  |   1  |   2  |   3  |   4  |   5
Sockets                 0         |         1
NUMA nodes              0         |         1

VM topology:
vCPU threads   0  1 | 2  3 | 4  5 | 6  7
Cores            0  |   1  |   2  |   3
Sockets             0      |      1
NUMA nodes          0      |      1

Combined host and VM topology, per the <vcpupin> configuration above:
vCPU threads 0 and 1 run on host CPU threads 1 and 4 (core 1, socket 0, NUMA node 0)
vCPU threads 2 and 3 run on host CPU threads 2 and 5 (core 2, socket 0, NUMA node 0)
vCPU threads 4 and 5 run on host CPU threads 7 and 10 (core 4, socket 1, NUMA node 1)
vCPU threads 6 and 7 run on host CPU threads 8 and 11 (core 5, socket 1, NUMA node 1)
Host CPU threads 6 and 9 (core 3) are used by the emulator threads, per <emulatorpin cpuset='6,9'/>.
In this scenario, there are 2 NUMA nodes and 8 vCPUs. Therefore, 4 vCPU threads should be
pinned to each node.
In addition, Red Hat recommends leaving at least a single CPU thread available on each node
for host system operations.
Because in this example, each NUMA node houses 3 cores, each with 2 host CPU threads, the
vCPUs for node 0 are pinned to host CPU threads 1, 4, 2, and 5, which leaves host CPU threads
0 and 3 available for host system operations.
Depending on your requirements, you can either enable or disable KSM for a single session or
persistently.
Prerequisites
Procedure
Disable KSM:
To deactivate KSM for a single session, use the systemctl utility to stop ksm and
ksmtuned services.
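For example:
# systemctl stop ksm
# systemctl stop ksmtuned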
To deactivate KSM persistently, use the systemctl utility to disable ksm and ksmtuned
services.
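For example:
# systemctl disable ksm
# systemctl disable ksmtuned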
NOTE
Memory pages shared between VMs before deactivating KSM will remain shared. To stop
sharing, delete all the PageKSM pages in the system using the following command:
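# echo 2 > /sys/kernel/mm/ksm/run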
After anonymous pages replace the KSM pages, the khugepaged kernel service will
rebuild transparent hugepages on the VM’s physical memory.
Enable KSM:
WARNING
Enabling KSM increases CPU utilization and affects overall CPU performance.
To enable KSM for a single session, use the systemctl utility to start the ksm and
ksmtuned services.
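For example:
# systemctl start ksm
# systemctl start ksmtuned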
To enable KSM persistently, use the systemctl utility to enable the ksm and ksmtuned
services.
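For example:
# systemctl enable ksm
# systemctl enable ksmtuned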
Procedure
Use any of the following methods and observe if it has a beneficial effect on your VM network
performance:
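To check whether the vhost_net kernel module is loaded on your host:
# lsmod | grep vhost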
If the output of this command is blank, enable the vhost_net kernel module:
# modprobe vhost_net
To set up multi-queue virtio-net, add the following to the VM's XML configuration, where N is the
number of queues:
<interface type='network'>
<source network='default'/>
<model type='virtio'/>
<driver name='vhost' queues='N'/>
</interface>
SR-IOV
If your host NIC supports SR-IOV, use SR-IOV device assignment for your vNICs. For more
information, see Managing SR-IOV devices.
Additional resources
On your RHEL 9 host, as root, use the top utility or the system monitor application, and look
for qemu and virt in the output. This shows how much host system resources your VMs are
consuming.
If the monitoring tool displays that any of the qemu or virt processes consume a large
portion of the host CPU or memory capacity, use the perf utility to investigate. For details,
see below.
On the guest operating system, use performance utilities and applications available on the
system to evaluate which processes consume the most system resources.
perf kvm
You can use the perf utility to collect and analyze virtualization-specific statistics about the
performance of your RHEL 9 host. To do so:
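1. Install the perf package:
# dnf install perf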
2. Use one of the perf kvm stat commands to display perf statistics for your virtualization host:
For real-time monitoring of your hypervisor, use the perf kvm stat live command.
To log the perf data of your hypervisor over a period of time, activate the logging using the
perf kvm stat record command. After the command is canceled or interrupted, the data is
saved in the perf.data.guest file, which can be analyzed using the perf kvm stat report
command.
3. Analyze the perf output for types of VM-EXIT events and their distribution. For example, the
PAUSE_INSTRUCTION events should be infrequent, but in the following output, the high
occurrence of this event suggests that the host CPUs are not handling the running vCPUs well.
In such a scenario, consider shutting down some of your active VMs, removing vCPUs from
these VMs, or tuning the performance of the vCPUs.
VM-EXIT          Samples  Samples%   Time%  Min Time    Max Time     Avg time
[...]
HLT                20440     1.77%  69.83%    0.62us  79319.41us  14134.56us ( +- 0.79% )
VMCALL             12426     1.07%   0.03%    1.02us   5416.25us       8.77us ( +- 7.36% )
EXCEPTION_NMI         27     0.00%   0.00%    0.69us      1.34us       0.98us ( +- 3.50% )
EPT_MISCONFIG          5     0.00%   0.00%    5.15us     10.85us       7.88us ( +- 11.67% )
Other event types that can signal problems in the output of perf kvm stat include:
For more information on using perf to monitor virtualization performance, see the perf-kvm man page.
numastat
To see the current NUMA configuration of your system, you can use the numastat utility, which is
provided by installing the numactl package.
The following shows a host with 4 running VMs, each obtaining memory from multiple NUMA nodes. This
is not optimal for vCPU performance, and warrants adjusting:
# numastat -c qemu-kvm
In contrast, the following shows memory being provided to each VM by a single node, which is
significantly more efficient.
# numastat -c qemu-kvm
CHAPTER 18. SECURING VIRTUAL MACHINES
This document outlines the mechanics of securing VMs on a RHEL 9 host and provides a list of methods
to increase the security of your VMs.
Because the hypervisor uses the host kernel to manage VMs, services running on the VM’s operating
system are frequently used for injecting malicious code into the host system. However, you can protect
your system against such security threats by using a number of security features on your host and your
guest systems.
These features, such as SELinux or QEMU sandboxing, provide various measures that make it more
difficult for malicious code to attack the hypervisor and transfer between your host and your VMs.
Many of the features that RHEL 9 provides for VM security are always active and do not have to be
enabled or configured. For details, see Automatic features for virtual machine security.
In addition, you can adhere to a variety of best practices to minimize the vulnerability of your VMs and
your hypervisor. For more information, see Best practices for securing virtual machines.
Secure the virtual machine as if it was a physical machine. The specific methods available to
enhance security depend on the guest OS.
If your VM is running RHEL 9, see Securing Red Hat Enterprise Linux 9 for detailed instructions
on improving the security of your guest system.
When managing VMs remotely, use cryptographic utilities such as SSH and network protocols
such as SSL for connecting to the VMs.
# getenforce
Enforcing
If SELinux is disabled or in Permissive mode, see the Using SELinux document for instructions
on activating Enforcing mode.
NOTE
SELinux Enforcing mode also enables the sVirt RHEL 9 feature. This is a set of
specialized SELinux booleans for virtualization, which can be manually adjusted
for fine-grained VM security management.
SecureBoot can only be applied when installing a Linux VM that uses OVMF firmware. For
instructions, see Creating a SecureBoot virtual machine.
Additional resources
Prerequisites
An operating system (OS) installation source is available locally or on a network. This can be one
of the following formats:
Optional: A Kickstart file can be provided for faster and easier configuration of the installation.
Procedure
1. Use the virt-install command to create a VM as detailed in Creating virtual machines using the
command-line interface. For the --boot option, use the
uefi,nvram_template=/usr/share/OVMF/OVMF_VARS.secboot.fd value. This uses the
OVMF_VARS.secboot.fd and OVMF_CODE.secboot.fd files as templates for the VM’s non-
volatile RAM (NVRAM) settings, which enables the SecureBoot feature.
For example:
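A sketch of such a command (the VM name, resource sizes, and ISO path are illustrative):
# virt-install --name secureboot-vm --memory 4096 --vcpus 4 --boot uefi,nvram_template=/usr/share/OVMF/OVMF_VARS.secboot.fd --disk size=10 --cdrom /home/user/Downloads/installation-medium.iso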
Verification
1. After the guest OS is installed, access the VM’s command line by opening the terminal in the
graphical guest console or connecting to the guest OS using SSH.
2. To confirm that SecureBoot has been enabled on the VM, use the mokutil --sb-state command:
# mokutil --sb-state
SecureBoot enabled
Additional resources
Procedure
1. Optional: Ensure your system’s polkit control policies related to libvirt are set up according to
your preferences.
ii. Add your custom policies to this file, and save it.
For further information and examples of libvirt control policies, see the libvirt upstream
documentation.
3. For each file that you modified in the previous step, restart the corresponding service.
For example, if you have modified /etc/libvirt/virtqemud.conf, restart the virtqemud service.
Verification
As a user whose VM actions you intended to limit, perform one of the restricted actions.
For example, if unprivileged users are restricted from viewing VMs created in the system
session:
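For example:
$ virsh -c qemu:///system list --all
 Id   Name   State
--------------------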
If this command does not list any VMs even though one or more VMs exist on your system,
polkit successfully restricts the action for unprivileged users.
Troubleshooting
Currently, configuring libvirt to use polkit makes it impossible to connect to VMs using the
RHEL 9 web console, due to an incompatibility with the libvirt-dbus service.
If you require fine-grained access control of VMs in the web console, create a custom D-Bus
policy. For instructions, see How to configure fine-grained control of Virtual Machines in
Cockpit in the Red Hat Knowledgebase.
Additional resources
The man polkit command
To list all virtualization-related booleans and their statuses, use the getsebool -a | grep virt command.
To enable a specific boolean, use the setsebool -P boolean_name on command as root. To disable a
boolean, use setsebool -P boolean_name off.
The following table lists virtualization-related booleans available in RHEL 9 and what they do when
enabled:
When using IBM Z hardware to run a RHEL 9 host, you can improve the security of your virtual machines
(VMs) by configuring IBM Secure Execution for the VMs.
IBM Secure Execution, also known as Protected Virtualization, prevents the host system from accessing
a VM’s state and memory contents. As a result, even if the host is compromised, it cannot be used as a
vector for attacking the guest operating system. In addition, Secure Execution can be used to prevent
untrusted hosts from obtaining sensitive information from the VM.
The following procedure describes how to convert an existing VM on an IBM Z host into a secured VM.
Prerequisites
The Secure Execution feature is enabled for your system. To verify, use:
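# grep facilities /proc/cpuinfo | grep 158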
If this command displays any output, your CPU is compatible with Secure Execution.
# ls /sys/firmware | grep uv
If the command generates any output, your kernel supports Secure Execution.
The host CPU model contains the unpack facility. To confirm, use:
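# virsh domcapabilities | grep unpack
<feature policy='require' name='unpack'/>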
If the command generates the above output, your CPU host model is compatible with Secure
Execution.
The CPU mode of the VM is set to host-model. To confirm this, use the following and replace
vm-name with the name of your VM.
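# virsh dumpxml vm-name | grep host-model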
If the command generates any output, the VM’s CPU mode is set correctly.
You have obtained and verified the IBM Z host key document. For instructions to do so, see
Verifying the host key document in IBM documentation.
Procedure
Do the following steps on your host:
1. Add the prot_virt=1 kernel parameter to the boot configuration of the host.
2. Update the boot menu:
# zipl
3. Use virsh edit to modify the XML configuration of the VM you want to secure.
4. Add the <launchSecurity type="s390-pv"/> line under the </devices> line. For example:
[...]
</memballoon>
</devices>
<launchSecurity type="s390-pv"/>
</domain>
Do the following steps in the guest operating system of the VM you want to secure.
1. Create a parameters file. For example:
# touch ~/secure-parameters
2. In the /boot/loader/entries directory, identify the boot loader entry with the latest version:
# ls /boot/loader/entries -l
[...]
-rw-r--r--. 1 root root 281 Oct 9 15:51 3ab27a195c2849429927b00679db15c1-4.18.0-
240.el8.s390x.conf
# cat /boot/loader/entries/3ab27a195c2849429927b00679db15c1-4.18.0-
240.el8.s390x.conf | grep options
options root=/dev/mapper/rhel-root
rd.lvm.lv=rhel/root rd.lvm.lv=rhel/swap
4. Add the content of the options line and swiotlb=262144 to the created parameters file.
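For example, assuming the options shown above:
# echo "root=/dev/mapper/rhel-root rd.lvm.lv=rhel/root rd.lvm.lv=rhel/swap swiotlb=262144" > ~/secure-parameters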
Using the genprotimg utility creates the secure image, which contains the kernel parameters,
initial RAM disk, and boot image.
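A sketch of the command, using the boot image versions shown below (the host key document
file name is illustrative):
# genprotimg -i /boot/vmlinuz-4.18.0-240.el8.s390x -r /boot/initramfs-4.18.0-240.el8.s390x.img -p ~/secure-parameters -k HKD-<host-key-document>.crt -o /boot/secure-image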
6. Update the VM’s boot menu to boot from the secure image. In addition, remove the lines
starting with initrd and options, as they are not needed.
For example, in a RHEL 8.3 VM, the boot menu can be edited in the /boot/loader/entries/
directory:
# cat /boot/loader/entries/3ab27a195c2849429927b00679db15c1-4.18.0-
240.el8.s390x.conf
title Red Hat Enterprise Linux 8.3
version 4.18.0-240.el8.s390x
linux /boot/secure-image
[...]
# zipl -V
# shred /boot/vmlinuz-4.18.0-240.el8.s390x
# shred /boot/initramfs-4.18.0-240.el8.s390x.img
# shred secure-parameters
The original boot image, the initial RAM image, and the kernel parameter file are unprotected,
and if they are not removed, VMs with Secure Execution enabled can still be vulnerable to
hacking attempts or sensitive data mining.
Verification
On the host, use the virsh dumpxml utility to confirm the XML configuration of the secured
VM. The configuration must include the <launchSecurity type="s390-pv"/> element, and no
<rng model="virtio"> lines.
</devices>
<launchSecurity type="s390-pv"/>
</domain>
Additional resources
Prerequisites
The cryptographic coprocessor is compatible with device assignment. To confirm this, ensure
that the type of your coprocessor is listed as CEX4 or later.
# lszcrypt -V
# modprobe vfio_ap
# lszdev --list-types
...
ap Cryptographic Adjunct Processor (AP) device
...
Procedure
1. Obtain the decimal values for the devices that you want to assign to the VM. For example, for
the devices 05.0004 and 05.00ab:
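# echo "obase=10; ibase=16; 04" | bc
4
# echo "obase=10; ibase=16; AB" | bc
171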
# lszcrypt -V
If the DRIVER values of the domain queues changed to vfio_ap, the reassignment succeeded.
# uuidgen
669d9b23-fe1b-4ecb-be08-a2fabca99b71
# cat /sys/devices/vfio_ap/matrix/mdev_supported_types/vfio_ap-
passthrough/devices/669d9b23-fe1b-4ecb-be08-a2fabca99b71/matrix
05.0004
05.00ab
If the output contains the numerical values of queues that you have previously assigned to vfio-
ap, the process was successful.
8. Use the virsh edit command to open the XML configuration of the VM where you want to use
the crypto devices.
9. Add the following lines to the <devices> section in the XML configuration, and save it.
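Based on the standard libvirt syntax for vfio-ap mediated devices, the lines look similar to the
following (using the UUID generated earlier):
<hostdev mode='subsystem' type='mdev' managed='no' model='vfio-ap'>
  <source>
    <address uuid='669d9b23-fe1b-4ecb-be08-a2fabca99b71'/>
  </source>
</hostdev>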
Verification
2. After the guest operating system (OS) boots, ensure that it detects the assigned crypto
devices.
# lszcrypt -V
The output of this command in the guest OS will be identical to that on a host logical partition
with the same cryptographic coprocessor devices available.
3. In the guest OS, confirm that a control domain has been successfully assigned to the crypto
devices.
# lszcrypt -d C
DOMAIN 00 01 02 03 04 05 06 07 08 09 0a 0b 0c 0d 0e 0f
------------------------------------------------------
00 . . . . . . . . . . . . . . . .
10 . . . . . . . . . . . . . . . .
20 . . . . . . . . . . . B . . . .
30 . . B B . . . . . . . . . . . .
40 . . . . . . . . . . . . . . . .
50 . . . . . . . . . . . . . . . .
60 . . . . . . . . . . . . . . . .
70 . . . . . . . . . . . . . . . .
80 . . . . . . . . . . . . . . . .
90 . . . . . . . . . . . . . . . .
a0 . . . . . . . . . . . . . . . .
b0 . . . . . . . . . . . . . . . .
c0 . . . . . . . . . . . . . . . .
d0 . . . . . . . . . . . . . . . .
e0 . . . . . . . . . . . . . . . .
f0 . . . . . . . . . . . . . . . .
------------------------------------------------------
If lszcrypt -d C displays B intersections in the crypto device matrix, the control domain
assignment was successful.
Prerequisites
Make sure you have installed the latest WHQL certified VirtIO drivers.
Procedure
1. Enable TPM 2.0 by adding the following parameters to the <devices> section in the VM’s XML
configuration.
<devices>
[...]
<tpm model='tpm-crb'>
<backend type='emulator' version='2.0'/>
</tpm>
[...]
</devices>
2. Install Windows in UEFI mode. For more information on how to do so, see Creating a
SecureBoot virtual machine.
3. Install the VirtIO drivers on the Windows VM. For more information on how to do so, see
Installing virtio drivers on a Windows guest.
4. In UEFI, enable Secure Boot. For more information on how to do so, see Secure Boot.
Verification
Ensure that the Device Security page on your Windows machine displays the following
message:
Settings > Update & Security > Windows Security > Device Security
Your device meets the requirements for standard hardware security.
Prerequisites
Ensure that standard hardware security is enabled. For more information, see Enabling standard
hardware security on Windows virtual machines.
Ensure you have enabled Hyper-V enlightenments. For more information, see Enabling Hyper-V
enlightenments.
Procedure
1. Open the XML configuration of the Windows VM. The following example opens the
configuration of the Example-L1 VM:
2. Under the <cpu> section, specify the CPU mode and add the policy flag.
IMPORTANT
If you do not wish to specify a custom CPU, you can set the <cpu mode> as
host-passthrough.
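For illustration, a custom CPU definition with a required policy flag could look similar to the following sketch (the model name and the vmx flag are assumptions, not values taken from this guide):
<cpu mode='custom' match='exact' check='partial'>
<model fallback='allow'>Skylake-Client-IBRS</model>
<feature policy='require' name='vmx'/>
</cpu>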
3. Save the XML configuration and start the VM.
4. On the VM’s operating system, navigate to the Core isolation details page:
Settings > Update & Security > Windows Security > Device Security > Core isolation details
NOTE
For other methods of enabling HVCI, see the relevant Microsoft documentation.
Verification
Ensure that the Device Security page on your Windows VM displays the following message:
Settings > Update & Security > Windows Security > Device Security
Your device meets the requirements for enhanced hardware security.
CHAPTER 19. SHARING FILES BETWEEN THE HOST AND ITS VIRTUAL MACHINES
Prerequisites
A directory that you want to share with your VMs. If you do not want to share any of your existing
directories, create a new one, for example named shared-files.
# mkdir shared-files
When connected to a VM, the host is visible and reachable over a network. This is generally the
case if the VM uses the NAT or bridge type of virtual network.
Optional: For improved security, ensure your VMs are compatible with NFS version 4 or later.
Procedure
1. On the host, export a directory with the files you want to share as a network file system (NFS).
a. Obtain the IP address of each VM with which you want to share files. The following example
obtains the IPs of testguest1 and testguest2.
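For example, the virsh domifaddr command lists the addresses of a running VM (output shown is illustrative):
# virsh domifaddr testguest1
Name       MAC address          Protocol     Address
-------------------------------------------------------------------------------
vnet0      52:54:00:6b:29:9f    ipv4         192.168.124.220/24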
b. Edit the /etc/exports file on the host and add a line that includes the directory you want to
share, IPs of VMs you want to share it with, and sharing options.
For example, the following shares the /usr/local/shared-files directory on the host with
testguest1 and testguest2, and enables the VMs to edit the content of the directory:
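A sketch of such an entry (the IP addresses are illustrative):
/usr/local/shared-files/ 192.168.124.220(rw,sync) 192.168.124.243(rw,sync)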
NOTE
If you want to share a directory with a Windows VM, you must ensure the
Windows NFS client has write permissions in the shared directory. A simple
way to do so, is to use the all_squash, anonuid, and anongid options in the
/etc/exports file.
For example:
/usr/local/shared-files/ 192.168.124.220(rw,sync,all_squash,anonuid=<directory-owner-UID>,anongid=<directory-owner-GID>)
To explore other options for managing NFS client permissions, follow the
Securing NFS guide.
d. Export the updated file system:
# exportfs -a
e. Obtain the IP address of the host system. This will be used for mounting the shared
directory on the VMs later.
# ip addr
[...]
5: virbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state
UP group default qlen 1000
link/ether 52:54:00:32:ff:a5 brd ff:ff:ff:ff:ff:ff
inet 192.168.124.1/24 brd 192.168.124.255 scope global virbr0
valid_lft forever preferred_lft forever
[...]
Note that the relevant network is the one that is used for connecting to the host by the VMs
you want to share files with. Usually, this is virbr0.
2. Mount the shared directory on a Linux VM that is specified in the /etc/exports file.
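For example, assuming the host IP address obtained in the previous step, a command similar to the following performs the mount:
# mount 192.168.124.1:/usr/local/shared-files /mnt/host-share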
In this example:
/mnt/host-share is a mount point on the VM. The mount point must be an empty directory.
3. To mount the shared directory on a Windows VM that is specified in the /etc/exports file:
b. Install the NFS-Client package. The installation command is different for the server and
desktop versions of Windows.
On a server version of Windows:
# Install-WindowsFeature NFS-Client
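On a desktop version of Windows, a command similar to the following is expected (a sketch based on standard Windows tooling; verify for your Windows version):
# Enable-WindowsOptionalFeature -FeatureName ServicesForNFS-ClientOnly, ClientForNFS-Infrastructure -Online -NoRestart
c. Mount the exported directory, for example as follows (the host IP address and export path are illustrative):
# mount -o anon \\192.168.124.1\usr\local\shared-files Z: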
In this example:
Z: is the drive letter that will be used as a mount point. You must choose a drive letter
that is not in use on the system.
Verification
To verify you can share files between the host and the VM, list the content of the shared
directory on the VM. In the following example, replace <mount_point> with a file system path to
the mounted shared directory.
$ ls <mount_point>
shared-file1 shared-file2 shared-file3
Additional resources
Securing NFS
When using RHEL 9 as your hypervisor, you can efficiently share files between your host system and its
virtual machines (VMs) by using the virtiofs feature.
Prerequisites
A directory that you want to share with your VMs. If you do not want to share any of your existing
directories, create a new one, for example named shared-files.
# mkdir /root/shared-files
The VM you want to share data with is using a Linux distribution as its guest OS.
Procedure
1. For each directory on the host that you want to share with your VM, set it as a virtiofs file system
in the VM’s XML configuration.
a. Open the VM’s XML configuration, for example by using the virsh edit utility.
b. Add an entry similar to the following to the <devices> section of the VM’s XML
configuration.
This example sets the /root/shared-files directory on the host to be visible as host-file-
share to the VM.
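A sketch of such an entry (adjust the paths and tags to your setup):
<filesystem type='mount' accessmode='passthrough'>
<driver type='virtiofs'/>
<source dir='/root/shared-files'/>
<target dir='host-file-share'/>
</filesystem>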
2. Add a NUMA topology for shared memory to the XML configuration. The following example
adds a basic topology for all CPUs and all RAM.
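For instance, a single NUMA cell covering two vCPUs and 4 GiB of RAM could be defined as follows (the values are illustrative):
<cpu mode='host-passthrough' check='none'>
<numa>
<cell id='0' cpus='0-1' memory='4194304' unit='KiB'/>
</numa>
</cpu>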
3. Add shared memory backing to the <domain> section of the XML configuration:
<domain>
[...]
<memoryBacking>
<access mode='shared'/>
</memoryBacking>
[...]
</domain>
4. Start the VM.
5. Mount the file system in the guest operating system (OS). The following example mounts the
previously configured host-file-share directory with a Linux guest OS.
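A sketch of the mount command, assuming the mount tag host-file-share from the earlier step:
# mount -t virtiofs host-file-share /mnt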
Verification
Ensure that the shared directory became accessible on the VM and that you can now open files
stored in the directory.
File-system mount options related to access time, such as noatime and strictatime, are not
likely to work with virtiofs, and Red Hat discourages their use.
Troubleshooting
If virtiofs is not optimal for your use case or supported for your system, you can use NFS
instead.
Prerequisites
A directory that you want to share with your VMs. If you do not want to share any of your existing
directories, create a new one, for example named centurion.
# mkdir /home/centurion
The VM you want to share data with is using a Linux distribution as its guest OS.
Procedure
1. In the Virtual Machines interface, click the VM with which you want to share files.
A new page opens with an Overview section with basic information about the selected VM and a
Console section.
The Shared directories section displays information about the host files and directories shared
with that VM and options to Add or Remove a shared directory.
Source path - The path to the host directory that you want to share.
Mount tag - The tag that the VM uses to mount the directory.
Extended attributes - Set whether to enable extended attributes, xattr, on the shared files
and directories.
6. Click Share.
The selected directory is shared with the VM.
Verification
Ensure that the shared directory is accessible on the VM and you can now open files stored in
that directory.
Prerequisites
Procedure
1. In the Virtual Machines interface, click on the VM from which you want to remove the shared
files.
A new page opens with an Overview section with basic information about the selected VM and a
Console section.
3. Click Remove next to the directory you wish to unshare with the VM.
The Remove filesystem dialog appears.
4. Click Remove.
The selected directory is unshared with the VM.
Verification
Ensure that the shared directory is no longer accessible on the VM.
CHAPTER 20. INSTALLING AND MANAGING WINDOWS VIRTUAL MACHINES
To run Microsoft Windows as the guest operating system in your VMs on a RHEL 9 host, the following
sections provide information on installing and optimizing Windows VMs on the host, as well as installing
and configuring drivers in these VMs.
To create the VM and to install the Windows guest OS, use the virt-install command or the RHEL 9 web
console.
Prerequisites
A Windows OS installation source, which can be one of the following, and be available locally or
on a network:
An ISO image of the Windows installation medium
A disk image of an existing Windows VM installation
If you are installing Windows 11, the edk2-ovmf, swtpm, and libtpms packages must be installed
on the host.
Procedure
1. Create the VM. For instructions, see Creating virtual machines, but keep in mind the following
specifics.
If using the virt-install utility to create the VM, add the following options to the command:
The storage medium with the KVM virtio drivers. For example:
--disk path=/usr/share/virtio-win/virtio-win.iso,device=cdrom
The Windows version you will install. For example, for Windows 10 and 11:
--os-variant win10
For a list of available Windows versions and the appropriate option, use the following
command:
# osinfo-query os
If you are installing Windows 11, enable Unified Extensible Firmware Interface (UEFI) and
virtual Trusted Platform Module (vTPM):
--boot uefi
If using the web console to create the VM, specify your version of Windows in the
Operating system field of the Create new virtual machine window.
If you are installing Windows versions prior to Windows 11 and Windows Server 2022,
start the installation by clicking Create and run.
If you are installing Windows 11, or you want to use additional Windows Server 2022
features, confirm by clicking Create and edit and enable UEFI and vTPM using the CLI:
<os firmware='efi'>
<type arch='x86_64' machine='pc-q35-6.2'>hvm</type>
<boot dev='hd'/>
</os>
<devices>
<tpm model='tpm-crb'>
<backend type='emulator' version='2.0'/>
</tpm>
</devices>
d. Start the Windows installation by clicking Install in the Virtual machines table.
2. Install the Windows OS in the VM.
3. If using the web console to create the VM, attach the storage medium with virtio drivers to the
VM using the Disks interface. For instructions, see Attaching existing disks to virtual machines
using the web console.
4. Configure KVM virtio drivers in the Windows guest OS. For details, see Installing KVM
paravirtualized drivers for Windows virtual machines.
Additional resources
To ensure that your Windows VMs run optimally, Red Hat recommends optimizing them by doing any
combination of the following:
Using paravirtualized drivers. For more information, see Installing KVM paravirtualized drivers
for Windows virtual machines.
Enabling Hyper-V enlightenments. For more information, see Enabling Hyper-V enlightenments.
Configuring NetKVM driver parameters. For more information, see Configuring NetKVM driver
parameters.
Optimizing or disabling Windows background processes. For more information, see Optimizing
background processes on Windows virtual machines.
To install the KVM virtio drivers on a Windows VM:
1. Prepare the install media on the host machine. For more information, see Preparing virtio driver
installation media on a host machine.
2. Attach the install media to an existing Windows VM, or attach it when creating a new Windows
VM.
3. Install the virtio drivers on the Windows guest OS. For more information, see Installing virtio
drivers on a Windows guest.
Paravirtualized drivers enhance the performance of virtual machines (VMs) by decreasing I/O latency
and increasing throughput to almost bare-metal levels. Red Hat recommends that you use
paravirtualized drivers for VMs that run I/O-heavy tasks and applications.
virtio drivers are KVM’s paravirtualized device drivers, available for Windows VMs running on KVM hosts.
These drivers are provided by the virtio-win package, which includes drivers for, among others:
Block (storage) devices
Network interface controllers
Video controllers
NOTE
For additional information about emulated, virtio, and assigned devices, refer to
Managing virtual devices.
Using KVM virtio drivers, the following Microsoft Windows versions are expected to run similarly to
physical systems:
Windows Server versions: See Certified guest operating systems for Red Hat
Enterprise Linux with KVM in the Red Hat Knowledgebase.
Windows Desktop versions:
Windows 10 (64-bit)
Windows 11 (64-bit)
To install or update KVM virtio drivers on a Windows virtual machine (VM), you must first prepare the
virtio driver installation media on the host machine. To do so, attach the .iso file, provided by the virtio-
win package, as a storage device to the Windows VM.
Prerequisites
Ensure that virtualization is enabled in your RHEL 9 host system. For more information, see
Enabling virtualization.
Procedure
1. Refresh your subscription data:
# subscription-manager refresh
All local data refreshed
2. Get the latest version of the virtio-win package; see the sketch after this step for the
commands.
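A sketch of the package commands, using standard dnf operations:
If virtio-win is not installed:
# dnf install -y virtio-win
If virtio-win is installed:
# dnf upgrade -y virtio-win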
If the installation succeeds, the virtio-win driver files are available in the /usr/share/virtio-win/
directory. These include ISO files and a drivers directory with the driver files in directories, one
for each architecture and supported Windows version.
# ls /usr/share/virtio-win/
drivers/ guest-agent/ virtio-win-1.9.9.iso virtio-win.iso
When creating a new Windows VM, attach the file using the virt-install command options.
When installing the drivers on an existing Windows VM, attach the file as a CD-ROM using
the virt-xml utility:
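For example, a command similar to the following attaches the ISO to a VM named testguest1 (the VM name is illustrative):
# virt-xml testguest1 --add-device --disk virtio-win.iso,device=cdrom
Domain 'testguest1' defined successfully.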
Additional resources
To install KVM virtio drivers on a Windows guest operating system (OS), you must add a storage device
that contains the drivers - either when creating the virtual machine (VM) or afterwards - and install the
drivers in the Windows guest OS.
This example shows how to install the drivers using the graphical interface. You can also use the
Microsoft Windows Installer (MSI) command line interface.
Prerequisites
An installation medium with the KVM virtio drivers must be attached to the VM. For instructions
on preparing the medium, see Preparing virtio driver installation media on a host machine.
Procedure
4. Based on the architecture of the VM’s vCPU, run one of the installers on the medium.
5. In the Virtio-win-guest-tools setup wizard that opens, follow the displayed instructions until
you reach the Custom Setup step.
6. In the Custom Setup window, select the device drivers you want to install. The recommended
driver set is selected automatically, and the descriptions of the drivers are displayed on the right
of the list.
Verification
Next steps
If you install the NetKVM driver, you may also need to configure the Windows guest’s networking
parameters.
To update KVM virtio drivers on a Windows guest operating system (OS), you can use the
Windows Update service, if the Windows OS version supports it. If it does not, reinstall the drivers from
virtio driver installation media attached to the Windows virtual machine (VM).
Prerequisites
If not using Windows Update, an installation medium with up-to-date KVM virtio drivers must
be attached to the Windows VM. For instructions on preparing the medium, see Preparing virtio
driver installation media on a host machine.
b. In the window that appears, type cmd and press Ctrl+Shift+Enter to run as administrator.
4. Reinstall KVM virtio drivers from the attached installation media. Do one of the following:
Reinstall the drivers using the Windows Command Prompt, where X is the installation media
drive letter. The following commands install all virtio drivers.
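A sketch using the pnputil utility, which installs all driver packages found on the medium:
C:\WINDOWS\system32\pnputil.exe /add-driver X:\*.inf /install /subdirs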
Reinstall the drivers using the graphical interface without rebooting the VM.
5. Restore the network configuration that you backed up before reinstalling the drivers:
C:\WINDOWS\system32\netsh -f backup.txt
Additional resources
The following sections provide information about the supported Hyper-V enlightenments and how to
enable them.
Hyper-V enlightenments provide better performance in a Windows virtual machine (VM) running on a
RHEL 9 host. For instructions on how to enable them, see the following.
Procedure
1. Use the virsh edit command to open the XML configuration of the VM. For example:
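A sketch, assuming a VM named windows-vm:
# virsh edit windows-vm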
2. Add the following <hyperv> sub-section to the <features> section of the XML:
<features>
[...]
<hyperv>
<relaxed state='on'/>
<vapic state='on'/>
<spinlocks state='on' retries='8191'/>
<vpindex state='on'/>
<runtime state='on'/>
<synic state='on'/>
<stimer state='on'>
<direct state='on'/>
</stimer>
<frequencies state='on'/>
<reset state='on'/>
<time state='on'/>
<tlbflush state='on'/>
<reenlightenment state='on'/>
<ipi state='on'/>
<crash state='on'/>
<evmcs state='on'/>
</hyperv>
[...]
</features>
Additionally, add the hypervclock timer to the <clock> section of the XML:
<clock offset='localtime'>
...
<timer name='hypervclock' present='yes'/>
</clock>
Verification
Use the virsh dumpxml command to display the XML configuration of the running VM. If it
includes the following segments, the Hyper-V enlightenments are enabled on the VM.
<hyperv>
<relaxed state='on'/>
<vapic state='on'/>
<spinlocks state='on' retries='8191'/>
<vpindex state='on'/>
<runtime state='on'/>
<synic state='on'/>
<stimer state='on'>
<direct state='on'/>
</stimer>
<frequencies state='on'/>
<reset state='on'/>
<time state='on'/>
<tlbflush state='on'/>
<reenlightenment state='on'/>
<ipi state='on'/>
<crash state='on'/>
<evmcs state='on'/>
</hyperv>
<clock offset='localtime'>
...
<timer name='hypervclock' present='yes'/>
</clock>
You can configure certain Hyper-V features to optimize Windows VMs. The following table provides
information about these configurable Hyper-V features and their values.
[Table: Configurable Hyper-V features. Recoverable entries:]
crash - NOTE: If hv_crash is enabled, Windows crash dumps are not created.
evmcs - NOTE: This feature is exclusive to Intel processors.
spinlocks - Used by Hyper-V to indicate to the virtual machine’s operating system the number of times a spinlock acquisition should be attempted before indicating an excessive spin situation to Hyper-V.
time - MSR-based Hyper-V clock source (HV_X64_MSR_TIME_REF_COUNT, 0x40000020).
vendor_id - Id value: a string of up to 12 characters.
IMPORTANT
Modifying the driver’s parameters causes Windows to reload that driver. This interrupts
existing network activity.
Prerequisites
Procedure
a. Open the Device Manager.
b. Under the list of network adapters, double-click Red Hat VirtIO Ethernet Adapter.
The Properties window for the device opens.
[Table: Configurable NetKVM driver parameters (columns: Parameter, Description)]
The following table provides information on the configurable NetKVM driver initial parameters.
[Table: Configurable NetKVM driver initial parameters (columns: Parameter, Description). Two of the parameters accept the valid values 16, 32, 64, 128, 256, 512, and 1024.]
WARNING
Certain processes might not work as expected if you change their configuration.
Procedure
You can optimize your Windows VMs by performing any combination of the following:
Remove unused devices, such as USBs or CD-ROMs, and disable the ports.
Disable background services, such as SuperFetch and Windows Search. For more information
about stopping services, see Disabling system services or Stop-Service.
Review and disable unnecessary scheduled tasks, such as scheduled disk defragmentation. For
more information on how to do so, see Disable Scheduled Tasks.
Reduce periodic activity of server applications. You can do so by editing the respective timers.
For more information, see Multimedia Timers.
Disable the antivirus software. Note that disabling the antivirus might compromise the security
of the VM.
Prerequisites
Make sure you have installed the latest WHQL certified VirtIO drivers.
Procedure
1. Enable TPM 2.0 by adding the following parameters to the <devices> section in the VM’s XML
configuration.
<devices>
[...]
<tpm model='tpm-crb'>
<backend type='emulator' version='2.0'/>
</tpm>
[...]
</devices>
2. Install Windows in UEFI mode. For more information on how to do so, see Creating a
SecureBoot virtual machine.
3. Install the VirtIO drivers on the Windows VM. For more information on how to do so, see
Installing virtio drivers on a Windows guest .
4. In UEFI, enable Secure Boot. For more information on how to do so, see Secure Boot.
Verification
Ensure that the Device Security page on your Windows machine displays the following
message:
Settings > Update & Security > Windows Security > Device Security
Your device meets the requirements for standard hardware security.
Prerequisites
Ensure that standard hardware security is enabled. For more information, see Enabling standard
hardware security on Windows virtual machines.
Ensure you have enabled Hyper-V enlightenments. For more information, see Enabling Hyper-V
enlightenments.
Procedure
1. Open the XML configuration of the Windows VM. The following example opens the
configuration of the Example-L1 VM:
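A sketch of the command, assuming the virsh edit utility is used:
# virsh edit Example-L1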
2. Under the <cpu> section, specify the CPU mode and add the policy flag.
IMPORTANT
If you do not wish to specify a custom CPU, you can set the <cpu mode> as
host-passthrough.
3. Save the XML configuration and start the VM.
4. On the VM’s operating system, navigate to the Core isolation details page:
Settings > Update & Security > Windows Security > Device Security > Core isolation details
NOTE
For other methods of enabling HVCI, see the relevant Microsoft documentation.
Verification
Ensure that the Device Security page on your Windows VM displays the following message:
Settings > Update & Security > Windows Security > Device Security
Your device meets the requirements for enhanced hardware security.
To use utilities for accessing, editing, and creating virtual machine disks or other disk images for
a Windows VM, install the guestfs-tools and guestfs-winsupport packages on the host
machine:
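For example, by using dnf:
# dnf install guestfs-tools guestfs-winsupport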
To share files between your RHEL 9 host and its Windows VMs, you can use virtiofs or NFS.
CHAPTER 21. DIAGNOSING VIRTUAL MACHINE PROBLEMS
The following sections provide detailed information about generating logs and diagnosing some
common VM problems, as well as about reporting these problems.
The following sections explain what debug logs are, how you can set them to be persistent, enable them
during runtime, and attach them when reporting problems.
Debug logging is not enabled by default and has to be enabled when libvirt starts. You can enable
logging for a single session or persistently. You can also enable logging when a libvirt daemon session is
already running by modifying the daemon run-time settings.
Attaching the libvirt debug logs is also useful when requesting support with a VM problem.
NOTE
In some cases, for example when you upgrade from RHEL 8, libvirtd might still be the
enabled libvirt daemon. In that case, you must edit the libvirtd.conf file instead.
Procedure
The log level value 3 logs all warning and error messages. This is the default value.
For example, you can log all error and warning messages from the remote, util.json, and rpc layers, as shown in the sketch below.
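A log_filters setting achieving this might look as follows (a sketch; the numeric prefix 3 selects warnings and errors for the named layers):
log_filters="3:remote 3:util.json 3:rpc"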
This is useful when restarting the libvirt daemon is not possible, for example because restarting makes
the problem go away, or because there is another process, such as a migration or backup, running at the
same time. Modifying runtime settings is also useful if you want to try a command without editing the
configuration files or restarting the daemon.
Prerequisites
Procedure
NOTE
It is recommended that you back up the active set of filters so that you can
restore them after generating the logs. If you do not restore the filters, the
messages will continue to be logged which may affect system performance.
2. Use the virt-admin utility to enable debugging and set the filters according to your
requirements.
The log level value 3 logs all warning and error messages. This is the default value.
For example, you can log all error and warning messages from the remote, util.json, and rpc layers.
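A sketch of such a command, connecting to the virtqemud daemon (filter values illustrative):
# virt-admin -c virtqemud:///system daemon-log-filters "3:remote 3:util.json 3:rpc"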
3. Use the virt-admin utility to save the logs to a specific file or directory.
For example, the following command saves the log output to the libvirt.log file in the
/var/log/libvirt/ directory.
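A sketch of such a command, again assuming the virtqemud daemon:
# virt-admin -c virtqemud:///system daemon-log-outputs "1:file:/var/log/libvirt/libvirt.log"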
4. Optional: You can also remove the filters to generate a log file that contains all VM-related
information. However, it is not recommended since this file may contain a large amount of
redundant information produced by libvirt’s modules.
5. Optional: Restore the filters to their original state using the backup file.
Perform the second step with the saved values to restore the filters.
Procedure
Based on the encountered problems, attach the following logs along with your report:
For problems with the libvirt service, attach the /var/log/libvirt/libvirt.log file from the host.
For problems with a specific VM, attach its respective log file.
For example, for the testguest1 VM, attach the testguest1.log file, which can be found at
/var/log/libvirt/qemu/testguest1.log.
Additional resources
This section provides a brief introduction to core dumping and explains how you can dump a VM core to
a specific file.
For example, if a VM crashes or becomes unresponsive, you can use the virsh dump utility to save (or dump) the core of the VM to a file before you
reboot the VM. The core dump file contains a raw physical memory image of the VM which contains
detailed information about the VM. This information can be used to diagnose VM problems, either
manually, or by using a tool such as the crash utility.
Additional resources
Prerequisites
Make sure you have sufficient disk space to save the file. Note that the space occupied by the
core dump file depends on the amount of RAM allocated to the VM.
Procedure
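For example, the following saves the core of the testguest1 VM to a file (the VM name and path are illustrative):
# virsh dump testguest1 /core/file/testguest1.file --memory-only
Domain 'testguest1' dumped to /core/file/testguest1.file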
IMPORTANT
The crash utility no longer supports the default file format of the virsh dump command.
To analyze a core dump file using crash, you must create the file using the --memory-
only option.
Additionally, you must use the --memory-only option when creating a core dump file to
attach to a Red Hat Support Case.
Troubleshooting
If the virsh dump command fails with a System is deadlocked on memory error, ensure you are
assigning sufficient memory for the core dump file. To do so, use the following crashkernel option
value. Alternatively, do not use crashkernel at all, which assigns core dump memory automatically.
crashkernel=1G-4G:192M,4G-64G:256M,64G-:512M
Additional resources
Prerequisites
Make sure you know the PID of the processes that you want to backtrace.
You can find the PID by using the pgrep command followed by the name of the process. For
example:
# pgrep libvirt
22014
22025
Procedure
Use the gstack utility followed by the PID of the process you wish to backtrace.
For example, the following command backtraces the libvirt process with the PID 22014.
# gstack 22014
Thread 3 (Thread 0x7f33edaf7700 (LWP 22017)):
#0 0x00007f33f81aef21 in poll () from /lib64/libc.so.6
#1 0x00007f33f89059b6 in g_main_context_iterate.isra () from /lib64/libglib-2.0.so.0
#2 0x00007f33f8905d72 in g_main_loop_run () from /lib64/libglib-2.0.so.0
...
Additional resources
Additional resources for reporting virtual machine problems and providing logs
To request additional help and support, you can:
Raise a service request by using the redhat-support-tool command-line utility, the Red Hat Portal
UI, or several different methods by using FTP.
Upload the SOS Report and the log files when you submit a service request.
This ensures that the Red Hat support engineer has all the necessary diagnostic information for
reference.
For more information about SOS reports, see What is an SOS Report and how to create one
in Red Hat Enterprise Linux?
For information about attaching log files, see How to provide files to Red Hat Support?
CHAPTER 22. FEATURE SUPPORT AND LIMITATIONS IN RHEL 9 VIRTUALIZATION
Features listed in Recommended features in RHEL 9 virtualization have been tested and certified by
Red Hat to work with the KVM hypervisor on a RHEL 9 system. Therefore, they are fully supported and
recommended for use in virtualization in RHEL 9.
Features listed in Unsupported features in RHEL 9 virtualization may work, but are not supported and
not intended for use in RHEL 9. Therefore, Red Hat strongly recommends not using these features in
RHEL 9 with KVM.
Resource allocation limits in RHEL 9 virtualization lists the maximum amount of specific resources
supported on a KVM guest in RHEL 9. Guests that exceed these limits are not supported by Red Hat.
In addition, unless stated otherwise, all features and solutions described in the documentation for RHEL 9
virtualization are supported. However, some of these have not been completely tested and therefore
may not be fully optimized.
IMPORTANT
Any other hardware architectures are not supported for using RHEL 9 as a KVM virtualization host, and
Red Hat highly discourages doing so. Notably, this includes the 64-bit ARM architecture (ARM 64),
which is only provided as a Technology Preview.
Note, however, that by default, your guest OS does not use the same subscription as your host.
Therefore, you must activate a separate license or subscription for the guest OS to work properly.
Machine types
To ensure that your VM is compatible with your host architecture and that the guest OS runs optimally,
the VM must use an appropriate machine type.
IMPORTANT
In RHEL 9, pc-i440fx-rhel7.5.0 and earlier machine types, which were default in earlier
major versions of RHEL, are no longer supported. As a consequence, attempting to start
a VM with such machine types on a RHEL 9 host fails with an unsupported configuration
error. If you encounter this problem after upgrading your host to RHEL 9, see the Red
Hat KnowledgeBase.
When creating a VM using the command line, the virt-install utility provides multiple methods of setting
the machine type.
When you use the --os-variant option, virt-install automatically selects the machine type
recommended for your host CPU and supported by the guest OS.
If you do not use --os-variant or require a different machine type, use the --machine option to
specify the machine type explicitly.
If you specify a --machine value that is unsupported or not compatible with your host, virt-
install fails and displays an error message.
The recommended machine types for KVM virtual machines on supported architectures, and the
corresponding values for the --machine option, are as follows. Y stands for the latest minor version of
RHEL 9.
On Intel 64 and AMD64 (x86_64): pc-q35-rhel9.Y.0
On IBM Z (s390x): s390-ccw-virtio-rhel9.Y.0
To obtain the machine types supported on your host, use the following command:
# /usr/libexec/qemu-kvm -M help
Additional resources
IMPORTANT
Many of these limitations may not apply to other virtualization solutions provided by
Red Hat, such as OpenShift Virtualization or Red Hat OpenStack Platform (RHOSP).
Features supported by other virtualization solutions are described as such in the following
paragraphs.
Notably, the 64-bit ARM architecture (ARM 64) is provided only as a Technology Preview for KVM
virtualization on RHEL 9, and Red Hat therefore discourages its use in production environments.
Notably, the macOS guest operating system is not supported.
For a list of guest OSs supported on RHEL hosts and other virtualization solutions, see Certified Guest
Operating Systems in Red Hat OpenStack Platform, Red Hat Virtualization, OpenShift Virtualization and
Red Hat Enterprise Linux with KVM.
Other solutions:
To create VMs in containers, Red Hat recommends using the OpenShift Virtualization offering.
Starting or managing VMs by manually invoking the qemu-kvm command line is not supported.
Instead, use libvirt utilities, such as virt-install, virt-xml, and supported virsh commands, as these
orchestrate QEMU according to the best practices.
Using virsh blkdeviotune to configure QEMU-side I/O throttling is not supported in RHEL 9.
To set up I/O throttling in RHEL 9, use virsh blkiotune. This is also known as libvirt-side I/O throttling.
For instructions, see Disk I/O throttling in virtual machines.
Other solutions:
QEMU-side I/O throttling is also supported in RHOSP. For details, see Setting Resource
Limitation on Disk and the Use Quality-of-Service Specifications section in the RHOSP
Storage Guide.
Other solutions:
Storage live migration is also supported in RHOSP, but with some limitations. For details, see
Migrate a Volume.
It is also possible to live-migrate VM storage when using OpenShift Virtualization. For more
information, see Virtual machine live migration.
Live snapshots
Creating or loading a snapshot of a running VM, also referred to as a live snapshot, is not supported in
RHEL 9.
In addition, note that non-live VM snapshots are deprecated in RHEL 9. Therefore, creating or loading a
snapshot of a shut-down VM is supported, but Red Hat recommends not using it.
Other solutions:
RHOSP also supports live snapshots. For details, see Importing virtual machines into the
overcloud.
vhost-user
RHEL 9 does not support the implementation of a user-space vHost interface.
Other solutions:
vhost-user is supported in RHOSP, but only for virtio-net interfaces. For details, see virtio-net
implementation and vhost user ports.
S3 and S4 system power states
Suspending a VM to the Suspend to RAM (S3) or Suspend to disk (S4) system power state is not
supported in RHEL 9.
Note that the S3 and S4 states are also currently not supported in any other virtualization solution
provided by Red Hat.
virtio-crypto
Using the virtio-crypto device in RHEL 9 is not supported and its use is therefore highly discouraged.
Note that virtio-crypto devices are also not supported in any other virtualization solution provided by
Red Hat.
net_failover
Using the net_failover driver to set up an automated network device failover mechanism is not
supported in RHEL 9.
Note that net_failover is also currently not supported in any other virtualization solution provided by
Red Hat.
Multi-FD migration
Migrating VMs using multiple file descriptors (FDs), also known as multi-FD migration, is not supported
in RHEL 9.
Note that multi-FD migrations are also currently not supported in any other virtualization solution
provided by Red Hat.
NVMe devices
Attaching Non-volatile Memory express (NVMe) devices to VMs as a PCIe device with PCI-passthrough
is not supported.
Note that attaching NVMe devices to VMs is also currently not supported in any other virtualization
solution provided by Red Hat.
TCG
QEMU and libvirt include a dynamic translation mode using the QEMU Tiny Code Generator (TCG). This
mode does not require hardware virtualization support. However, TCG is not supported by Red Hat.
TCG-based guests can be recognized by examining their XML configuration, for example by using the
virsh dumpxml command.
The configuration file of a TCG guest contains the following line:
<domain type='qemu'>
In contrast, the configuration file of a KVM guest contains:
<domain type='kvm'>
Additional resources
Each PCI bridge adds a new bus, potentially enabling another 256 device addresses. However, some
buses do not make all 256 device addresses available for the user; for example, the root bus has several
built-in devices occupying slots.
vfio-ap
VMs on an IBM Z host can use the vfio-ap cryptographic device passthrough, which is not supported
on any other architecture.
vfio-ccw
VMs on an IBM Z host can use the vfio-ccw disk device passthrough, which is not supported on any
other architecture.
SMBIOS
SMBIOS configuration is not available on IBM Z.
Watchdog devices
If using watchdog devices in your VM on an IBM Z host, use the diag288 model. For example:
<devices>
<watchdog model='diag288' action='poweroff'/>
</devices>
kvm-clock
The kvm-clock service is specific to AMD64 and Intel 64 systems, and does not have to be
configured for VM time management on IBM Z.
v2v and p2v
The virt-v2v and virt-p2v utilities are supported only on the AMD64 and Intel 64 architecture, and
are not provided on IBM Z.
Migrations
To successfully migrate to a later host model (for example from IBM z14 to z15), or to update the
hypervisor, use the host-model CPU mode. The host-passthrough and maximum CPU modes are
not recommended, as they are generally not migration-safe.
If you want to specify an explicit CPU model in the custom CPU mode, follow these guidelines:
To successfully migrate to an older host model (such as from z15 to z14), or to an earlier version of
QEMU, KVM, or the RHEL kernel, use the CPU type of the oldest available host model without -base
at the end.
If you have both the source host and the destination host running, you can instead use the
virsh hypervisor-cpu-baseline command on the destination host to obtain a suitable CPU
model. For details, see Verifying host CPU compatibility for virtual machine migration.
For more information about supported machine types in RHEL 9, see Recommended
features in RHEL 9 virtualization.
When PXE booting a VM on IBM Z, a boot configuration file similar to the following is used:
# pxelinux
default linux
label linux
kernel kernel.img
initrd initrd.img
append ip=dhcp inst.repo=example.com/redhat/BaseOS/s390x/os/
Secure Execution
You can boot a VM with a prepared secure guest image by defining <launchSecurity type="s390-
pv"/> in the XML configuration of the VM. This encrypts the VM’s memory to protect it from
unwanted access by the hypervisor.
Note that the following features are not supported when running a VM in secure execution mode:
Full memory dumps. Instead, specify the --memory-only option for the virsh dump command.
248 or more vCPUs. The vCPU limit for secure guests is 247.
Additional resources
Support
Virtualization on ARM 64 is only provided as a Technology Preview on RHEL 9, and is therefore
unsupported.
Guest operating systems
The only guest operating system currently working on ARM 64 virtual machines (VMs) is RHEL 9.
Web console management
Some features of VM management in the RHEL 9 web console may not work correctly on ARM 64
hardware.
vCPU hot plug and hot unplug
Attaching a virtual CPU (vCPU) to a running VM, also referred to as a vCPU hot plug, is not
supported on ARM 64 hosts. In addition, like on AMD64 and Intel 64 hosts, removing a vCPU from a
running VM (vCPU hot unplug), is not supported on ARM 64.
SecureBoot
Management Mode
TPM-1.2 support
kvm-clock
The kvm-clock service does not have to be configured for time management in VMs on ARM 64.
Peripheral devices
ARM 64 systems do not support all the peripheral devices that are supported on AMD64 and Intel 64
systems. In some cases, the device functionality is not supported at all, and in other cases, a different
device is supported for the same functionality.
Serial console configuration
When setting up a serial console on a VM, use the console=ttyAMA0 kernel option instead of
console=ttyS0 with the grubby utility.
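A minimal sketch of such a grubby invocation, assuming the option should apply to all installed kernels:
# grubby --update-kernel=ALL --args="console=ttyAMA0"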
Non-maskable interrupts
Sending non-maskable interrupts (NMIs) to an ARM 64 VM is currently not possible.
Nested virtualization
Creating nested VMs is currently not possible on ARM 64 hosts.
v2v and p2v
The virt-v2v and virt-p2v utilities are only supported on the AMD64 and Intel 64 architecture and
are, therefore, not provided on ARM 64.
Note that some of the unsupported features are supported on other Red Hat products, such as
Red Hat Virtualization and Red Hat OpenStack platform. For more information, see Unsupported
features in RHEL 9 virtualization.
Additional resources