VMware Horizon View Graphics Acceleration Deployment
W H I T E PA P E R
Table of Contents

Introduction
    Why 3D Matters for VMware Horizon View
    Understanding the Differences Between Soft 3D/SVGA, vDGA and vSGA
        Soft 3D and SVGA: The VMware Graphics Driver
        vDGA: Virtual Dedicated Graphics Acceleration (Tech Preview)
            vDGA Deployment
            vDGA Does Not Support Live VMware vSphere vMotion Capabilities
        vSGA: Virtual Shared Graphics Acceleration
            Configure vSGA in VMware vSphere
            Configure vSGA in Horizon View
Prerequisites
    Host Hardware Requirements
        Servers with Compatible Power and PCI Slot Capacity
        Physical Host Size
        PCIe x16
        Host PSU (Power Supply Unit)
        Virtual Technology for Directed I/O (VT-d)
        Two-Display Adapters
    Supported Graphics Cards
    Software Requirements
    End-User Clients
Application Requirements and Use Cases
    DirectX 9.0c
    OpenGL 2.1
    Example Use Cases
Graphics Card Installation
    Quadro Range
    Tesla M2075
    Kepler (Grid K1 and K2 Boards)
    Confirm Successful Installation
vSGA Installation
    NVIDIA Drivers
    vSGA Post-Installation Checks
        Xorg
        gpuvm
        nvidia-smi
        Log Files
vDGA Installation
    Enable the Host for GPU Passthrough
        Check VT-d or AMD IOMMU Is Enabled
    Enable Device Passthrough
    Enable the Virtual Machine for GPU Passthrough
        Update to Hardware Version 9
        Reserve All Configured Memory
        Adjust pciHole.start
        Add the PCI Device
        Install the NVIDIA Driver
        Install the View Agent
        Enable Proprietary NVIDIA Capture APIs
VMware Horizon View Pool Configuration for vSGA
    Horizon View Pool Prerequisites
    Video Memory (VRAM) Sizing
    Screen Resolution
    Horizon View Pool 3D Rendering Options
        Manage Using vSphere Client
        Automatic
        Software
        Hardware
        Disabled
    Best Practices for Configuring 3D Rendering
        Automatic
        Hardware
        Manage Using vSphere Client
        Software
    Enable Horizon View Pools for vSGA 3D Hardware Rendering
        Enable an Existing Horizon View Pool
        Enable a New Horizon View Pool
Performance Tuning Tips
    Virtual Machine Resources
    PCoIP
    Relative Mouse
        Enabling Relative Mouse
    Virtual Machines Using VMXNET3
    Workaround for CAD Performance Issue
Resource Monitoring
    gpuvm
    nvidia-smi
Troubleshooting
    Xorg
        Xorg Fails to Start
        Verify that the NVIDIA VIB Bundle Is Installed
        Verify that the NVIDIA Driver Loads
        Verify that Display Devices Are Present in the Host
        Possible PCI Bus Slot Order Issue
        Check Xorg Logs
        sched.mem.min Error
About the Author and Contributors
Introduction
The purpose of this document is to explain the various types of virtual machine graphics acceleration available, how to implement and troubleshoot them, and to describe the benefits offered by each technology.
[Figure: The 3D graphics technology spectrum. Software 3D serves task workers and productivity/knowledge workers, for whom standard productivity tools are central to work (1080p, Windows Aero). vSGA serves engineering and multi-monitor use cases. vDGA serves workstation users, for example in oil and gas.]
vDGA: Graphics acceleration capability provided by VMware ESXi for high-end workstation graphics where a discrete GPU is needed.

vSGA: Multiple virtual machines leverage physical GPUs installed locally in the ESXi hosts to provide hardware-accelerated 3D graphics to multiple virtual desktops.
vDGA can be costly to implement, but should offer a reduction in cost compared to individual high-end workstations. The number of 3D hardware-accelerated virtual machines per host is limited to the number of PCIe x16 slots in the server. Server hardware is available with up to four PCIe x16 slots and room in the chassis for high-end GPUs. Some blade enclosure hardware vendors also offer a sidecar-type expansion unit that can support up to eight GPUs.
Note: Both vSGA and vDGA can support a maximum of eight GPU cards per ESXi host.

vDGA Deployment

When you deploy vDGA, it uses the graphics driver from the GPU vendor rather than the virtual machine's SVGA 3D driver. vDGA uses an interface between the remoting protocol and the graphics driver to provide frame buffer access. Because of the nature of vDGA configuration, it is not a candidate for automated deployment using VMware View Composer: each individual virtual machine has a one-to-one relationship with a specific GPU. Configure each virtual machine manually, and carefully select the correct PCI device for each one.

vDGA Does Not Support Live VMware vSphere vMotion Capabilities

Live VMware vSphere vMotion is not supported with vDGA. vDGA uses VMware vSphere DirectPath I/O to allow direct access to the GPU card, bypassing the virtualization layer. By enabling direct passthrough from the virtual machine to the PCI device installed on the host, the virtual machine is effectively locked to that specific host. To move a vDGA-enabled virtual machine to a different host: power off the virtual machine, use vSphere vMotion to migrate it to another host that has a GPU card installed, re-enable passthrough to the specific PCI device on that host, and only then power on the virtual machine.
Hardware only uses hardware-accelerated GPUs. If a hardware GPU is not present in a host, the virtual machine will not start, or you will not be able to perform a live vSphere vMotion migration to that host. vSphere vMotion is possible with this setting as long as the destination host has a capable and available hardware GPU. This setting can be used to guarantee that a virtual machine will always use hardware 3D rendering when a GPU is available, but it in turn limits the virtual machine to hosts with hardware GPUs.

Configure vSGA in Horizon View

To configure vSGA in Horizon View 5.2 Pool Settings, there are five 3D rendering options: Manage Using vSphere Client, Automatic, Software, Hardware, and Disabled.

Manage Using vSphere Client does not make any changes to the 3D settings of the individual virtual machines in the pool. This allows individual virtual machines to have different settings, set through vSphere. The most likely use for this setting is during testing or for manual desktop pools.

Automatic uses hardware acceleration if there is a capable and available hardware GPU in the host on which the virtual machine is starting. If a hardware GPU is not available, it uses software 3D rendering for any 3D tasks. This allows the virtual machine to be started on, or migrated (via vSphere vMotion) to, any host (vSphere 5.0 and higher), and to use the best solution available on that host.

Software only uses vSphere software 3D rendering, even if there is an available hardware GPU in the host on which the virtual machine is running. This does not provide the performance benefits that hardware 3D acceleration offers. However, it allows the virtual machine to run on any host (vSphere 5.0 and higher), and lets you block virtual machines from using a hardware GPU in a host if that level of performance is not required (for example, for the 3D aspects of Microsoft Office).

Hardware only uses hardware-accelerated GPUs. If a hardware GPU is not present in a host, the virtual machine will not start, or you will not be able to perform a live vSphere vMotion migration to that host. vSphere vMotion is possible with this setting as long as the destination host has a capable and available hardware GPU. This setting can be used to guarantee that a virtual machine will always use hardware 3D rendering when a GPU is available, but it in turn limits the virtual machine to hosts with hardware GPUs.

Disabled does not use 3D rendering at all (software or hardware), and overrides vSphere 3D settings to disable 3D. Use the Disabled setting to ensure that non-graphical Horizon View desktop pools do not consume unnecessary resources, such as a share of a hardware GPU, when running on the same cluster as heavier graphics workload Horizon View desktops.
Prerequisites
This section lists both the hardware and software required to support the use of vSGA and vDGA.
To configure an ESXi host with only a single GPU, first find the PCI ID of the graphics device by running the following command:
# lspci | grep -i display
You'll see something similar to this:
000:128:00.0 Display controller: NVIDIA Corporation GT200b [GeForce GTX 275]
Then reset the ownership flags, referencing the PCI ID above:
# vmkchdev -v 0:128:0:0
Important: This setting is not persistent, and the command must be re-run each time ESXi reboots.
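Because the flag reset does not survive a reboot, one common workaround (an assumption on our part, not an official VMware procedure) is to append the command to ESXi's persistent local boot script, /etc/rc.local.d/local.sh. A minimal sketch that builds the line to add, using the hypothetical PCI ID from the example above:

```shell
# Hypothetical PCI ID taken from the lspci example above; substitute your own.
PCI_ID="0:128:0:0"
CMD="vmkchdev -v ${PCI_ID}"

# On a real host you would append this line to the persistent boot script, e.g.:
#   echo "${CMD}" >> /etc/rc.local.d/local.sh
echo "${CMD}"
```

Verify after the next reboot that the GPU is still owned as intended before relying on this approach.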
Software Requirements
Software requirements for both vSGA and vDGA are documented below.
Hypervisor: ESXi 5.1 or higher with latest patches. (ESXi 5.1 and the ESXi510-201210001 patch were recommended at the time of this writing.)

VMware Horizon View: Horizon View 5.2 or higher. vSGA and vDGA support only PCoIP, with a maximum of two display monitors.

NVIDIA Drivers: The ESXi VIBs are written, maintained, and supported by NVIDIA, not VMware. The NVIDIA vSphere VIBs for vSGA and the NVIDIA Windows 7 driver for vDGA can be downloaded from the NVIDIA Download Drivers page.

Guest Operating System: The virtual machines must be running Windows 7 or later. vSGA supports both 32-bit and 64-bit Windows. vDGA requires the Windows 7 64-bit edition.
End-User Clients
With both graphical and compute processing handled by the ESXi hosts that run the 3D virtual desktops, IT might overlook the end clients, on the assumption that all significant processing happens inside the datacenter. This is not always the case for high-end graphics applications or games running on virtual desktops. Using 3D applications with fast-changing graphics often produces a massive bandwidth requirement for the PCoIP traffic that flows between the virtual desktop and the end client, and this is often the cause of a poor user experience.
In some 3D use cases, as much as 70Mbit/s of PCoIP traffic per virtual desktop has been observed during peak loads. This high bandwidth is caused by constant changes to images on the virtual desktop screen. This requires PCoIP to continually send data to keep up with the changes, ensuring that the display on the screen is accurate and current. This large flow of PCoIP traffic sent to end clients can lead to performance problems. Some low-end thin clients do not have the CPU processing power they need to decode PCoIP data fast enough to make the end-user experience smooth and uninterrupted. However, this is not always the case for every environment or end client. It depends on which applications users are running on their virtual desktops. For high-end 3D and video workloads, use a high-performance Zero Client with a Teradici Tera2 chip, or a modern Core i5- or Core i7-based Windows PC, for best performance with multiple high-resolution displays. Note: The Tera1 chip can support a maximum frames-per-second rate of 30FPS, whereas the new Tera2 chip can achieve up to 60FPS. Achieving high frame rates can be important to the usability of certain engineering applications.
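To put the 70 Mbit/s figure in perspective, a back-of-the-envelope sizing sketch (the desktop count here is an illustrative assumption, not a recommendation):

```shell
# Rough uplink sizing for peak 3D PCoIP traffic from one host.
DESKTOPS=25        # assumption: 3D desktops per host
PEAK_MBPS=70       # peak per-desktop PCoIP rate cited above
TOTAL_MBPS=$((DESKTOPS * PEAK_MBPS))
echo "Peak aggregate PCoIP: ${TOTAL_MBPS} Mbit/s"
```

At 1750 Mbit/s, a single 1GbE uplink would be saturated, so hosts running many 3D desktops generally call for 10GbE or multiple teamed uplinks.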
DirectX 9.0c
vSGA currently supports only up to DirectX 9.0c. Applications that require a newer version of DirectX might not function or perform correctly when using vSGA.
OpenGL 2.1
vSGA currently supports only up to OpenGL 2.1. Applications that require a newer version of OpenGL might not function or perform correctly when using vSGA. Note: vDGA will support the versions of DirectX and OpenGL that the GPU manufacturer's graphics driver supports. This is generally the latest version of these technologies.
[Table: Graphics application use cases - productivity and lightweight 3D applications. Applications rated: Windows Aero, Microsoft Office, Microsoft Visio, Google Earth, HTML 5/Web 3D, Adobe Photoshop, Epic, SolidWorks View, Team Center Vis, PTC Creo View, Siemens NX Viewer, each scored for Soft 3D, vSGA, and vDGA. The individual ratings did not survive extraction. Legend: Works; Useful for reviewers and lightweight use cases, but not NVIDIA Graphics Driver Vendor Certified; NVIDIA Graphics Driver Vendor Certified; Not appropriate.]

[Table 4: Graphics Application Use Cases - 3D and Video Design Applications. Applications rated: Autodesk AutoCAD, Autodesk Inventor, Autodesk 3DS Max, Autodesk Maya, CATIA, SolidWorks 2012, SolidWorks 2013, Enovia, Siemens NX, Adobe Premiere, Siemens NX Viewer, each scored for Soft 3D, vSGA, and vDGA. The individual ratings did not survive extraction.]
Quadro Range
The Quadro 4000/5000/6000 SDI User's Guide can be downloaded from the NVIDIA Web site.
Tesla M2075
The Tesla M2075 Dual-Slot Computing Processor Module Board Specification document can be downloaded from the NVIDIA Web site.
PCI Pin: 0x69
Spawned Bus: 0x00
Flags: 0x0201
Module ID: 71
Module Name: nvidia
Chassis: 0
Physical Slot: 1
Slot Description:
Passthru Capable: true
Parent Device: PCI 0:0:1:0
Dependent Device: PCI 0:0:1:0
Reset Method: Bridge reset
FPT Sharable: true
vSGA Installation
This section takes you through the steps required to install the NVIDIA driver (VIB) on an ESXi host.
NVIDIA Drivers
1. Download the NVIDIA vSphere VIBs for vSGA from the NVIDIA Download Drivers page.
2. Upload the bundle (.zip) to a datastore on the host. You can do this in two ways:
- Upload the bundle by browsing the datastore using the vSphere Client.
- Upload the bundle to the host datastore using an SCP tool (for example, FastSCP or WinSCP).
3. Run the following command through an ESXi SSH session to install the VIB onto the host:
# esxcli software vib install -d /xxx-path-to-vib/vib-name.zip
For example:
# esxcli software vib install -d /vmfs/volumes/509aa90d-69ee45eb-c96b4567b3d/NVIDIA-VMware-x86_64-304.59-bundle.zip
During the installation, if your host is not in Maintenance Mode, you will receive the following error: [MaintenanceMode Error].
You have two options: either put the host into Maintenance Mode, or add the following option to the esxcli command above:
--maintenance-mode
Here is an example of the complete command:
# esxcli software vib install --maintenance-mode -d /vmfs/volumes/509aa90d69ee45eb-c96b-4567b3d/NVIDIA-VMware-x86_64-304.59-bundle.zip
If you receive the error [Could not find a trusted signer], indicating that the VIB bundle is not signed, use the following esxcli option to skip the signature check:
--no-sig-check
Here is an example of the complete command:
# esxcli software vib install --no-sig-check -d /vmfs/volumes/509aa90d69ee45eb-c96b-4567b3d/NVIDIA-VMware-x86_64-304.59-bundle.zip
Installation can take a few minutes. After it is complete you should see the following output in the SSH console: Installation Result
Message: Operation finished successfully. Reboot Required: false VIBs Installed: <VIB NAME HERE> VIBs Removed: VIBs Skipped:
4. Although the output states that a reboot is not required (Reboot Required: false), VMware recommends rebooting the ESXi host to verify that Xorg starts correctly on future restarts of the host. If you do not reboot the host, you must start the Xorg service manually by issuing the following command:
# /etc/init.d/xorg start
To remove the installed VIB from a host, run the following command:
# esxcli software vib remove --vibname=name
If Xorg fails to start, go to the Troubleshooting section.

gpuvm

Issue the following command through an ESXi SSH session:
# gpuvm
This results in a list of working GPUs, showing the virtual machines using each GPU and the amount of video memory reserved for each one. If this command produces no output at all, the Xorg service is most likely not running. Run the following command in an SSH session to show the status of Xorg:
# /etc/init.d/xorg status
If it is not running, start it with:
# /etc/init.d/xorg start
If Xorg fails to start, go to the Troubleshooting section.

nvidia-smi

To see how much of each GPU is in use, issue the following command in an SSH session:
# nvidia-smi
This shows several details of GPU usage at the point in time when you issued the command (the display is not dynamic; the command must be reissued to update the information). You can also issue the following command:
# watch -n 1 nvidia-smi
This issues the nvidia-smi command every second, refreshing the point-in-time information.
Note: The most meaningful metric in the nvidia-smi display is at the right of the middle section. It shows the percentage of each GPU's processing cores in use at that point in time. This can be helpful when troubleshooting poor performance: verify whether the GPU processing cores are being overtaxed and are the cause of the poor performance.
Log Files

Verify that the virtual machine has graphics acceleration by searching for OpenGL in the virtual machine's vmware.log. You should see something like:
mks| I120: OpenGL Version: 3.2.0 NVIDIA 304.59 (3.2.0)
mks| I120: GLSL Version: 1.50 NVIDIA via Cg compiler (1.50.0)
mks| I120: OpenGL Vendor: NVIDIA Corporation
mks| I120: OpenGL Renderer: Quadro 6000/PCIe/SSE2
If the virtual machine is using the VMware software renderer, however, the vmware.log will contain:
mks| I120: VMiopLog notice: SVGA2 vmiop started llvmpipe
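The llvmpipe marker makes this check easy to script. A small sketch (not a product tool) that classifies a vmware.log as hardware- or software-rendered; for illustration, the software-renderer sample line quoted above is written to a temporary file:

```shell
# Classify a vmware.log as hardware- or software-rendered, based on the
# renderer strings shown above.
classify_renderer() {
  if grep -q "llvmpipe" "$1"; then
    echo "software (llvmpipe)"
  else
    echo "hardware (GPU)"
  fi
}

# Illustration with the software-renderer log line quoted above:
printf 'mks| I120: VMiopLog notice: SVGA2 vmiop started llvmpipe\n' > /tmp/sample-vmware.log
classify_renderer /tmp/sample-vmware.log
```

On a real host, point the function at the vmware.log in the virtual machine's datastore directory.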
vDGA Installation
This section takes you through enabling GPU passthrough at the host level and preparing the virtual machines for 3D rendering.
Add the PCI Device

To enable vDGA for a virtual machine, add the PCI device to the virtual machine's hardware.
3. Using the vSphere Client, connect directly to the ESXi host with the GPU card installed, or select the host in vCenter.
4. Right-click the virtual machine and select Edit Settings.
5. Add a new device by selecting PCI Device from the list, and click Next.
6. From the drop-down list, select the GPU as the passthrough device to connect to the virtual machine, and click Next.
7. Click Finish.

Install the NVIDIA Driver

8. Download and install the latest NVIDIA Windows 7 desktop driver on the virtual machine. All NVIDIA drivers can be downloaded from the NVIDIA Download Drivers page.
9. After the driver is installed, reboot the virtual machine.

Install the View Agent

10. After the NVIDIA drivers are installed correctly, install the View Agent on the virtual machine.
11. Reboot when requested.

Enable Proprietary NVIDIA Capture APIs

Note: This is required only if the virtual machine has more than 2GB of configured memory.
12. After the virtual machine has rebooted, enable the proprietary NVIDIA capture APIs by running:
C:\Program Files\Common Files\VMware\Teradici PCoIP Server\MontereyEnable.exe -enable
Note: If MontereyEnable.exe is not found, use NvFBCEnable.exe. In the new SDK, MontereyEnable is replaced with NvFBCEnable.
13. After the process is complete, restart the virtual machine.
14. To activate the NVIDIA display adapter, you must connect via PCoIP, at full screen from the endpoint (at native resolution); otherwise the virtual machine will use the SVGA 3D display adapter. vDGA will not work through the vSphere console session. After the virtual machine has rebooted and you have connected via PCoIP in full screen, check that the GPU is active by viewing the display information in DxDiag.exe.
15. Click the Start menu.
16. Type dxdiag and press Enter after DxDiag shows up in the list, or click it in the list.
17. After DxDiag launches, check the Display tab to verify that it is using the NVIDIA GPU and driver.
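The vDGA preparation steps (hardware version 9, full memory reservation, pciHole.start adjustment, PCI device) ultimately land as entries in the virtual machine's .vmx file. A sketch of what the relevant entries might look like for an 8GB virtual machine; the values are placeholders for illustration, assuming sched.mem.min is in MB and matches the configured memory, not recommendations for your environment:

```
virtualHW.version = "9"
sched.mem.min = "8192"
pciHole.start = "2048"
pciPassthru0.present = "TRUE"
```

Editing the .vmx directly is an alternative to the vSphere Client steps above; make the changes only while the virtual machine is powered off.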
* If you are still using Virtual Hardware version 8, the maximum VRAM remains 128MB, with software rendering only.
Table 6: Video Memory (VRAM) Sizing
Note: Whenever you change the 3D Renderer setting, the amount of video memory reverts to the default of 96MB. Be sure to set the video memory back to the appropriate value after you change this setting.
VRAM settings that you configure in Horizon View Administrator take precedence over the VRAM settings configured for the virtual machines in the vSphere Client or vSphere Web Client. Select the Manage using vSphere Client option to prevent this.
If you are using Manage using vSphere Client, VMware recommends configuring the virtual machines with the vSphere Web Client rather than the software vSphere Client, because the software vSphere Client does not display the various rendering options; it displays only Enable/Disable 3D support.
Important: You must power existing virtual machines off and on for the 3D Renderer setting to take effect. Restarting or rebooting a virtual machine does not cause the setting to take effect.
Note: After making VRAM changes to a Horizon View pool, there might be a short delay (sometimes a couple of minutes) before the message Reconfiguring virtual machine settings appears in vCenter. Wait for this process to complete before power cycling the virtual machines.
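When 3D settings are managed through vSphere rather than Horizon View, the per-machine values are stored in the .vmx file. A hedged sketch of the entries involved, assuming svga.vramSize is specified in bytes (134217728 bytes = 128MB); treat the values as illustrative:

```
mks.enable3d = "TRUE"
svga.vramSize = "134217728"
```

As noted above, these vSphere-side values are overridden by Horizon View pool settings unless the pool uses Manage using vSphere Client.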
Screen Resolution
If you enable the 3D Renderer setting, configure the Max number of monitors setting for one or two monitors; you cannot select more than two. The Max resolution of any one monitor setting is 1920 x 1200 pixels; you cannot configure this value higher. Important: You must power existing virtual machines off and on for the 3D Renderer setting to take effect. Restarting or rebooting a virtual machine does not cause the setting to take effect.
3. Scroll down the page until you reach the Remote Display Protocol section. In this section, you will see the 3D Renderer option.
4. Select either Hardware or Automatic as the 3D rendering option from the drop-down list, and click Configure to set the amount of VRAM you want each virtual desktop to have.
5. If the 3D Renderer section is grayed out, ensure that PCoIP is selected as your Default Display Protocol and that Allow users to choose protocol is set to No.
Important: You must power existing virtual desktops off and on for the 3D Renderer setting to take effect. Restarting or rebooting a virtual desktop does not cause the setting to take effect.

Enable a New Horizon View Pool

During the creation of a new Horizon View pool, configure the pool as normal until you reach the Pool Settings section.
1. Scroll down the page until you reach the Remote Display Protocol section.
2. In this section, you will see the 3D Renderer option.
3. Select either Hardware or Automatic as the 3D rendering option from the drop-down list, and click Configure to set the amount of VRAM you want each virtual desktop to have.
4. If the 3D Renderer section is grayed out, ensure that PCoIP is selected as your Default Display Protocol and that Allow users to choose protocol is set to No.
Important: You must power existing virtual desktops off and on for the 3D Renderer setting to take effect. Restarting or rebooting a virtual desktop does not cause the setting to take effect.
PCoIP
Occasionally, custom PCoIP configurations can contribute to poor performance. By default, PCoIP allows a maximum of 30 frames per second, but some applications require significantly more. If you notice that the frame rate of an application is lower than expected, reconfigure the PCoIP GPO to allow a maximum of 120 frames per second. Another option is to enable the Disable Build-to-Lossless setting. This reduces the overall amount of PCoIP traffic, which in turn reduces the load placed on both the virtual machine and the endpoint.
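The PCoIP GPO template writes these tuning options as PCoIP session variables in the desktop's registry. A hedged sketch of what the resulting values might look like; the key path and variable names follow the Teradici administrative template as we understand it, but verify them against the template shipped with your PCoIP release:

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Teradici\PCoIP\pcoip_admin_defaults]
; 0x78 hex = 120 frames per second
"pcoip.maximum_frame_rate"=dword:00000078
; 0 = build-to-lossless disabled
"pcoip.enable_build_to_lossless"=dword:00000000
```

In a domain environment, set these through the GPO administrative template rather than editing the registry directly.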
Relative Mouse
If you are using an application or game and the cursor is moving uncontrollably, enabling the relative mouse feature might improve mouse control. Relative mouse is a new Windows Horizon View Client (v5.3) feature that changes the way client mouse movement is tracked and sent to the server via PCoIP. Traditionally, PCoIP uses absolute coordinates. Absolute mouse events allow the client to render the cursor locally, which is a significant optimization for high-latency environments. However, not all applications work well with absolute mouse input: two notable classes of applications, CAD applications and 3D games, rely on relative mouse events to function correctly. With the introduction of vSGA and vDGA in Horizon View 5.2, VMware expects the requirement for relative mouse to increase rapidly as CAD applications and 3D games become more heavily used in Horizon View environments. The Horizon View Windows client v5.3 is required to enable relative mouse. At the time of writing, this feature is not available through any other software clients or Zero Clients.

Enabling Relative Mouse

The end user can enable relative mouse manually: right-click the Horizon View Client Shade at the top of the screen and select Relative Mouse. A check mark then appears next to Relative Mouse.
Note: Relative Mouse must be selected on each and every connection. There is no option to enable it by default at the time of this writing.
3. In the Edit menu, click Add Value, and then add the following registry value:
   Value Name: FastSendDatagramThreshold
   Data Type: REG_DWORD
   Value: 1500
4. Quit Registry Editor.
Note: A reboot of the desktop virtual machine is required after changing this registry setting. If the setting does not exist, create it as a DWORD value. Further information about what this change does is available on the Microsoft Support Web site.
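As an alternative to editing the value by hand, the same change can be expressed as a .reg file and imported into the desktop virtual machine. The key path below is an assumption based on where Microsoft documents FastSendDatagramThreshold (the AFD service parameters key); confirm it against the steps earlier in this procedure before importing.

```
Windows Registry Editor Version 5.00

; Assumed key for FastSendDatagramThreshold per Microsoft's documentation.
; dword:000005dc is hexadecimal for 1500.
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\AFD\Parameters]
"FastSendDatagramThreshold"=dword:000005dc
```

A reboot of the virtual machine is still required after importing the file.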
Resource Monitoring
The following section documents ways to monitor the GPU resources on each ESXi host.
gpuvm
To better manage the GPU resources that are available on an ESXi host, examine the current GPU resource allocation. The ESXi command-line query utility gpuvm lists the GPUs installed on an ESXi host and displays the amount of GPU memory that is allocated to each virtual machine on the host. To run the utility, issue the following command from a console on the host or an SSH connection:
# gpuvm
For example, the utility might display the following output:
# gpuvm
Xserver unix:0, GPU maximum memory 2076672KB
pid 118561, VM Test-VM-001, reserved 131072KB of GPU memory
pid 664081, VM Test-VM-002, reserved 261120KB of GPU memory
GPU memory left 1684480KB
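The per-VM reservation figures in the gpuvm output can be totalled with a short one-liner. The sketch below is an illustration, not part of the product tooling: it assumes the output has been captured to a file named gpuvm.txt (a hypothetical filename) and that the per-VM lines follow the "reserved <n>KB" layout shown above.

```shell
# Sum the per-VM GPU memory reservations from captured gpuvm output.
# Field 6 is the "131072KB" token; awk's numeric coercion drops the "KB".
awk '/reserved/ {sum += $6} END {print sum " KB reserved in total"}' gpuvm.txt
```

Against the sample output above, this prints "392192 KB reserved in total" (131072 + 261120).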
nvidia-smi
To run the utility, issue the following command from a console on the host or an SSH connection:
# nvidia-smi
This shows several details of GPU usage at the point in time when you issue the command (the display is not dynamic; reissue the command to update the information). You can also issue the following command:
# watch -n 1 nvidia-smi
This issues the nvidia-smi command every second to provide a refresh of that point-in-time information.
Note: The most meaningful metric in the nvidia-smi display is at the right of the middle section. It shows the percentage of each GPU's processing cores in use at that point in time. This can be helpful if you have to troubleshoot poor performance: verify whether the GPU processing cores are being overtaxed and are the cause of the poor performance. For more details on how to use the nvidia-smi tool, refer to the nvidia-smi documentation.
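If you want to review utilization after the fact rather than watching it live, one lightweight approach is to save the nvidia-smi output to a file and extract the percentage figures later. This is only a sketch: nvidia-smi.txt is a hypothetical capture file, and the exact column layout of the report varies between driver versions, so treat the pattern as an approximation rather than a stable interface.

```shell
# Pull every percentage figure out of a saved nvidia-smi report.
# Matches any run of digits followed by a percent sign.
grep -o '[0-9]\+%' nvidia-smi.txt
```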
Troubleshooting
This section provides troubleshooting steps to follow if you encounter issues with 3D rendering when using vSGA or vDGA.
Xorg
Xorg Fails to Start
If you attempt to start Xorg and it fails, the cause is most likely the NVIDIA VIB module not loading properly. Often, this can be resolved by warm-rebooting the host (in some instances the GPU is not fully initialized by the time the VIB module tries to load). If Xorg still fails to start, try the following steps.
Verify That the NVIDIA VIB Bundle Is Installed
To verify that the NVIDIA VIB bundle is installed, run the following command:
# esxcli software vib list | grep NVIDIA
If the VIB is installed correctly, you should see output similar to the example below:
NVIDIA-VMware  304.59-1-OEM.510.0.0.799733  VMwareAccepted  2012-11-14
Verify That the NVIDIA Driver Loads
To verify that the NVIDIA driver loads, run the following command:
# esxcli system module load -m nvidia
If the driver is already loaded correctly, you should see output similar to the example below (the "Busy" message indicates the module is already loaded):
Unable to load module /usr/lib/vmware/vmkmod/nvidia: Busy
If the NVIDIA driver does not load, check the vmkernel log:
# vi /var/log/vmkernel.log
Search for NVRM. Often, an issue with the GPU will be identified in the vmkernel.log.
Verify That Display Devices Are Present in the Host
To verify that display devices are present in the host, run the following command:
# esxcli hardware pci list -c 0x0300 -m 0xff
You should see output similar to the following:
000:001:00.0
   Address: 000:001:00.0
   Segment: 0x0000
   Bus: 0x01
   Slot: 0x00
   Function: 0x00
   VMkernel Name:
   Vendor Name: NVIDIA Corporation
   Device Name: NVIDIA Quadro 6000
   Configured Owner: Unknown
   Current Owner: VMkernel
   Vendor ID: 0x10de
   Device ID: 0x0df8
   SubVendor ID: 0x103c
   SubDevice ID: 0x0835
   Device Class: 0x0300
   Device Class Name: VGA compatible controller
   Programming Interface: 0x00
   Revision ID: 0xa1
   Interrupt Line: 0x0b
   IRQ: 11
   Interrupt Vector: 0x78
   PCI Pin: 0x69
   Spawned Bus: 0x00
   Flags: 0x0201
   Module ID: 71
   Module Name: nvidia
   Chassis: 0
   Physical Slot: 1
   Slot Description:
   Passthru Capable: true
   Parent Device: PCI 0:0:1:0
   Dependent Device: PCI 0:0:1:0
   Reset Method: Bridge reset
   FPT Sharable: true
Possible PCI Bus Slot Order Issue
If you installed a second, lower-end GPU in the server, the order of the cards in the PCIe slots might cause ESXi to select the higher-end card for the ESXi console session. If this occurs, swap PCIe slots between the two GPUs, or change the Primary GPU setting in the server BIOS.
Check Xorg Logs
If the correct devices are present, view the Xorg log file to see if there is an obvious issue:
# vi /var/log/Xorg.log
sched.mem.min Error
If you get a vSphere error about sched.mem.min, add the following parameter to the .vmx file of the virtual machine:
sched.mem.min = 4096
Note: The value must match the amount of memory configured for the virtual machine, in MB. The example above is for a virtual machine with 4GB of RAM.
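As a hedged sketch of that last step, the parameter can be appended from the ESXi shell while the virtual machine is powered off. The path below is a placeholder, not a real datastore location; substitute the actual .vmx path of your virtual machine, and set the value to that VM's configured memory in MB.

```shell
# Append the memory reservation parameter to a powered-off VM's .vmx file.
# VMX_PATH is a placeholder, e.g. /vmfs/volumes/<datastore>/<vm>/<vm>.vmx;
# .vmx values are conventionally written in double quotes.
VMX_PATH="./Test-VM-001.vmx"
echo 'sched.mem.min = "4096"' >> "$VMX_PATH"
```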
VMware, Inc. 3401 Hillview Avenue Palo Alto CA 94304 USA Tel 877-486-9273 Fax 650-427-5001 www.vmware.com
Copyright 2013 VMware, Inc. All rights reserved. This product is protected by U.S. and international copyright and intellectual property laws. VMware products are covered by one or more patents listed at http://www.vmware.com/go/patents. VMware is a registered trademark or trademark of VMware, Inc. in the United States and/or other jurisdictions. All other marks and names mentioned herein may be trademarks of their respective companies. Item No: VMW-WP-VMGRPACCELDEPLOY-USLET-20130515-WEB