Best Practices for Virtualizing MS SQL Server on Nutanix
Copyright
Copyright 2019 Nutanix, Inc.
Nutanix, Inc.
1740 Technology Drive, Suite 150
San Jose, CA 95110
All rights reserved. This product is protected by U.S. and international copyright and intellectual
property laws.
Nutanix is a trademark of Nutanix, Inc. in the United States and/or other jurisdictions. All other
marks and names mentioned herein may be trademarks of their respective companies.
Microsoft SQL Server
Contents
1. Executive Summary
2. Introduction
2.1. Audience
2.2. Purpose
9. Conclusion
Appendix
Resources
About the Authors
About Nutanix
List of Figures
List of Tables
1. Executive Summary
This document makes recommendations for designing, optimizing, and scaling Microsoft SQL
Server deployments on the Nutanix Enterprise Cloud. Historically, it has been a challenge to
virtualize SQL Server because of the high cost of traditional virtualization stacks and the impact
that a SAN-based architecture can have on performance. Businesses and their IT departments
have constantly fought to balance cost, operational simplicity, and consistent predictable
performance.
The Nutanix Enterprise Cloud removes many of these challenges and makes virtualizing a
business-critical application such as SQL Server much easier. The Acropolis Distributed Storage
Fabric (DSF) is a software-defined solution that provides all the features one typically expects in
an enterprise SAN, without a SAN’s physical limitations and bottlenecks. SQL Server particularly
benefits from the following DSF features:
• Localized I/O and the use of flash for index and key database files to lower operation latency.
• A highly distributed approach that can handle both random and sequential workloads.
• The ability to add new nodes and scale the infrastructure without system downtime or
performance impact.
• Nutanix data protection and disaster recovery workflows that simplify backup operations and
business continuity processes.
Nutanix lets you run both Microsoft SQL Server and other VM workloads simultaneously on
the same platform. Density for SQL Server deployments is driven by the database’s CPU and
storage requirements. To take full advantage of the system’s performance and capabilities,
validated testing shows that it is better to scale out and increase the number of SQL Server VMs
on the Nutanix platform than to scale up individual SQL Server instances. The Nutanix platform
handles SQL Server’s demanding throughput and transaction requirements with localized I/O,
server-attached flash, and distributed data protection capabilities.
2. Introduction
2.1. Audience
This best practice document is part of the Nutanix Solutions Library. We wrote it for those
architecting, designing, managing, and supporting Nutanix infrastructures. Readers should
already be familiar with a hypervisor (VMware vSphere, Microsoft Hyper-V, or the native Nutanix
hypervisor, AHV), Microsoft SQL Server, and Nutanix.
The document addresses key items for each role, enabling a successful design, implementation,
and transition to operation. Most of the recommendations apply equally to all currently supported
versions of Microsoft SQL Server. We call out differences between versions as needed.
2.2. Purpose
This document covers the following subject areas:
• Overview of the Nutanix solution.
• The benefits of running Microsoft SQL Server on Nutanix.
• Overview of high-level SQL Server best practices for Nutanix.
• Design and configuration considerations when architecting a SQL Server solution on Nutanix.
• Virtualization optimizations for VMware ESXi, Microsoft Hyper-V, and Nutanix AHV.
Table 1: Document Version History

Version Number | Published | Notes
1.0 | April 2015 | Original publication.
2.0 | September 2016 | Updated for SQL Server 2016 and added discussion of Nutanix Volumes.
2.1 | May 2017 | Updated for all-flash platform availability.
3.0 | August 2019 | Major updates throughout.
The core component of Microsoft SQL Server is the SQL Server Database Engine, which
controls data storage, processing, and security. It includes a relational engine that processes
commands and queries and a storage engine that manages database files, tables, pages,
indexes, data buffers, and transactions. The Database Engine also creates and manages stored
procedures, triggers, views, and other database objects.
The SQL Server Operating System (SQLOS) that underlies the Database Engine handles
lower-level functions such as memory and I/O management, job scheduling, and locking data to
avoid conflicting updates. A network interface layer sits above the Database Engine and uses
Microsoft’s Tabular Data Stream protocol to facilitate request and response interactions with
database servers. At the user level, SQL Server DBAs and developers write T-SQL statements to
build and modify database structures, manipulate data, implement security protections, and back
up databases, among other tasks.
Decision Support System (DSS)
DSS workloads tend to have a few longer-running queries that are resource-intensive (CPU, memory, I/O) and generally
occur during month-, quarter-, or year-end. DSS queries favor read over write, so it is important
for the Nutanix platform hosting this workload type to have nodes sized to allow 100 percent
of the dataset to reside in local flash. With Nutanix data locality, this sizing ensures that the
system can provide that data as quickly as possible. This advantage places a premium on having
processors with higher physical core counts.
Batch or ETL
These workload types tend to be write-intensive, run during off-peak hours, tolerate contention
better than OLTP workloads, and sometimes consume additional bandwidth. Organizations often
run batch workloads at the end of the business day to generate reports about transactions that
occurred during that day. We can break batch or ETL workloads down into three distinct phases:
1. Extract: The system contacts multiple sources and obtains data from these sources.
2. Transform: The system performs an action (or actions) on the obtained data to prepare it for
loading into a target system.
3. Load: The data loads into the target system (often a data warehouse).
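The three phases above can be sketched in Python with in-memory stand-ins; the sources, transformation, and warehouse target here are all hypothetical, and a real pipeline would read from source databases or files and load into a data warehouse.

```python
# Minimal sketch of the three batch/ETL phases. All names and data
# shapes here are illustrative assumptions, not part of any product.

def extract(sources):
    """Extract: contact each source and collect its rows."""
    rows = []
    for source in sources:
        rows.extend(source())
    return rows

def transform(rows):
    """Transform: prepare the obtained data for loading into a target."""
    return [{"name": r["name"].strip().upper(), "amount": r["amount"]}
            for r in rows]

def load(rows, target):
    """Load: write the prepared rows into the target system."""
    target.extend(rows)
    return len(rows)

# Example run with in-memory stand-ins for sources and target.
warehouse = []
sources = [lambda: [{"name": " alice ", "amount": 10}],
           lambda: [{"name": "bob", "amount": 20}]]
loaded = load(transform(extract(sources)), warehouse)
```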
A NUMA topology has multiple nodes that are localized structures making up the physical host.
Each NUMA node has its own memory controllers and bus between the CPU and actual memory.
If a NUMA node wants to access nonlocal or remote memory, access times can be many times
slower than local memory access. Because SQL Server is NUMA-aware, it does most of its work
on a local node, preventing this remote memory access issue.
To reduce stress on the memory controllers and CPUs, ensure that memory is in a Balanced
configuration—where DIMMs across all memory channels and CPUs are populated identically.
Balanced memory configurations enable optimal interleaving across all memory channels,
maximizing memory bandwidth. To further optimize bandwidth, configure both memory
controllers on the same physical processor socket to match. For the best system-level memory
performance, ensure that each physical processor socket has the same physical memory
capacity.
If DIMMS with different memory capacities populate the memory channels attached to a memory
controller, or if different numbers of DIMMs with identical capacity populate the memory channels,
the memory controller must create multiple interleaved sets. Managing multiple interleaved sets
creates overhead for the memory controller, which in turn reduces memory bandwidth.
For example, the following image shows the relative bandwidth difference between Skylake and
Broadwell systems, each configured with six DIMMs per CPU.
Populating DIMMs across all channels delivers an optimized memory configuration. When
following this approach with Skylake CPUs, populate memory slots in sets of three per memory
controller. If you are populating a server that has two CPUs with 32 GB DIMMs, the six DIMMs
across all channels per CPU deliver a total host capacity of 384 GB, providing the highest level
of performance for that server. For a Broadwell system, a more optimized configuration has
either four or eight DIMMs (as opposed to the six in Skylake). If you are populating a two-socket
system with the same 32 GB DIMMs, eight DIMMs across all channels per CPU deliver a total
host capacity of 512 GB, which is a higher performance configuration.
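The capacity arithmetic above is straightforward; a small Python helper makes the two examples concrete:

```python
def host_memory_gb(sockets, dimms_per_cpu, dimm_size_gb):
    """Total host memory for a balanced, identical DIMM population."""
    return sockets * dimms_per_cpu * dimm_size_gb

# Skylake example from the text: 2 CPUs, 6 x 32 GB DIMMs per CPU.
skylake = host_memory_gb(2, 6, 32)    # 384 GB
# Broadwell example: 2 CPUs, 8 x 32 GB DIMMs per CPU.
broadwell = host_memory_gb(2, 8, 32)  # 512 GB
```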
Note: Each generation of CPUs may change the way it uses memory, so it’s
important to always check the documentation from the hardware vendor to validate
the configuration of the components.
The size of the CVM varies depending on the deployment; however, the typical size is 8 vCPUs
and 32 GB of memory. During initial cluster deployment, Nutanix Foundation automatically
configures the vCPU and memory assigned to the CVM. You should consult your Nutanix
partner and Nutanix Support if you need to make any changes to these values to ensure that the
configuration is supported.
vCPUs are assigned to the CVM and not necessarily consumed. If the CVM has a low-end
workload, it uses the CPU cycles it needs. If the workload becomes more demanding, the CVM
uses more CPU cycles. Think of the number of CPUs you assign as the maximum number of
CPUs the CVM can use, not the number of CPUs immediately consumed.
When you size any Nutanix cluster, ensure that there are enough resources in the cluster
(especially failover capacity) to handle situations where a node fails or goes down for
maintenance. Sufficient resources are even more important when you’re sizing a solution to host
a critical workload like Microsoft SQL Server.
From Nutanix AOS 5.11 onward, AHV clusters can include a new type of node that does not
run a local CVM. These compute-only (CO) nodes are for very specific use cases, so engage
with your Nutanix partner or Nutanix account manager to see if compute-only is advantageous
for your workload. Because standard nodes (running CVMs) provide storage from the DSF, CO
nodes cannot benefit from data locality.
Note: CO nodes are only available when running Nutanix AHV as the hypervisor.
• CPU package
The CPU package is the physical device that contains all the components and cores and sits
on the motherboard. Many different types of CPU packages are available, ranging from low-core, high-frequency models to high-core, low-frequency models. The right balance between
core and frequency depends on the requirements of the SQL Server workload, considering
factors such as the type of workload, licensing, or query (single-threaded or multithreaded).
There are typically two or more CPU packages in a physical host.
• Core
A CPU package typically has two or more cores. A core is a physical subsection of a
processing chip and contains one interface for memory and peripherals.
• Logical CPU (LCPU)
The LCPU construct presented to the host operating system is a duplication of the core
that allows it to run multiple threads concurrently using hyperthreading (covered in the next
section). Logical CPUs do not have characteristics or performance identical to the physical
core, so do not consider an LCPU as a core in its own right. For example, if a CPU package
has 10 cores, enabling hyperthreading presents 20 logical CPUs to the guest OS.
• Arithmetic logic unit (ALU)
The ALU is the heart of the CPU, responsible for performing mathematical operations on
binary numbers. Typically, the multiple logical CPUs on the core share one ALU.
• Hypervisor
In a virtualized environment, the hypervisor is a piece of software that sits between the
hardware and one or more guest operating systems. Each of these guest operating systems
can run its own programs, as the hypervisor presents to it the host hardware’s processor,
memory, and resources.
• vSocket
Each VM running on the hypervisor has a vSocket construct. vCPUs virtually “plug in” to
a vSocket, which helps present the vNUMA topology to the guest OS. There can be many
vSockets per guest VM, or just one.
• vCPU
Guest VMs on the hypervisor use vCPUs as the processing unit and map them to available
LCPUs through the hypervisor’s CPU scheduler. When determining how many vCPUs to
assign to a SQL Server, always size assuming 1 vCPU = 1 physical core. For example, if
the physical host contains a single 10-core CPU package, do not assign more than 10
vCPUs to the SQL Server. When considering how much to oversubscribe vCPUs in the virtual
environment, use the ratio of vCPU to cores, not logical CPUs.
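A quick Python sketch of that sizing rule; the 40-vCPU figure below is a hypothetical total allocation across all VMs on the host:

```python
def vcpu_to_core_ratio(total_assigned_vcpus, physical_cores):
    """Oversubscription ratio measured against physical cores,
    not logical CPUs, per the guidance above."""
    return total_assigned_vcpus / physical_cores

# Host with two 10-core packages: 20 physical cores, 40 logical CPUs
# with hyperthreading. Assigning 40 vCPUs is 2:1 against cores, even
# though it looks like 1:1 against logical CPUs.
ratio = vcpu_to_core_ratio(40, 20)  # 2.0
```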
In theory, because we ideally want to use all available processor cycles, we should use
hyperthreading all the time. However, in practice, using this technology requires some additional
consideration. There are two key issues when assuming that hyperthreading simply doubles the
amount of CPU on a system:
1. How the hypervisor schedules threads to the pCPU.
2. The length of time a unit of work stays on a processor without interruption.
Because a processor is still a single entity, appropriately scheduling processes from guest VMs
is critically important. For example, the system should not schedule two threads from the same
process on the same physical core. In such a case, each thread takes turns stopping the other
to change the core’s architectural state, with a negative impact on performance. SQL Server
complicates this constraint because it schedules its own threads; depending on the combination
of SQL Server and Windows OS versions in use, SQL Server may not be aware of the distinction
between physical cores or know how to handle this distinction properly. There is also a hypervisor
CPU scheduling function to keep in mind, which is abstracted from the guest OS altogether.
Because of this complexity, be sure to size for physical cores and not hyperthreaded cores when
sizing SQL Server in a virtual environment. For mission-critical deployments, start with no vCPU
oversubscription. SQL Server deployments—especially OLTP workloads—benefit from this
approach.
As a rule, if CPU ready time is over 5 percent for a given VM, there is cause for concern.
Always ensure that the size of the physical CPU and the guest VM is in line with SQL Server
requirements to avoid this problem.
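As a sketch of how to interpret that threshold, assuming vCenter's real-time 20-second (20,000 ms) sampling interval for the CPU ready summation counter:

```python
def cpu_ready_percent(ready_ms, interval_ms=20000, vcpus=1):
    """Convert a CPU ready summation (in ms) to a percentage.

    Assumes the 20,000 ms real-time sampling interval used by
    vCenter charts; divide by vCPU count for a per-vCPU figure.
    """
    return ready_ms / (interval_ms * vcpus) * 100

# A VM reporting 2,000 ms of ready time on 2 vCPUs in a 20 s sample
# sits right at the 5 percent threshold noted above.
pct = cpu_ready_percent(2000, vcpus=2)  # 5.0
```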
One of the key differences between Nutanix AHV and VMware ESXi or Microsoft Hyper-V is that
AHV does not support swapping VM memory to disk. With ESXi, it is possible to assign more
memory to all VMs on a host than there is physical host memory. This overcommitment can be
dangerous, as physical RAM is much faster than even the fastest storage available on Nutanix.
Similarly, with Hyper-V, technologies such as Dynamic Memory and Smart Paging allow the
system to share memory allocation between all guest VMs with physical disks as an overflow.
Memory Reservations
Because memory is the most important resource for SQL Server, do not oversubscribe memory
for this application (or for any business-critical applications). As mentioned earlier, Nutanix
AHV does not support memory oversubscription, so all memory is effectively reserved for all
VMs. When using VMware ESXi or Microsoft Hyper-V, ensure that the host memory is always
greater than the sum of all VMs. If there is memory contention on the hypervisor, guest VMs
could start to swap memory space to disk, which has a significant negative impact on SQL Server
performance.
If you are deploying SQL Server in an environment where oversubscription may occur, Nutanix
recommends reserving 100 percent of all SQL Server VM memory to ensure a consistent level
of performance. This reservation prevents the SQL Server VMs from swapping virtual memory to
disk; however, other VMs that don’t have this reservation may swap.
Although each hypervisor implements virtual networking constructs slightly differently, the basic
concepts remain the same.
• If VMs expect 20 Gbps of bandwidth and one of the links fails, performance drops by 50 percent.
• Consider using hypervisor-based network I/O control to ensure that transient high-throughput
operations such as vMotion or live migration do not negatively impact critical traffic.
• For VMware-based environments, consider using the VMware Distributed Switch. This
construct provides many benefits in a VMware environment, including the ability to use
technologies such as LACP and maintain TCP state during vMotion.
For more detailed information about networking in a Nutanix environment for supported
hypervisors, please refer to the following resources:
• Nutanix AHV Networking best practice guide
• vSphere Networking with Nutanix best practice guide
• Hyper-V 2016 Networking with Nutanix best practice guide
Although larger Ethernet frame sizes can be beneficial for transmitting data across the network,
one of the key drawbacks to using jumbo frames is that you must configure all network
components in the path—switch ports, NICs, virtual switches, guest operating systems, and even
SQL Server itself—with the appropriate MTU. You must configure any network components you
introduce (like additional hosts or replacement switches that don’t have a default of 9,000 bytes)
with the larger nonstandard MTU size for the network to work.
After configuring the network with an MTU of 9,000 bytes, Nutanix recommends configuring SQL
Server with a network packet size of 8,192 bytes. This payload, with Ethernet overhead, fits in the
TCP 8,972-byte packet. To modify this setting, navigate to the SQL Server Server Properties
menu, then to the Advanced page.
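The same setting can also be changed with T-SQL instead of the UI. A minimal sketch using sp_configure (network packet size is an advanced option, so advanced options must be made visible first):

```sql
-- Sketch: set SQL Server's network packet size to 8,192 bytes.
-- Run against the instance only after the jumbo frame MTU has been
-- configured and verified end to end on the network path.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'network packet size', 8192;
RECONFIGURE;
```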
Nutanix Volumes supports SCSI-3 persistent reservations for the shared storage used by SQL
Server failover cluster instances (FCIs). Nutanix manages storage allocation and assignment
for Volumes through a construct called a volume group (VG). A VG is a collection of virtual disks
(or vDisks) presented as available block devices using the iSCSI protocol. All Nutanix CVMs in
a cluster participate in presenting these VGs, creating a highly redundant and scalable block
storage solution. When you create a Windows failover cluster for SQL Server, you need Nutanix
Volumes because Windows failover clusters require shared storage, but when you deploy SQL
Always On availability groups (AAGs), you don’t need Volumes because AAGs don’t require
shared storage.
Nutanix Volumes seamlessly manages failure events and automatically load balances iSCSI
clients to take advantage of all cluster resources. iSCSI redirection controls target path
management for vDisk load balancing and path resiliency. Instead of configuring host iSCSI client
sessions to connect directly to all CVMs in the Nutanix cluster, Volumes uses the Data Services
IP. This design allows administrators to add Nutanix nodes to the cluster in a nondisruptive
manner without needing to update every iSCSI initiator.
For more details on Nutanix Volumes, please refer to the Nutanix Volumes best practice guide.
A volume group directly connected to an AHV VM exhibits the same behavior as a volume group
presented via iSCSI, with one exception: with VGs, vDisks are not load balanced across all
CVMs in the cluster by default. This lack of load balancing is equivalent to normal VMs running
on an AHV host and is not a concern for most workloads. Nutanix recommends starting with
the Volume Groups with Load Balancing (VGLB) feature disabled. Nutanix data locality and a
correctly sized SSD tier provide optimal performance for most SQL Server workloads without
enabling VGLB.
There may be instances where VGLB can improve performance, especially with large workloads
where the size of the dataset is greater than the size of the underlying host or where throughput
is higher than what a single node can provide, like OLAP and DSS workloads. Particularly when
physical systems with a small number of drives are hosting VMs and volume groups with a high
number of vDisks, the solution is to use VGLB. This feature is not enabled with VGs by default,
so you must enable it on a per-VM basis.
As illustrated in the previous image, enabling VGLB removes the bottleneck of the local Nutanix
CVM, so all CVMs can participate in presenting primary read and write requests. While this
approach can provide great performance benefits overall, using VGLB adds a small amount
of I/O latency because some data locality is lost when the vDisks shard across the CVMs on
the cluster. In the right use case, this cost is generally worth it, as VGLB gives the system a far
greater amount of available bandwidth and CPU resources to serve I/O.
Nutanix AHV does not use the construct of a virtual storage controller. However, for VMware ESXi and Microsoft Hyper-V, ensure there are multiple virtual SCSI adapters for SQL Server.
As a starting point for storage, Nutanix recommends the following:
• Separate drives for the OS and SQL Server binaries (and backup or restore drive, if used).
Place these drives together on the same virtual storage controller (for example, controller 0).
• Two drives for database data files, with each drive containing two data files. Place these
drives together on the same virtual storage controller but ensure that it’s a different virtual
storage controller than the one containing the OS and binaries (for example, controller 1).
Read more in the section titled SQL Server Data Files.
• Two TempDB data drives, with one TempDB data file per vCPU across the drives, up to eight
TempDB data files. Place these vDisks on their own virtual storage controller (for example,
controller 2). Read more in the section titled TempDB Sizing and Placement.
• One drive for database log files and another drive for the TempDB log file, with both drives on
their own virtual storage controller (for example, controller 3). Read more in the section titled
SQL Server Log Files.
• Ensure that user databases and system databases are on separate drives.
The following diagram shows the suggested initial drive layout for SQL Server VMs on Nutanix.
In addition to providing performance and scalability, maintaining this separation helps with
manageability, as potential problems are easier to isolate. For example, when you separate
TempDB onto its own disks, you can configure the files to grow and fill the disk without worrying
about space requirements for other files (within certain limits, and filling the disk to 100 percent
capacity still causes issues for TempDB). The more separation you can build into the solution, the
easier it is to correlate potential disk performance issues with specific database files.
In a VMware environment, Nutanix recommends using the VMware paravirtual (PVSCSI)
controller for all data and log drives. For the OS drive, it is simpler to keep the default LSI SAS
adapter; otherwise, the system must load the PVSCSI drivers to allow the OS to boot, which you
can do when you create the Windows template.
In a Hyper-V environment, if you are using Generation 1 VMs, use SCSI disks for all drives
except the OS. For Generation 2 VMs, use SCSI disks for all drives including the OS.
For versions prior to SQL Server 2016, apply trace flag 1117 at SQL Server startup.
To avoid unnecessary complexity, add files to databases only if there is database page I/O latch
contention. To look for contention, monitor the PAGEIOLATCH_XX values and spread the data
across more files as needed. Several other factors, such as memory pressure, can also cause
PAGEIOLATCH_XX latency, so investigate the situation thoroughly before adding files.
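As a starting point for that monitoring, here is a sketch of a wait-stats query against the sys.dm_os_wait_stats DMV (values are cumulative since the last service restart):

```sql
-- Sketch: inspect cumulative PAGEIOLATCH_XX waits. High counts and
-- rising average waits may justify adding data files, but rule out
-- memory pressure and other causes first, as noted above.
SELECT wait_type,
       waiting_tasks_count,
       wait_time_ms,
       wait_time_ms / NULLIF(waiting_tasks_count, 0) AS avg_wait_ms
FROM sys.dm_os_wait_stats
WHERE wait_type LIKE 'PAGEIOLATCH%'
ORDER BY wait_time_ms DESC;
```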
Unlike transaction log files, SQL Server accesses data files in parallel, and access patterns can
be highly random. Spreading large databases across several vDisks can improve performance.
It is best to create multiple data files before writing data to the database so that SQL Server’s
proportional fill algorithm spreads data evenly across all data files from the beginning.
You should only use OS-level volume managers (dynamic disks, Storage Spaces) as a last resort after careful testing. Using multiple data files across multiple disks is the recommended
and default approach. Also, do not use autoshrink on database files, as the resulting
fragmentation can reduce performance.
It is not uncommon to have SQL Server host several small databases, none of which are
particularly I/O intensive. In this case, the design could have only one or two data files per
database. These files could easily fit on one or two disks and not spread over multiple disks. The
goal with disk and file design is to keep the solution as simple as possible while meeting business
and performance requirements.
The size and autogrowth settings for TempDB data files are also important. SQL Server allocates
space in TempDB data files proportionally based on their size, so it’s important to create
all TempDBs to be the same size. Size them properly up front to accommodate application
requirements; proper sizing is generally between 1 percent and 10 percent of the database
size. During a proof-of-concept deployment, monitor TempDB size and use the high-water mark
as the starting point for the production sizing. If you need more TempDB space in the future,
grow all data files equally. You can rely on autogrowth, but autogrowth can grow the data files to a size beyond the free space available on the disk, so we don't recommend relying solely on autogrowth.
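The sizing rules above (one data file per vCPU up to eight, all files the same size, total TempDB space of 1 to 10 percent of database size) can be sketched in Python; the 10 percent default below is an assumption at the top of that range:

```python
def tempdb_layout(vcpus, user_db_size_gb, tempdb_fraction=0.10):
    """Sketch of the TempDB sizing rules of thumb.

    One data file per vCPU up to eight files, all equally sized,
    with total TempDB capacity a fraction of the database size.
    """
    file_count = min(vcpus, 8)           # cap at eight data files
    total_gb = user_db_size_gb * tempdb_fraction
    return file_count, round(total_gb / file_count, 2)

# A 4-vCPU VM hosting a 500 GB database: 4 equal files of 12.5 GB.
files, size_gb = tempdb_layout(4, 500)
```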
Nutanix recommends starting with two TempDB data drives and one TempDB log drive. This
arrangement should suffice for most SQL Server workloads. Use trace flag 1118 to avoid mixed
extent sizes. When using trace flag 1118, SQL Server uses only full extents. SQL Server 2016
automatically enables the behavior of trace flag 1118 for both TempDB and user databases.
There is no downside involved in using this setting for SQL Server versions prior to 2016.
The default setting for maximum server memory is effectively unlimited (specifically, 2,147,483,647 MB). If you don't change this value, SQL Server tries to consume all the memory in the VM, which can have a negative impact on both SQL Server and the Windows OS.
As a rule of thumb, leave between 6 and 8 GB of memory for the OS and assign the remaining
memory to SQL Server. If you need a more specific sizing guide, use the following formula to
determine the optimal value for maximum server memory based on the VM configuration.
The following table is a quick reference for setting maximum server memory for a 64-bit OS.
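As a sketch of the rule of thumb above (the 8 GB OS reserve is an assumption at the top of the 6 to 8 GB range, and the document's exact formula and table values may differ):

```python
def max_server_memory_mb(vm_memory_gb, os_reserve_gb=8):
    """Rule-of-thumb maximum server memory.

    Leaves 6-8 GB for the Windows OS (8 GB assumed here) and assigns
    the remainder to SQL Server, in MB as the setting expects.
    """
    return (vm_memory_gb - os_reserve_gb) * 1024

# A 64 GB VM: reserve 8 GB for the OS, give SQL Server 57,344 MB.
setting = max_server_memory_mb(64)  # 57344
```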
You must manually reconfigure SQL Server maximum memory any time the memory assigned
to the VM changes. You must increase this parameter to allow SQL Server to take advantage of
memory added to the VM, and you must reduce this parameter to leave enough memory for the
Windows OS if memory is removed from the VM. When using large pages with the Lock pages in
memory local security policy, restart the SQL Server service for the new value to take effect; this
process incurs downtime.
AAGs can also serve as a built-in disaster recovery solution, if you replicate an AAG to a second
or third datacenter. Ensure that you have sufficient bandwidth to keep up with the replication
requirements and that latencies are acceptable.
When deploying FCIs or AAGs, use hypervisor antiaffinity rules to enforce placement of the
nodes on different physical hosts. This placement provides resiliency in the case of an unplanned
host failure. When designing your hypervisor clusters, allow for at least n + 1 availability. With this
allowance, VMs can continue with adequate resources, even if the system loses one host.
If using VMware HA, review the restart priority of VMs. You may want to increase the SQL Server
priority to high for production databases. The best priority setting depends on what other services
are running in your cluster.
Nutanix supports the use of FCIs across ESXi, Hyper-V, and AHV if you use in-guest iSCSI.
Hyper-V environments can also take advantage of the shared virtual hard disk functionality in
Windows Server 2012 R2.
In a scale-out infrastructure, to make use of the architecture, use multiple I/O data streams and
multiple backup streams. Here’s an example script that configures a backup of a SQL Server DB
to write to multiple files.
BACKUP DATABASE [ntnxdb] TO
DISK = N'R:\SQLBACK\ntnxdb-tpcc-SQL2014-1000wh-01.bak',
DISK = N'R:\SQLBACK\ntnxdb-tpcc-SQL2014-1000wh-02.bak',
DISK = N'R:\SQLBACK\ntnxdb-tpcc-SQL2014-1000wh-03.bak',
DISK = N'R:\SQLBACK\ntnxdb-tpcc-SQL2014-1000wh-04.bak'
WITH DESCRIPTION = N'ntnxdb',
NOFORMAT, NOINIT, NAME = N'ntnxdb',
SKIP, NOREWIND, NOUNLOAD, COMPRESSION,
STATS = 5
GO
8.1. General
• Perform a current state analysis to identify workloads and sizing.
• Spend time up front to architect a solution that meets both current and future needs.
• Design to deliver consistent performance, reliability, and scale.
• Don’t undersize, don’t oversize—right size.
• Start with a proof of concept, then test, optimize, iterate, and scale.
8.5. TempDB
• Use multiple TempDB data files, all the same size.
• Use autogrow on TempDB files with caution to avoid situations where files use 100 percent of
the disk that hosts the log files.
• If there are eight or fewer cores, the number of TempDB data files is equal to the number of
cores.
• If there are more than eight cores, start with eight TempDB data files and monitor for
performance.
• Initially size TempDB to be 1 to 10 percent of database size.
• Use trace flag 1118 to avoid mixed extent sizes (full extents only). SQL Server 2016
automatically enables the behavior of trace flag 1118.
• One TempDB log drive should be sufficient for most environments.
8.6. VMware
• Use the VMXNET3 NIC.
• Use the latest VMware VM hardware version.
• Use the PVSCSI controller when possible.
• Remove unneeded hardware (floppy drive, serial port, and so on).
• Do not enable CPU hot-add, as this disables vNUMA.
8.7. RAM
• More RAM can increase SQL Server database read performance.
• Enable large page allocations when using 8 GB of memory or more.
• Do not overcommit RAM at the hypervisor host level.
• For tier-1 databases, reserve 100 percent of RAM.
• Configure SQL Server maximum memory per previous section guidance.
• For tier-1 workloads, lock pages in memory.
• Size each VM to fit within a NUMA node’s memory footprint.
8.8. vCPUs
• Do not overallocate vCPUs to VMs.
• For tier-1 databases, minimize or eliminate CPU oversubscription.
• Account for Nutanix CVM core usage.
• If possible, size VMs to fit within one NUMA node.
8.9. Networking
• Use hypervisor network control mechanisms (for example, VMware NIOC).
• Use VMware load-based teaming with the vDS.
• Connect Nutanix nodes with redundant 10 Gbps connections.
• Use multi-NIC vMotion or Live Migration.
8.13. Monitoring
• Choose an enterprise monitoring solution for all SQL Servers.
• Closely monitor drive space.
8.14. Manageability
• Standardize on SQL Server build and cumulative updates.
• Use standard drive letters or mount points.
• Use VM templates.
• Join the SQL Server to the domain and use Windows authentication.
• Use Windows cluster-aware updating for AAG instances.
• Test patches and roll them out in a staggered manner during maintenance windows.
• Look into using SQL Server compression either at the row or page level, depending on which
one provides the best savings based on the workload. Be aware of licensing.
9. Conclusion
Microsoft SQL Server deployments are crucial to organizations, as they are used in everything
from departmental databases to business-critical workloads, including ERP, CRM, and BI. At the
same time, enterprises are virtualizing SQL Server to shrink their datacenter footprint, control
costs, and accelerate provisioning. The Nutanix platform provides the ability to:
• Consolidate all types of SQL Server databases and VMs onto a single converged platform
with excellent performance.
• Start small and scale databases as your needs grow.
• Eliminate planned downtime and protect against unplanned issues to deliver continuous
availability of critical databases.
• Reduce operational complexity by leveraging simple, consumer-grade management with
complete insight into application and infrastructure components and performance.
• Keep pace with rapidly growing business needs, without large up-front investments or
disruptive forklift upgrades.
Appendix
Resources
SQL Server Licensing
1. Microsoft product licensing for SQL Server
Nutanix Networking
1. Nutanix AHV Networking best practice guide
2. vSphere Networking with Nutanix best practice guide
3. Hyper-V 2016 Networking with Nutanix best practice guide
About the Authors
Derek Seaman is a Customer Success Enterprise Architect at Nutanix and a Nutanix Platform Expert (NPX-014). Follow Derek on Twitter @vDereks.
About Nutanix
Nutanix makes infrastructure invisible, elevating IT to focus on the applications and services that
power their business. The Nutanix Enterprise Cloud OS leverages web-scale engineering and
consumer-grade design to natively converge compute, virtualization, and storage into a resilient,
software-defined solution with rich machine intelligence. The result is predictable performance,
cloud-like infrastructure consumption, robust security, and seamless application mobility for a
broad range of enterprise applications. Learn more at www.nutanix.com or follow us on Twitter
@nutanix.
List of Figures

Figure 1: Nutanix Enterprise Cloud

List of Tables

Table 1: Document Version History