Best Practices SQL Server For OpenText Content Server 10.5
Version: 10.5
Task/Topic: Deployment, Administration, Performance
Audience: Administrators
Platform: SQL Server 2012, 2014
Document ID: 500227
Updated: September 28, 2016
Best Practices
Microsoft® SQL Server for OpenText™
Content Server 10.5™
John Postma, Director, Common Engineering
Audience
The document is intended for a technical audience that is planning an
implementation of OpenText™ products. OpenText recommends consulting with
OpenText Professional Services who can assist with the specific details of
individual implementation architectures.
Disclaimer
The tests and results described in this document apply only to the OpenText
configuration described herein. For testing or certification of other configurations,
contact OpenText Corporation for more information.
All tests described in this document were run on equipment located in the
OpenText Performance Laboratory and were performed by the OpenText
Performance Engineering Group. Note that using a configuration similar to that
described in this document, or any other certified configuration, does not
guarantee the results documented herein. There may be parameters or variables
that were not contemplated during these performance tests that could affect
results in other test environments.
For any OpenText production deployment, OpenText recommends a rigorous
performance evaluation of the specific environment and applications to ensure
that there are no configuration or custom development bottlenecks present that
hinder overall performance.
Executive Summary
This white paper is intended to explore aspects of Microsoft® SQL Server which may
be of value when configuring and scaling OpenText Content Server™ 10.5. It is
relevant to SQL Server 2014 and 2012 in particular, and is based on customer
experiences, performance lab tests with a typical document management workload,
and technical advisements from Microsoft.
Most common performance issues can be solved by ensuring that the hardware used
to deploy SQL Server has sufficient CPU, RAM and fast I/O devices, properly
balanced.
Topics here explore non-default options available when simple expansion of
resources is ineffective, and discuss some best practices for administration of
Content Server’s database. It concentrates on non-default options, because in
general, as a recommended starting point, Content Server on SQL Server
installations uses Microsoft’s default deployment options. Usage profiles vary widely,
so any actions taken based on topics discussed in this paper must be verified in your
own environment prior to production deployment, and a rollback plan must be
available should adverse effects be detected.
These recommendations are not intended to replace the services of an experienced
and trained SQL Server database administrator (DBA), and do not cover standard
operational procedures for SQL Server database maintenance, but rather offer
advice specific to Content Server on the SQL Server platform.
This document opens with a brief section on how to monitor Content Server SQL
Server database performance, and then makes recommendations on specific tuning
parameters.
Monitoring and Benchmarking
To conduct a comprehensive health and performance check of OpenText Content
Server on SQL Server, you should collect a number of metrics for a pre-defined
“monitored period”. This monitored period should represent a reasonably typical
usage period for the site, and include the site’s busiest hours. Performing a
benchmark establishes a baseline of expected response times and resource usage
for typical and peak loads in your Content Server environment. You can use the
baseline to identify areas for potential improvement and for comparisons to future
periods as the site grows, and you apply hardware, software, or configuration
changes.
Benchmark
Collect the following as the basis for a benchmark and further analysis of worst-
performing aspects:
• Collect operating-system and resource-level operating statistics, including CPU,
RAM, I/O, and network utilization on the database server. How you collect these
statistics depends on the hardware and operating system that you use, and the
monitoring tools that are available to you. Performance Monitor (perfmon) is a
tool that is natively available on Windows servers. If you use perfmon, include
the counters in the following table as a minimum. Consider also using the PAL
tool with SQL Server 2012 threshold file to generate a perfmon template
containing relevant SQL Server performance counters and to analyze captured
perfmon log files:
Memory: Pages/sec, Pages Input/sec, Available MBytes.
In general, available memory should not drop below 5% of physical
memory. Depending on disk speed, pages/sec should remain below 200.

Physical Disk: Track the following counters per disk or per partition: % Idle
Time, Avg. Disk Read Queue Length, Avg. Disk Write Queue Length,
Avg. Disk sec/Read, Avg. Disk sec/Write, Disk Reads/sec,
Disk Writes/sec, Disk Write Bytes/sec, and Disk Read Bytes/sec.
In general, % Idle Time should not drop below 20%. Disk queue
lengths should not exceed twice the number of disks in the array.
Disk latencies vary based on the type of storage. General guidelines:
Reads: Excellent < 8 msec, Good < 12 msec, Fair < 20 msec, Poor > 20 msec;
Non-cached writes: Excellent < 8 msec, Good < 12 msec, Fair < 20 msec, Poor > 20 msec;
Cached writes: Excellent < 1 msec, Good < 2 msec, Fair < 4 msec, Poor > 4 msec.
Also review virtual file latency data from the
sys.dm_io_virtual_file_stats Dynamic Management View
(DMV), which shows I/O requests and latency per data/log file.

SQL Server Counters: The SQL Server Buffer cache hit ratio should be > 90%. In
OLTP applications, this ratio should exceed 95%. Use the PAL tool
SQL Server 2012 template for additional counters and related thresholds.
• Note any Windows event log errors present after or during the monitored period.
• Generate summary timing logs while you collect the operating-system statistics
noted above. In addition, generate at least one day of Content Server connect
logs during the larger period covered by the summary timings, during as typical a
period of activity as possible.
Note that connect logging requires substantial space. Depending on the activity
level of the site, your connect log files may be 5 to 10 GB, so adequate disk
space should be planned. Content Server logs can be redirected to a different file
system if necessary. There is also an expected performance degradation of 10%
to 25% while connect logging is on. If the system is clustered, you should enable
connect logging on all front-end nodes.
• Collect SQL Server profiling events to trace files for periods of three to four hours
during core usage hours that fall within the monitored period. Use the Tuning
template to restrict events captured to Stored Procedures –
RPC:Completed and TSQL—SQL:BatchCompleted. Ensure data columns
include Duration (data needs to be grouped by duration), Event Class, Textdata,
CPU, Writes, Reads, SPID. Don’t collect system events, and filter to only the
Content Server database ID. If the site is very active, you may also want to filter
duration > 2000 msec to limit the size of the trace logs and reduce overhead. You
can use SQL Server Extended Events (new in version 2008, and with new GUI
tool in 2012) to monitor activity. They are intended to replace the SQL Profiler,
provide more event classes, and cause less overhead on the server. For more
information, see an overview of SQL Server Extended Events and a guide to
converting existing SQL Profiler traces to the new extended events format on the
Microsoft Developer Network.
• Obtain the results of a Content Server Level 5 database verification report (run
from the Content Server Administration page, Maintain Database section). To
speed up the queries involved in this verification, ensure there is an index
present on DVersData.ProviderID. Note that for a large site this may take
days to run. If there is a period of lower activity during the night or weekends, that
would be an ideal time to run this verification.
• Gather feedback from Content Server business users that summarizes any
current performance issues or operational failures that might be database-
related.
Performance Dashboard
The performance dashboard offers real-time views of system activity and wait states,
lets you drill down on specific slow or blocking queries, and provides historical
information on waits, I/O stats, and expensive queries. It also shows active traces
and reports on missing indexes. (Note, however, that this is based on single SQL
statements, not on overall database load, so you must consider it from a wider
perspective.)
Download the SQL Server 2012 Performance Dashboard reports installer. It works
with SQL Server 2012 and 2014.
Management Data Warehouse
In SQL Server 2008 and later, you can use the Management Data Warehouse to
collect performance data on system resources and query performance, and to report
historical data. Disk usage, query activity, and server activity are tracked by default;
user-defined collections are also supported. A set of graphical reports show data from
the collections and allows you to drill down to specific time periods. For more
information see the Microsoft Technet article, SQL Server 2008 Management Data
Warehouse.
Also, in SQL 2008 and later, the Management Studio has been enhanced with an
Activity monitor for real-time performance monitoring.
SQL Server Setup Best Practices
OpenText recommends that you install and configure SQL Server following
Microsoft’s recommendations for best performance. This section covers many SQL
Server settings, and refers to Microsoft documentation where applicable.
In addition to configuring the settings described in this section, OpenText
recommends that you install the latest SQL Server Service Pack that is supported by
your Content Server Update level. (Check the release notes for your Content Server
version.)
Maximum Degree of Parallelism (MaxDOP)
Description Controls the maximum number of processors that are used for the
execution of a query in a parallel plan
(http://support.microsoft.com/kb/2806535 ).
Parallelism is often beneficial for longer-running queries or for
queries that have complicated execution plans. However, OLTP-
centric application performance can suffer, especially on higher-end
servers, when the time that it takes SQL Server to coordinate a
parallel plan outweighs the advantages of using one.
Default 0 (unlimited)
Recommendation Consider modifying the default value when SQL Server experiences
excessive CXPACKET wait types.
For non-NUMA servers, set MaxDOP no higher than the number of
physical cores, to a maximum of 8.
For NUMA servers, set MaxDOP to the number of physical cores per
NUMA node, to a maximum of 8.
Note: Non-uniform memory access (NUMA) is a processor
architecture that divides system memory into sections that are
associated with sets of processors (called NUMA nodes). It is
meant to alleviate the memory-access bottlenecks that are
associated with SMP designs. A side effect of this approach is that
each node can access its local memory more quickly than it can
access memory on remote nodes, so you can improve performance
by ensuring that threads run on the same NUMA node.
Also see the Cost Threshold for Parallelism section for related
settings that restrict when parallelism is used, to allow best
performance with Content Server.
Note: Any value that you consider using should be thoroughly
tested against the specific application activity or pattern of queries
before you implement that value on a production server.
Notes Several factors can limit the number of processors that SQL Server
will utilize, including:
• licensing limits related to the SQL Server edition
• custom processor affinity settings and limits defined in a
Resource Governor pool.
These factors may require you to adjust the recommended MaxDOP
setting. See related reference items in Appendix A – References for
background information.
See Appendix B – Dynamic Management Views (DMVs) for
examples of monitoring SQL Server wait types.
Permissions To change this setting, you must have the alter settings
server-level permission.
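As a sketch only (not an OpenText-mandated command), MaxDOP can be changed with sp_configure. The value 8 below assumes a non-NUMA server with at least eight physical cores; adapt it to your own hardware and test before applying in production:

```
-- Illustrative only: set MaxDOP to 8, assuming a non-NUMA server
-- with 8 or more physical cores. Test before use in production.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism', 8;
RECONFIGURE;
```

On a NUMA server, substitute the number of physical cores per NUMA node (to a maximum of 8), as recommended above.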
tempdb Configuration
Description The tempdb is a global resource that stores user objects (such as
temp tables) and internal objects (such as work tables, work files,
and intermediate results for large sorts and index builds). When
snapshot isolation is used, the tempdb stores the before images of
rows that are being modified, to allow for row versioning and
consistent committed read access.
Notes As with MaxDOP, be mindful of factors that can limit the number of
processors SQL Server will utilize, and set the number of tempdb
data files appropriately.
Monitor the space used in and the growth of tempdb, and
adjust tempdb size as needed.
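As an illustrative sketch (file names, paths, and sizes are assumptions, not OpenText-prescribed values), tempdb files can be pre-sized and additional data files added with ALTER DATABASE:

```
-- Illustrative only: pre-size the primary tempdb data file and add a
-- second data file. Match paths and sizes to your own storage layout.
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, SIZE = 4096MB, FILEGROWTH = 512MB);
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2, FILENAME = 'T:\tempdb\tempdev2.ndf',
          SIZE = 4096MB, FILEGROWTH = 512MB);
```

A restart is not required for new tempdb files, but size changes to existing files take effect when tempdb is recreated at the next service restart.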
Instant File Initialization
Notes Microsoft states that, because deleted disk data is overwritten only
when data is written to files, an unauthorized principal who gains
access to data files or backups may be able to access deleted
content. Ensure that access to these files is secured, or disable this
setting when potential security concerns outweigh the performance
benefit.
If the database has Transparent Data Encryption enabled, it
cannot use instant initialization.
Permissions To set this user right for the SQL Server service, you must have
administrative rights on the Windows server.
Permissions To set this user right for the SQL Server service, you must have
administrative rights on the Windows server.
Default The default setting for min server memory is 0, and the default
setting for max server memory is 2,147,483,647 MB. SQL
Server dynamically determines how much memory it will use, based
on current activity and available memory.
Permissions To change this setting, you must have the alter settings
server-level permission.
Antivirus Software
Description Antivirus software scans files and monitors activity to prevent,
detect, and remove malicious software. Guidelines for antivirus
software configuration are provided in the Microsoft support article,
How to choose antivirus software to run on computers that are
running SQL Server.
Recommendation Exclude all database data and log files from scanning (including
tempdb). Exclude the SQL Server engine process from active
monitoring.
Notes Follow the Microsoft support article for SQL Server version-specific
details.
Notes Also, see the sections on tempdb, Database Data, Log File Size,
and AutoGrowth for other recommendations related to data and log
files.
Locking
Transaction Isolation
Description When snapshot isolation is enabled, all statements see a snapshot
of data as it existed at the start of the transaction. This reduces
blocking contention and improves concurrency since readers do not
block writers and vice-versa, and also reduces the potential for
deadlocks. See the MSDN article, Snapshot Isolation in SQL
Server.
Permissions To change this setting, you must have alter permission on the
database.
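As a hedged sketch, snapshot-based isolation is enabled per database with ALTER DATABASE; the database name below is illustrative, and the exact options appropriate for Content Server should be confirmed against OpenText guidance for your version:

```
-- Illustrative only: enable snapshot-based isolation for a database.
-- Run during a maintenance window; ROLLBACK IMMEDIATE disconnects
-- active sessions so the option change can complete.
ALTER DATABASE ContentServerDB SET ALLOW_SNAPSHOT_ISOLATION ON;
ALTER DATABASE ContentServerDB SET READ_COMMITTED_SNAPSHOT ON
    WITH ROLLBACK IMMEDIATE;
```

Note that row versioning increases tempdb usage (see the tempdb Configuration section), so monitor tempdb after enabling it.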
Lock Escalation
Description Some bulk operations, such as copying or moving a large subtree,
or changing permissions on a tree, can cause SQL Server resource
thresholds to be exceeded. Lock escalation is triggered when one of
the following conditions exists:
• A single Transact-SQL statement acquires at least 5,000
locks on a single non-partitioned table or index.
• A single Transact-SQL statement acquires at least 5,000
locks on a single partition of a partitioned table and the
ALTER TABLE SET LOCK_ESCALATION option is set to
AUTO.
• The number of locks in an instance of the Database
Engine exceeds memory or configuration thresholds. (The
thresholds vary depending on memory usage and the
Locks server setting).
Although escalation to a coarser-grained lock can free
resources, it also affects concurrency: other sessions
accessing the same tables and indexes can be put in a wait state,
which degrades performance.
Default Locks setting is 0, which means that lock escalation occurs when
the memory used by lock objects is 24% of the memory used by the
database engine.
All objects have a default lock escalation value of table, which
means that, when lock escalation is triggered, it is done at the table
level.
Notes For a description of the Lock Escalation process in SQL Server, see
the Microsoft Technet article, Lock Escalation (Database Engine).
SQL Server Configuration Settings
Global server settings that affect all databases on an instance.
Cost Threshold for Parallelism
Description Specifies the estimated query cost at which SQL Server considers
using a parallel plan; queries with a lower estimated cost run serially.
Default 5
Recommendation Content Server mainly issues small OLTP-type queries where the overhead of
parallelism outweighs the benefit, but it does issue a small number of longer queries
that may run faster with parallelism. OpenText recommends that you increase the cost
threshold setting in combination with configuring the Maximum Degree of Parallelism
(MaxDOP) setting as recommended in this white paper. This reduces the overhead for
smaller queries, while still allowing longer queries to benefit from parallelism.
The optimal value depends on a variety of factors including hardware capability and
load level. Load tests in the OpenText performance lab achieved improved results with
a cost threshold of 50, and that may be a reasonable setting to start with. Monitor the
following and adjust the cost threshold as needed:
• CXPACKET wait type: when a parallel plan is used for a query there is some
overhead coordinating the threads that are tracked under the CXPACKET
wait. It’s normal to have some CXPACKET waits when parallel plans are
used, but if it is one of the highest wait types, further changes to this setting
may be warranted. See Appendix B – Dynamic Management Views (DMVs)
for examples of querying DMVs for wait info.
• See Appendix B – Dynamic Management Views (DMVs) for examples of
querying DMVs for queries that use parallelism.
• THREADPOOL wait type: If many queries are using a parallel plan, there can
be periods when SQL Server uses all of its available worker threads; time
spent by a query waiting for an available worker thread is tracked under the
THREADPOOL wait type. If this is one of the highest wait types, it may be an
indication that too many queries are using parallel plans, and that cost
threshold for parallelism should be increased, or maximum worker threads
increased (only consider increasing maximum worker threads on systems that
are not experiencing CPU pressure). However, note that there can be other
causes for an increase in this wait type (blocked queries or long-running
queries), so it should only be considered in combination with a more
comprehensive view of query performance and locking.
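The lab-tested starting value of 50 mentioned above can be applied with sp_configure; this is a sketch, and the value should be tuned against your own wait statistics:

```
-- Illustrative only: raise the cost threshold for parallelism to 50,
-- the starting point suggested by OpenText lab tests. Tune per workload.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'cost threshold for parallelism', 50;
RECONFIGURE;
```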
Permissions To change this setting, you must have the alter settings server-level permission.
Optimize for Ad hoc Workloads
Description Available in SQL Server 2008 and later, this ad hoc caching
mechanism can reduce stress on memory-bound systems. It
caches a stub of the query plan, and stores the full plan only if a
query is issued more than once. This prevents the cache from being
dominated by plans that are not reused, freeing space for more
frequently accessed plans.
Turning this on does not affect plans already in the cache, only new
plans created after enabling the setting.
Default Off
Recommendation When there is memory pressure, and the plan cache contains a
significant number of single-use plans, enable this setting.
Monitoring Check the portion of the plan cache used by single use queries:
SELECT objtype AS [CacheType],
    count_big(*) AS [Total Plans],
    sum(cast(size_in_bytes AS decimal(18,2)))/1024/1024 AS [Total MBs],
    avg(usecounts) AS [Avg Use Count],
    sum(cast((CASE WHEN usecounts = 1 THEN size_in_bytes ELSE 0 END)
        AS decimal(18,2)))/1024/1024 AS [Total MBs - USE Count 1],
    sum(CASE WHEN usecounts = 1 THEN 1 ELSE 0 END)
        AS [Total Plans - USE Count 1]
FROM sys.dm_exec_cached_plans
GROUP BY objtype
ORDER BY [Total MBs - USE Count 1] DESC
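If the monitoring query shows that single-use plans dominate the cache, the setting can be enabled with sp_configure; a sketch follows:

```
-- Illustrative only: enable the ad hoc plan-stub caching behavior.
-- Affects only plans compiled after the change.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'optimize for ad hoc workloads', 1;
RECONFIGURE;
```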
Recommendation Consider enabling this flag if latch waits on pages in tempdb cause
long delays that are not resolved by the recommendations in the
tempdb section.
AlwaysOn Availability Groups
Description First introduced in SQL Server 2012, AlwaysOn Availability
Groups are a high-availability and disaster recovery solution that
supports a failover environment for a set of user databases. For
more information, see the MSDN article, AlwaysOn Availability
Groups (SQL Server).
Default Disabled
Content Server Database Settings
These settings are specific to the Content Server database.
Compatibility Level
Description The database compatibility level sets certain database behaviors to
be compatible with the specified version of SQL Server.
Default The compatibility level for newly created databases is the same as
the model database which, by default, is the same as the installed
version of SQL Server.
When you upgrade the database engine, the compatibility level of user
databases is not altered unless it is lower than the minimum
supported level. Restoring a database backup to a newer version also
does not change its compatibility level.
Recommendation Using the latest compatibility mode allows the Content Server
database to benefit from all performance improvements in the
installed SQL Server version.
OpenText recommends that you set this equal to the version of SQL
Server that is installed.
When you change the compatibility level of the Content Server
database, be sure to update statistics on the database after making
the change.
NOTE: With SQL Server 2014, as per this technical alert, you must
use trace flag 9481 if the Content Server database compatibility
level is set to SQL 2014 (120), or leave the compatibility level set to
SQL 2012 (110).
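As a sketch (the database name is illustrative), the compatibility level is set with ALTER DATABASE, after which statistics should be updated as recommended above:

```
-- Illustrative only: set the compatibility level (here SQL 2012 / 110,
-- per the technical alert above), then refresh statistics.
USE ContentServerDB;
GO
ALTER DATABASE ContentServerDB SET COMPATIBILITY_LEVEL = 110;
GO
EXEC sp_updatestats;
```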
Clustered Indexes
Description Clustered indexes store data rows for the index columns in sorted
order. In general, the primary key or the most frequently used index
on each table is a good candidate for a clustered index. This is
especially important for key highly-active core tables. Only one
clustered index can be defined per table.
Default In Content Server 10.5 and later, many tables in the Content Server
database have a clustered index.
Table and Index Fragmentation, Fill factor
Description As data is modified, index and table pages can become fragmented,
leading to reduced performance. You can mitigate this by regularly
reorganizing or rebuilding indexes that have fragmentation levels
above a certain threshold.
Fragmentation can be avoided, or reduced, by setting a fill factor for
indexes. This leaves space for the index to grow without needing
page splits that cause fragmentation. This is a tradeoff, because
setting a fill factor leaves empty space in each page, consuming
extra storage space and memory.
For more information, see the Microsoft Developer Network topic
Reorganize and Rebuild Indexes.
Default Server index fill factor default is 0 (meaning fill leaf-level pages to
capacity).
Notes By default, a table lock is held for the duration of an index rebuild
(but not a reorganization), preventing user access to the table.
Specifying ONLINE=ON in the command avoids the table lock (other
than for a brief period at the start), allowing user access to the table
during the rebuild. However, this feature is available only in
Enterprise editions of SQL Server. Also take note of the potential
data corruption issue when running online index rebuilds with
parallelism that is described in the Microsoft Support article, FIX:
Data corruption occurs in clustered index when you run online index
rebuild in SQL Server 2012 or SQL Server 2014.
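A hedged sketch of an online rebuild with a fill factor follows; the table and index names are hypothetical, and MAXDOP = 1 is included as one way to sidestep the parallel online-rebuild issue described in the Microsoft Support article above:

```
-- Illustrative only: rebuild a fragmented index online (Enterprise
-- edition required for ONLINE = ON) with a 90% fill factor.
-- MAXDOP = 1 avoids the parallel online-rebuild corruption issue.
ALTER INDEX [IX_Example] ON [dbo].[ExampleTable]
REBUILD WITH (FILLFACTOR = 90, ONLINE = ON, MAXDOP = 1);
```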
Monitoring Track the perfmon SQLServer:AccessMethods:Page
Splits/Sec counter to observe the rate of page splits, and to help
evaluate the effectiveness of fill-factor settings that are used. (Note
that this includes both mid-page splits that cause fragmentation,
and end-page splits for an increasing index.)
Statistics
Description The query optimizer uses statistics to aid in creating high-quality
query plans that improve performance. Statistics contain information
about the distribution of values in one or more columns of a table or
view, and are used to estimate the number of rows in a query result.
An overview of SQL Server statistics is covered in the MSDN
article, Statistics.
Three database settings control whether SQL Server creates
additional statistics, and when and how it updates statistics:
AUTO_CREATE_STATISTICS: The query optimizer creates
statistics on individual columns in query predicates as necessary.
AUTO_UPDATE_STATISTICS: The query optimizer determines
when statistics might be out of date (based on modification reaching
a threshold) and updates them when they are used by a query.
AUTO_UPDATE_STATISTICS_ASYNC: When set off, queries being
compiled will wait for statistics to update if they are out of date.
When this setting is on, queries compile with existing statistics even
if the statistics are out of date, which could lead to a suboptimal
plan.
Default The first two settings above are on by default, and the third is off. All
can be changed in the model database. When the Content Server
database is created, it will inherit the settings from the model
database.
procedure updates statistics on all tables that have one or more
rows modified, so it is normally preferable to use an UPDATE
STATISTICS statement to update statistics on specific tables, as
needed.
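As a sketch of targeted statistics maintenance, UPDATE STATISTICS can be run per table; the FULLSCAN option below is an assumption for illustration (a sampled update is cheaper on very large tables):

```
-- Illustrative only: update statistics on one heavily modified table
-- (LLAttrData is a large Content Server table mentioned elsewhere in
-- this paper) with a full scan for maximum accuracy.
UPDATE STATISTICS dbo.LLAttrData WITH FULLSCAN;
```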
Permissions To update statistics, you must have alter permission on the table
or view.
Collation
Description The collation for a database defines the language and character set
used to store data, sets rules for sorting and comparing characters,
and determines case-sensitivity, accent-sensitivity, and kana-
sensitivity.
collation that is different from the database collation.
Notes The following script identifies table columns with a collation that is
different from the database:
DECLARE @DatabaseCollation VARCHAR(100)
SELECT @DatabaseCollation = collation_name
FROM sys.databases WHERE database_id = DB_ID()

SELECT @DatabaseCollation 'Default database collation'

SELECT t.name 'Table Name', c.name 'Col Name', ty.name 'Type Name',
    c.max_length, c.collation_name, c.is_nullable
FROM sys.columns c
INNER JOIN sys.tables t ON c.object_id = t.object_id
INNER JOIN sys.types ty ON c.system_type_id = ty.system_type_id
WHERE t.is_ms_shipped = 0 AND c.collation_name <> @DatabaseCollation
Data Compression
Description SQL Server 2008 and later offers data compression at the row and
page level (but only in the Enterprise Edition). Compression
reduces I/O and the amount of storage and memory used by SQL
Server, but adds a small amount of overhead in the form of
additional CPU usage.
Recommendation When storage space, available memory, or disk I/O are under
pressure, and the database server is not CPU-bound, consider
using compression on selected tables and indexes.
Microsoft recommends compressing large objects that have either a
low ratio of update operations, or a high ratio of scan operations.
You can use the sp_estimate_data_compression_savings
stored procedure to estimate the space that row or page
compression could save in each table and index, as outlined in Data
Compression: Strategy, Capacity Planning and Best Practices.
You can automate the process using a script. (An example of this
type of approach and a sample script, which was used for internal
testing, is covered in this SQL Server Pro article.) The script
analyzes the usage of Content Server tables and indexes that have
more than 100 pages and selects candidates for compression. It
estimates the savings from row or page compression, and
generates a command to implement the recommended
compression. The script relies on usage data from the DMVs, so it
should be run after a period of representative usage.
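A minimal sketch of the estimation step follows, using the stored procedure named above; the table name is taken from this paper's own examples, and each candidate table and index should be checked in turn:

```
-- Illustrative only: estimate PAGE-compression savings for one table.
-- NULL for @index_id and @partition_number covers all indexes/partitions.
EXEC sp_estimate_data_compression_savings
    @schema_name = 'dbo',
    @object_name = 'LLAttrData',
    @index_id = NULL,
    @partition_number = NULL,
    @data_compression = 'PAGE';
```

Compare the current and estimated compressed sizes in the result set before deciding whether the CPU trade-off is worthwhile.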
Overall impact from compression on performance, storage,
memory, and CPU will depend on many factors related to the
environment and product usage. Testing in the OpenText
performance lab has demonstrated the following:
Performance: For load tests involving a mix of document-
management operations, with a small set of indexes compressed
based on only high-read-ratio indexes, there was minimal
performance impact, but when a larger set of tables and indexes
was compressed, performance was less consistent, and degraded
by up to 20%. For high-volume ingestion of documents with
metadata, there was no impact on ingestion throughput.
CPU: CPU usage increased by up to 8% in relative terms.
MDF File Storage: Reduced by up to 40% depending on what was
compressed. Specific large tables like LLAttrData were reduced
by as much as 82%.
I/O: Read I/O on MDF files reduced by up to 30%; write I/O by up to
18%.
Memory Usage: SQL Buffer memory usage reduced by up to 25%.
As with any configuration change, test the performance impact of
any compression changes on a test system prior to deploying on
production systems.
Notes It can take longer to rebuild indexes when they are compressed.
Database Data, Log File Size, and AutoGrowth
Description The initial size of the data and log files, and the amount by which
they grow as data is added to the database. Autogrowth of log files
can cause delays, and frequent growth of data or log files can
cause them to become fragmented, which may lead to performance
issues.
Recommendation Optimal data and log file sizes really depend on the specific
environment. In general, it is preferable to size the data and log files
to accommodate expected growth so that you avoid frequent
autogrowth events.
Leave autogrowth enabled to accommodate unexpected growth. A
general rule is to set autogrow increments to about one-eighth the
size of the file, as outlined in the Microsoft Support article,
Considerations for the "autogrow" and "autoshrink" settings in SQL
Server.
Leave the autoshrink parameter set to False for the Content
Server database.
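A sketch of pre-sizing a data file with a fixed autogrow increment of about one-eighth of the file size follows; the database name, logical file name, and sizes are all illustrative:

```
-- Illustrative only: pre-size the data file to 80 GB with a 10 GB
-- (about 1/8 of file size) fixed autogrow increment.
ALTER DATABASE ContentServerDB
MODIFY FILE (NAME = ContentServerDB_data,
             SIZE = 81920MB, FILEGROWTH = 10240MB);
```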
Notes The following script identifies a database’s data file size, and the
amount of used space and free space in it:
SELECT DBName, Name, [FileName],
    Size AS 'Size(MB)',
    UsedSpace AS 'UsedSpace(MB)',
    (Size - UsedSpace) AS 'AvailableFreeSpace(MB)'
FROM (
    SELECT db_name(s.database_id) AS DBName,
        s.name AS [Name],
        s.physical_name AS [FileName],
        (s.size * CONVERT(float, 8))/1024 AS [Size],
        (CAST(CASE s.type WHEN 2 THEN 0
              ELSE CAST(FILEPROPERTY(s.name, 'SpaceUsed') AS float)
                   * CONVERT(float, 8)
              END AS float))/1024 AS [UsedSpace],
        s.file_id AS [ID]
    FROM sys.filegroups AS g
    INNER JOIN sys.master_files AS s
        ON ((s.type = 2 OR s.type = 0)
            AND s.database_id = db_id()
            AND (s.drop_lsn IS NULL))
            AND (s.data_space_id = g.data_space_id)
) DBFileSizeInfo
Recovery Model
Description The recovery model controls how SQL Server maintains the
transaction log for each database.
Default Simple is the default recovery model. It does not support backups
of the transaction logs.
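As a sketch (database name illustrative), switching to the full recovery model is a single ALTER DATABASE statement; note that once in full recovery, regular transaction log backups must be scheduled or the log file will grow without bound:

```
-- Illustrative only: switch to the full recovery model so transaction
-- log backups (and point-in-time restore) become possible.
ALTER DATABASE ContentServerDB SET RECOVERY FULL;
```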
Identifying Worst-Performing SQL
There are several ways to identify poorly-performing SQL.
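One common approach, sketched below using the query-statistics DMV described in Appendix B, is to rank cached statements by cumulative elapsed time; note the results reflect only plans currently in cache:

```
-- Illustrative only: top 10 cached statements by total elapsed time.
SELECT TOP 10
    qs.total_elapsed_time / 1000 AS total_elapsed_ms,
    qs.execution_count,
    qs.total_logical_reads,
    SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
        ((CASE qs.statement_end_offset
              WHEN -1 THEN DATALENGTH(st.text)
              ELSE qs.statement_end_offset
          END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_elapsed_time DESC;
```

Other options described earlier in this paper include SQL Server profiler/Extended Events traces and the Performance Dashboard reports.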
Appendices
Appendix A – References
Compute Capacity Limits by Edition of SQL Server:
https://msdn.microsoft.com/en-us/library/ms143760(v=sql.120).aspx
SQL Server Resource Governor:
https://msdn.microsoft.com/en-us/library/bb933866(v=sql.120).aspx
Data Compression: Strategy, Capacity Planning and Best Practices:
https://msdn.microsoft.com/en-us/library/dd894051(v=SQL.100).aspx
SQL Server Index Design Guide:
https://technet.microsoft.com/en-us/library/jj835095(v=sql.110).aspx
MaxDOP Recommendations:
https://support.microsoft.com/en-us/kb/2806535
Instant Database File Initialization:
https://msdn.microsoft.com/en-us/library/ms175935(v=sql.120).aspx
Lock Pages in Memory:
https://support.microsoft.com/en-us/kb/2659143
Antivirus software:
https://support.microsoft.com/en-us/kb/309422
SQL Server Memory Configuration Options:
https://msdn.microsoft.com/en-us/library/ms178067(v=sql.120).aspx
Disk Partition Alignment Best Practices for SQL Server:
https://technet.microsoft.com/en-us/library/dd758814(v=sql.100).aspx
Optimizing tempdb Performance:
https://msdn.microsoft.com/en-us/library/ms175527(v=sql.105).aspx
SQL Server 2014 Extended Events:
https://msdn.microsoft.com/en-us/library/bb630282(v=sql.120).aspx
Convert SQL Trace script to Extended Event Session:
https://msdn.microsoft.com/en-us/library/ff878114(v=sql.120).aspx
Reorganize and Rebuild Indexes:
https://msdn.microsoft.com/en-us/library/ms189858(v=sql.120).aspx
Index Fill Factor:
http://sqlmag.com/blog/what-best-value-fill-factor-index-fill-factor-and-performance-part-2
SQL Server Statistics:
https://msdn.microsoft.com/en-us/library/ms190397(v=sql.120).aspx
Collation and Unicode Support:
https://msdn.microsoft.com/en-us/library/ms143726(v=sql.120).aspx
SQL Server Lock Escalation:
https://technet.microsoft.com/en-us/library/ms184286(v=sql.105).aspx
Appendix B – Dynamic Management Views (DMVs)
SQL Server Dynamic Management Views (DMVs) provide information used to
monitor the health of the server, diagnose problems, and tune performance. Server-
scoped DMVs retrieve server-wide information and require VIEW SERVER STATE
permission to access. Database-scoped DMVs retrieve database information and
require VIEW DATABASE STATE permission.
This appendix provides a description of some DMVs that may be helpful for
monitoring SQL Server performance, along with samples for querying those DMVs.
All procedures and sample code in this appendix are delivered as is and are for
educational purposes only. They are presented as a guide to supplement official
OpenText product documentation.
Waits (sys.dm_os_wait_stats)
Description Shows aggregate time spent on different wait categories.
Notes Consider excluding wait types that do not impact user query
performance, as described in the SQLskills blog post, “Wait
statistics, or please tell me where it hurts”.
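A minimal sketch of querying this DMV, excluding a few benign system wait types (the exclusion list here is abbreviated and illustrative, not exhaustive):

```sql
-- Top waits by total wait time, excluding some benign system waits.
SELECT TOP 10
    wait_type,
    wait_time_ms,
    waiting_tasks_count,
    wait_time_ms / NULLIF(waiting_tasks_count, 0) AS avg_wait_ms
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN (N'SLEEP_TASK', N'LAZYWRITER_SLEEP',
    N'XE_TIMER_EVENT', N'CHECKPOINT_QUEUE',
    N'BROKER_TO_FLUSH', N'BROKER_TASK_STOP',
    N'SQLTRACE_BUFFER_FLUSH')
ORDER BY wait_time_ms DESC;
```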
Plan Cache (sys.dm_exec_cached_plans)
Description Shows the query plans currently held in the plan cache.
Sample Show total plan count and memory usage, highlighting single-use
plans:
SELECT objtype AS [CacheType],
    count_big(*) AS [Total Plans],
    sum(cast(size_in_bytes AS decimal(18,2)))/1024/1024
        AS [Total MBs],
    avg(usecounts) AS [Avg Use Count],
    sum(cast((CASE WHEN usecounts = 1 THEN size_in_bytes
              ELSE 0 END) AS decimal(18,2)))/1024/1024
        AS [Total MBs - USE Count 1],
    sum(CASE WHEN usecounts = 1 THEN 1 ELSE 0 END)
        AS [Total Plans - USE Count 1]
FROM sys.dm_exec_cached_plans
GROUP BY objtype
ORDER BY [Total MBs - USE Count 1] DESC
Queries using Parallelism
Description Search the plan cache for existing parallel plans and see the cost
associations to these plans.
Notes This DMV query shows data about parallel cached query plans,
including their cost and the number of times they were executed. It can
be helpful in identifying a new cost threshold for parallelism setting
that strikes a balance between letting longer queries use parallelism
and avoiding the overhead for shorter queries. Note, however, that the
cost threshold for parallelism is compared to the serial plan cost when
SQL Server decides whether to use a parallel plan, whereas such a DMV
query shows the cost of the generated parallel plan, which is typically
smaller than the serial plan cost. Treat the parallel plan costs as
only a general guideline for setting cost threshold for parallelism.
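A sketch of such a query, searching cached plan XML for parallel operators (a common community pattern, not an official OpenText script):

```sql
-- Cached parallel plans with their estimated cost and use count.
WITH XMLNAMESPACES
(DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
SELECT TOP 10
    cp.usecounts,
    cp.objtype,
    qp.query_plan.value(N'(//StmtSimple/@StatementSubTreeCost)[1]',
        N'float') AS plan_cost,
    qp.query_plan
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
WHERE qp.query_plan.exist(N'//RelOp[@Parallel="1"]') = 1
ORDER BY plan_cost DESC;
```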
Performance of cached query plans (sys.dm_exec_query_stats)
Description Shows aggregate performance statistics for cached query plans.
Sample Show average I/O per execution (a reconstructed sketch; sort by
the measure of interest):
SELECT TOP 20 execution_count,
    total_logical_reads/execution_count AS avg_logical_reads,
    total_logical_writes/execution_count AS avg_logical_writes,
    total_physical_reads/execution_count AS avg_physical_reads
FROM sys.dm_exec_query_stats
ORDER BY avg_logical_reads DESC --AvgReads
--ORDER BY avg_logical_writes DESC --AvgWrites
--ORDER BY avg_physical_reads DESC --AvgPhysicalReads
Virtual File Latency (sys.dm_io_virtual_file_stats)
Description For each data and log file, shows aggregate data about number, average size, and latency of
reads and writes.
Sample SELECT
    -- @CaptureID,
    GETDATE(),
    CASE WHEN [num_of_reads] = 0 THEN 0
         ELSE ([io_stall_read_ms]/[num_of_reads])
    END [ReadLatency],
    CASE WHEN [io_stall_write_ms] = 0 THEN 0
         ELSE ([io_stall_write_ms]/[num_of_writes])
    END [WriteLatency],
    CASE WHEN ([num_of_reads] = 0 AND [num_of_writes] = 0) THEN 0
         ELSE ([io_stall]/([num_of_reads] + [num_of_writes]))
    END [Latency],
    -- avg bytes per IOP
    CASE WHEN [num_of_reads] = 0 THEN 0
         ELSE ([num_of_bytes_read]/[num_of_reads])
    END [AvgBPerRead],
    CASE WHEN [io_stall_write_ms] = 0 THEN 0
         ELSE ([num_of_bytes_written]/[num_of_writes])
    END [AvgBPerWrite],
    CASE WHEN ([num_of_reads] = 0 AND [num_of_writes] = 0) THEN 0
         ELSE (([num_of_bytes_read] + [num_of_bytes_written])
               /([num_of_reads] + [num_of_writes]))
    END [AvgBPerTransfer],
    LEFT([mf].[physical_name],2) [Drive],
    DB_NAME([vfs].[database_id]) [DB],
    [vfs].[database_id], [vfs].[file_id],
    [vfs].[sample_ms], [vfs].[num_of_reads],
    [vfs].[num_of_bytes_read], [vfs].[io_stall_read_ms],
    [vfs].[num_of_writes], [vfs].[num_of_bytes_written],
    [vfs].[io_stall_write_ms], [vfs].[io_stall],
    [vfs].[size_on_disk_bytes]/1024/1024. [size_on_disk_MB],
    [vfs].[file_handle], [mf].[physical_name]
FROM [sys].[dm_io_virtual_file_stats](NULL,NULL) AS vfs
JOIN [sys].[master_files] [mf]
    ON [vfs].[database_id] = [mf].[database_id]
    AND [vfs].[file_id] = [mf].[file_id]
ORDER BY [Latency] DESC;
Index Usage (sys.dm_db_index_usage_stats)
Description Returns counts of different types of operations on indexes.
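A simple sketch of querying the DMV for the current database:

```sql
-- Seek, scan, lookup, and update counts per index
-- in the current database.
SELECT
    OBJECT_NAME(ius.object_id) AS table_name,
    i.name AS index_name,
    ius.user_seeks, ius.user_scans,
    ius.user_lookups, ius.user_updates
FROM sys.dm_db_index_usage_stats AS ius
JOIN sys.indexes AS i
    ON i.object_id = ius.object_id
    AND i.index_id = ius.index_id
WHERE ius.database_id = DB_ID()
ORDER BY ius.user_updates DESC;
```

Indexes with many updates but few seeks, scans, or lookups are candidates for review.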
Table and index size and fragmentation
(sys.dm_db_index_physical_stats)
Description The first sample below returns size and fragmentation
information for each table and index. The second sample generates
ALTER INDEX commands for indexes with more than 1,000 pages and
fragmentation greater than 5%.
-- Declare the variables used by the cursor loop.
DECLARE @objectid int;
DECLARE @indexid int;
DECLARE @partitionnum bigint;
DECLARE @frag float;
DECLARE @partitioncount bigint;
DECLARE @schemaname nvarchar(130);
DECLARE @objectname nvarchar(130);
DECLARE @indexname nvarchar(130);
DECLARE @command nvarchar(4000);
SELECT
    object_id AS objectid,
    index_id AS indexid,
    partition_number AS partitionnum,
    avg_fragmentation_in_percent AS frag
INTO #work_to_do
FROM sys.dm_db_index_physical_stats (DB_ID(),
    NULL, NULL, NULL, 'LIMITED')
WHERE avg_fragmentation_in_percent > 5.0
    AND index_id > 0 AND page_count > 1000;
-- Declare the cursor for the list of partitions to be processed.
DECLARE partitions CURSOR FOR SELECT * FROM #work_to_do;
-- Open the cursor.
OPEN partitions;
-- Loop through the partitions.
WHILE (1=1)
BEGIN;
    FETCH NEXT
    FROM partitions
    INTO @objectid, @indexid, @partitionnum, @frag;
    IF @@FETCH_STATUS < 0 BREAK;
    SELECT @objectname = QUOTENAME(o.name),
           @schemaname = QUOTENAME(s.name)
    FROM sys.objects AS o
    JOIN sys.schemas AS s ON s.schema_id = o.schema_id
    WHERE o.object_id = @objectid;
    SELECT @indexname = QUOTENAME(name)
    FROM sys.indexes
    WHERE object_id = @objectid AND index_id = @indexid;
    SELECT @partitioncount = count(*)
    FROM sys.partitions
    WHERE object_id = @objectid AND index_id = @indexid;
    -- 30 is an arbitrary decision point at which to switch
    -- between reorganizing and rebuilding.
    IF @frag < 5.0
        SET @command = '';
    IF @frag < 30.0
        SET @command = N'ALTER INDEX ' + @indexname + N' ON '
            + @schemaname + N'.' + @objectname + N' REORGANIZE';
    IF @frag >= 30.0
        SET @command = N'ALTER INDEX ' + @indexname + N' ON '
            + @schemaname + N'.' + @objectname + N' REBUILD';
    IF @partitioncount > 1
        SET @command = @command + N' PARTITION='
            + CAST(@partitionnum AS nvarchar(10));
    -- EXEC (@command);
    IF LEN(@command) > 0
        PRINT @command;
END;
-- Close and deallocate the cursor.
CLOSE partitions;
DEALLOCATE partitions;
-- Drop the temporary table.
DROP TABLE #work_to_do;
--GO
Lock Escalations (sys.dm_db_index_operational_stats)
Description This DMV returns a variety of low-level information about table and
index access. This sample shows lock escalation attempts and
successes for each object in a database.
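A sketch of such a query, using the lock-promotion counters exposed by the DMV:

```sql
-- Lock escalation attempts and successes per index
-- in the current database.
SELECT
    OBJECT_NAME(ios.object_id) AS object_name,
    ios.index_id,
    ios.partition_number,
    ios.index_lock_promotion_attempt_count,
    ios.index_lock_promotion_count
FROM sys.dm_db_index_operational_stats(DB_ID(),
    NULL, NULL, NULL) AS ios
ORDER BY ios.index_lock_promotion_count DESC;
```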
SQL Server and Database information queries
Description The following queries return information about SQL Server and database
configuration, that can be helpful when investigating issues, or as part of
a benchmark exercise to document the state of the system.
Sample Show SQL Server full version:
SELECT @@VERSION;
Show database snapshot isolation, recovery model, collation:
SELECT name, snapshot_isolation_state_desc,
    CASE is_read_committed_snapshot_on
        WHEN 0 THEN 'OFF' WHEN 1 THEN 'ON'
    END AS is_read_committed_snapshot_on,
    recovery_model, recovery_model_desc, collation_name
FROM sys.databases
Show TempDB Configuration:
SELECT name AS FileName,
    size*1.0/128 AS FileSizeinMB,
    CASE max_size
        WHEN 0 THEN 'Autogrowth is off.'
        WHEN -1 THEN 'Autogrowth is on.'
        ELSE 'Log file will grow to a maximum size of 2 TB.'
    END AutogrowthStatus,
    growth AS 'GrowthValue',
    'GrowthIncrement' = CASE
        WHEN growth = 0 THEN 'Size is fixed and will not grow.'
        WHEN growth > 0 AND is_percent_growth = 0
            THEN 'Growth value is in 8-KB pages.'
        ELSE 'Growth value is a percentage.'
    END
FROM tempdb.sys.database_files;
Database table row count, data and index size:
SELECT name = object_schema_name(object_id) + '.'
        + object_name(object_id),
    row_count,
    data_size = 8*sum(CASE WHEN index_id < 2
        THEN in_row_data_page_count + lob_used_page_count
             + row_overflow_used_page_count
        ELSE lob_used_page_count + row_overflow_used_page_count
        END),
    index_size = 8*(sum(used_page_count) - sum(CASE WHEN index_id < 2
        THEN in_row_data_page_count + lob_used_page_count
             + row_overflow_used_page_count
        ELSE lob_used_page_count + row_overflow_used_page_count
        END))
FROM sys.dm_db_partition_stats
WHERE object_schema_name(object_id) != 'sys'
GROUP BY object_id, row_count
ORDER BY data_size DESC, index_size DESC
For additional guidance and help, please join the OpenText community of
experts.
About OpenText
OpenText is the world’s largest independent provider of Enterprise Content
Management (ECM) software. The Company's solutions manage information for all
types of business, compliance and industry requirements in the world's largest
companies, government agencies and professional service firms. OpenText supports
approximately 46,000 customers and millions of users in 114 countries and 12
languages. For more information about OpenText, visit www.opentext.com.
www.opentext.com
NORTH AMERICA +800 499 6544 • UNITED STATES +1 847 267 9330 • GERMANY +49 89 4629 0
UNITED KINGDOM +44 118 984 8000 • AUSTRALIA +61 2 9026 3400
Copyright © 2015 OpenText SA and/or OpenText ULC. All Rights Reserved. OpenText is a trademark or registered trademark of OpenText SA and/or OpenText ULC. This list of trademarks
is not exhaustive. Other trademarks, registered trademarks, product names, company names, brands and service names mentioned herein are the property of OpenText SA or other respective
owners.