The Fundamental Guide to SQL Query Optimization
“Why is the system running
so slowly today?”
How many times have you heard that complaint this week? This
quarter? This year?
However, it doesn’t give you very much to go on, does it? Your users
can’t tell you, “I think you need to deal with that wild card in the SELECT
statement in line 443,” or “The Order Status query is performing too many
unnecessary logical reads and hogging CPU cycles.”
That’s up to you to figure out. And that’s where the SQL query tuning
tips in this e-book come into play. Here you’ll find a reliable method for
analyzing and addressing the performance bottlenecks in your SQL
Server databases, along with case studies and diagnostic queries you
can use right away. We’ve put together this resource to help you identify
and fix performance issues more efficiently, so you spend less time
hearing how the system is running slowly today.
“SQL query optimization is easy.” Said nobody. Ever.

HERE ARE A FEW REASONS WHY.

Why optimize? — It’s useful to know the business purpose of the SQL query. Suppose you get complaints at the end of every month about poor performance. You determine that the culprit is a set of reports for accounting that runs and prints for hours, then goes straight into a filing cabinet, unexamined by anybody. That’s not a SQL query optimization problem; it’s a business process problem.

Where to optimize? — Locating the SQL statements that hamper your database performance can be like searching for a needle in a haystack of code. Besides, SQL Server could be processing dozens or hundreds of statements at any one time. So, where should you focus your effort? If you’ve ever tuned SQL to your satisfaction but no one else noticed an improvement, then you probably worked on the wrong statement or measured the wrong effect.

Who should optimize? — The responsibility for optimizing may be murky. The DBAs say, “The developers wrote the code, so they should tune it.” The developers counter, “The DBAs see the code in production. They know the environment and how the servers are set up.” The organization has to wrap some process around that first.

What to optimize? — SQL query optimization is about software; it doesn’t make hardware problems go away. Before you start tuning anything, make sure your hardware resources (processor power, memory size, storage speed, network throughput) are suited to the database and the application running on it.¹

How to optimize? — SQL tuning requires experience in different areas. Do you know how to read and interpret execution plans? Do you know the best data access path off the disk? Which are the best Join methods to use? And, of course, do you know how to write good SQL? It often becomes necessary to rewrite an offending query.

Finally, like any diagnostic pursuit, SQL query optimization takes time, trial and error, as the following five tips illustrate.

¹ For a holistic view of query performance, watch the webcast “Why Are My SQL Server Queries So Slow?” from Quest.
Tip 1: Monitor wait time

SQL Server incorporates wait types that allow you to monitor not only the total wait time but also each step of the query as it’s processed through the database. Wait types offer invaluable clues about the amount of time taken and resources consumed by a query.

The first step is to run queries that capture and store wait time data so you can analyze it.

CAPTURE THE DATA IN WAIT TIME TABLES

SQL Server tracks data on wait time. Starting in SQL Server 2005, it holds the data in dynamic management views (DMVs). Figure 1 shows the DMVs and corresponding column names that contain the data most useful for tuning.

The problem is that DMVs are real-time, non-persistent views that reside only in memory. They offer only a snapshot, so they cannot tell you what happened, say, between 3:00 AM and 5:00 AM last Sunday morning. One option in SQL Server 2012 and later is to use extended events to gather wait types and query results, but even that doesn’t make it easy to spot trends over time.

An effective and inexpensive way to capture exactly the data you want from the DMV is to poll it with a query at some interval, include the timestamp and save it to a table.

As you’ll see below, from dm_exec_requests you can poll the sql_handle and plan_handle for the execution plan. If it’s a stored procedure, you can get the snippet of code that the procedure is waiting on by reviewing the statement_start_offset and statement_end_offset columns. If a session is encountering a lock wait, blocking_session_id will return the ID of the blocking session.
VIEWING WAIT TYPES — SESSION LEVEL

Start your optimization at the session level. The query in Figure 2 polls dm_exec_requests to generate a view of each running session and the resource that the SQL statement is waiting on. The query JOINs it to dm_exec_sql_text with the sql_handle, then to dm_exec_text_query_plan, using the plan_handle. If the session is waiting on a lock wait type, it will also capture the blocking_session_id.

SELECT r.session_id, r.wait_time, r.status,
r.wait_type, r.blocking_session_id, s.text,
r.statement_start_offset, r.statement_end_offset,
p.query_plan
FROM sys.dm_exec_requests r
OUTER APPLY sys.dm_exec_sql_text (r.sql_handle) s
OUTER APPLY sys.dm_exec_text_query_plan
(r.plan_handle, r.statement_start_offset,
r.statement_end_offset) p
WHERE r.status <> 'background' AND r.status <>
'sleeping' AND r.session_id <> @@SPID

Figure 2: Query session-level wait types

The r.status column in the WHERE clause is useful in reducing extraneous noise by excluding background processes and any sleeping or idle processes. Also, the WHERE clause excludes the session belonging to the administrator running this query.

Figure 3 shows the first six columns of output of this query against an instance of the AdventureWorks sample database in SQL Server.

Figure 3: Result of session-level query

A status of running or runnable shows that the session is on CPU or in the CPU queue. The suspended status, in row 3 for example, indicates that the session is waiting on a wait type, most likely waiting to feed data to the client.

That query shows you what is running right now. So, to examine trends over time, you add a timestamp to the query, run it at a suitable interval and load the results into a table. You can then quickly discover which queries are spending the most time in the database and begin to see how to tune them.
VIEWING WAIT TYPES — INSTANCE LEVEL

But suppose you’re not familiar with an instance, yet you need to get control of many instances in short order. In that case, you’ll want to see which resources the instance is using and where the bottlenecks lie. So, if you have a performance issue right now and you have no other diagnostic tools, you can clear the view by running the DBCC command sketched below. Next, run the query from the article “SQL Server Wait Statistics” for the most recent statistics, shown in Figure 4.

The query excludes idle wait types, which rarely hamper performance, and its output shows you, at the instance level, where you’re waiting the most. For example, the instance queried in Figure 4 is spending over 80 percent of its processing time on a lock wait. In another example, the instance spent more than half its time on SOS_SCHEDULER_YIELD, meaning its tasks gave up the processor because they were yielding to the CPU scheduler.
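For reference, the standard way to reset the cumulative wait statistics is the DBCC SQLPERF option shown below, and a top-waits query against the underlying sys.dm_os_wait_stats DMV looks roughly like this; the list of idle wait types to exclude is illustrative, not exhaustive:

-- Reset the instance-wide cumulative wait statistics (use with care:
-- this clears the counters for everyone, not just your session).
DBCC SQLPERF ('sys.dm_os_wait_stats', CLEAR);

-- Top waits since the last reset, excluding a few common idle waits.
SELECT TOP (10)
       wait_type,
       waiting_tasks_count,
       wait_time_ms / 1000.0 AS wait_time_sec
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN ('SLEEP_TASK', 'LAZYWRITER_SLEEP',
      'REQUEST_FOR_DEADLOCK_SEARCH', 'XE_TIMER_EVENT', 'BROKER_TASK_STOP')
ORDER BY wait_time_ms DESC;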
COLLECTING WAIT TIME DATA AT INTERVALS

Now that you’ve seen those views, you can run the following base queries to collect wait time data. The process is to first create a table in which to store the data, then automate polling to insert data into the table at some interval. The final step outputs the table of wait time data.

Figure 6 contains code to create the Wait Time Analysis table by SELECTing into rta_data. (The additions relative to the query in Figure 2 are the time_polled column and the INTO clause.)

SELECT r.session_id, r.wait_time, r.status,
r.wait_type, r.blocking_session_id, s.text,
r.statement_start_offset, r.statement_end_offset,
p.query_plan, CURRENT_TIMESTAMP time_polled
INTO rta_data
FROM sys.dm_exec_requests r
OUTER APPLY sys.dm_exec_sql_text (r.sql_handle) s
OUTER APPLY sys.dm_exec_text_query_plan
(r.plan_handle, r.statement_start_offset,
r.statement_end_offset) p
WHERE r.status <> 'background' AND r.status <>
'sleeping' AND r.session_id <> @@SPID

Figure 6: Query to create the wait time analysis table

The query in Figure 7 automates the INSERT, polling dm_exec_requests every second; that interval is granular enough to be useful. Since this is an in-memory query, its execution has negligible impact on the overall performance of the database.

INSERT INTO rta_data
SELECT r.session_id, r.wait_time, r.status,
r.wait_type, r.blocking_session_id,
s.text, r.statement_start_offset,
r.statement_end_offset, p.query_plan,
CURRENT_TIMESTAMP time_polled
FROM sys.dm_exec_requests r
OUTER APPLY sys.dm_exec_sql_text (r.sql_handle) s
OUTER APPLY sys.dm_exec_text_query_plan
(r.plan_handle, r.statement_start_offset,
r.statement_end_offset) p
WHERE r.status <> 'background' AND r.status <>
'sleeping' AND r.session_id <> @@SPID

Figure 7: Query to automate INSERT
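The scheduling mechanism itself is not shown above; a minimal sketch is a WHILE loop with WAITFOR DELAY, assuming the rta_data table from Figure 6 already exists. A SQL Agent job is a more robust option for long-term collection.

-- Run the Figure 7 INSERT once per second; cancel the batch to stop.
WHILE 1 = 1
BEGIN
    INSERT INTO rta_data
    SELECT r.session_id, r.wait_time, r.status, r.wait_type,
           r.blocking_session_id, s.text, r.statement_start_offset,
           r.statement_end_offset, p.query_plan, CURRENT_TIMESTAMP
    FROM sys.dm_exec_requests r
    OUTER APPLY sys.dm_exec_sql_text (r.sql_handle) s
    OUTER APPLY sys.dm_exec_text_query_plan
        (r.plan_handle, r.statement_start_offset, r.statement_end_offset) p
    WHERE r.status <> 'background' AND r.status <> 'sleeping'
      AND r.session_id <> @@SPID;

    WAITFOR DELAY '00:00:01';  -- poll once per second
END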
Finally, Figure 8 shows a query with a common table expression (CTE) to define a temporary named result set that contains the wait type and count. Because the table is polled once per second, the count of rows per statement and wait type approximates seconds of wait time.

WITH rta (text, wait_type, tot_time)
AS
(SELECT text, wait_type, COUNT(*) AS tot_time
FROM rta_data
GROUP BY text, wait_type)
SELECT text, wait_type, tot_time
FROM rta
ORDER BY tot_time DESC

The results from the query above are shown in Figure 9. The SQL statement that spent the most time in the database spent 2 seconds on network wait, 68 seconds on I/O completion and 203 seconds on CPU.

Figure 9: Wait time analysis

Again, when there is no wait type, the SQL statement is usually either on CPU or in the CPU queue awaiting execution.
WHY ANALYZE WAIT TIME?
Analyzing wait time has several benefits in SQL query optimization.
Before tuning any SQL, it’s important to gather baseline metrics to see
whether your changes improve performance. How long did a given
operation take before the change? How much improvement does the
user expect? How far, practically, can you tune? Baseline metrics allow
you to set an acceptable goal and stop when you reach it.
It’s also useful to know how to interpret the different wait types at
work in your database, including locking/blocking (LCK), I/O problems
(PAGEIOLATCH), latch contention (LATCH) and network slowdown
(NETWORK). A query spending most of its time on ASYNC_NETWORK_IO
does not necessarily mean a network problem; the system could simply
be feeding too much data to the client. And where a single query
generates multiple wait types, you’ll find it best to tune for the wait type
most often encountered and see how the others behave in response.
Finally, aside from the value of wait time analysis in SQL query
optimization, if a production query slows down unexpectedly, you can
analyze it to see what changed.
Tip 2: Review the execution plan

There are many ways to get the execution plan in SQL Server. SQL Server Management Studio (SSMS) allows for estimated, actual and real-time statistics.

To generate the plan that is currently active, query the table-valued function dm_exec_text_query_plan. Use the plan_handle and the start/stop offset columns from the dm_exec_requests view for the query. Figure 10 shows an example based on an order query against the AdventureWorks database.
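A sketch of such a query follows; the session_id filter is a hypothetical way to focus on one session of interest:

SELECT r.session_id,
       p.query_plan   -- XML showplan for the statement currently running
FROM sys.dm_exec_requests r
CROSS APPLY sys.dm_exec_text_query_plan
    (r.plan_handle, r.statement_start_offset, r.statement_end_offset) p
WHERE r.session_id = 53;   -- hypothetical session of interest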
Other ways to get the execution plan include Profiler tracing and Extended Events, but those options tend to generate a lot of data that is not useful in SQL query optimization. Also, tracing requires that you either trace all the time or know when the problem is going to occur.
WHAT TO LOOK FOR IN THE EXECUTION PLAN
Make sure the Join methods shown in the plan are appropriate. (A quick, hint-based way to compare them appears after this list.)
Nested Loops Join — Compares each row from one table ("outer
table") to each row in another table ("inner table") and returns rows
that satisfy the Join predicate. The cost is proportional to the product
of the number of rows in the two tables. This Join is well suited to
smaller data sets.
Merge Join — Compares two sorted inputs, one row at a time. The cost
is proportional to the sum of the total number of rows. This requires
an equi-join condition and is efficient for larger data sets, especially in
analytical queries.
Hash Match Join — Reads rows from one input, then hashes the
rows, based on the equi-join condition, into an in-memory hash
table. The Join does the same for the second input and then returns
matching rows. It is most useful for very large data sets (especially
data warehouses).
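If you want to see how each operator behaves on the same query, you can coerce the optimizer with a query-level hint. The tables here are the ones from case study 2 below, and the hint is purely a diagnostic aid, not a recommended permanent fix:

-- Force nested loops and inspect the resulting plan; swap in
-- OPTION (MERGE JOIN) or OPTION (HASH JOIN) to compare the other methods.
SELECT oh.OrderID, od.ProductID
FROM Sales.OrderHeader AS oh
INNER JOIN Sales.OrderDetail AS od ON od.OrderID = oh.OrderID
OPTION (LOOP JOIN);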
Ensure that expensive operations like full table scans and clustered index scans are justified; that is, do you need the query to read all that data? If an index is missing, the query may be reading far more data than the final result set requires and taking up CPU time that other processes could use.
Each step in the plan contains details, as shown in Figure 11. Examine
the plan for exceptionally high numbers, such as Estimated I/O Cost,
Estimated CPU Cost and Estimated Number of Rows.
Check the step in your query at which any filtering predicate is applied.
It’s preferable to filter in the early steps.
When you include SET STATISTICS IO ON in your query,
SSMS displays messages with the number of logical reads, as
depicted in Figure 12.
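A minimal usage sketch (the query itself is illustrative):

SET STATISTICS IO ON;
SELECT COUNT(*) FROM Sales.OrderDetail;   -- any query you are tuning
SET STATISTICS IO OFF;
-- The Messages tab then reports per-table counters such as
-- logical reads, physical reads and read-ahead reads.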
Tip 3: Gather object information

The next task is to gather information about the objects associated with the expensive steps.

Review all information about the tables in the query, keeping in mind that they could be a view or a table-valued function. (You can determine that by hovering over the FROM clause in SSMS.) Know where the table resides physically. Examine the indexes, keys and constraints and how the tables are related. Know whether additional statistics are being gathered and, if so, make sure they’re up to date (for example, by enabling the AUTO_UPDATE_STATISTICS database option). A query is only as good as its statistics.
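Assuming you want to verify that statistics are current, a common pattern is to check STATS_DATE and refresh manually where needed; the table name here is illustrative:

-- When were the statistics on this table last updated?
SELECT s.name AS stat_name,
       STATS_DATE(s.object_id, s.stats_id) AS last_updated
FROM sys.stats s
WHERE s.object_id = OBJECT_ID('Sales.OrderDetail');

-- Refresh them if they are stale.
UPDATE STATISTICS Sales.OrderDetail;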
INFORMATION ON TABLES, COLUMNS AND ROWS

Look at the size of the table and columns used, especially in the WHERE clause. To find the cardinality and distributions of a column, use SQL Server Integration Services (SSIS) to create a data profile task with all the tables in the query. As shown in Figure 14, you can then view that information using the Data Profile Viewer, located on the Windows Start menu.

Figure 14: Data profile task in SSIS

Find out the row count for each table involved. An easy way to do that is to run a query from objects and partitions, as shown in Figure 15, to list the statistics that Query Optimizer knows.
Row counts become important in finding the driving table (see below).

SELECT so.name, sp.rows, so.type
FROM sys.objects so INNER JOIN sys.partitions sp
ON so.object_id = sp.object_id
WHERE so.type IN ('TF','U','V')
AND sp.index_id IN (0, 1)   -- 0 covers heaps, 1 covers clustered tables
ORDER BY so.name

Figure 15: Query for row count

The query lists the row counts for each table in a database. Notice that it filters on three types: table-valued functions (TF), user tables (U) and views (V).

The largest tables are OrderDetail with about 5 million records, and OrderHeader with 1.3 million. Coincidentally, they represent two of the three most expensive steps identified in the execution plan (see Figure 13).

Review the columns in the SELECT section of the query. If you find wild cards, make sure all those columns are necessary. It’s generally a bad idea to SELECT * because it chews up memory and CPU cycles.

Other techniques that degrade performance include functions to convert mismatched data types (like integer to varchar) and non-searchable arguments in the WHERE clause. Without a searchable WHERE clause, there is no predicate that Query Optimizer can turn into an index seek.
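A small before-and-after sketch of a non-searchable argument; the column and table names are illustrative:

-- Non-sargable: the function wrapped around the column forces a scan.
SELECT OrderID
FROM Sales.OrderHeader
WHERE CONVERT(varchar(8), OrderDate, 112) = '20200301';

-- Sargable rewrite: leave the column bare so an index seek is possible.
SELECT OrderID
FROM Sales.OrderHeader
WHERE OrderDate >= '20200301' AND OrderDate < '20200302';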
INFORMATION ON INDEXES

If it’s a multi-column index, understand the order and selectivity of the columns.

If you have multiple tables in a query, it’s useful to know the relationships among them. In SSMS, you can generate a diagram of the tables in the query, as depicted in Figure 17.

Figure 17: SSMS diagram of tables

The diagram shows how the tables are related. Here you can see all the columns for Join criteria and WHERE clauses. If you leave off a column that should have been used in the Join, Query Optimizer may choose a different or worse plan.

Similarly, keys and constraints can help Query Optimizer in choosing the execution plan. But many developers configure tables without any foreign or primary keys because they fear that keys slow processing down. When keys and constraints are used in good measure, they do not affect processing noticeably.

Watch for functions on indexed columns; they can prevent Query Optimizer from using the index, in which case you may need to rewrite the query.

Finally, know whether and when you are rebuilding indexes. Rebuilding can have a sort of yo-yo effect on query performance: the index becomes fragmented, so queries run poorly; then the index gets rebuilt, so queries run well; then it becomes fragmented again, and so on. The undesirable result is good performance on some days and bad performance on others.
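To check whether fragmentation is actually the problem before scheduling rebuilds, you can sample it with the sys.dm_db_index_physical_stats function; the table name is illustrative:

-- Fragmentation per index on one table, worst first.
SELECT i.name AS index_name,
       ps.avg_fragmentation_in_percent,
       ps.page_count
FROM sys.dm_db_index_physical_stats
     (DB_ID(), OBJECT_ID('Sales.OrderDetail'), NULL, NULL, 'LIMITED') AS ps
INNER JOIN sys.indexes AS i
    ON i.object_id = ps.object_id AND i.index_id = ps.index_id
ORDER BY ps.avg_fragmentation_in_percent DESC;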
Tip 4: Find the driving table
Your goal now is to drive the query with the table that returns the least
data. That reduces the number of logical reads. In short, you study Joins
and predicates, and filter earlier in the query rather than later.
For example, if you have two tables with 1 million rows each and JOIN
them, you’ll have a lot of data. But if you’re interested in only two or
three records, your query has generated a lot of needless logical reads.
Filtering early whittles down the possible data sets and reduces that
work, which is why it’s useful to compare the size of the final result
set with the sizes of the data sets being returned in each step of the
execution plan.
Another useful technique for finding the driving table is SQL diagramming (demonstrated in case study 1 below), a graphical method for mapping the amount of data in the tables and finding which filter will return the least amount.
The following two case studies illustrate the optimization tips given so far.
Case study 1: University billing system

MONITOR WAIT TIME

Wait time analysis shows that the following query was taking the most time in the database.

-- Join and filter predicates reconstructed from the description below;
-- the column names are assumptions.
SELECT s.name, r.signup_date
FROM student s
INNER JOIN registration r ON r.student_id = s.id
INNER JOIN class c ON c.class_id = r.class_id
WHERE c.name = 'SQL Tuning'
AND r.signup_date BETWEEN :beg_date AND :beg_date + 1
AND r.cancelled = 'N'

Figure 18: Case study 1 – original query

The query SELECTs student name and signup date from the student table, then JOINs to registration on student.id and to class on class.id. It looks for current records with the name of ‘SQL Tuning’, a specific signup date and the cancelled flag of ‘N’ for non-cancelled.

In the execution plan (Figure 19), the upper right reveals a table scan of class, which represents a heap. With a heap, the table sits on disk, not organized in any way, probably generating a lot of logical I/O. Although the table itself is not very large, a table scan is inefficient.

The wide arrow leading from the registration table in the lower right reflects the clustered index scan and the high (60 percent) cost.

Also, SSMS suggests adding an index on registration for the columns cancelled and signup_date. It is best to validate that suggestion before accepting it.
As in Figure 12, Figure 20 shows a query with SET STATISTICS IO ON.
Logical reads are not excessively high. However, the optimizer included
the work tables Workfile and Worktable, which take up temporary
space and add extra steps for the query to work through.
FIND THE DRIVING TABLE

At this point, SQL diagramming is a good way to find the driving table.

First, determine which tables contain the detailed information and which tables are the master or lookup tables. In this simple case study, registration is the detail table. It has two lookup tables, student and class. To diagram these tables, draw an upside-down tree connecting the detail table (at the top) with arrows (or links) to the lookup tables, as depicted in Figure 23.

Figure 23: University billing system – SQL diagram

Now, calculate the relative number of records required for the Join criteria and put the numbers at each end of the arrow. For every 1 student there are about 5 records in the registration table, and for every 1 class there are about 30 records in registration. That means it should never be necessary to JOIN more than 150 (5 x 30) records to get a result for 1 student or 1 class. In fact, if you just make sure that your Join columns are properly indexed, you can skip figuring out the math.

Next, look at the filtering predicates to find which table to drive the query with. This query had two filters on registration — cancelled = 'N' and signup_date between two dates — and one filter on class, its name column. To see how selective each filter is, run a count query against each table, as sketched below.
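Hedged reconstructions of the two probe queries, using the column names from Figure 18 (:beg_date again stands for the report’s date parameter):

-- Selectivity of the registration filters.
SELECT COUNT(1)
FROM registration r
WHERE r.cancelled = 'N'
AND r.signup_date BETWEEN :beg_date AND :beg_date + 1;

-- Selectivity of the class filter.
SELECT COUNT(1)
FROM class c
WHERE c.name = 'SQL Tuning';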
FIX 1 — CLASS TABLE

Thus, class is the driving table. As shown in Figure 19, class was using a table scan (heap) because it was missing an index. To fix class, create a unique clustered index on its key column:

CREATE UNIQUE CLUSTERED INDEX class_pk ON class(class_id);

Because the name column is in the WHERE clause, add a non-clustered index on name:

CREATE NONCLUSTERED INDEX class_nm_idx ON class(name);

Figure 24 shows the execution plan, now that that index is being used.

FIX 2 — REGISTRATION TABLE

Add a non-clustered index:

CREATE NONCLUSTERED INDEX reg_alt ON registration(class_id);

The new execution plan is shown in Figure 25. The Messages tab shows that the number of logical reads for class is still 2, and for registration it has fallen from 400 to 63, which demonstrates even more progress.
FIX 3 — COVERING INDEX

If you add a covering index, the optimizer can retrieve all the information it needs from the index without going back to the table, thus reducing I/O. (See “A note on indexes” below for more details on covering and filtered indexes.)

Modify the index by adding all the columns from registration that the optimizer will need to resolve the query.
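A sketch of the modified index; the column order is an assumption based on the “descending order of specificity” described below:

CREATE NONCLUSTERED INDEX reg_alt
ON registration (class_id, signup_date, cancelled)
WITH (DROP_EXISTING = ON);   -- replaces the Fix 2 version of reg_alt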
By changing reg_alt to add signup_date and cancelled, in descending order of specificity, the number of logical reads for registration falls from 400 to 6. Average SQL response time has gone from 5.2 down to 1.3 seconds, with 2.5x more executions (59 to 160) in the same timeframe. Also, the original query spent 77 percent of its time waiting on ASYNC_NETWORK_IO; it now spends most of its time on CPU.
Case study 2: Customer query

This case study takes you through the flaws in a query on the AdventureWorks database. It leads up to the fifth and final SQL query optimization tip.

The query shown in Figure 27 SELECTs the same columns used for tip 2 and tip 3 above, but with different filters in the WHERE clause.

The goal is to figure out the best execution plan without relying on the Query Optimizer in SQL Server. In other words, what is the most selective way to get the smallest amount of data first? And then, what is the most efficient way to build upon that data to satisfy the columns in the SELECT clause?

SELECT c.CustomerID,
       p.FirstName,
       p.LastName,
       oh.OrderID,
       oh.OrderDate,
       pr.Name,
       pr.Color
FROM Sales.OrderHeader AS oh
INNER JOIN Sales.Customer AS c ON c.CustomerID = oh.CustomerID
INNER JOIN Person.Person AS p ON p.BusinessEntityID = c.PersonID
INNER JOIN Sales.OrderDetail AS od ON od.OrderID = oh.OrderID
INNER JOIN Production.Product AS pr ON pr.ProductID = od.ProductID
WHERE oh.OnlineOrderFlag = 1

Figure 27: Case study 2 – customer query
MONITOR WAIT TIME

Wait time analysis yields these statistics:

Executions – 439

Logical reads are very high, even though both the average SQL response time and the execution count are low.
On the right side of Figure 29, the execution plan shows a key lookup on Product. Why would a table that should have a primary key use a key lookup? That seems inefficient.

Although the clustered index scan on Customer is low in cost, it’s worth a look through the object information for a better access path.

The exec sp_helpindex queries (Figure 31) show that the column product.name has an index, as does the other selective column, person.lastname.

That means it is likely that the Optimizer will drive the query beginning with the name of the product or the last name of the customer.
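For reference, sp_helpindex is invoked per object, for example:

EXEC sp_helpindex 'Production.Product';
EXEC sp_helpindex 'Person.Person';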
FIND THE DRIVING TABLE

Use SQL diagramming to quickly find the driving table. As shown in Figure 32, SalesOrderDetail is the detail table. It has lookups into SalesOrderHeader and Product. SalesOrderHeader has a lookup into Customer, which has a lookup into Person.

A count query on the last-name filter returns 1,386 records from 19,972 in Person, or 7 percent, which is much more selective. Finally, the query

select count(1) from production.Product pr
where pr.Name like 'Mountain%42' and
pr.ProductID like '9%'

probes the combined filters on Product. That leaves the portion of the main query that reads WHERE oh.OnlineOrderFlag = 1.

Fixes for case study 2 are part of SQL query optimization tip 5.
Tip 5: Identify performance inhibitors

2. Is the query processing in parallel? Does it really need to? If not, then it’s consuming valuable resources that other queries may need.

4. Are you running nested views with a linked server? It’s better to avoid linked servers and replicate the data if possible.

6. Use WHERE clauses for filtering as early as possible in the query.

7. Beware of third-party SQL generators such as EMF, LINQ and NHibernate, which often produce sub-optimal code.
FIXES FOR CASE STUDY 2
To return to case study 2, an implicit conversion on ProductID was preventing the optimizer from using the clustered index. It happens that the Product table has only 504 records. Furthermore, the highest possible value in the ProductID column is 999, so the '9%' criterion would be better written as >=900. That way, SQL Server would not incur the implicit conversion, and it would perform an index seek using the primary key instead of using the alternate non-clustered index, AK_product_name.
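A minimal before-and-after sketch of that rewrite:

-- Before: ProductID is an integer, so LIKE '9%' implicitly converts it
-- and defeats the clustered index.
SELECT COUNT(1) FROM Production.Product pr
WHERE pr.ProductID LIKE '9%';

-- After: the comparison stays numeric, allowing a seek on the primary key
-- (valid here because ProductID values top out below 1000).
SELECT COUNT(1) FROM Production.Product pr
WHERE pr.ProductID >= 900;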
A NOTE ON INDEXES

Adding indexes is not always the right thing to do. If you need performance on INSERT, UPDATE or DELETE statements, then indexes can actually hamper that performance because of the extra work SQL Server must do to maintain them. However, if you need better performance on SELECT statements, then you can choose from the following types of indexes.

Covering indexes carry, in an INCLUDE clause, the non-key columns a query needs, so the optimizer can resolve the query without going back to the base table. For example (the CREATE line and index name are illustrative):

CREATE NONCLUSTERED INDEX CIX_OrderHeader_OnlineOrderFlag  -- name illustrative
ON Sales.OrderHeader(OnlineOrderFlag)
INCLUDE (OrderID,OrderDate,CustomerID,SubTotal)
Filtered indexes use a WHERE clause to index only a subset of the data in a table. The indexes are usually very small because they contain only the data that matches the filter. For example, the filtered index below would contain only those rows where OnlineOrderFlag = 1. If the table has 6 million rows, and only 100 rows have OnlineOrderFlag = 1, then the index would have only 100 entries.

CREATE NONCLUSTERED INDEX FIX_OrderHeader_OnlineOrderFlag  -- name illustrative
ON Sales.OrderHeader(OnlineOrderFlag)
WHERE OnlineOrderFlag = 1

You can also combine both index types, covering and filtered. For example:

CREATE NONCLUSTERED INDEX
FcIX_OrderHeader_OnlineOrderFlag
ON Sales.OrderHeader(OnlineOrderFlag)
INCLUDE (OrderID,OrderDate,CustomerID,SubTotal)
WHERE OnlineOrderFlag = 1

For case study 2, a covering index was also created on SalesOrderDetail; only the beginning of its definition is shown here:

CREATE NONCLUSTERED INDEX CIX_OrderDetail_ProductID
ON Sales.OrderDetail(ProductID)   -- key column assumed from the index name

Figure 34 shows the new execution plan.

Figure 34: Execution plan (partial) with covering index

Note that the order of the steps and the Join methods have again changed. Product (red oval) is again the driving table; it is the most selective, so it should be first. Product does a nested loop Join into SalesOrderDetail using an index seek.

Furthermore, logical I/O has fallen, as depicted in Figure 35.

Figure 35: Logical I/O with covering index on SalesOrderDetail
As a final tuning step, try to lower the number of logical reads (126,821) shown in Figure 28. In the execution plan, SQL Server suggested adding a covering index for SalesOrderHeader as well. Add an index like the following (the CREATE line and name are illustrative):

CREATE NONCLUSTERED INDEX CIX_OrderHeader_OnlineOrderFlag  -- name illustrative
ON Sales.OrderHeader(OnlineOrderFlag)
INCLUDE (OrderID,OrderDate,CustomerID,SubTotal)
Conclusion

The five SQL query optimization tips in this e-book comprise a method for tuning your SQL Server queries for higher speed and better performance. By monitoring wait time, reviewing the execution plan, gathering object information, finding the driving table and identifying performance inhibitors, database professionals like you can improve performance in your database environment.

ABOUT THE AUTHOR

Janis Griffin is a senior systems consultant at Quest, where she specializes in performance tuning and database performance analysis. A database administrator with over 30 years of experience, Janis started out on Oracle version 3. Most of her expertise is in Oracle, SQL Server and MySQL.
ABOUT QUEST

Quest provides software solutions for the rapidly changing world of enterprise IT. We help simplify the challenges caused by data explosion, cloud expansion, hybrid data centers, security threats and regulatory requirements. We’re a global provider to 130,000 companies across 100 countries, including 95% of the Fortune 500 and 90% of the Global 1000. Since 1987, we’ve built a portfolio of solutions which now includes database management, data protection, identity and access management, Microsoft platform management and unified endpoint management. With Quest, organizations spend less time on IT administration and more time on business innovation. For more information, visit www.quest.com.

© 2020 Quest Software Inc. ALL RIGHTS RESERVED.