ASE 15.7 Performance and Tuning Series: Locking and Concurrency Control (Sybase Inc.)
CHAPTER 1

Introduction to Locking
This chapter discusses basic locking concepts and the locking schemes
and types of locks used in Adaptive Server.
Topic                                   Page
How locking affects performance         1
Pseudocolumn-level locking              34
Reducing contention                     37
Understanding the types of locks in Adaptive Server can help to reduce lock
contention and avoid or minimize deadlocks.
T1                              Event sequence                  T2
begin transaction               T1 and T2 start.                begin transaction

update account                  T1 updates balance
set balance = balance - 100     for one account by
where acct_number = 25          subtracting $100.
                                                                select sum(balance)
                                                                from account
                                                                where acct_number < 50
update account                  T1 updates balance of
set balance = balance + 100     the other account by
where acct_number = 45          adding the $100.
                                                                commit transaction
commit transaction              T1 ends.
By default, Adaptive Server locks the data used in T1 until the transaction is
finished. Only then does it allow T2 to complete its query. T2 sleeps, or
pauses in execution, until the lock it needs is released when T1 completes.
The alternative, returning data from uncommitted transactions, is known as a
dirty read. If results do not need to be exact, T2 can read the uncommitted
changes from T1 and return results immediately, without waiting for the lock
to be released.
Locking is handled automatically by Adaptive Server, with options that can be
set at the session and query level by the user. You should know how and when
to use transactions to preserve data consistency while maintaining high
performance and throughput.
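For example, if approximate results are acceptable, an application like T2 can opt into dirty reads either for the whole session or for a single query. A minimal sketch, using the account table from the example above:

-- Allow this session to read uncommitted changes (dirty reads)
set transaction isolation level 0
select sum(balance)
from account
where acct_number < 50
go
-- Or request dirty reads for one query only
select sum(balance)
from account
where acct_number < 50
at isolation read uncommitted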
Less overall work is required when a table-level lock is used, but large-scale
locks can degrade performance by making other users wait until locks are
released. Decreasing lock granularity makes more data accessible to other
users. Finer granularity locks can degrade performance, since more work is
necessary to maintain and coordinate the increased number of locks. To
achieve optimum performance, a locking scheme must balance the needs of
concurrency and overhead.
Adaptive Server provides three locking schemes: allpages locking, datapages locking, and datarows locking.
For each locking scheme, Adaptive Server can lock an entire table, for queries
that acquire many page or row locks, or can lock only the affected pages or
rows.
Note The terms data-only locking and data-only-locked table refer to both
the datapages and datarows locking schemes; data-only-locked tables are typically
referred to as DOL tables. Allpages-locked tables are known as APL tables.
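The locking scheme is chosen when a table is created; if no lock clause is given, the server-wide default (the lock scheme configuration parameter) applies. A minimal sketch, with an illustrative table definition:

-- Create a DOL table that uses the datarows scheme
create table branch_totals
    (branch   int   not null,
     total    money not null)
lock datarows
go
-- Set the server-wide default scheme used when no lock clause is specified
sp_configure "lock scheme", 0, "datarows"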
Allpages locking
Allpages locking locks data pages and index pages. When a query updates a
value in a row in an allpages-locked table, the data page is locked with an
exclusive lock. Any index pages affected by the update are also locked with
exclusive locks. These locks are transactional, meaning that they are held until
the end of the transaction.
Figure 1-1 shows the locks acquired on data pages and indexes while a new
row is being inserted into an allpages-locked table.
Figure 1-1: Locks held during allpages locking
(The inserted row goes on data page 10; that data page and the affected leaf pages of the indexes on FirstName and LastName, whose entries point to row 10,1, are all locked.)
In many cases, concurrency problems that result from allpages locking arise
from the index page locks, rather than the locks on the data pages themselves.
Data rows are longer than index rows, so a data page typically holds only a
small number of rows. If index keys are short, an index page can store between 100 and
200 keys. An exclusive lock on an index page can block other users who need
to access any of the rows referenced by the index page, a far greater number of
rows than on a locked data page.
Datapages locking
In datapages locking, entire data pages are still locked, but index pages are not
locked. When a row needs to be changed on a data page, that page is locked,
and the lock is held until the end of the transaction. The updates to the index
pages are performed using latches, which are nontransactional. Latches are
held only as long as required to perform the physical changes to the page and
are then released immediately. Index page entries are implicitly locked by
locking the data page. No transactional locks are held on index pages. See
Latches on page 17 and Choosing a locking scheme based on contention
statistics on page 53 for more information.
Figure 1-2 shows an insert into a datapages-locked table. Only the affected data
page is locked.
Figure 1-2: Locks held during datapages locking
(Only data page 10, which receives the inserted row, is locked; the leaf pages of the indexes on FirstName and LastName remain unlocked.)
Datarows locking
In datarows locking, row-level locks are acquired on individual rows on data
pages. Index rows and pages are not locked. When a row is changed on a data
page, a nontransactional latch is acquired on the page. The latch is held while
the physical change is made to the data page, then the latch is released. The
lock on the data row is held until the end of the transaction. The index rows are
updated, using latches on the index page, but are not locked. Index entries are
implicitly locked by acquiring a lock on the data row.
Figure 1-3 shows an insert into a datarows-locked table. Only the affected data
row is locked.
Figure 1-3: Locks held during datarows locking
(Only the inserted data row, row 10,1 on page 10, is locked; the pages of the indexes on FirstName and LastName remain unlocked.)
Page locks or table locks are used for tables that use allpages locking or
datapages locking.
Row locks or table locks are used for tables that use datarows locking.
Page or row locks are less restrictive (or smaller) than table locks. A page lock
locks all the rows on a data page or an index page; a table lock locks an entire
table. A row lock locks only a single row on a page. Adaptive Server uses page
or row locks whenever possible to reduce contention and to improve
concurrency.
Adaptive Server uses a table lock to provide more efficient locking when an
entire table or a large number of pages or rows is accessed by a statement.
Locking strategy is directly tied to the query plan, so a query plan can be as
important for its locking strategies as for its I/O implications. For data-only-locked tables, an update or delete statement without a useful index performs a
table scan and acquires a table lock. For example, the following statement
acquires a table lock if the account table uses the datarows or datapages locking
scheme:
update account set balance = balance * 1.05
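By contrast, when the statement includes a search argument on an indexed column, Adaptive Server can lock only the affected pages or rows. A sketch contrasting the two forms (the index on acct_number is assumed, as in the earlier examples):

-- No useful index is used: the whole table is scanned and table-locked
update account set balance = balance * 1.05
-- Search argument on the indexed acct_number column: page or row locks only
update account
set balance = balance * 1.05
where acct_number = 25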
Shared locks: Adaptive Server applies shared locks for read operations.
If a shared lock has been applied to a data page or data row or to an index
page, other transactions can also acquire a shared lock, even when the first
transaction is active. However, no transaction can acquire an exclusive
lock on the page or row until all shared locks on the page or row are
released. This means that many transactions can simultaneously read a
page or row, but no transaction can change data on the page or row while
a shared lock exists. Transactions that require an exclusive lock wait for,
or block on, the release of the shared locks before continuing.
By default, Adaptive Server releases shared locks after it finishes scanning
the page or row. It does not hold shared locks until the statement is
completed or until the end of the transaction unless requested to do so by
the user. For more details on how shared locks are applied, see Locking
for select queries at isolation level 1 on page 29.
Update locks: Adaptive Server applies an update lock during the initial
phase of an update, delete, or fetch (for cursors declared for update)
operation while the page or row is being read. The update lock allows
shared locks on the page or row, but does not allow other update or
exclusive locks. Update locks help avoid deadlocks and lock contention.
If the page or row needs to be changed, the update lock is promoted to an
exclusive lock as soon as no other shared locks exist on the page or row.
In general, read operations acquire shared locks, and write operations acquire
exclusive locks. For operations that delete or update data, Adaptive Server
applies page-level or row-level exclusive and update locks only if the column
used in the search argument is part of an index. If no index exists on any of the
search arguments, Adaptive Server must acquire a table-level lock.
The examples in Table 1-2 show what kind of page or row locks Adaptive
Server uses for basic SQL statements. For these examples, there is an index
on acct_number, but no index on balance.
Table 1-2: Page locks and row locks

Statement                           Allpages-locked table          Datarows-locked table
select balance from account         Shared page lock               Shared row lock
where acct_number = 25
insert account values (34, 500)     Exclusive page lock            Exclusive row lock
delete account                      Update page lock, then         Update row lock, then
where acct_number = 25              exclusive page lock            exclusive row lock
update account set balance = 0      Update page lock, then         Update row lock, then
where acct_number = 25              exclusive page lock            exclusive row lock
Table locks
This section describes the types of table locks.
Shared lock: similar to a shared page or row lock, except that it affects
the entire table. For example, Adaptive Server applies a shared table lock
for a select command with a holdlock clause if the command does not use
an index. A create nonclustered index command also acquires a shared
table lock.
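A brief sketch of the two cases just mentioned (the index name is illustrative, and balance is assumed to be unindexed):

-- holdlock with no useful index: a shared table lock is acquired and held
select * from account holdlock
where balance > 1000
-- Building a nonclustered index also takes a shared table lock for its duration
create nonclustered index account_balance_idx
on account (balance)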
The examples in Table 1-3 show the page, row, and table locks that
Adaptive Server uses for basic SQL statements. For these
examples, there is an index on acct_number.
Statement                         Allpages-locked table                Datarows-locked table
select balance from account       Intent shared table lock             Intent shared table lock
where acct_number = 25            Shared page lock                     Shared row lock
delete account                    Intent exclusive table lock          Intent exclusive table lock
where acct_number = 25            Exclusive page lock on data page     Exclusive row lock
                                  Exclusive page locks on leaf
                                  index pages
update account                    Intent exclusive table lock          Intent exclusive table lock
set balance = 0                   Exclusive page lock on data page     Exclusive row lock
where acct_number = 25            Exclusive page locks on leaf
                                  index pages
Exclusive table locks are also applied to tables during select into operations,
including temporary tables created with tempdb..tablename syntax. Tables
created with #tablename are restricted to the sole use of the process that created
them, and are not locked.
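A short illustration of the difference (table names are illustrative):

-- select into a shareable tempdb table: an exclusive table lock is applied
select acct_number, balance
into tempdb..acct_copy
from account
-- select into a #temp table: private to this session, so it is not locked
select acct_number, balance
into #acct_copy
from account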
create table
drop table
create index
drop index
create view
drop view
create procedure
drop procedure
create trigger
drop trigger
create default
drop default
create rule
drop rule
create function
drop function
select into
create schema
reorg rebuild
Demand locks
Adaptive Server sets a demand lock to indicate that a transaction is next in the
queue to lock a table, page, or row. Since many readers can hold shared locks
on a given page, row, or table, tasks that require exclusive locks are queued
after a task that already holds a shared lock. Adaptive Server allows up to three
reader tasks to skip ahead of a queued update task.
After a write transaction has been skipped by three tasks or families (in the case
of queries running in parallel) that acquire shared locks, Adaptive Server gives
a demand lock to the write transaction. Any subsequent requests for shared
locks are queued behind the demand lock, as shown in Figure 1-4 on page 14.
As soon as the readers queued ahead of the demand lock release their locks, the
write transaction acquires its lock and can proceed. The read transactions
queued behind the demand lock wait for the write transaction to finish and
release its exclusive lock.
Adaptive Server uses demand locks to avoid lock starvation for write
transactions (when the required number of locks are not available).
Task 6 makes an exclusive lock request, but must wait until the shared lock
is released because shared and exclusive locks are not compatible.
Tasks 1 and 4 make shared lock requests, which are immediately
granted because shared locks are compatible with the shared lock already held.
Task 6 has now been skipped three times, and is granted a demand lock.
After tasks 1, 2, 3, and 4 finish their reads and release their shared locks,
task 6 is granted its exclusive lock.
After task 6 finishes its write and releases its exclusive page lock, task 5 is
granted its shared page lock.
Figure 1-4: Demand locking with serial query execution
(The shared page locks held by tasks 1 through 4 occupy the active lock position; task 6's exclusive page lock request holds the demand lock position; task 5's shared page lock request sleeps behind the demand lock.)
Task 9 makes an exclusive lock request, but must wait until the shared lock
is released.
Worker processes 1:1, 2:1, 3:1, task 10, and worker processes 3:2 and 1:2
are consecutively granted shared lock requests. Since family ID 3 and task
10 have no prior locks queued, the skip count for task 9 is now 3, and task
9 is granted a demand lock.
Finally, worker process 4:1 makes a shared lock request, but it is queued
behind task 9's exclusive lock request.
Any additional shared lock requests from family IDs 1, 2, and 3 and from
task 10 are queued ahead of task 9, but all requests from other tasks are
queued after it.
After all the tasks in the active lock position release their shared locks, task
9 is granted its exclusive lock.
After task 9 releases its exclusive page lock, task 4:1 is granted its shared
page lock.
(Demand locking with parallel query execution: the shared page locks held by worker processes 1:3, 2:3, 1:1, 2:1, 3:1, 3:2, and 1:2 and by task 10 occupy the active lock position; task 9's exclusive page lock request holds the demand lock position; worker process 4:1's shared page lock request sleeps behind the demand lock.)
Phantoms can occur at isolation level 3 when another transaction changes:
One of the result rows so that it no longer qualifies for the serializable read
transaction, by updating or deleting the row
A row that is not included in the serializable read result set so that the row
now qualifies, or inserts a row that would qualify for the result set
Adaptive Server uses range locks, infinity key locks, and next-key locks to
protect against phantoms on data-only-locked tables. Allpages-locked tables
protect against phantoms by holding locks on the index pages for the
serializable read transaction.
When a query at isolation level 3 (serializable read) performs a range scan
using an index, all the keys that satisfy the query clause are locked for the
duration of the transaction. Also, the key that immediately follows the range is
locked, to prevent new values from being added at the end of the range. If there
is no next value in the table, an infinity key lock is used as the next key, to
ensure that no rows are added after the last key in the table.
Range locks can be shared, update, or exclusive locks; depending on the
locking scheme, they are either row locks or page locks. sp_lock output shows
Fam dur, Range in the context column for range locks. For infinity key locks,
sp_lock shows a lock on a nonexistent row, row 0 of the root index page and
Fam dur, Inf key in the context column.
Every transaction that performs an insert or update to a data-only-locked table
checks for range locks.
Latches
Latches are nontransactional synchronization mechanisms used to guarantee
the physical consistency of a page. While rows are being inserted, updated, or
deleted, only one Adaptive Server process can access the page. Latches are
used for datapages and datarows locking, but not for allpages locking.
The most important distinction between a lock and a latch is duration:
A lock can persist for a long period of time: while a page is being scanned,
while a disk read or network write takes place, for the duration of a
statement, or for the duration of a transaction.
A latch is held only for the length of time required to insert or move a few
bytes on a data page, to copy pointers, columns, or rows, or to acquire a
latch on another index page.
Lock sufficiency, for the current task: is the lock currently held on a page or
row sufficient if the task needs to access the page again? Lock compatibility, between
tasks: can one task acquire a lock on a page or row while another task holds a lock on it?
If one task holds:        Can another task acquire:
                          A shared lock    An update lock    An exclusive lock
A shared lock             Yes              Yes               No
An update lock            Yes              No                No
An exclusive lock         No               No                No
If a task holds:          Is that lock sufficient if the task needs:
                          A shared lock    An update lock    An exclusive lock
A shared lock             Yes              No                No
An update lock            Yes              Yes               No
An exclusive lock         Yes              Yes               Yes
Number   Name                Description
0        read uncommitted    The transaction is allowed to read uncommitted changes to data.
1        read committed      The transaction is allowed to read only committed changes to data.
2        repeatable read     The transaction can repeat the same query, and no rows that have
                             been read by the transaction are updated or deleted.
3        serializable read   The transaction can repeat the same query and receive exactly the
                             same results. No rows can be inserted that would appear in the result set.
You can choose the isolation level for all select queries during a session, or you
can choose the isolation level for a specific query or table in a transaction.
At all isolation levels, all updates acquire exclusive locks and hold them for the
duration of the transaction.
Note For tables that use allpages locking, requesting isolation level 2 also
enforces isolation level 3. The Adaptive Server default isolation level is level 1.
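A minimal sketch of both options, reusing the account table from the earlier examples:

-- Session-wide setting
set transaction isolation level 3
go
select @@isolation        -- reports the session's current isolation level
go
-- Per-query override back to read committed
select sum(balance)
from account
where acct_number < 50
at isolation read committed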
T3                              Event sequence                  T4
begin transaction               T3 and T4 start.                begin transaction

update account                  T3 updates balance
set balance = balance - 100     for one account by
where acct_number = 25          subtracting $100.
                                T4 queries the current          select sum(balance)
                                sum of balance for              from account
                                the accounts.                   where acct_number < 50
                                T4 ends.                        commit transaction
rollback transaction            T3 rolls back,
                                invalidating the
                                results from T4.
If transaction T4 queries the table after T3 updates it, but before it rolls back
the change, the amount calculated by T4 is off by $100. The update statement
in T3 acquires an exclusive lock on account. However, T4 does not try to
acquire a shared lock before querying account, so it is not blocked by T3. The
opposite is also true. If T4 begins to query account at isolation level 0 before
T3 starts, T3 can still acquire its exclusive lock on account while T4's query
executes, because T4 does not hold any locks on the pages it reads.
At isolation level 0, Adaptive Server performs dirty reads by:
Allowing another task to read rows, pages, or tables that have exclusive
locks; that is, to read uncommitted changes to data.
Not applying shared locks on the rows, pages, or tables being read, so that
other tasks are not blocked.
Any data modifications that are performed by T4 while the isolation level is set
to 0 acquire exclusive locks at the row, page, or table level, and block if the data
they need to change is locked.
Dirty reads make in-cache copies of dirty data that the isolation level 0
application needs to read.
If a dirty read is active on a row, and the data changes so that the row is
moved or deleted, the scan must be restarted, which may incur additional
logical and physical I/O.
During deferred update of a data row, there can be a significant time interval
between the delete of the index row and the insert of the new index row. During
this interval, there is no index row corresponding to the data row. If a process
scans the index during this interval at isolation level 0, it does not return the old
or new value of the data row. See Deferred updates in Chapter 1,
Understanding Query Processing in Performance and Tuning Series: Query
Processing and Abstract Plans.
sp_sysmon reports on these factors. See Data Cache Management in
Performance and Tuning Series: Monitoring Adaptive Server with sp_sysmon.
T5                              Event sequence                  T6
begin transaction               T5 and T6 start.                begin transaction

update account                  T5 updates account
set balance = balance - 100     after getting an
where acct_number = 25          exclusive lock.
                                T6 tries to get a shared        select sum(balance)
                                lock to query account,          from account
                                but must wait until             where acct_number < 50
                                T5 releases its lock.
rollback transaction                                            commit transaction
T7                              Event sequence                  T8
begin transaction               T7 and T8 start.                begin transaction

select balance
from account
where acct_number = 25
                                                                update account
                                                                set balance = balance - 100
                                                                where acct_number = 25
                                T8 ends.                        commit transaction
select balance
from account
where acct_number = 25
commit transaction              T7 ends.
If transaction T8 modifies and commits the changes to the account table after
the first query in T7, but before the second one, the same two queries in T7
produce different results. Isolation level 2 blocks T8 from executing. It would
also block a transaction that attempted to delete the selected row.
T9                              Event sequence                  T10
begin transaction               T9 and T10 start.               begin transaction
                                (T10 inserts rows that
                                satisfy T9's search
                                condition.)
                                T10 ends.                       commit transaction
If transaction T10 inserts rows into the table that satisfy T9's search condition
after T9 executes the first select, subsequent reads by T9 using the same query
result in a different set of rows.
Adaptive Server prevents phantoms by:
Using range locks or infinity key locks for certain queries on data-only-locked tables.
Holding locks on the index pages of allpages-locked tables until the end of the
serializable read transaction.
Holding the shared locks allows Adaptive Server to maintain the consistency
of the results at isolation level 3. However, holding the shared lock until the
transaction ends decreases Adaptive Server concurrency by preventing other
transactions from getting their exclusive locks on the data.
Compare the phantom, shown in Table 1-10, with the same transaction
executed at isolation level 3, as shown in Table 1-11.
T11                             Event sequence                  T12
begin transaction               T11 and T12 start.              begin transaction

select * from
account holdlock
where acct_number < 25
                                (T12's insert cannot get
                                its exclusive lock until
                                T11 releases its shared
                                locks.)
select * from
account holdlock
where acct_number < 25
commit transaction
In transaction T11, Adaptive Server applies shared page locks and holds the
locks until the end of T11. (If account is a data-only-locked table, and no index
exists on the acct_number argument, a shared table lock is acquired.) The insert
in T12 cannot get its exclusive lock until T11 releases its shared locks. If T11
is a long transaction, T12 (and other transactions) may wait for longer periods
of time. Use level 3 only when required.
Using exclusive and shared locks allows Adaptive Server to maintain the
consistency of the results at isolation level 1. Releasing the shared lock after
the scan moves off a page improves Adaptive Server concurrency by allowing
other transactions to obtain their exclusive locks on the data.
Scan duration locks are released when the scan moves off the row or
page, for row or page locks, or when the scan of the table completes, for
table locks.
Table 1-12 shows the types of locks acquired by queries at different isolation
levels, for each locking scheme for queries that do not use cursors. Table 1-13
shows information for cursor-based queries.
Table 1-12 summarizes the locks acquired by queries that do not use cursors. For
each statement type it lists the table lock, data page lock, index page lock, and data
row lock acquired under the allpages, datapages, and datarows locking schemes,
along with the lock duration:

select and readtext, with any type of scan, at isolation level 0: no locks are acquired.
select via index scan or table scan at isolation level 1, at levels 2 and 3 with
noholdlock, at level 3, and at levels 1 and 2 with holdlock: intent shared (IS) table
locks with shared (S) page or row locks, depending on the locking scheme.
insert at isolation levels 0 through 3, writetext, and delete and update via index
scan or table scan: intent exclusive (IX) table locks with update (U) and exclusive (X)
page or row locks, plus exclusive index page locks for allpages-locked tables.

Key: IS = intent shared, IX = intent exclusive, S = shared, U = update, X = exclusive.
Table 1-13 summarizes the locks acquired by cursor-based queries, using the same
columns (table lock, data page lock, index page lock, data row lock) and duration.
select without a for clause and select...for read only acquire intent shared (IS) table
locks with shared (S) page or row locks at isolation level 1, at levels 2 and 3 with
noholdlock, at levels 2 and 3, and at levels 1 and 2 with holdlock, and no locks at the
least restrictive settings. select...for update, with or without the shared keyword,
acquires intent exclusive (IX) table locks with update (U), shared (S), or exclusive (X)
page or row locks, depending on the locking scheme and the shared option.

Key: IS = intent shared, IX = intent exclusive, S = shared, U = update, X = exclusive.
If read committed with lock is set to 0 (the default), then select queries read
the column values with instant-duration page or row locks. The required
column values or pointers for the row are read into memory, and the lock
is released. Locks are not held on the outer tables of joins while rows from
the inner tables are accessed. This reduces deadlocking and improves
concurrency.
If a select query needs to read a row that is locked with an incompatible
lock, the query still blocks on that row until the incompatible lock is
released. Setting read committed with lock to 0 does not affect the isolation
level; only committed rows are returned to the user.
If read committed with lock is set to 1, select queries acquire shared page
locks on datapages-locked tables and shared row locks on datarows-locked tables.
The lock on the first page or row is held until the lock on the second page or row
is acquired; then the lock on the first page or row is dropped.
You must declare cursors as read-only to avoid holding locks during scans
when read committed with lock is set to 0. Any implicitly or explicitly updatable
cursor on a data-only-locked table holds locks on the current page or row until
the cursor moves off the row or page. When read committed with lock is set to
1, read-only cursors hold a shared page or row lock on the row at the cursor
position.
read committed with lock does not affect locking behavior on allpages-locked tables.
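A sketch of the configuration parameter and of a read-only cursor declaration (the cursor name is illustrative):

-- Server-wide setting; 0 is the default
sp_configure "read committed with lock", 0
go
-- Declaring the cursor read-only avoids holding locks during the scan
declare balance_curs cursor
for select acct_number, balance from account
for read only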
Includes search arguments for every key in the index chosen by the query,
so that the index unambiguously qualifies the row, and
the same transaction. In this case, scans block on the uncommitted inserted
row.
uncommitted inserted row with the key value of interest, it skips it without
blocking.
The only exception to this rule is if the transaction doing the uncommitted
insert was overwriting an uncommitted delete of the same row done earlier by
the same transaction. In this case, updates and deletes block on the
uncommitted inserted row.
Since the branch value in the row affected by T15 is not 77, the row does not
qualify, and the row is skipped, as shown. If T15 updated a row where branch
equals 77, a select query would block until T15 either commits or rolls back.
Table 1-15: Pseudocolumn-level locking with multiple predicates

T15                             Event sequence                  T16
begin transaction               T15 and T16 start.              begin transaction

update accounts                 (T16's select, whose search
set balance = 80                clause references branch = 77,
where acct_number = 20          skips the row updated by T15
and branch = 23                 without blocking, because the
                                row does not qualify.)
commit transaction
For select queries to avoid blocking when they reference columns in addition
to columns that are being updated, all of the following conditions must be met:
At least one of the search clauses of the select query must be on a column
that is among the first 32 columns of the table.
The configuration parameter read committed with lock must be set to 0, the
default value.
Pseudocolumn-level locking
During concurrent transactions that involve select and update commands,
pseudo-column-level locking can allow some queries to return values from
locked rows, and can allow other queries to avoid blocking on locked rows that
do not qualify. Pseudo-column-level locking can reduce blocking when:
Neither the old nor the new value of the updated column qualifies, and an
index containing the updated column is being used.
The query does not reference an updated column in the select list or any
clauses (where, having, group by, order by or compute), and
The query does not use an index that includes the updated column.
Transaction T14 in Table 1-16 requests information about a row that is locked
by T13. However, since T14 does not include the updated column in the result
set or as a search argument, T14 does not block on T13's exclusive row lock.
Table 1-16: Pseudocolumn-level locking with mutually exclusive columns

T13                             Event sequence                  T14
begin transaction                                               begin transaction

update accounts                 (T14 selects columns other
set balance = 50                than balance from the row
where acct_number = 35          locked by T13 and does not
                                block.)
commit transaction
If T14 uses an index that includes the updated column (for example,
acct_number, balance), the query blocks trying to read the index row.
For select queries to avoid blocking when they do not reference updated
columns, all of the following conditions must be met:
The columns referenced in the select query must be among the first 32
columns of the table.
The select query must not use an index that contains the updated column.
The configuration parameter read committed with lock must be set to 0, the
default value.
If neither the old nor the new value meets the search criteria, the row can be
skipped, and the query does not block.
If the old value, the new value, or both values qualify, the query blocks. In
Table 1-17, if the original balance is $80, and the new balance is $90, the
row can be skipped, as shown. If either of the values is less than $50, T18
must wait until T17 completes.
Table 1-17: Checking old and new values for an uncommitted update

T17                             Event sequence                  T18
begin transaction                                               begin transaction

update accounts                 (T18's select qualifies rows
set balance = balance + 10      on balance; since neither the
where acct_number = 20          old value, $80, nor the new
                                value, $90, satisfies its
commit transaction              search clause, the row is
                                skipped and T18 does not block.)
For select queries to avoid blocking when old and new values of uncommitted
updates do not qualify, all of the following conditions must be met:
At least one of the search clauses of the select query must be on a column
that is among the first 32 columns of the table.
The index used for the select query must include the updated column.
The configuration parameter read committed with lock must be set to 0, the
default value.
Reducing contention
To help reduce lock contention between update and select queries:
Use datarows or datapages locking for tables with lock contention caused
by update and select commands.
If tables have more than 32 columns, make the first 32 columns the
columns most frequently used as search arguments and in other query
clauses.
Select only needed columns. Avoid using select * when all columns are not
needed by the application.
Use any available predicates for select queries. When a table uses
datapages locking, the information about updated columns is kept for the
entire page, so that if a transaction updates some columns in one row, and
other columns in another row on the same page, any select query that
needs to access that page must avoid using any of the updated columns.
CHAPTER 2

Locking Configuration and Tuning
This chapter discusses the types of locks used in Adaptive Server and the
commands that can affect locking.
Topic                                                Page
Locking and performance                              39
Configuring locks and lock promotion thresholds      44
Choosing the locking scheme                          51
Optimistic index locking                             56
Processes wait for locks to be released. Any time a process waits for
another process to complete its transaction and release its locks,
overall response time and throughput are affected.
Keep transactions short to reduce the time that locks are held.
A data modification statement that acquires a large number of locks may be blocked and have to wait while holding those locks.
Creating a useful index for the query allows the data modification statement to
use page or row locks, improving concurrent access to the table. If you cannot
create an index for a lengthy update or delete transaction, you can perform the
operation in a cursor, with frequent commit transaction statements to reduce the
number of page locks.
begin tran
select balance
from account holdlock
where acct_number = 25
update account
set balance = balance + 50
where acct_number = 25
commit tran
begin tran
update account
set balance = balance + 50
where acct_number = 25
go
update account
set balance = balance - 50
where acct_number = 45
commit tran
go
To avoid last-page contention on heap tables:
Partition the table using the round-robin strategy. Partitioning a heap table
creates multiple page chains in the table, and, therefore, multiple last pages
for insertions.
Concurrent inserts to the table are less likely to block one another, since
multiple last pages are available. Partitioning improves concurrency for
heap tables without creating separate tables for different groups of users.
See Improving insert performance with partitions in Performance and
Tuning Series: Physical Database Tuning for information about
partitioning tables.
Create a clustered index to distribute updates across the data pages in the
table.
Like partitioning, this creates multiple insertion points for the table.
However, it also introduces overhead for maintaining the physical order of
the table's rows.
Use the lowest level of locking required by each application. Use isolation
level 2 or 3 only when necessary.
Updates by other transactions may be delayed until a transaction using
isolation level 3 releases any of its shared locks at the end of the
transaction.
Use isolation level 3 only when nonrepeatable reads or phantoms may
interfere with results.
If only a few queries require isolation level 3, use the holdlock keyword or
the at isolation serializable clause in those queries rather than using set
transaction isolation level 3 for the entire transaction.
If most queries in the transaction require isolation level 3, use set
transaction isolation level 3, but use noholdlock or at isolation read committed
in the queries that can execute at isolation level 1. (Both approaches are
sketched after this list.)
If the application must return a row, wait for user interaction, and then
update the row, consider using timestamps and the tsequal function rather
than holdlock.
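The two approaches can be sketched as follows (queries and values follow the earlier account examples):

-- Only one query needs level 3: use holdlock on that query
begin tran
    select sum(balance) from account holdlock
        where acct_number < 50
    update account set balance = balance - 100
        where acct_number = 25
commit tran
go
-- Most queries need level 3: set it for the session, relax individual queries
set transaction isolation level 3
begin tran
    select sum(balance) from account
        where acct_number < 50
    select balance from account noholdlock
        where acct_number = 45
commit tran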
Other tuning efforts can also help reduce lock contention. For example, if a
process holds locks on a page, and must perform a physical I/O to read an
additional page, the process holds the lock much longer than it would if the
additional page were already in cache. In this case, better cache utilization or
the use of large I/O can reduce lock contention. You can also reduce lock
contention by improving indexing and distributing physical I/O evenly across
disks.
The size of the lock hash table and the number of spinlocks that protect the
page/row lock hash table, table lock hash table, and address lock hash table
The server-wide lock timeout limit, and the lock timeout limit for
distributed transactions
The number of locks available per engine and the number of locks
transferred between the global free lock list and the engines
You may also need to adjust the sp_configure parameter max memory, since
each lock uses memory.
The number of locks required by a query can vary widely, depending on the
locking scheme and on the number of concurrent and parallel processes and the
types of actions performed by the transactions. Configuring the correct number
for your system is a matter of experience and familiarity with the system.
Start with 20 locks for each active concurrent connection, plus 20 locks for
each worker process, and increase the number of locks if this proves insufficient
for the isolation levels and locking schemes your queries use.
Tables using datapages locking require fewer locks than tables using
allpages locking, since queries on datapages-locked tables do not acquire
separate locks on index pages.
An insert with allpages locking requires N+1 locks, where N is the number of
indexes. The same insert on a data-only-locked table locks only the data page
or data row.
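As a starting point, the sizing suggested above can be applied with sp_configure; the connection and worker process counts here are illustrative:

-- 400 active connections and 100 worker processes: 20 * (400 + 100) = 10000
sp_configure "number of locks", 10000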
select queries and locks
Scans at transaction isolation level 1, with read committed with lock set to hold
locks (1), acquire overlapping locks that roll through the rows or pages, so they
hold, at most, two data page locks at a time.
However, transaction isolation level 2 and 3 scans, especially those using
datarows locking, can acquire and hold very large numbers of locks, especially
when running in parallel. Using datarows locking, and assuming no blocking
during lock promotion, the maximum number of locks that might be required
for a single table scan is:
For tables that use the datarows-locking scheme, data modification commands
can require many more locks than data modification on allpages-locked or datapages-locked tables.
For example, a transaction that performs a large number of inserts into a heap
table may acquire only a few page locks for an allpages-locked table, but
requires one lock for each inserted row in a datarows-locked table. Similarly,
transactions that update or delete large numbers of rows may acquire many
more locks with datarows locking.
Lock promotion always converts many page locks to a table lock or many row locks
to a table lock. Row locks are never promoted to page locks.
A table may be scanned more than once inside a single transaction in the
case of joins, subqueries, exists clauses, and so on. Each scan of the table
is a scan session.
A scan session may scan data from more than one partition. Lock promotion is
based on the number of page or row locks acquired across all the partitions
accessed in the scan.
A table lock is more efficient than multiple page or row locks when an entire
table might eventually be needed. At first, a task acquires page or row locks,
then attempts to escalate to a table lock when a scan session acquires more page
or row locks than the value set by the lock promotion threshold.
Since lock escalation occurs on a per-scan-session basis, the total number of
page or row locks for a single transaction can exceed the lock promotion
threshold, as long as no single scan session acquires more than the lock
promotion threshold number of locks. Locks may persist throughout a
transaction, so a transaction that includes multiple scan sessions can
accumulate a large number of locks.
Lock promotion cannot occur if another task holds locks that conflict with the
type of table lock needed. For instance, if a task holds any exclusive page locks,
no other process can promote to a table lock until the exclusive page locks are
released.
When lock promotion is denied due to conflicting locks, a process can
accumulate page or row locks in excess of the lock promotion threshold and
may exhaust all available locks in Adaptive Server.
The lock promotion parameters are:
For allpages-locked and datapages-locked tables: page lock promotion HWM,
page lock promotion LWM, and page lock promotion PCT.
For datarows-locked tables: row lock promotion HWM, row lock promotion
LWM, and row lock promotion PCT.
HWM (high water mark): when the number of locks acquired during a scan session
exceeds this value, Adaptive Server attempts to acquire a table lock.
LWM (low water mark): the minimum number of locks that must be acquired during
a scan session before Adaptive Server attempts lock promotion.
PCT (percent): when the number of locks acquired is between the low and high
water marks, the percentage of the table's pages or rows that must be locked
before Adaptive Server attempts to acquire a table lock.
Setting the high water mark to a value greater than 200 reduces the chance of
any task or worker process acquiring a table lock on a particular table. For
example, if a process updates more than 200 rows of a very large table during
a transaction, setting the lock promotion high water mark higher keeps this
process from attempting to acquire a table lock.
Setting the high water mark to less than 200 increases the chances of a
particular task or worker process acquiring a table lock.
The low water mark must be less than or equal to the corresponding high water
mark.
Setting the low water mark to a very high value decreases the chance for a
particular task or worker process to acquire a table lock, which uses more locks
for the duration of the transaction, potentially exhausting all available locks in
Adaptive Server. The possibility of all locks being exhausted is especially high
with queries that update a large number of rows in a datarows-locked table, or
that select large numbers of rows from datarows-locked tables at isolation
levels 2 or 3.
If conflicting locks prevent lock promotion, you may need to increase the value
of the number of locks configuration parameter.
Setting lock promotion PCT to a very low value increases the chance of a
particular user transaction acquiring a table lock. Figure 2-1 shows how
Adaptive Server determines whether to promote page locks on a table to a table
lock.
Figure 2-1: Lock promotion logic
(Flowchart: if the scan session holds the lock promotion HWM number of page or row locks, and the LWM and PCT conditions are met and no other task holds conflicting locks, Adaptive Server promotes to a table lock; otherwise it does not promote.)
In this example, the task does not attempt to promote to a table lock unless the
number of locks on the table is between 100 and 2000.
If a command requires more than 100 but less than 2000 locks, Adaptive Server
compares the number of locks to the percentage of locks on the table.
If the number of locks is greater than the number of pages resulting from the
percentage calculation, Adaptive Server attempts to issue a table lock.
sp_setrowlockpromote sets the configuration parameters for all datarows-locked tables:
sp_setrowlockpromote "server", null, 300, 500, 50
The default values for lock promotion configuration parameters are likely to be
appropriate for most applications.
After the values are initialized, you can change any individual value. For
example, to change the lock promotion PCT only:
sp_setpglockpromote "table", titles, null, null, 70
sp_setrowlockpromote "table", authors, null, null, 50
Precedence of settings
You can change the lock promotion thresholds for any user database or for an
individual table. Settings for an individual table override the database or
server-wide settings; settings for a database override the server-wide values.
Server-wide values for lock promotion apply to all user tables on the server,
unless the database or tables have lock promotion values configured.
Here are some typical situations and general guidelines for choosing the
locking scheme:
The table is a heap table that will have a high rate of inserts.
Use datarows locking to avoid contention. If the number of rows inserted
per batch is high, datapages locking is also acceptable. Allpages locking
has more contention for the last page of heap tables.
If the table uses allpages locking and has a clustered index, ensure that the
modified clustered index structure on data-only-locked tables will not hurt
performance. See "Tables where clustered index performance must remain high"
on page 55.
If the table uses allpages locking, convert the locking scheme to datapages
locking to determine whether that solves the concurrency problem.
Reducing lock contention on one table could ease lock contention on other
tables as well, or it could increase lock contention on another table that was
masked by blocking on the first table in the application. For example:
Lock contention is high for two tables that are updated in transactions
involving several tables. Applications first lock TableA, then attempt to
acquire locks on TableB, and block, holding locks on TableA.
Additional tasks running the same application block while trying to
acquire locks on TableA. Both tables show high contention and high wait
times.
Changing TableB to data-only locking may alleviate the contention on both
tables.
Contention for TableT is high, so its locking scheme is changed to a data-only locking scheme.
Re-running sp_object_stats now shows contention on TableX, which had
shown very little lock contention. The contention on TableX was masked
by the blocking problem on TableT.
If your application uses many tables, you may want to convert your set of tables
to data-only locking gradually, by changing only those tables with the highest
lock contention. Then test the results of these changes by re-running
sp_object_stats.
Run your usual performance monitoring tests both before and after you make
the changes.
Check query plans and I/O statistics, especially for those queries that use
clustered indexes.
Monitor the tables to learn how changing the locking scheme affects:
For tables with variable-length columns, subtract 2 bytes for each variable-length column (this includes all columns that allow null values). For example,
the maximum user row size for a data-only-locked table with 4 variable-length
columns is 1950 bytes.
If you try to convert an allpages-locked table that has more than 1958 bytes in
fixed-length columns, the command fails as soon as it reads the table schema.
When you try to convert an allpages-locked table with variable-length
columns, and some rows exceed the maximum size for the data-only-locked
table, the alter table command fails at the first row that is too long to convert.
Optimistic index locking does not acquire an address lock on the root page of
an index partition during normal data manipulation language (DML)
operations. If your updates and insertions can cause modifications to the root
page of the accessed index partition, optimistic index locking restarts the
search and acquires an exclusive table lock, not an address lock.
Two stored procedures are changed by optimistic index locking:
For more information, see the Adaptive Server Reference Manual: Procedures.
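The property is typically set per table with sp_chgattribute; this sketch assumes the optimistic_index_lock attribute name described in the Reference Manual:

-- Enable optimistic index locking on the titles table (1 = on, 0 = off)
sp_chgattribute "titles", "optimistic_index_lock", 1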
None of the indexes on this table cause modifications to the root page.
A database is read-only.
You have small tables (that are not read-only) with index levels no higher
than 3.
A modification to the root page under optimistic index locking results in an
exclusive table lock, which blocks all other access to the entire table. Use extreme
caution in setting the optimistic index locking property.
CHAPTER 3

Locking Reports
This chapter discusses tools that report on locks and locking behavior.
Topic                                   Page
Locking tools                           59
Locking tools
sp_who, sp_lock, and sp_familylock report on locks held by users and show
processes that are blocked by other transactions.
The status column shows lock sleep to indicate that this task or worker process
is waiting for an existing lock to be released.
The blk_spid or block_xloid column shows the process ID of the
task or transaction holding the lock or locks.
You can add a user name parameter to get sp_who information about a
particular Adaptive Server user. If you do not provide a user name, sp_who
reports on all processes in Adaptive Server.
For example, consider what happens if you run three sessions in the pubs2
database: session one deletes rows from the authors table, session two selects all the
data from the authors table, and session three runs sp_who against
spid 15. In this situation, session two hangs, and session three reports this
in the sp_who output:
sp_who '15'

fid spid status     loginame origname hostname   blk_spid dbname tempdbname cmd              block_xloid threadpool
--- ---- ---------- -------- -------- ---------- -------- ------ ---------- ---------------- ----------- ----------------
0   15   recv sleep sa       sa       PSALDINGXP 0        pubs2  tempdb     AWAITING COMMAND 0           syb_default_pool
0   16   lock sleep sa       sa       PSALDINGXP 15       pubs2  tempdb     SELECT           0           syb_default_pool
If you run sp_lock against spid 15, the class column displays the cursor name
for locks associated with the current user's cursor and the cursor ID for other
users:

fid spid loid locktype    table_id  page row dbname class           context
--- ---- ---- ----------- --------- ---- --- ------ --------------- -------
0   15   30   Ex_intent   576002052 0    0   pubs2  Non Cursor Lock
0   15   30   Ex_page-blk 576002052 1008 0   pubs2  Non Cursor Lock
0   15   30   Ex_page     576002052 1040 0   pubs2  Non Cursor Lock Ind pg
If you run sp_lock against spid 16, the class column displays the cursor name
for locks associated with the current user's cursor and the cursor ID for other
users:

fid spid loid locktype  table_id  page row dbname class
--- ---- ---- --------- --------- ---- --- ------ ---------------
0   16   32   Sh_intent 576002052 0    0   pubs2  Non Cursor Lock
Note The sample output for sp_lock and sp_familylock in this chapter omits the
class column to increase readability. The class column reports either the
names of cursors that hold locks or Non Cursor Lock.
[sp_lock output showing the locks held by spids 15, 30, 49, and 50, and by the family of worker processes with fid 32, on several user tables and on the spt_values table in master]
This example shows the lock status of serial processes and one parallel process:
spid 15 holds an exclusive intent lock on a table, one data page lock, and
two index page locks. A blk suffix indicates that this process is blocking
another process that needs to acquire a lock; spid 15 is blocking another
process. As soon as the blocking process completes, the other processes
move forward.
spid 30 holds an exclusive intent lock on a table, one lock on a data page, and
two locks on index pages.
spid 49 is the task that ran sp_lock; it holds a shared intent lock on the
spt_values table in master while it runs.
spid 50 holds intent locks on two tables, and several row locks.
fid 32 shows several spids holding locks: the parent process (spid 32) holds
shared intent locks on 7 tables, while the worker processes hold shared
page locks on one of the tables.
The lock type column indicates not only whether the lock is a shared lock
(Sh prefix), an exclusive lock (Ex prefix), or an Update lock, but also
whether it is held on a table (table or intent) or on a page or row.
A demand suffix indicates that the process will acquire an exclusive lock as
soon as all current shared locks are released.
The context column consists of one or more of the following values:
Fam dur means that the task will hold the lock until the query completes,
that is, for the duration of the family of worker processes. Shared intent
locks are an example of family duration locks.
For a parallel query, the coordinating process always acquires a shared
intent table lock that is held for the duration of the parallel query. If the
parallel query is part of a transaction, and earlier statements in the
transaction performed data modifications, the coordinating process holds
family duration locks on all the changed data pages.
Worker processes can hold family duration locks when the query operates
at isolation level 3.
Range indicates a range lock, used for some range queries at transaction
isolation level 3.
To see lock information about a particular login, give the spid for the process:
sp_lock 30
fid spid loid locktype  table_id  page row dbname context
--- ---- ---- --------- --------- ---- --- ------ ---------------
0   30   60   Ex_intent 208003772 0    0   sales  Fam dur
0   30   60   Ex_page   208003772 997  0   sales  Fam dur
0   30   60   Ex_page   208003772 2405 0   sales  Fam dur, Ind pg
0   30   60   Ex_page   208003772 2406 0   sales  Fam dur, Ind pg
If the spid you specify is also the fid for a family of processes, sp_lock prints
information for all of the processes in the family.
You can also request information about locks on multiple spids:
sp_lock 30, 15
fid spid loid locktype    table_id  page row dbname context
--- ---- ---- ----------- --------- ---- --- ------ ---------------
0   15   30   Ex_page     208003772 2400 0   sales  Fam dur, Ind pg
0   15   30   Ex_page     208003772 2404 0   sales  Fam dur, Ind pg
0   15   30   Ex_page-blk 208003772 946  0   sales  Fam dur
0   30   60   Ex_intent   208003772 0    0   sales  Fam dur
0   30   60   Ex_page     208003772 997  0   sales  Fam dur
0   30   60   Ex_page     208003772 2405 0   sales  Fam dur, Ind pg
0   30   60   Ex_page     208003772 2406 0   sales  Fam dur, Ind pg
sp_familylock 51
fid spid loid locktype  table_id  page row dbname context
--- ---- ---- --------- --------- ---- --- ------ -------
51  23   102  Sh_page   208003772 945  0   sales
51  51   102  Sh_intent 16003088  0    0   sales  Fam dur
51  51   102  Sh_intent 48003202  0    0   sales  Fam dur
51  51   102  Sh_intent 176003658 0    0   sales  Fam dur
51  51   102  Sh_intent 208003772 0    0   sales  Fam dur
The following sp_who output shows a family of worker processes, fid 11; most of the
workers are in lock sleep, waiting on spid 18:

fid spid status     loginame origname hostname blk_spid dbname tempdbname cmd            block_xloid threadpool
--- ---- ---------- -------- -------- -------- -------- ------ ---------- -------------- ----------- ----------------
11  11   sleeping   diana    diana    olympus  0        sales  tempdb     SELECT         0           syb_default_pool
11  16   lock sleep diana    diana    olympus  18       sales  tempdb     WORKER PROCESS 0           syb_default_pool
11  17   lock sleep diana    diana    olympus  18       sales  tempdb     WORKER PROCESS 0           syb_default_pool
11  18   send sleep diana    diana    olympus  0        sales  tempdb     WORKER PROCESS 0           syb_default_pool
11  19   lock sleep diana    diana    olympus  18       sales  tempdb     WORKER PROCESS 0           syb_default_pool
11  20   lock sleep diana    diana    olympus  18       sales  tempdb     WORKER PROCESS 0           syb_default_pool
11  21   lock sleep diana    diana    olympus  18       sales  tempdb     WORKER PROCESS 0           syb_default_pool
Each worker process acquires an exclusive address lock on the network buffer
while writing results to it. When the buffer is full, it is sent to the client, and the
lock is held until the network write completes.
Connection B requests an exclusive lock on the same pages and then waits.
T19                             Event sequence                  T20
begin transaction                                               begin transaction

update savings                                                  update checking
set balance = balance - 250                                     set balance = balance - 75
where acct_number = 25                                          where acct_number = 45

update checking                 (Each transaction now           update savings
set balance = balance + 250     needs a lock that the           set balance = balance + 75
where acct_number = 45          other holds, and a              where acct_number = 25
                                deadlock results.)
commit transaction                                              commit transaction
A single worker process can be involved in a deadlock such as those that occur
between two serial processes. For example, a worker process that is performing
a join between two tables can deadlock with a serial process that is updating
the same two tables.
In some cases, deadlocks between serial processes and families involve a level
of indirection.
For example, if a task holds an exclusive lock on tableA and needs a lock on
tableB, but a worker process holds a family-duration lock on tableB, the task
must wait until the transaction that the worker process is involved in completes.
If another worker process in the same family needs a lock on tableA, the result
is a deadlock. Figure 3-1 illustrates the following deadlock scenario:
The family identified by fid 8 is doing a parallel query that involves a join
of stock_tbl and sales_tbl, at transaction level 3.
(Worker processes 8:9 and 8:10 hold a shared page lock on page 10862 of stock_tbl at isolation level 3; the serial task, spid 7, holds an exclusive page lock on page 634 of sales_tbl. Each side needs a lock that the other holds, so neither can proceed.)
Deadlock Id 11 detected
In this output, fid 0, spid 29 started the deadlock detection check, so its fid and
spid values are used as the second and third values in the deadlock message.
(The first value, 03, is the engine number.)
However, setting print deadlock information to 1 can degrade Adaptive Server
performance. For this reason, use it only to determine the cause of deadlocks.
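For example, one way to turn the additional reporting on while you investigate, and off again afterward (the sequence is illustrative):
sp_configure "print deadlock information", 1
go
-- reproduce and diagnose the deadlocks, then disable the reporting
sp_configure "print deadlock information", 0
go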
The type of locks each task held, and the type of lock each task was trying
to acquire
In the following report, spid 29 is deadlocked with a parallel task, fid 94, spid
38. The deadlock involves exclusive versus shared lock requests on the authors
table. spid 29 is chosen as the deadlock victim:
Deadlock Id 11: detected. 1 deadlock chain(s) involved.
Deadlock Id 11: Process (Familyid 94, 38) (suid 62) was executing a SELECT
command at line 1. SQL Text select * from authors where au_id like '172%'
Deadlock Id 11: Process (Familyid 29, 29) (suid 56) was executing a INSERT
command at line 1
SQL Text: insert authors (au_id, au_fname, au_lname) values (A999999816,
Bill, Dewart)
Deadlock Id 11: Process (Familyid 0, Spid 29) was waiting for a exclusive page
lock on page 1155 of the authors table in database 8 but process (Familyid
94, Spid 38) already held a shared page lock on it.
Deadlock Id 11: Process (Familyid 94, Spid 38) was waiting for a shared page
lock on page 2336 of the authors table in database 8 but process (Familyid
29, Spid 29) already held a exclusive page lock on it.
Deadlock Id 11: Process (Familyid 0, 29) was chosen as the victim. End of
deadlock information.
Avoiding deadlocks
Deadlocks may occur when many long-running transactions are executed at the
same time in the same database. Deadlocks become more common as lock
contention increases between transactions, which decreases concurrency.
Methods for reducing lock contention, such as changing the locking scheme,
avoiding table locks, and not holding shared locks, are described in Chapter 2,
Locking Configuration and Tuning.
Therefore, a process may wait from the number of milliseconds set by deadlock
checking period to almost twice that value before deadlock checking is
performed. sp_sysmon can help you tune deadlock checking behavior.
See Deadlock detection in Performance and Tuning Series: Monitoring Adaptive Server with sp_sysmon.
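For example, to defer deadlock checking so that it runs no more often than about once every 600 milliseconds (the value here is only illustrative):
sp_configure "deadlock checking period", 600
go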
To measure lock contention on all tables in all databases, specify only the
interval. This example monitors lock contention for 20 minutes, and reports
statistics on the 10 tables with the highest levels of contention:
sp_object_stats "00:20:00"
rpt_locks reports grants, waits, deadlocks, and wait times for the tables
with the highest contention. rpt_locks is the default.
rpt_objlist reports only the names of the objects with the highest level
of lock activity.
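For example, assuming you want to narrow the report, the optional second parameter limits the output to the tables with the highest contention; this illustrative call reports only the top five:
sp_object_stats "00:20:00", 5
go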
Page Locks     SH_PAGE        UP_PAGE        EX_PAGE
----------     ----------     ----------     ----------
Grants:        94488          4052           4828
Waits:         532            500            776
Deadlocks:     4              0              24
Wait-time:     20603764 ms    14265708 ms    2831556 ms
Contention:    0.56%          10.98%         13.79%
For each lock type, the report shows Grants (the number of times the lock was granted immediately), Waits, Deadlocks, Wait-time, and Contention.
For more information about sp_sysmon and lock statistics, see Lock management in Performance and Tuning Series: Monitoring Adaptive Server with sp_sysmon.
Use the monitoring tables to pinpoint locking problems. See the Performance
and Tuning Series: Monitoring Tables.
CHAPTER 4
This chapter discusses the types of locks used in Adaptive Server and the
commands that can affect locking.
Topic                                        Page
Specifying the locking scheme for a table    75
create table specifies the locking scheme for newly created tables
alter table changes the locking scheme for a table to any other
locking scheme
This command sets the default lock scheme for the server to data pages:
sp_configure "lock scheme", 0, datapages
When you first install Adaptive Server, lock scheme is set to allpages.
If you do not specify the lock scheme for a table, the default value for the server
is used, as determined by the setting of the lock scheme configuration
parameter.
This command specifies datarows locking for the new_publishers table:
create table new_publishers
    (pub_id    char(4)      not null,
     pub_name  varchar(40)  null,
     city      varchar(20)  null,
     state     char(2)      null)
lock datarows
Specifying the locking scheme with create table overrides the default server-wide setting.
This command changes the locking scheme for the titles table to datarows
locking:
alter table titles lock datarows
alter table supports changing from one locking scheme to any other locking scheme. Before changing the locking scheme, check whether the table has rows that are at, or near, the maximum length of 1962 bytes (including the two bytes for the offset table). For data-only-locked tables with only fixed-length columns, the maximum user data row size is 1960 bytes (including the 2 bytes for the offset table).
Tables with variable-length columns require 2 additional bytes for each column
that is variable-length (this includes columns that allow nulls.)
See Determining Sizes of Tables and Indexes in Performance and Tuning
Series: Physical Database Tuning for information on rows and row overhead.
If the table is partitioned, and you have not run update statistics since
making major data modifications to the table, run update statistics on the
table that you plan to alter. alter table...lock performs better with accurate
statistics for partitioned tables.
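For example, for a partitioned table (the table name is illustrative), you might refresh statistics immediately before changing the locking scheme:
update statistics titles
go
alter table titles lock datarows
go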
Changing the locking scheme does not affect the distribution of data on
partitions; rows in partition 1 are copied to partition 1 in the copy of the
table.
Set any space management properties that should be applied to the copy of the table or its rebuilt indexes. See Setting Space Management Properties in Performance and Tuning Series: Physical Database Tuning.
If any of the tables in the database are partitioned and require a parallel sort, use sp_dboption to enable the select into/bulkcopy/pllsort option for the database before running alter table...lock.
Run dbcc checktable on the table and dbcc checkalloc on the database to
ensure database consistency.
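For example (the table and database names are illustrative):
dbcc checktable(new_publishers)
go
dbcc checkalloc(sales)
go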
Copies all rows in the table to new data pages, formatting rows according
to the new format. If you are changing to data-only locking, any data rows
of fewer than 10 bytes are padded to 10 bytes during this step. If you are
changing to allpages locking from data-only locking, padding is stripped
from rows of fewer than 10 bytes.
If a clustered index exists on the table, rows are copied in clustered index key
order onto the new data pages. If no clustered index exists, the rows are copied
in page-chain order for an allpages-locking to data-only-locking conversion.
The entire alter table...lock command is performed as a single transaction to
ensure recoverability. An exclusive table lock is held on the table for the
duration of the transaction.
If you do not specify a locking scheme with select into, the new table uses the
server-wide default locking scheme, as defined by the configuration parameter
lock scheme.
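For example, a select into that explicitly chooses datarows locking for the new table might look like this (the table and column names are illustrative):
select title_id, price
into bargain_titles
lock datarows
from titles
where price < $10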
Temporary tables created with the #tablename form of naming are single-user
tables, so lock contention is not an issue. For temporary tables that can be
shared among multiple users, that is, tables created with tempdb..tablename,
any locking scheme can be used.
For all queries in the session, with the set transaction isolation level
command
For specific tables in a query, with the holdlock, noholdlock, and shared
keywords
When choosing locking levels in your applications, use the minimum locking
level consistent with your business model. The combination of setting the
session level while providing control over locking behavior at the query level
allows concurrent transactions to achieve required results with the least
blocking.
Note If you use transaction isolation level 2 (repeatable reads) on allpages-locked tables, Adaptive Server enforces isolation level 3 (serializable reads) instead; level 2 is supported only on data-only-locked tables.
If the session has enforced isolation level 3, you can make the query operate at
level 1 using noholdlock, as described below.
If you are using the Adaptive Server default isolation level of 1, or if you have
used the set transaction isolation level command to specify level 0 or 2, you can
enforce level 3 by using the holdlock option to hold shared locks until the end
of a transaction.
You can display the current isolation level for a session with the global variable
@@isolation.
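For example, to check the isolation level currently in effect for the session:
select @@isolation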
Keyword      Level to use   Effect
holdlock     0, 1, 2        Holds shared locks until the transaction completes; the query operates at level 3
noholdlock   2, 3           Does not hold shared locks; the query operates at level 1
shared       N/A            Uses a shared lock, instead of an update lock, on a table or view in a cursor
These keywords affect locking for the transaction: if you use holdlock, all locks
are held until the end of the transaction.
If you specify holdlock in a query while isolation level 0 is in effect for the
session, Adaptive Server issues a warning and ignores the holdlock clause, not
acquiring locks as the query executes.
If you specify holdlock and read uncommitted, Adaptive Server prints an error
message, and the query is not executed.
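For example, Adaptive Server rejects a query like this one, which combines holdlock with read uncommitted (the table and column names are illustrative):
select balance
from account holdlock
at isolation read uncommitted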
Option             Level to use   Effect
read uncommitted   0              Reads uncommitted changes; use from level 1, 2, or 3 queries to perform dirty reads (level 0).
read committed     1              Reads only committed changes; waits for locks to be released; use from level 0 to read only committed changes, but without holding locks.
repeatable read    2              Holds shared locks until the transaction completes; use from level 0 or level 1 queries to enforce level 2.
serializable       3              Holds shared locks until the transaction completes; use from level 1 or level 2 queries to enforce level 3.
For example, the following statement queries the titles table at isolation level 0:
select *
from titles
at isolation read uncommitted
The holdlock keyword makes a shared page, row, or table lock more restrictive.
holdlock applies:
To shared locks
The at isolation clause applies to all tables in the from clause, and is applied only
for the duration of the transaction. The locks are released when the transaction
completes.
In a transaction, holdlock instructs Adaptive Server to hold shared locks until
the completion of that transaction instead of releasing the lock as soon as the
required table, view, row, or data page is no longer needed. Adaptive Server
always holds exclusive locks until the end of a transaction.
The use of holdlock in the following example ensures that the two queries return
consistent results:
begin transaction
select branch, sum(balance)
from account holdlock
group by branch
select sum(balance) from account
commit transaction
The first query acquires a shared table lock on account so that no other
transaction can update the data before the second query runs. This lock is not
released until the transaction including the holdlock command completes.
If the session isolation level is 0, and only committed changes must be read from the database, you can use the at isolation read committed clause.
For example, if the transaction isolation level is set to 3, which normally causes
a select query to hold locks until the end of the transaction, this command
releases the locks when the scan moves off the page or row:
select balance from account noholdlock
where acct_number < 100
If the session isolation level is 1, 2, or 3, and you want to perform dirty reads, you can use the at isolation read uncommitted clause.
The shared keyword instructs Adaptive Server to use a shared lock (instead of
an update lock) on a specified table or view in a cursor.
See Using the shared keyword on page 86 for more information.
Readpast locking
Readpast locking allows select and readtext queries to skip all rows or pages
locked with incompatible locks. The queries do not block, terminate, or return
error or advisory messages to the user. Readpast locking is largely designed to
be used in queue-processing applications.
In general, these applications allow queries to return the first unlocked row that
meets query qualifications. An example might be an application tracking calls
for service: the query needs to find the row with the earliest timestamp that is
not locked by another repair representative.
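For example, a queue-reading query of this kind might skip locked rows with readpast (the table and column names are illustrative):
select call_id, cust_name, logged_at
from service_calls readpast
where status = "open"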
At level 0, Adaptive Server uses no locks on any base table page that
contains a row representing a current cursor position. Cursors acquire no
read locks for their scans, so they do not block other applications from
accessing the same data.
However, cursors operating at this isolation level are not updatable, and
they require a unique index on the base table to ensure accuracy.
If you do not set the close on endtran or chained options, a cursor remains open
past the end of the transaction, and its current page locks remain in effect. It
may also continue to acquire locks as it fetches additional rows.
This allows other users to obtain an update lock on the table or an underlying
table of the view.
You can use the holdlock keyword with shared after each table or view name.
holdlock must precede shared in the select statement. For example:
declare authors_crsr cursor
for select au_id, au_lname, au_fname
from authors holdlock shared
where state != "CA"
for update of au_lname, au_fname
These are the effects of specifying the holdlock or shared options when defining
an updatable cursor:
If you do not specify either option, the cursor holds an update lock on the
row or on the page containing the current row.
Other users cannot update, through a cursor or otherwise, the row at the
cursor position (for datarows-locked tables) or any row on this page (for
allpages and datapages-locked tables).
Other users can declare a cursor on the same tables you use for your cursor,
and can read data, but they cannot get an update or exclusive lock on your
current row or page.
If you specify the shared option, the cursor holds a shared lock on the
current row or on the page containing the currently fetched row.
Other users cannot update, through a cursor or otherwise, the current row,
or the rows on this page. They can, however, read the row or rows on the
page.
If you specify the holdlock option, you hold update locks on all the rows or
pages that have been fetched (if transactions are not being used) or only
the pages fetched since the last commit or rollback (if in a transaction).
Other users cannot update, through a cursor or otherwise, currently fetched
rows or pages.
Other users can declare a cursor on the same tables you use for your cursor,
but they cannot get an update lock on currently fetched rows or pages.
If you specify both options, the cursor holds shared locks on all the rows
or pages fetched (if not using transactions) or on the rows or pages fetched
since the last commit or rollback.
Other users cannot update, through a cursor or otherwise, currently fetched
rows or pages.
To immediately lock the entire table, rather than waiting for lock
promotion to take effect.
When the query or transaction uses multiple scans, and none of the scans
locks a sufficient number of pages or rows to trigger lock promotion, but
the total number of locks is very large.
If lock table cannot acquire the table lock within the wait period, an error message is printed, but the transaction is not rolled back.
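For example, to take the table lock up front rather than waiting for lock promotion (the table name and wait period are illustrative):
begin transaction
lock table titles in exclusive mode wait 5
update titles set price = price * 1.1
commit transaction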
Lock timeouts
You can specify the amount of time that a task waits for a lock:
At the server level, with the lock wait period configuration parameter
For a session or in a stored procedure, with the set lock wait command
See the Transact-SQL Users Guide for more information on these commands.
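For example (the values are illustrative):
sp_configure "lock wait period", 600
go
set lock wait 30
go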
Except for lock table, a task that attempts to acquire a lock and fails to acquire
it within the time period returns an error message and the transaction is rolled
back.
Using lock timeouts can be useful for removing tasks that acquire some locks,
and then wait for long periods of time blocking other users. However, since
transactions are rolled back, and users may simply resubmit their queries,
timing out a transaction means that the work needs to be repeated.
Use sp_sysmon to monitor the number of tasks that exceed the time limit while
waiting for a lock.
See Lock time-out information in Performance and Tuning Series:
Monitoring Adaptive Server with sp_sysmon.
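For example, assuming the locks report section, the following runs sp_sysmon for five minutes and limits the output to lock management statistics:
sp_sysmon "00:05:00", locks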
CHAPTER 5
Indexes
This chapter describes how Adaptive Server stores indexes and uses them
to speed data retrieval for select, update, delete, and insert operations.
Topic                                          Page
Types of indexes                               90
Indexes and partitions                         93
Clustered indexes on allpages-locked tables    94
Nonclustered indexes                           103
Index covering                                 109
Indexes and caching                            112
Indexes help to avoid table scans. A few index pages and data pages
can satisfy many queries without requiring reads on hundreds of data
pages.
Indexes can help to avoid sorts, if the index order matches the order
of the columns in an order by clause.
For most partitioned tables, you can create global indexes with one
index tree to cover the whole table, or you can create local indexes
with multiple index trees, each of which covers one partition of the
table.
Although indexes speed data retrieval, they can slow down data modifications,
since most changes to the data require index updates. Optimal indexing
demands an understanding of:
The behavior of queries that access unindexed heap tables, tables with
clustered indexes, and tables with nonclustered indexes
Types of indexes
Adaptive Server provides two general types of indexes that can be created at
the table or at the partition level.
Clustered indexes, where the data is physically stored in the order of the
keys on the index:
For allpages-locked tables, rows are stored in key order on pages, and
pages are linked in key order.
Nonclustered indexes, where the storage order of data in the table is not
related to index keys
You can create only one clustered index on a table or partition because there is
only one possible physical ordering of the data rows. You can create up to 249
nonclustered indexes per table.
A table that has no clustered index is called a heap. The rows in the table are in
no particular order, and all new rows are added to the end of the table. Chapter
2, Data Storage, in Performance and Tuning Series: Physical Database
Tuning discusses heaps and SQL operations on heaps.
For partitioned tables, indexes may be either local or global (see Indexes and
partitions on page 93).
Function-based indexes are a type of nonclustered index which use one or
more expressions as the index key. See the Transact-SQL Users Guide for more
on creating function-based indexes. See also Chapter 6, Indexing for
Concurrency Control, for information on when to use function-based indexes.
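For example (the table, column, and index names are illustrative):
create clustered index title_clix
on titles (title)

create nonclustered index price_ix
on titles (price)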
Index pages
Index entries are stored as rows on index pages in a format similar to that of
data rows on data pages. Index entries store key values and pointers to lower
levels of the index, to the data pages, or to individual data rows.
Adaptive Server uses B-tree indexing, so each node in the index structure can
have multiple children.
Index entries are usually much smaller than a data row in a data page, and index
pages are typically much more densely populated than data pages. If a data row
has 200 bytes (including row overhead), there are 10 rows per page on a 2K
server. However, an index on a 15-byte field has about 100 rows per index page
on a 2K server (the pointers require 4 to 9 bytes per row, depending on the type
of index and the index level).
Indexes can have multiple levels:
Root level
Leaf level
Intermediate level
Root level
The root level is the highest level of the index. There is only one root page. If
an allpages-locked table is very small, so that the entire index fits on a single
page, there are no intermediate or leaf levels, and the root page stores pointers
to the data pages.
Data-only-locked tables always have a leaf level between the root page and the
data pages.
For larger tables, the root page stores pointers to the intermediate level index
pages or to leaf-level pages.
Leaf level
The lowest level of the index is the leaf level. At the leaf level, an index
contains a key value for each row in the table, and the rows are stored in sorted
order by the index key:
For clustered indexes on allpages-locked tables, the leaf level is the data.
No other level of the index contains one index row for each data row.
Intermediate level
All levels between the root and leaf levels are intermediate levels. An index on
a large table or an index using long keys may have many intermediate levels.
Indexes on a very small table may not have an intermediate level; the root
pages point directly to the leaf level.
Index size
Table 5-1 describes the limits for index size for APL and DOL tables:
Table 5-1: Index row-size limit

Page size           User-visible index row-size limit   Internal index row-size limit
2K (2048 bytes)     600                                  650
4K (4096 bytes)     1250                                 1310
8K (8192 bytes)     2600                                 2670
16K (16384 bytes)   5300                                 5400
You can create tables with columns wider than the limit for the index key;
however, these columns become nonindexable. For example, if you perform
the following on a 2K page server, then try to create an index on c3, the
command fails and Adaptive Server issues an error message because column
c3 is larger than the index row-size limit (600 bytes).
create table t1 (
    c1 int,
    c2 int,
    c3 char(700))
You can still create statistics for a nonindexable column, or include it in search
results. Also, if you include the column in a where clause, it is evaluated during
optimization.
An index row size that is too large can result in frequent index page splits. Page splits can make the number of index levels grow linearly with the number of rows in the table, making the index useless because traversing it becomes expensive. Adaptive Server limits the index row size to, at most, approximately one-third of the server's page size, so that each index page contains at least three index rows.
You can run reorg rebuild on a per-partition basis, reorganizing the local
index sub-tree while minimizing the impact on other operations.
Global nonclustered indexes are better for covered scans than local
indexes, especially for queries that need to fetch rows across partitions.
Global partitioned indexes are not supported, meaning that global indexes
that cover all the data in the table are not themselves partitioned.
By following the next page pointers on the data pages, Adaptive Server
reads the entire table in index key order.
On the root and intermediate pages, each entry points to a page on the next
level.
[Figure 5-1: selecting a row using a clustered index on an allpages-locked table. For the query select * from employees where lname = "Green", the tree has root page 1001, intermediate pages 1007 and 1009, and data pages 1132, 1133, and 1127; the qualifying row is on data page 1133.]
In Figure 5-1, on the root-level page, "Green" is greater than "Bennet," but less than "Karsen," so the pointer for Bennet is followed to page 1007. On page 1007, "Green" is greater than "Greane," but less than "Hunter," so the pointer to page 1133 is followed to the data page, where the row is located and returned to the user.
This retrieval using the clustered index requires one read for each of the following: the root level, the intermediate level, and the data page.
These reads may come either from cache or from disk. On tables that are
frequently used, the higher levels of the indexes are often found in cache, with
lower levels and data pages being read from disk.
[Figure 5-2: the same clustered index after inserting a row on a data page with room for it: "Greco" is added in key order on data page 1133, and the root and intermediate index pages are unchanged.]
The next and previous page pointers on adjacent pages are changed to
incorporate the new page in the page chain. This requires reading those
pages into memory and locking them.
Approximately half of the rows are moved to the new page, with the new
row inserted in order.
The higher levels of the clustered index change to point to the new page.
If the table also has nonclustered indexes, all pointers to the affected data
rows must be changed to point to the new page and row locations.
[Figure 5-3: page splitting on a full data page. The statement insert employees (lname) values ("Greaves") splits data page 1133: "Green" and "Greene" move to new page 1144, "Greaves" is inserted in order on page 1133, and the intermediate index page gains an entry pointing to page 1144.]
If you insert a large row that cannot fit on the page before or the page after
the page that requires splitting, two new pages are allocated, one for the
large row and one for the rows that follow it.
If Adaptive Server detects that all inserts are taking place at the end of the
page, due to an increasing key value, the page is not split when it is time to
insert a new row that does not fit at the bottom of the page. Instead, a new
page is allocated, and the row is placed on the new page.
If Adaptive Server detects that inserts are taking place in order at other
locations on the page, the page is split at the insertion point.
All nonclustered index entries that point to the rows affected by the split must be updated to reflect the new row locations.
When you create a clustered index for a table that will grow over time, you may
want to use fillfactor to leave room on data pages and index pages. This reduces
the number of page splits for a time.
See Choosing space management properties for indexes on page 138.
Overflow pages
Special overflow pages are created for nonunique clustered indexes on
allpages-locked tables when a newly inserted row has the same key as the last
row on a full data page. A new data page is allocated and linked into the page
chain, and the newly inserted row is placed on the new page.
Figure 5-4: Adding an overflow page to a clustered index, allpages-locked table
[The figure shows data page 1133 (Greane, Greco, Green, Greene) linked to overflow data page 1156, which holds the additional "Greene" row, followed in the page chain by data page 1134 (Gresham, Gridley).]
The only rows that are placed on the overflow page are additional rows with
the same key value. In a nonunique clustered index with many duplicate key
values, there can be numerous overflow pages for the same value.
The clustered index does not contain pointers directly to overflow pages.
Instead, the next page pointers are used to follow the chain of overflow pages
until a value is found that does not match the search value.
[Figure 5-5: deleting a row from a table with a clustered index, allpages-locked table. For delete from employees where lname = "Green", the figure shows data page 1133 before the delete (Greane, Green, Greco, Greene) and after the delete (Greane, Greco, Greene); the index pages are unchanged.]
Figure 5-6: Deleting the last row on a page (after the delete)
[The figure shows the clustered index after deleting "Gridley," the last row on data page 1134: the emptied page is removed from the page chain and becomes available for reallocation, and the index entry that pointed to it is removed.]
Nonclustered indexes
The B-tree works much the same for nonclustered indexes as it does for
clustered indexes, but there are some differences. In nonclustered indexes:
Leaf level stores one key-pointer pair for each row in the table.
Leaf-level pages store the index keys, data page number, and row number
for the data row to which this index row is pointing. This combination of
page number and row offset number is called the row ID.
The root and intermediate levels store index keys and page pointers to
other index pages. They also store the row ID of the key's data row.
With keys of the same size, nonclustered indexes require more space than
clustered indexes.
The row ID
The row ID in higher levels of the index is used for indexes that allow duplicate
keys. If a data modification changes the index key or deletes a row, the row ID
positively identifies all occurrences of the key at all index levels.
[Figure 5-7: nonclustered index structure. The root page (1001) and intermediate pages (1007, 1009) store keys, row IDs, and pointers to lower-level index pages; the leaf pages (1132, 1133, 1127) store one key and row ID (data page number and row offset, such as 1421,2 for "Green") for every data row; the data pages (1242, 1307, 1421, 1409) store the rows themselves, in no particular key order.]
[Figure 5-8: selecting a row using a nonclustered index. For select * from employee where lname = "Green", the scan follows the root page (1001) to intermediate page 1007 and leaf page 1133, where the key "Green" carries row ID 1421,2; the row is then read from data page 1421.]
If your applications use a particular nonclustered index frequently, the root and
intermediate pages are probably in cache, so it is likely that only one or two
physical disk I/Os need to be performed.
[Figure 5-9: an insert into a table with a nonclustered index. For insert employees (empid, lname) values (24, "Greco"), the row is added to a data page (1409) and a key-row ID pair (Greco, 1409,4) is inserted in key order on leaf page 1133.]
[Figure 5-10: deleting a row from a table with a nonclustered index. For delete employees where lname = "Green", the data row is removed from data page 1421 and the corresponding key and row ID are removed from leaf page 1133.]
If the deletion removes the last row on the data page, the page is deallocated
and the adjacent page pointers are adjusted in allpages-locked tables. Any
references to the page are also deleted in higher levels of the index.
If the delete operation leaves only a single row on an index intermediate page,
index pages may be merged, as with clustered indexes.
Index covering
Index covering can produce dramatic performance improvements when all
columns needed by the query are included in the index.
You can create indexes on more than one key. These are called composite
indexes. Composite indexes can have up to 31 columns, adding up to a
maximum of 600 bytes.
Figure 5-11: Matching index access does not have to read the data row
[The figure shows a composite nonclustered index on last and first name: root page 1544, leaf pages 1560, 1561, and 1843, and the data pages. A matching scan locates the qualifying keys (for example, Greane,Grey 1307,4) on the leaf pages and never reads the data pages.]
The nonmatching scan must examine all rows on the leaf level. It scans all leaf
level index pages, starting from the first page. It has no way of knowing how
many rows might match the query conditions, so it must examine every row in
the index. Since it must begin at the first page of the leaf level, it can use the
pointer in syspartitions.firstpage rather than descend the index.
[Figure 5-12: a nonmatching, covered index scan for select lname, emp_id from employees. The scan starts at the first leaf page of the index, located through the firstpage pointer, and reads every leaf page in order; the data pages are never read.]
Root and intermediate index pages always use least recently used (LRU)
strategy.
Index pages can use one cache while the data pages use a different cache,
if the index is bound to a different cache.
Index pages can cycle through the cache many times, if number of index
trips is configured.
When a query that uses an index is executed, the root, intermediate, leaf, and
data pages are read in that order. If these pages are not in cache, they are read
into the MRU end of the cache and are moved toward the LRU end as
additional pages are read in.
Each time a page is found in cache, it is moved to the MRU end of the page
chain, so the root page and higher levels of the index tend to stay in the cache.
By default, the number of trips that an index page makes through the cache is
set to 0. To change the default, a system administrator can set the number of
index trips configuration parameter.
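For example, assuming a user-defined cache named index_cache, the pubs2 database, and an index named title_ix (all illustrative), an index can be bound to its own cache and allowed extra trips through it:
sp_cacheconfig "index_cache", "50M"
go
sp_bindcache "index_cache", pubs2, titles, title_ix
go
sp_configure "number of index trips", 3
go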
CHAPTER 6
This chapter introduces the basic query analysis tools that can help you
choose appropriate indexes. It also discusses index selection criteria for
point queries, range queries, and joins.
Topic                              Page
How indexes affect performance     115
Symptoms of poor indexing          117
Target specific data pages that contain specific values in a point query
Establish upper and lower bounds for reading data in a range query
Use ordered data to avoid sorting data or to favor the less costly ordered-input-based JOIN, UNION, GROUP, or DISTINCT operators over other more expensive algorithms (for example, using merge joins instead of nested-loop joins).
For example, to select the best index for a join clause:
r.c1=s.c1 and ... r.cn=s.cn
You can use indexes on both sides of the and clause if they are compatible (that is, they have a nonempty common prefix covered by the equijoin clause). This common prefix determines the part of the equijoin clause used as a merge clause (the longer the merge clause, the more effective it is).
The query processor enumerates plans with an index on one side and
a sort on the other. In the example above, the index prefix covered by
the equijoin clause determines the part of the equijoin clause used as a
merge clause (again, the longer the merge clause, the more effective
it is).
You can use similar steps to identify the best index for union, distinct, and
group clauses.
You can create indexes to enforce the uniqueness of data and to randomize the
storage location of inserts.
You can set the concurrency_opt_threshold parameter with sp_chgattribute to avoid table scans for increased concurrency. The syntax is:
sp_chgattribute table_name, "concurrency_opt_threshold", min_page_count
For example, this sets the concurrency optimization threshold for a table to 30
pages:
sp_chgattribute lookup_table, "concurrency_opt_threshold", 30
                 Leaf-level pages   Index levels
t10, 10 bytes    4311               3
t20, 20 bytes    6946               3
t40, 40 bytes    12501              4
The output shows that the indexes for the 10-byte and 20-byte keys each have three levels, while the 40-byte key requires a fourth level.
The number of pages required is more than 50 percent higher at each level.
The table has very wide rows, resulting in very few rows per data page.
The set of queries run on the table provides logical choices for a covering
index.
For example, if a table has very long rows, and only one row per page, a query
that needs to return 100 rows must access 100 data pages. An index that covers
this query, even with long index rows, can improve performance.
For example, if the index rows are 240 bytes, the index stores 8 rows per page,
and the query must access only 12 index pages.
use master
go
sp_dboption database_name, "single user", true
go
sp_configure "allow updates", 1
go
You can use checkpoint to identify one or more databases, or use the all clause:
checkpoint [all | [dbname[, dbname[, dbname...]]]]
Note You must be assigned sa_role to run sp_fixindex.
Run dbcc checktable to verify that the corrupted index is now fixed.
You can use checkpoint to identify one or more databases, or use the all clause, which means you do not have to issue the use database command:
checkpoint [all | [dbname[, dbname[, dbname...]]]]
Issue:
1> use database_name
2> go
1> checkpoint
2> go
1> select sysstat from sysobjects
2> where id = 1
3> go
You can use checkpoint to identify one or more databases, or use the all clause:
checkpoint [all | [dbname[, dbname[, dbname...]]]]
update sysobjects
set sysstat = sysstat | 4096
where id = 1
go
Run:
1> sp_fixindex database_name, sysobjects, 2
2> go
update sysobjects
set sysstat = sysstat_ORIGINAL
where id = object_ID
go
Run dbcc checktable to verify that the corrupted index is now fixed.
You can use checkpoint to identify one or more databases, or use the all clause:
checkpoint [all | [dbname[, dbname[, dbname...]]]]
Because the data for a clustered index is ordered by index key, you can create only one clustered index per table. Adaptive Server creates a clustered index by default as a local index for range-, list-, and hash-partitioned tables. You cannot create global clustered indexes on range-, list-, or hash-partitioned tables.
When you create a clustered index, Adaptive Server requires free space to copy the rows in the table and allocate space for the clustered index pages. It also requires space to re-create any nonclustered indexes on the table.
The amount of space required can vary, depending on how full the table's pages are when you begin and on what space management properties are applied to the table and index pages.
See Determining the space available for maintenance activities in
Database Maintenance, in Performance and Tuning Series: Physical
Database Tuning.
The referential integrity constraints unique and primary key create unique
indexes to enforce their restrictions on the keys. By default, unique
constraints create nonclustered indexes and primary key constraints create
clustered indexes.
Page size (bytes)   Maximum index key size (bytes)
2048                600
4096                1250
8192                2600
16384               5300
Choosing indexes
When you are working with index selection you may want to ask these
questions:
What are the most important processes that make use of the table?
If dirty reads are required, are there unique indexes to support the scan?
Space constraints
Clustered indexes provide very good performance when the key matches
the search argument in range queries, such as:
where colvalue >= 5 and colvalue < 10
In allpages-locked tables, rows are maintained in key order and pages are
linked in order, providing very fast performance for queries using a
clustered index.
In data-only-locked tables, rows are in key order after the index is created,
but the clustering can decline over time.
Other good choices for clustered index keys are columns used in order by
clauses and in joins.
The primary key, if it is used for where clauses and if it randomizes inserts
If there are several possible choices, choose the most commonly needed
physical order as a first choice.
As a second choice, look for range queries. During performance testing, check
for hot spots due to lock contention.
Consider using composite indexes to cover critical queries and to support less
frequent queries:
The most critical queries should be able to perform point queries and
matching scans.
Index selection
Index selection allows you to determine which indexes are actively being used
and those that are rarely used.
This section assumes that the monitoring tables feature is already set up. See
the Performance and Tuning Series: Monitoring Tables for information about
installing and using the monitoring tables.
Index selection uses these columns of the monitoring access table, monOpenObjectActivity: UsedCount, LastUsedDate, and LastOptSelectDate, which record how often, and when, an object (such as a table or index) was used as the access method by the optimizer.
order by UsedCount
This example displays all indexes that are used, or not currently used, in an application:
select DBID, ObjectID, IndexID, ObjectName = object_name(ObjectID, DBID),
LastOptSelectDate, UsedCount, LastUsedDate
from monOpenObjectActivity
where DBID = db_id("MY_1253_RS_RSSD")
and ObjectID = object_id('MY_1253_RS_RSSD..rs_columns')
DBID  ObjectID    IndexID  ObjectName   LastOptSelectDate     UsedCount  LastUsedDate
----  ----------  -------  -----------  --------------------  ---------  --------------------
4     192000684   0        rs_columns   May 15 2006  4:18PM   450        May 15 2006  4:18PM
4     192000684   1        rs_columns   NULL                  0          NULL
4     192000684   2        rs_columns   NULL                  0          NULL
4     192000684   3        rs_columns   May 12 2006  6:11PM   1          May 12 2006  6:11PM
4     192000684   4        rs_columns   NULL                  0          NULL
4     192000684   5        rs_columns   NULL                  0          NULL
If the index is not used, it results in a NULL date. If an index is used, it results
in a date like May 15 2006 4:18PM.
In this example, the query displays all indexes that are not currently used in the
current database:
select DB = convert(char(20), db_name()),
TableName = convert(char(20), object_name(i.id, db_id())),
IndexName = convert(char(20),i.name),
IndID = i.indid
from master..monOpenObjectActivity a, sysindexes i
where a.ObjectID =* i.id
and a.IndexID =* i.indid
and (a.UsedCount = 0 or a.UsedCount is NULL)
and i.indid > 0
and object_name(i.id, db_id()) not like "sys%"
order by 2, 4 asc
DB                   TableName      IndexName            IndID
-------------------  -------------  -------------------  -----
MY_1253_RS_RSSD      rs_articles    rs_key_articles      1
MY_1253_RS_RSSD      rs_articles    rs_key4_articles     2
MY_1253_RS_RSSD      rs_classes     rs_key_classes       1
MY_1253_RS_RSSD      rs_classes     rs_key2_classes      2
MY_1253_RS_RSSD      rs_config      rs_key_config        1
MY_1253_RS_RSSD      rs_databases   rs_key_databases     1
MY_1253_RS_RSSD      rs_databases   rs_key9_databases    2
MY_1253_RS_RSSD      rs_databases   rs_key13_databases   3
MY_1253_RS_RSSD      rs_databases   rs_key14_databases   4
MY_1253_RS_RSSD      rs_databases   rs_key15_databases   5
MY_1253_RS_RSSD      rs_datatype    rs_key_datatypes     1
MY_1253_RS_RSSD      rs_datatype    rs_key2_datatype     2
If your applications use cursors, see Index use and requirements for
cursors in Optimization for Cursors in Performance and Tuning Series:
Query Processing and Abstract Plans.
If you are creating an index on a table that will have a lot of insert activity,
use fillfactor to temporarily minimize page splits, improve concurrency,
and minimize deadlocking.
Keep the size of the key as small as possible. Your index trees remain
flatter, accelerating tree traversals.
Be sure that the datatypes of the join columns in different tables are
compatible. If Adaptive Server has to convert a datatype on one side of a
join, it may not use an index for that table.
For allpages-locked tables, exclusive locks are held on affected index pages for
the duration of the transaction, increasing lock contention as well as processing
overhead.
Some applications experience unacceptable performance impacts with only
three or four indexes on tables that experience heavy data modification. Other
applications can perform well with many more tables.
Range queries.
Queries that table scan, but use a small subset of the columns on the table.
Tables that are read-only or read-mostly can be heavily indexed, as long as your
database has enough space available. If there is little update activity and high
select activity, provide indexes for all frequently used queries. Be sure to test
the performance benefits of index covering.
This covered point query needs to read only the upper levels of the index and a single leaf-level page of the nonclustered index on a 5000-row table.
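For example, assuming a nonclustered composite index on au_lname, au_fname, and au_id (the index is an illustrative assumption), such a covered point query might look like this:
select au_fname, au_lname
from authors
where au_lname = "Wilk"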
This similar-looking query (using the same index) does not perform quite as
well. This query is still covered, but searches on au_id:
select au_fname, au_lname
from authors
where au_id = "A1714224678"
Since this query does not include the leading column of the index, it has to scan
the entire leaf level of the index, about 95 reads.
Adding a column to the select list in the query above, which may seem like a
minor change, makes the performance even worse:
select au_fname, au_lname, phone
from authors
where au_id = "A1714224678"
This query performs a table scan, reading 222 pages. In this case, the
performance is noticeably worse. For any search argument that is not the
leading column, Adaptive Server has only two possible access methods: a table
scan, or a covered index scan.
It does not scan the leaf level of the index for a nonleading search argument and
then access the data pages. A composite index can be used only when it covers
the query or when the first column appears in the where clause.
For a query that includes the leading column of the composite index, adding a
column that is not included in the index adds only a single data page read. This
query must read the data page to find the phone number:
select au_id, phone
from authors
where au_fname = "Eliot" and au_lname = "Wilk"
or au_lname, au_fname
or au_id
or au_fname, au_id
Choose the ordering of the composite index so that most queries form a prefix
subset.
Lookup tables
Columns that make a frequently used subset from a table with very wide
rows
134
Composite indexes where only a minor key is used in the where clause
There are 10 rows per page; pages are 75 percent full, so the table has
approximately 135,000 pages.
190,000 (19%) of the titles are priced between $20 and $30.
You know that there are very few duplicate titles, so this query returns only one
or two rows.
Considering both this query and the previous query, Table 6-3 shows four
possible indexing strategies and estimated costs of using each index. The
estimates for the numbers of index and data pages were generated using a
fillfactor of 75 percent with sp_estspace:
sp_estspace titles, 1000000, 75
Index choice                                    Index pages
1  Nonclustered on title; clustered on price    36,800 and 650
2  Clustered on title; nonclustered on price    3,770 and 6,076
3  Nonclustered on title, price                 36,835
4  Nonclustered on price, title                 36,835
For the range query on price, choice 4 is best; choices 1 and 3 are
acceptable with 16K I/O.
For the point query on titles, indexing choices 1, 2, and 3 are excellent.
The best indexing strategy for a combination of these two queries is to use two
indexes:
Choice 2, for point queries on title, since the clustered index requires very
little space.
You may need additional information to help you determine which indexing
strategy to use to support multiple queries. Typical considerations are:
What is the frequency of each query? How many times per day or per hour
is the query run?
What are the response time requirements? Is one of them especially time
critical?
What are the response time requirements for updates? Does creating more
than one index slow updates?
Is there a large data cache? Are these queries critical enough to provide a
35,000-page cache for the nonclustered composite indexes in index choice
3 or 4? Binding this index to its own cache would provide very fast
performance.
What other queries and what other search arguments are used? Is this table
frequently joined with other tables?
Use space management properties to reduce page splits and to reduce the
frequency of maintenance operations.
Modify the logical design to make use of an artificial column and a lookup
table for tables that require a large index entry.
Drop indexes during periods when frequent updates occur, and rebuild
them before periods when frequent selects occur.
Index entries on varchar columns require more overhead than entries on char
columns. For short index keys, especially those with little variation in length in
the column data, use char for more compact index entries.
After issuing sp_dboption, you must issue a checkpoint in the database for
which you are setting the ALS option:
sp_dboption "mydb", "async log service", "true"
use mydb
checkpoint
You can use checkpoint to identify one or more databases, or use the all clause:
checkpoint [all | [dbname[, dbname[, dbname...]]]]
Disabling ALS
Before you disable ALS, make sure there are no active users in the database. If
there are, you receive an error message when you issue the checkpoint:
sp_dboption "mydb", "async log service", "false"
use mydb
checkpoint
-------------
Error 3647: Cannot put database in single-user mode.
Wait until all users have logged out of the database and
issue a CHECKPOINT to disable "async log service".
If there are no active users in the database, this example disables ALS:
sp_dboption "mydb", "async log service", "false"
use mydb
checkpoint
Displaying ALS
Copy the log records from the ULC to the log cache.
The processes in steps 2 and 3 require you to hold a lock on the last log
page, which prevents any other tasks from writing to the log cache or
performing commit or abort operations.
Task Management report
                            per sec   per xact   count   % of total
  Log Semaphore Contention     58.0        0.3   34801       73.1 %
Heavy contention on the cache manager spinlock for the log cache.
You can tell that the cache manager spinlock is under contention when the
sp_sysmon output in the Data Cache Management Report section for the
database transaction log cache shows a high value in the Spinlock
Contention section. For example:
Table 6-5:
Cache: c_log
                        per sec   per xact   count   % of total
  Spinlock Contention       n/a        n/a     n/a       40.0 %
Note Use ALS only when you identify a single database with high transaction
requirements, since setting ALS for multiple databases may cause unexpected
variations in throughput and response times. If you want to configure ALS on
multiple databases, first check that your throughput and response times are
satisfactory.
Using ALS
Two threads, the ULC flusher and the log writer, scan the dirty buffers (buffers full of data not yet written to the disk), copy the data, and write it to the log.
ULC flusher
The ULC flusher is a system task thread that is dedicated to flushing the user
log cache of a task into the general log cache. When a task is ready to commit,
the user enters a commit request into the flusher queue. Each entry has a
handle, by which the ULC flusher can access the ULC of the task that queued
the request. The ULC flusher task continuously monitors the flusher queue,
removing requests from the queue and servicing them by flushing ULC pages
into the log cache.
Log writer
Once the ULC flusher has finished flushing the ULC pages into the log cache,
it queues the task request into a wakeup queue. The log writer patrols the dirty
buffer chain in the log cache, issuing a write command if it finds dirty buffers,
and monitors the wakeup queue for tasks whose pages are all written to disk.
Since the log writer patrols the dirty buffer chain, it knows when a buffer is
ready to write to disk.
Index
allpages locking 4
changing to with alter table 76
or strategy 31
specifying with create table 76
specifying with select into 79
specifying with sp_configure 75
ALS
log writer 144
user log cache 142
when to use 143
ALS, see asynchronous log service 141
alter table command
changing table locking scheme with 76-80
sp_dboption and changing lock scheme 78
alternative predicates
nonqualifying rows 33
application design
deadlock avoidance 71
deadlock detection in 67
delaying deadlock checking 71
isolation level 0 considerations 21
levels of locking 43
primary keys and 130
user interaction in transactions 41
artificial columns 139
chains of pages
overflow pages and 100
clustered indexes 90
changing locking modes and 79
delete operations 101
guidelines for choosing 125
insert operations and 96
order of key values 94
overflow pages and 100
page reads 95
structure of 94
column-level locking
pseudo- 34
columns
artificial 139
composite indexes 132
advantages of 134
concurrency
deadlocks and 65
locking and 3, 65
configuration (Server)
lock limit 44
consistency
transactions and 2
constraints
primary key 123
unique 123
contention
avoiding with clustered indexes 89
reducing 40
contention, lock
locking scheme and 53
sp_object_stats report on 73
context column of sp_lock output 62
CPU usage
deadlocks and 67
create index command
locks acquired by 29
B
batch processing
transactions and lock contention
blocking 52
blocking process
avoiding during mass operations
sp_lock report on 62
sp_who report on 59
B-trees, index
nonclustered indexes 103
41
43
deadlocks 65-72, 73
application-generated 66
avoiding 70
defined 65
delaying checking 71
detection 67, 73
diagnosing 52
error messages 67
performance and 39
D
data
consistency 2
uniqueness 89
data modification
nonclustered indexes and 131
number of indexes and 118
data pages
clustered indexes and 94
full, and insert operations 97
database design
indexing based on 138
logical keys and index keys 125
databases
lock promotion thresholds for 44
data-only locking (DOL) tables
maximum row size 77
or strategy and locking 31
datapages locking
changing to with alter table 76
described 6
specifying with create table 76
specifying with select into 79
specifying with sp_configure 75
datarows locking
changing to with alter table 76
described 7
specifying with create table 76
specifying with select into 79
specifying with sp_configure 75
datatypes
choosing 130, 139
numeric compared to character 139
deadlock checking period configuration parameter
29
68
delete
E
error messages
deadlocks 67
escalation, lock 47
exclusive locks
page 9
sp_lock report on
table 10
62
F
71
62
Index
fillfactor
index creation and 130
fixed-length columns
for index keys 131
overhead 131
H
holdlock keyword
locking
83
hot spots
avoiding
86
42
I
IDENTITY columns
indexing and performance 125
index keys, logical keys and 125
index pages
locks on 5
page splits for 99
storage on 91
index selection 127
indexes 89-114
access through 89
design considerations 115
dropping infrequently used 138
guidelines for 130
intermediate level 92
leaf level 91
leaf pages 103
locking with 9
number allowed 123
partitions 93
performance 89
root level 91
selectivity 118
size of entries and performance 119
types of 90
indexing
configure large buffer pools 141
create a clustered index first 140
infinity key locks 17
insert command
contention and 42
transaction isolation levels and 23
insert operations
clustered indexes 96
nonclustered indexes 107
page split exceptions and 98
intent table locks 10
sp_lock report on 62
intermediate levels of indexes 92
isolation levels 19-26, 80-85
cursors 85
default 80
dirty reads 21
lock duration and 26, 27, 28
nonrepeatable reads 23
phantoms 23
serializable reads and locks 17
transactions 19
J
joins
choosing indexes for 126
datatype compatibility in 131
K
key values
index storage 89
order for clustered indexes 94
overflow pages and 100
keys, index
choosing columns for 125
clustered and nonclustered indexes and
composite 132
logical keys and 125
monotonically increasing 99
size and performance 130
size of 123
unique 130
90
L
latches 17
leaf levels of indexes 91
leaf pages 103
levels
indexes 91
locking 43
lock allpages option
alter table command 77
create table command 76
select into command 79
lock datapages option
alter table command 77
create table command 76
select into command 79
lock datarows option
alter table command 77
create table command 76
select into command 79
lock duration. See duration of locks
lock promotion thresholds 44??
database 50
default 50
dropping 51
precedence 51
promotion logic 49
server-wide 50
table 50
lock scheme configuration parameter 75
locking 145
allpages locking scheme 4
commands 75-88
concurrency 3
contention, reducing 40-44
control over 3, 8
cursors and 85
datapages locking scheme 6
datarows locking scheme 7
deadlocks 65-72
entire table 8
for update clause 85
forcing a write 13
holdlock keyword 81
index pages 5
indexes used 9
isolation levels and 19-26, 80-85
size of 3
table 10
table versus page 47
table versus row 47
table, table scans and 30
types of 7, 62
update page 9
viewing 61
worker processes and 14
locktype column of sp_lock output 62
logical keys, index keys and 125
M
matching index scans 110
messages
deadlock victim 67
monitoring
index usage 138
indexes 127–130
indexes, examples of 128
lock contention 54
multicolumn index. See composite indexes
N
noholdlock keyword, select 84
nonclustered indexes 90
definition of 103
delete operations 108
guidelines for 126, 127
insert operations 107
number allowed 123
select and 105
size of 103
structure 104
nonmatching index scans 111–112
nonrepeatable reads 23
null columns
variable-length 130
null values
datatypes allowing 130
number (quantity of)
bytes per index key 123
clustered indexes 90
indexes per table 123
locks in the system 44
locks on a table 48
nonclustered indexes 90
number of locks configuration parameter
data-only-locked tables and 45
number of sort buffers 140
numbers
row offset 103
O
observing deadlocks 73
offset table
nonclustered index selects and 105
row IDs and 103
optimistic index locking 56
added option in sp_chgattribute 56
cautions and issues 57
using 57
optimizer
dropping indexes not used by 138
indexes and 115
nonunique entries and 118
or queries
allpages-locked tables and 31
data-only-locked tables and 31
isolation levels and 32
locking and 31
row requalification and 32
order
composite indexes and 132
data and index storage 90
index key values 94
order by clause
indexes and 89
output
sp_estspace 119
overflow pages 100
key values and 100
overhead
datatypes and 130, 140
nonclustered indexes 131
variable-length columns 131
P
page chains
overflow pages and 100
page lock promotion HWM configuration parameter 48
page lock promotion LWM configuration parameter 48
page lock promotion PCT configuration parameter 49
page locks 7
sp_lock report on 62
table locks versus 47
types of 8
page splits
data pages 97
index pages and 99
nonclustered indexes, effect on 97
performance impact of 99
pages
overflow 100
pages, data
splitting 97
pages, index
leaf level 103
storage on 91
parallel query processing
demand locks and 14
parallel sort
configure enough sort buffers 140
performance
clustered indexes and 55
data-only-locked tables and 55
indexes and 115
locking and 39
number of indexes and 118
phantoms 16
serializable reads and 17
phantoms in transactions 23
pointers
index 91
precedence
lock promotion thresholds 51
primary key constraint
index created by 123
promotion, lock 47
R
range queries 118
deadlocks and 29
lock duration 29
reads
clustered indexes and 95
reduce contention
suggestions 37
referential integrity
references and unique index requirements 130
root level of indexes 91
row ID (RID) 103
row lock promotion HWM configuration parameter 48
row lock promotion LWM configuration parameter 48
row lock promotion PCT configuration parameter 49
row locks
sp_lock report on 62
table locks versus 47
row offset number 103
row-level locking. See data-only locking
S
scan session 46
scanning
skipping uncommitted transactions 32
scans, table
avoiding 89
search conditions
clustered indexes and 125
locking 9
select 95
clustered indexes and 95
nonclustered indexes and 105
optimizing 117
queries 35
skipping uncommitted transactions 32
serial query processing
demand locks and 13
serializable reads
phantoms and 17
set command
transaction isolation level 80
shared keyword
cursors and 86
locking and 86
shared locks
cursors and 86
holdlock keyword 83
page 9
sp_lock report on 62
table 10
size
nonclustered and clustered indexes 103
skip
nonqualifying rows 33
sleeping locks 59
sort operations (order by)
indexing to avoid 89
sp_chgattribute, added option for optimistic index locking 56
sp_dropglockpromote 51
sp_droprowlockpromote 51
sp_help, displays optimistic index locking 56
sp_lock 61
sp_object_stats 72–73
sp_setpglockpromote 50
sp_setrowlockpromote 50
sp_who
blocking process 59
space
clustered compared to nonclustered indexes 103
space allocation
clustered index creation 123
deallocation of index pages 103
index page splits 99
monotonically increasing key values and 99
page splits and 97
splitting
data pages on inserts 97
SQL standards
concurrency problems 44
storage management
space deallocation and 102
T
table locks 7
controlling 19
page locks versus 47
row locks versus 47
sp_lock report on 62
types of 10
table scans
avoiding 89
locks and 30
tables
locks held on 19, 62
secondary 139
tasks
demand locks and 13
testing
hot spots 126
nonclustered indexes 131
time interval
deadlock checking 71
transaction isolation level option, set 80
transaction isolation levels
lock duration and 26
or processing and 32
transactions
close on endtran option 86
deadlock resolution 67
default isolation level 80
locking 3
tsequal system function
compared to holdlock 43
U
uncommitted
inserts during selects 32
updates, qualifying old and new 36
unique constraints
index created by 123
unique indexes 89
optimizing 130
update command
V
variable-length columns
index overhead and 140
W
wait times 73
when to use ALS 143
where clause
creating indexes for 126
worker processes
deadlock detection and 68
locking and 14