
Microsoft SQL Server 2008 Tuning Tips for PeopleSoft Applications

Including:
- Setup procedures
- Microsoft SQL Server 2008 new features and performance optimizations
- Maintaining a high-performance database
- Performance monitoring and troubleshooting

September 2008
Authors: Sudhir Gajre & Burzin Patel
Contributors: Miguel Lerma & Ganapathi Sadasivam

Table of Contents
1 INTRODUCTION ..... 5
1.1 Structure of This White Paper ..... 5
1.2 Related Materials ..... 5
2 Setup and Configuration ..... 6
2.1 Input/Output (I/O) Configuration ..... 6
2.1.1 RAID Type Recommendations ..... 6
2.1.2 Typical I/O Performance Recommended Range ..... 7
2.2 Files, Filegroups, and Object Placement Strategies ..... 8
2.3 Tempdb Placement and Tuning ..... 8
2.4 Data and Log File Sizing ..... 10
2.5 Recovery Models ..... 10
2.5.1 Simple Recovery Model ..... 10
2.5.2 Full Recovery Model ..... 10
2.5.3 Bulk-Logged Recovery Model ..... 11
2.6 Database Options ..... 11
2.6.1 Read-Committed Snapshot ..... 11
2.6.2 Asynchronous Statistics Update ..... 12
2.6.3 Parameterization ..... 13
2.6.4 Auto Update Statistics ..... 14
2.6.5 Auto Create Statistics ..... 14
2.7 SQL Server Configurations ..... 15
2.7.1 Installation Considerations ..... 15
2.7.2 Hyper-Threading ..... 15
2.7.3 Memory Tuning ..... 15
2.7.4 Important sp_configure Parameters ..... 18
2.8 Network Protocols and Pagefile ..... 20
2.9 SQL Native Client ..... 21
2.10 Application Setup ..... 22
2.10.1 Dedicated Temporary Tables ..... 22
2.10.2 Statement Compilation ..... 23
2.10.3 Statistics at Runtime for Temporary Tables ..... 27
2.10.4 Disabling Update Statistics ..... 28
2.11 Batch Server Placement ..... 28
3 SQL Server 2008 Performance and Compliance Optimizations for PeopleSoft Applications ..... 30
3.1 Resource Management ..... 30
3.1.1 Resource Governor ..... 30
3.2 Backup and Storage Optimization ..... 34
3.2.1 Backup Compression ..... 34
3.2.2 Data Compression ..... 36
3.3 Auditing and Compliance ..... 38
3.3.1 Transparent Data Encryption (TDE) ..... 39
3.3.2 SQL Server Audit ..... 40
3.4 Performance Monitoring and Data Collection ..... 42
3.4.1 Data Collector and Management Data Warehouse ..... 43
3.4.2 Memory Monitoring DMVs ..... 47
3.4.3 Extended Events ..... 48
3.4.4 Query and Query Plan Hashes ..... 50
3.5 Query Performance Optimization ..... 53
3.5.1 Plan Freezing ..... 54
3.5.2 Optimize for Ad hoc Workloads Option ..... 54
3.5.3 Lock Escalation ..... 55
3.6 Hardware Optimizations ..... 55
3.6.1 Hot Add CPU ..... 55
3.6.2 NUMA ..... 56
4 Database Maintenance ..... 57
4.1 Table and Index Partitioning ..... 57
4.2 Managing Indexes ..... 59
4.2.1 Parallel Index Operations ..... 60
4.2.2 Index-Related Dynamic Management Views ..... 60
4.2.3 Disabling Indexes ..... 63
4.3 Detecting Fragmentation ..... 64
4.4 Reducing Fragmentation ..... 65
4.4.1 Online Index Reorganization ..... 67
4.4.2 Program to Defragment ..... 69
4.5 Statistics ..... 69
4.5.1 AUTO_CREATE_STATISTICS and AUTO_UPDATE_STATISTICS ..... 70
4.5.2 Disabling AUTO_UPDATE_STATISTICS at the Table Level ..... 70
4.5.3 User-Created Statistics ..... 71
4.5.4 Updating Statistics ..... 71
4.5.5 Viewing Statistics ..... 72
4.6 Controlling Locking Behavior ..... 73
4.6.1 Isolation Levels ..... 74
4.6.2 Lock Granularity ..... 74
4.6.3 Lock Escalations ..... 75
4.6.4 Lock Escalation Trace Flags ..... 77
4.6.5 Deadlocks ..... 77
4.7 Dedicated Administrator Connection (DAC) ..... 82
5 Performance Monitoring and Troubleshooting ..... 83
5.1 PeopleSoft Architecture ..... 83
5.2 Narrowing Down the Cause of a Performance Issue ..... 83
5.2.1 Using System Monitor ..... 83
5.2.2 Capturing Traces ..... 87
5.2.3 Using Dynamic Management Views ..... 93
5.2.4 Finding a Showplan ..... 97
5.2.5 Finding Current Users and Processes ..... 101
5.2.6 Decoding the Object Blocking a Process ..... 102
5.2.7 Selected DBCC Commands ..... 103
5.2.8 Using Hints ..... 103
5.2.9 Correlating a Trace with Windows Performance Log Data ..... 114
5.3 Common Performance Problems ..... 115
5.3.1 High Processor Utilization ..... 115
5.3.2 Disk I/O Bottlenecks ..... 117
5.3.3 Memory Bottlenecks ..... 118
5.3.4 Blocking and Deadlocking Issues ..... 118
5.3.5 ODBC API Server Cursor Performance Enhancements ..... 119
5.4 Database I/O ..... 121
5.4.1 SQLIO Disk Performance Test Tool ..... 121
5.4.2 SQLIOSim Disk Stress Test Tool ..... 121
5.4.3 Instant File Initialization ..... 121
5.4.4 Long I/O Requests ..... 122
Appendix A - SQLAPI and TRC2API ..... 124
Description ..... 124
Utilization ..... 124
TRACE to API ..... 125
Examples ..... 126
Example. Reproducing a problem with SQLAPI ..... 127

1 INTRODUCTION
This white paper is a practical guide for database administrators and programmers who implement, maintain, or develop PeopleSoft applications. It outlines guidelines for improving the performance of PeopleSoft applications running on Microsoft SQL Server 2008. Much of the information presented in this document is based on findings from real-world customer deployments and from PeopleSoft benchmark testing. The issues discussed in this document represent problems that prove to be the most common or troublesome for PeopleSoft customers.

1.1 Structure of This White Paper


This white paper is structured to provide information about basic tuning, database maintenance for high performance, troubleshooting common problems, and parameter tuning in SQL Server 2008. This is a living document that is updated as needed to reflect the most current feedback from customers. Therefore, the structure, headings, content, and length of this document are likely to vary with each posted version. To determine if the document has been updated since you last downloaded it, compare the date of your version to the date of the version posted on the PeopleSoft-Oracle Customer Connection Web site.

1.2 Related Materials


This white paper is not a general introduction to environment tuning and assumes that you are an experienced IT professional with a good understanding of PeopleSoft Enterprise Pure Internet Architecture and Microsoft SQL Server. To take full advantage of the information covered in this document, you should have a basic understanding of system administration, basic Internet architecture, relational database concepts and SQL, and how to use PeopleSoft applications. This white paper is an update to the paper Microsoft SQL Server 2005 Tuning Tips for PeopleSoft 8.x, published in October 2006. While the previous paper is still current for users of Microsoft SQL Server 2005, this white paper addresses users of SQL Server 2008. Some content from the previous paper that is still relevant is reused in this paper. This white paper is not intended to replace any of the documentation delivered with PeopleTools 8.x. Before you read this document, you should read the documentation about PeopleSoft batch processing to ensure that you have a well-rounded understanding of it. Additionally, refer to SQL Server 2008 Books Online as needed.

2 Setup and Configuration


This section discusses the following topics:
- Input/Output (I/O) Configuration
- Files, Filegroups, and Object Placement Strategies
- Tempdb Placement and Tuning
- Data and Log File Sizing
- Recovery Models
- Database Options
- SQL Server Configurations
- Network Protocols and Pagefile
- SQL Native Client
- Application Setup
- Batch Server Placement

2.1 Input/Output (I/O) Configuration


Ensure that the storage system used for the database server is configured for optimal performance. Incorrect configuration of the I/O system can severely degrade the performance of your system. Sizing and placement of your application database data files, log files, and the tempdb system database play a major role in dictating overall performance. PeopleSoft applications involve online transaction processing (OLTP), which mainly results in random data access, as well as batch processing that results in sequential access. Consider how much random data access and sequential access your applications will be making when selecting the disk I/O configuration.

2.1.1 RAID Type Recommendations


A common debate when discussing RAID options is the relative performance of RAID 5 versus RAID 10. RAID 10 will outperform a RAID 5 set of the same number of volumes, for the following reasons:
- Write performance for RAID 10 is superior. A write operation on RAID 5 requires four physical I/O operations, whereas RAID 10 requires two.
- Read performance of RAID 10 is enhanced in most implementations by balancing read requests across the two drives participating in the mirror.

RAID 0 is unsuitable for use because the loss of a single drive will result in the loss of data. Even tempdb should not be placed on RAID 0 in a production environment, because the loss of one drive on RAID 0 would result in an outage of the SQL Server instance. A possible use of RAID 0 could be as the temporary location of disk backups, prior to writing disk backups to tape or to another location.

RAID 1 is appropriate for objects such as the SQL Server binaries, the master database, and the msdb database. I/O requirements for these objects are minimal and therefore they do not generally require high performance, but they do require fault tolerance to remain available. To maintain continuous operation, you must implement fault tolerance for these objects.

Note: You should isolate the database transaction log from all other I/O activity; no other files should exist on the drives that contain the log file. This ensures that, with the exception of transaction log backup and the occasional rollback, nothing disturbs the sequential nature of transaction log activity.

Overall, RAID 10 affords the best performance, making it the preferred choice for all database files. The following table summarizes the RAID level recommendations for PeopleSoft applications:

RAID 10
  Data files: Recommended for PeopleSoft database data files. More spindles will yield better performance.
  Log files: Recommended for log files. Isolate from all other I/O activity.
  tempdb (1): Recommended.
  System databases and SQL Server binaries: N/A

RAID 1
  Data files: N/A
  Log files: N/A
  tempdb (1): N/A
  System databases and SQL Server binaries: Recommended.

2.1.2 Typical I/O Performance Recommended Range


For PeopleSoft applications with high performance requirements, the recommended ranges for SQL Server data and log files in milliseconds (ms) per read and milliseconds per write are as follows:

SQL Server data files:
- Less than 10 ms is recommended.
- 10 to 20 ms is acceptable.
- Above 20 ms can have an adverse effect on the performance of the system and is usually not acceptable, especially for high-throughput deployments.

SQL Server transaction log files:
- Less than 5 ms is recommended.
- 5 to 10 ms is acceptable.
- Above 10 ms indicates that the disk cannot keep up with the workload and performance will be negatively affected.
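To see how your files measure up against these ranges, the cumulative latency counters in the sys.dm_io_virtual_file_stats dynamic management view can be averaged. The following query is only a rough sketch: it reports averages accumulated since the instance last started, not a point-in-time view.

-- Average latency (ms) per read and per write for each database file.
select DB_NAME(vfs.database_id) as database_name,
       mf.physical_name,
       vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  as avg_ms_per_read,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) as avg_ms_per_write
from sys.dm_io_virtual_file_stats(NULL, NULL) as vfs
join sys.master_files as mf
  on mf.database_id = vfs.database_id and mf.file_id = vfs.file_id
order by database_name, mf.physical_name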

(1) Refer to section 2.3, Tempdb Placement and Tuning, for more information about tempdb.

2.2 Files, Filegroups, and Object Placement Strategies


The complex pattern of I/O activity and the large number of tables and indexes in the PeopleSoft database make attempting strategic placement of objects and object types (that is, tables and indexes) a difficult task. A better strategy is to spread the data across as many physical drives as possible, which puts the entire storage system to work completing as many I/O requests as possible in parallel. It is recommended that you create a user-defined filegroup and then create secondary data files in it. Mark the user-defined filegroup as default and place the PeopleSoft objects in it. For increased manageability and performance, it is recommended that you create multiple files. A good rule of thumb is to have the number of data files in the database equal to the number of processor cores. Therefore, a 4-processor dual-core database server would have 8 data files configured for each PeopleSoft database.

Note: PeopleSoft applications do allow you to assign tables and indexes to specific filegroups. To do so, update the PeopleTools tables PSDDLMODEL and PSDDLDEFPARMS. When used, each table and index script is generated with its specific filegroup included.
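A minimal sketch of this setup, assuming a hypothetical filegroup named PSDATA and illustrative file paths and sizes; repeat the ADD FILE step once per processor core, ideally spread across different drives:

ALTER DATABASE <YourDatabaseName> ADD FILEGROUP PSDATA ;

-- One secondary data file; add one per processor core.
ALTER DATABASE <YourDatabaseName>
ADD FILE ( NAME = psdata01, FILENAME = 'E:\MSSQL\Data\psdata01.ndf', SIZE = 4096MB, FILEGROWTH = 512MB )
TO FILEGROUP PSDATA ;

-- Make the user-defined filegroup the default so new PeopleSoft objects are created in it.
ALTER DATABASE <YourDatabaseName> MODIFY FILEGROUP PSDATA DEFAULT ;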

2.3 Tempdb Placement and Tuning


The tempdb system database is a global resource that is shared across all databases in the database instance and all users connected to the instance of SQL Server. It is used to hold the following objects:
- Temporary user objects that are explicitly created, such as global or local temporary tables, temporary stored procedures, table variables, and cursors.
- Internal objects created by the SQL Server database engine, for example, work tables to store intermediate results for spools or sorting during query execution.
- Row versions generated by data modification transactions in a database that uses the read-committed snapshot isolation level.
- Row versions generated by data modification transactions for features such as online index operations.

The configuration of tempdb is critical for best performance because of the potential of added performance stress on tempdb from new features such as the read-committed snapshot isolation level and online index operations. It is recommended that tempdb be isolated from other database activity and be placed on its own set of physical disks. It is especially important to use RAID 10 for tempdb. To move tempdb to its own set of RAID disks, use the ALTER DATABASE statement with the MODIFY FILE clause to specify a new location for the tempdb data file and log file, as explained in the SQL Server Books Online topic Moving System Databases.

Pre-sizing tempdb to a sufficiently large size is strongly recommended. A good rule of thumb is to start by sizing tempdb at 20 to 30% of the database size and increase it based on the utilization for your specific workload. You may also want to increase the FILEGROWTH for tempdb to 50 MB. This prevents tempdb from expanding too frequently, which can affect performance. Set the tempdb database to auto grow, but rely on this option only to provide disk space for unplanned exceptions.

When the READ_COMMITTED_SNAPSHOT database option is ON, logical copies are maintained for all data modifications performed in the database. Every time a row is modified by a specific transaction, the instance of the Database Engine stores a version of the previously committed image of the row in tempdb until the transaction that modified the row is committed. The tempdb database should be sized to have sufficient capacity to store these row versions as well as the other objects that are usually stored in tempdb. Set the file growth increment to a reasonable size to avoid the tempdb database files growing by too small a value. If the file growth is too small compared to the amount of data being written to tempdb, tempdb may have to constantly expand, which will affect performance. See the following general guidelines for setting the FILEGROWTH increment for tempdb files:

tempdb file size        FILEGROWTH increment
Less than 1 GB          50 MB
Greater than 1 GB       10%
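A sketch of the MODIFY FILE statements mentioned above, combining relocation with pre-sizing and the FILEGROWTH guideline. The drive letters and sizes are placeholders; tempdev and templog are the default tempdb logical file names, and the new location takes effect only after the SQL Server instance is restarted:

ALTER DATABASE tempdb
MODIFY FILE ( NAME = tempdev, FILENAME = 'T:\tempdb\tempdb.mdf', SIZE = 2048MB, FILEGROWTH = 50MB ) ;

ALTER DATABASE tempdb
MODIFY FILE ( NAME = templog, FILENAME = 'T:\tempdb\templog.ldf', SIZE = 1024MB, FILEGROWTH = 50MB ) ;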

Note: Monitor and avoid automatic file growth, as it impacts performance. Every time SQL Server is started, the tempdb file is re-created with the default size. While tempdb can grow, it does take resources to perform this task. To reduce the overhead of tempdb growing, you may want to permanently increase the default size of tempdb after carefully monitoring its growth.

Also, consider adding multiple data files to the tempdb filegroup (2). Using multiple files reduces tempdb contention and yields significantly better scalability. As a general rule of thumb, create one data file for each processor core on the server (accounting for any affinity mask settings). For example, a 4-processor dual-core server would be configured with 8 tempdb data files. To add multiple data files, use the ALTER DATABASE statement with the ADD FILE clause. For example:
ALTER DATABASE tempdb
ADD FILE (
    NAME = tempdev2,
    FILENAME = 'C:\tempdb2.ndf',
    SIZE = 100MB,
    FILEGROWTH = 50MB
) ;

(2) In SQL Server, the tempdb database can have only a single filegroup.

Make each data file the same size; this allows for optimal proportional-fill performance.

2.4 Data and Log File Sizing


For the PeopleSoft installation, it is critical that you set sizes for the database data and log files appropriately. Ensure that the data and log files always have enough capacity to allow data modifications to happen seamlessly without causing a physical file expansion (autogrow). In other words, the data and log files should be pre-grown to a sufficiently large size. It is recommended that you enable autogrow for the data and log files; however, it is meant only as a fallback mechanism in the event that file expansion is required. For large databases, it is recommended that you configure autogrow by size (MB) rather than by percent.
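For example, a sketch of pre-growing the files and switching autogrow to a fixed megabyte increment; the logical file names and sizes are placeholders to be adjusted for your installation:

ALTER DATABASE <YourDatabaseName>
MODIFY FILE ( NAME = <DataFileLogicalName>, SIZE = 20480MB, FILEGROWTH = 512MB ) ;

ALTER DATABASE <YourDatabaseName>
MODIFY FILE ( NAME = <LogFileLogicalName>, SIZE = 8192MB, FILEGROWTH = 512MB ) ;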

2.5 Recovery Models


SQL Server provides three recovery models that determine how transactions are logged and the level of exposure to data loss. They are Simple Recovery, Full Recovery, and Bulk-Logged Recovery.

2.5.1 Simple Recovery Model


The Simple Recovery model allows you to recover the database to the point of the last backup. However, you cannot restore the database to the point of failure or to a specific point in time. It is not recommended to use the Simple Recovery model for PeopleSoft production deployments. You can consider using the Simple Recovery model for your development environment. The advantage of using Simple Recovery model for development environments is that it prevents extensive growth of the transaction log file and is easy to maintain.

2.5.2 Full Recovery Model


Full Recovery provides the ability to recover the database to the point of failure or to a specific point in time using the database backups and transaction log backups, and provides complete protection against media failure. If one or more data files are damaged, media recovery can restore all committed transactions. In-process transactions are rolled back. It is recommended to use the Full Recovery model for your production PeopleSoft deployments. In Full Recovery model, log backups are required. This model fully logs all transactions and retains the transaction log records until after they are backed up. The Full Recovery model allows a database to be recovered to the point of failure, assuming that the tail of the log can be backed up after the failure. The Full Recovery model also supports restoring individual data pages.
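To verify and, if necessary, set the recovery model, a minimal sketch using the same placeholder database name as the other examples in this paper:

select name, recovery_model_desc from sys.databases where name = '<YourDatabaseName>'

ALTER DATABASE <YourDatabaseName> SET RECOVERY FULL ;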


2.5.3 Bulk-Logged Recovery Model


The Bulk-Logged Recovery model allows the database to be recovered only to the end of a transaction log backup when the log backup contains bulk changes. Unlike the Full Recovery model, point-in-time recovery is not supported. The Bulk-Logged Recovery model provides protection against media failure combined with the best performance and minimal log space usage for certain large-scale or bulk copy operations. Operations such as SELECT INTO, CREATE INDEX, and bulk loading data are minimally logged, so the chance of data loss for these operations is greater than in the Full Recovery model. In this model, log backups are still required. Like the Full Recovery model, the Bulk-Logged Recovery model retains transaction log records until after they are backed up. The tradeoffs are bigger log backups and increased work-loss exposure because the Bulk-Logged Recovery model does not support point-in-time recovery.

2.6 Database Options


The following database options may have performance implications on the PeopleSoft application. The database options are discussed below with the recommended setting for optimal performance for PeopleSoft applications.

2.6.1 Read-Committed Snapshot


The read-committed snapshot isolation level was first introduced in SQL Server 2005; SQL Server 2008 has built on that and enhanced it to be more scalable. The performance of a typical PeopleSoft workload can benefit from this isolation level. Under this isolation level, blocking and deadlocking issues due to lock contention are greatly reduced. Read operations acquire only an Sch-S (schema stability) lock at the table level. No page or row S (shared) locks are acquired, so reads do not block transactions that are modifying data. Every time a row is modified by a specific transaction, the instance of the Database Engine stores a version of the previously committed image of the row in tempdb.

The read-committed snapshot isolation level provides the following benefits:
- SELECT statements do not lock data during a read operation. Read transactions do not block write transactions, and vice versa.
- Since SELECT statements do not acquire locks in most cases, the number of locks required by a transaction is reduced, which reduces the system overhead required to manage locks.
- The possibility of blocking is significantly reduced.
- SELECT statements can access the last committed value of the row while other transactions are updating the row, without getting blocked.
- The number of blocks and deadlocks is reduced.
- Fewer lock escalations occur.

The read-committed snapshot isolation level is not the default isolation level. It has to be explicitly enabled with the Enable_rcsi.sql script included with PeopleTools.


Use the following query to identify whether a database is currently set to use the read-committed snapshot isolation level:
select name, is_read_committed_snapshot_on
from sys.databases
where name = '<YourDatabaseName>'

A value of 1 in the is_read_committed_snapshot_on column indicates that the read-committed snapshot isolation level is set. For PeopleSoft applications, the recommendation is to enable the read-committed snapshot isolation level. PeopleSoft workloads typically have concurrent online and batch processing activities. There are possible blocking and deadlocking issues due to contention from the online and batch activity, which usually manifest as performance degradation due to lock contention. The read-committed snapshot isolation level will alleviate most of the lock contention and blocking issues.

Warning! Please check that the version of PeopleTools you are using supports the read-committed snapshot isolation level. You can only use it if it is supported by PeopleTools.
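The Enable_rcsi.sql script delivered with PeopleTools performs the actual enabling step. For reference only, the core statement such a script is expected to issue is essentially the following; the command requires that no other sessions are active in the database when it runs:

ALTER DATABASE <YourDatabaseName> SET READ_COMMITTED_SNAPSHOT ON ;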

2.6.2 Asynchronous Statistics Update


When this option is enabled (set to ON), queries that trigger an update of out-of-date statistics execute without waiting for the statistics update to complete. By default, the AUTO_UPDATE_STATISTICS_ASYNC option is OFF. For a typical PeopleSoft workload, which has a good mix of transactional and batch activities, it is recommended that you use the default setting of OFF for this option. A PeopleSoft workload may have a large batch process run on the database. The batch process can potentially cause significant changes to the data distribution through updates and inserts. Running a SELECT/UPDATE transactional query through the PeopleSoft online screen immediately following the batch process on the same data set may initiate the out-of-date statistics update. With this option set to OFF (the default setting), the query is held until the statistics are updated; this ensures that the query optimizer has the latest statistical information when it creates the execution plan. This mechanism provides better performance for most scenarios, with the tradeoff of waiting for the statistics update. Because the option is OFF by default, no action is required. To check the current setting, use the following command:
select name, is_auto_update_stats_async_on
from sys.databases
where name = '<YourDatabaseName>'


A value of 0 indicates AUTO_UPDATE_STATISTICS_ASYNC is OFF.
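If the option has previously been switched on and you want to return to the recommended default, a minimal sketch (placeholder database name):

ALTER DATABASE <YourDatabaseName> SET AUTO_UPDATE_STATISTICS_ASYNC OFF ;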

2.6.3 Parameterization
When this option is set to FORCED, the database engine parameterizes any literal value that appears in a SELECT, INSERT, UPDATE, or DELETE statement submitted in any form. The exception is when a query hint of RECOMPILE or OPTIMIZE FOR is used in the query. Use the following ALTER DATABASE statement to enable forced parameterization:
ALTER DATABASE <YourDatabaseName> SET PARAMETERIZATION FORCED ;

To determine the current setting of this option, examine the is_parameterization_forced column in the sys.databases catalog view as follows:
select name, is_parameterization_forced
from sys.databases
where name = '<YourDatabaseName>'

The default value for is_parameterization_forced is 0 (OFF); a value of 1 indicates that forced parameterization is enabled. For a PeopleSoft workload, it is recommended that you set this parameter to 1 (forced). Some PeopleSoft application queries pass in literals instead of parameters. For such workloads, you may want to experiment with enabling the forced parameterization option and see whether it has a positive effect on the workload by way of a reduced number of query compilations and reduced processor utilization. An example query from the PeopleSoft Financials online workload follows:
SELECT 'x' FROM PS_CUST_CONVER WHERE SETID = 'MFG' AND CUST_ID = 'Z00000000022689';

In this example, the literal value Z00000000022689 is passed to the query. When the forced parameterization option is enabled, the hard-coded literal is automatically substituted with a parameter during the query compilation. The query plan would be cached and reused when this query is submitted again, with a different literal value for CUST_ID. Because the plan could be reused, the compilation overhead is eliminated, thereby reducing the processor utilization. NOTE: Queries that contain both literal and parameter values are not FORCED parameterized by the database engine. Example:
SELECT 'x' FROM PS_CUST_CONVER WHERE SETID = @P1 AND CUST_ID = 'Z00000000022689';


However, note that in some cases, when the data in the table is highly skewed, forced parameterization may cause a suboptimal plan to be reused, thus degrading performance. If the parameter value in the query changes significantly enough to warrant a different execution plan, the older plan is still reused from the cache, which may not be optimal from a performance perspective. It is best to experiment with this setting and use it only if necessary. Parameterization can also be specified at the query level using a query hint specified via a plan guide (explained later in this paper).

2.6.4 Auto Update Statistics


When this option is set, any missing or out-of-date statistics required by a query for optimization are automatically built during query optimization. Use the following ALTER DATABASE statement to set this option:
ALTER DATABASE <YourDatabaseName> SET AUTO_UPDATE_STATISTICS ON ;

To determine the current setting of this option, examine the is_auto_update_stats_on column in the sys.databases catalog view as follows:
select name, is_auto_update_stats_on
from sys.databases
where name = '<YourDatabaseName>' ;

A value of 1 for is_auto_update_stats_on indicates that auto update statistics is enabled. For optimal performance in PeopleSoft applications, it is recommended that you leave the auto update statistics option enabled.

2.6.5 Auto Create Statistics


This database option automatically creates missing statistics on columns used in query predicates. Use the following ALTER DATABASE statement to set this option:
ALTER DATABASE <YourDatabaseName> SET AUTO_CREATE_STATISTICS ON ;

To determine the current setting of this option, examine the is_auto_create_stats_on column in the sys.databases catalog view as follows:
select name, is_auto_create_stats_on
from sys.databases
where name = '<YourDatabaseName>'

A value of 1 for is_auto_create_stats_on indicates that the auto create statistics option is enabled.


For optimal performance in PeopleSoft applications, it is recommended that you leave the auto create statistics option enabled.

2.7 SQL Server Configurations


This section discusses several configuration issues for SQL Server:
- Installation Considerations
- Hyper-Threading
- Memory Tuning
- Important sp_configure Parameters

2.7.1 Installation Considerations


Please refer to the PeopleTools Installation Guide for SQL Server for installation background and requirements. The installation guide is available from the PeopleSoft support Web site (https://metalink3.oracle.com/od/faces/secure/km/DocumentDisplay.jspx?id=703595.1).

2.7.2 Hyper-Threading
Hyper-threading is Intel's implementation of simultaneous multithreading technology. The performance benefits of using hyper-threading depend on the workload. For PeopleSoft applications, it is recommended that you disable hyper-threading for the database server via the BIOS, as our lab testing has shown little or no improvement.

2.7.3 Memory Tuning


Memory tuning can be critical to the performance of your PeopleSoft application. This section discusses memory tuning on both 32-bit and 64-bit systems and the various options available for each.

2.7.3.1 32-Bit Memory Tuning


/3GB Switch

By default, all 32-bit operating systems can linearly address only up to 4 GB of virtual memory. The address space is split: 2 GB of address space is directly accessible to the application, and the other 2 GB is accessible only to the Windows executive software. The 32-bit versions of Microsoft Windows Server 2008, Windows Server 2003, and Windows 2000 permit applications to access a 3 GB flat virtual address space when the /3GB switch is specified in the boot.ini file. The /3GB switch allows SQL Server to use 3 GB of virtual address space. This switch is only relevant to 32-bit operating systems.

Note: When the physical RAM in the system exceeds 16 GB and the /3GB switch is used, the operating system will ignore the additional RAM until the /3GB switch is removed. This is because of the increased size of the kernel required to support more page table entries. The assumption is that the administrator would rather not lose the /3GB functionality silently and automatically; therefore, the administrator must explicitly change this setting.

Physical Address Extension (/PAE)

PAE is an Intel-provided memory address extension that enables support of up to 64 GB of physical memory for applications running on most 32-bit (IA-32) Intel Pentium Pro and later platforms. Support for PAE is provided on Windows 2000 and later versions of the Advanced Server and Datacenter Server operating systems. PAE enables most processors to expand the number of bits that can be used to address physical memory from 32 bits to 36 bits, through support in the host operating system, for applications using the Address Windowing Extensions (AWE) API. PAE is enabled by specifying the /PAE switch in the boot.ini file.

AWE Memory

SQL Server 2008 can use as much memory as Windows Server allows. To use AWE memory, you must run the SQL Server 2008 database engine under a Windows account on which the Windows policy Lock Pages in Memory option has been enabled. SQL Server Setup will automatically grant the SQL Server (MSSQLServer) service account permission to use the Lock Pages in Memory option. To enable the use of AWE memory by an instance of SQL Server 2008, use SQL Server Management Studio or the sp_configure command.

PeopleSoft applications consume relatively large amounts of lock memory, so many deployments will benefit from enabling a combination of /3GB and AWE memory. It is recommended to set the max server memory option when using AWE memory.

To set memory options:
1. If your installation of Microsoft Windows Server 2008, Windows Server 2003, or Windows 2000 has more than 4 GB of memory but less than 16 GB of memory, add the /3GB switch to boot.ini.
2. To enable Physical Address Extension, add the /PAE switch to boot.ini.
3. Use sp_configure to enable AWE: sp_configure 'awe enabled', 1.
4. Set the maximum amount of memory SQL Server can use with sp_configure 'max server memory'.
5. Enable the configuration changes using RECONFIGURE WITH OVERRIDE.
6. Restart the SQL Server instance.

Note: Some services such as antivirus software have caused instability when used on systems that have /3GB enabled, and servers are constrained to no more than 16 GB if both /3GB and /PAE are enabled.
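A hedged T-SQL sketch of steps 3 through 5, assuming a dedicated server with 16 GB of RAM; the max server memory value is an illustrative placeholder, and both options are advanced options, so show advanced options is enabled first:

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- Step 3: enable AWE memory.
EXEC sp_configure 'awe enabled', 1;
-- Step 4: cap SQL Server memory (value in MB); leave 1 to 2 GB for the operating system.
EXEC sp_configure 'max server memory', 14336;
-- Step 5: apply the changes.
RECONFIGURE WITH OVERRIDE;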

2.7.3.2 64-Bit Memory Tuning


Depending on the server and the Windows operating system used, memory in the range of terabytes can be addressed directly.


SQL Server 2008 64-bit editions can take full advantage of the large memory address space, thus eliminating the 4 GB virtual address space limit imposed by 32-bit systems. The 64-bit systems bring linear memory addressability to SQL Server, meaning that no internal memory mapping is needed for large memory access, and the buffer pool and all other memory structures of SQL Server can fully utilize the memory. For large PeopleSoft applications with high user concurrency in the range of thousands of users and a large database size, 64-bit systems can provide scalability and high performance. Such complex and highly concurrent PeopleSoft applications typically make heavy use of memory and can benefit from 64-bit systems in the following areas:
- Plan cache: The ad hoc and dynamic SQL from PeopleSoft applications can fully utilize the large memory space. The plans generated can stay in memory longer, promoting more reuse and fewer compilations.
- Workspace memory: Index builds and complex concurrent hash joins can be done in memory.
- Connection memory: Large numbers of concurrent connections can be easily handled.
- Thread memory: High concurrency load can be easily handled.
- Lock memory: Concurrent PeopleSoft workloads can utilize large amounts of lock memory.

For PeopleSoft applications with large scalability and memory requirements, the 64-bit platform is highly recommended. It is worth mentioning that Windows Server 2008 Standard Edition (64-bit) can only address 32 GB of memory. To address memory beyond 32 GB, all the way up to 2 TB, you should use Windows Server 2008 Enterprise Edition (64-bit). If the 32-bit platform is under memory pressure and memory is proving to be a bottleneck, migration to a 64-bit platform may help.

2.7.3.3 Lock Pages in Memory


In 32-bit and 64-bit computing environments, assign the SQL Server 2008 service account the Windows policy Lock Pages in Memory option. This policy determines which accounts can use a process to keep data in physical memory, preventing the system from paging the data to virtual memory on disk. The Lock Pages in Memory option is set to OFF by default in SQL Server 2008. If you have system administrator permissions, you can enable the option manually by using the Windows Group Policy tool (gpedit.msc) and assign this permission to the account under which SQL Server is running.

To enable Lock Pages in Memory:
1. On the Start menu, click Run. In the Open box, type gpedit.msc. The Group Policy dialog box opens.


2. On the Group Policy console, expand Computer Configuration, and then expand Windows Settings.
3. Expand Security Settings, and then expand Local Policies.
4. Select the User Rights Assignment folder. The policies will be displayed in the details pane.
5. In the pane, double-click Lock pages in memory.
6. In the Local Security Policy Setting dialog box, click Add.
7. In the Select Users or Groups dialog box, add an account with privileges to run sqlservr.exe.

It is recommended that you set the Lock Pages in Memory option when using 64-bit operating systems. This keeps data in physical memory, preventing the system from paging the data to virtual memory on disk.

2.7.4 Important sp_configure Parameters


The following table presents some of the important sp_configure parameters along with their recommended values.

affinity mask
Limits SQL Server execution to only a certain set of processors defined by the bit mask. It is useful for reserving processors for other applications running on the database server. The default value is 0 (execute on all processors). There is no need to alter this setting if your server is a dedicated database server.

lightweight pooling
Controls fiber mode scheduling. It primarily helps large multiprocessor servers that are experiencing a high volume of context switching and high processor utilization. The default value is OFF. For PeopleSoft applications, set this option to OFF.

priority boost
Boosts the priority at which SQL Server runs. For PeopleSoft applications, set this option to OFF (the default value).

max degree of parallelism
Specified as an integer value, max degree of parallelism is used to limit the number of processor cores on which a query can be executed with a parallel query plan. The default value is 0 (all processors). This default setting may help some complex SQL statements, but it can take away CPU cycles from other users during high online usage periods. Set this parameter to 1 during peak OLTP periods. Increase the value of this parameter during periods of low OLTP and high batch processing, reporting, and query activity. Note: Index creation and re-creation can take advantage of parallelism, so it is advisable to enable parallelism through this setting when planning to build or rebuild indexes. The OPTION hint in the index creation or rebuild statements can also be used to set max degree of parallelism. Performance tests on some of the batch processes showed that parallelism could result in very good performance. If you do not want to toggle this value based on the type of load, you can set the value to 1 to disable parallelism. However, you may want to explore some middle ground by setting this option to 2, which may help some complex batch jobs as well as online performance.

cost threshold for parallelism
Specifies the cost threshold in seconds that needs to be met before a query is eligible to be executed with a parallel query execution plan. The default value is 5. Most of the PeopleSoft online SQL statements are simple in nature and do not require parallel query execution plans. Consider increasing the value to 60, so only truly complex queries will be evaluated for parallel query execution plans.

cursor threshold
Controls how cursors are populated. It is strongly recommended to leave this setting at its default value of -1.

awe enabled
Enable this parameter to take advantage of memory above 4 GB. This is primarily applicable to 32-bit operating systems, but is recommended for 64-bit servers as well.

max server memory
Specifies the maximum memory in megabytes allocated to a SQL Server instance. The default value is 2,147,483,647 MB. If you are enabling AWE, remember that AWE memory is statically allocated and non-pageable on Windows 2000; AWE memory is dynamically allocated on Windows Server 2003 and Windows Server 2008. For a dedicated database server, plan to leave at least 1 to 2 GB for the operating system and other services on the database server. For example, if the database server has 16 GB, set max server memory to 14 GB. Monitor the Memory: Available Bytes counter to determine whether max server memory should be reduced or increased. You may need to leave additional memory if you are running other third-party applications such as performance monitoring or backup software.

min server memory
Specifies the minimum server memory, to guarantee a minimum amount of memory available to the buffer pool of an instance of SQL Server. SQL Server will not immediately allocate the amount of memory specified in min server memory on startup. However, after memory usage has reached this value due to client load, SQL Server cannot free memory from the allocated buffer pool unless the value of min server memory is reduced. The default value is 0 MB. For dedicated database servers, you should set min server memory to 50-90% of the max server memory value.
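As an illustration, a hedged sp_configure sketch of applying the parallelism recommendations above; the values shown are the suggested starting points discussed in the table, not fixed requirements, and both options are advanced options:

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- Favor online (OLTP) performance during peak periods.
EXEC sp_configure 'max degree of parallelism', 1;
-- Only genuinely expensive queries are considered for parallel plans.
EXEC sp_configure 'cost threshold for parallelism', 60;
RECONFIGURE WITH OVERRIDE;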

2.8 Network Protocols and Pagefile


SQL Server 2008 supports the following network protocols:
- TCP/IP
- Named Pipes
- Virtual Interface Adapter (VIA)
- Shared Memory


Configuring the correct network protocol is critical from a performance and stability perspective. The following recommendations cover configuring network protocols for PeopleSoft applications.

For TCP/IP, data transmissions are more streamlined and have less overhead than Named Pipes. Data transmissions can also take advantage of TCP/IP Sockets performance enhancement mechanisms, such as windowing, delayed acknowledgements, and so on. This can be very helpful in high network traffic scenarios. For PeopleSoft applications, such performance differences can be significant. For best performance, install TCP/IP on the server and configure SQL Server TCP/IP to communicate with clients.

The Named Pipes network protocol can be installed and used only when the application server or process scheduler is running on the same physical computer as the database engine. For the application to use Named Pipes as the first-choice network protocol, make sure that, in the Client Configuration section of SQL Server Configuration Manager, Named Pipes has a higher order than TCP/IP. Ensure that SQL Server uses Named Pipes in addition to TCP/IP. For this to work, you must also configure native ODBC connections to use Named Pipes.

The VIA (Virtual Interface Adapter) protocol works only with specialized VIA hardware. Enable this protocol only if you have VIA hardware installed in the server and plan to use it. If you do not have VIA hardware installed in the server, you should keep this protocol disabled.

Shared Memory is a non-routable protocol and is not useful for PeopleSoft applications.

2.9 SQL Native Client


The SQL Native Client is a network library that was first introduced in SQL Server 2005 and is similar to MDAC. It contains the SQL OLE DB provider and SQL ODBC driver in one native dynamic link library (DLL), supporting applications that use native-code APIs (ODBC, OLE DB, and ADO) to connect to Microsoft SQL Server. It is recommended to use the SQL Native Client rather than MDAC for data access to SQL Server 2008 for PeopleSoft applications. SQL Native Client provides access to new features of SQL Server, such as database mirroring, provides some performance optimizations, and retains full backward compatibility with ODBC and OLE DB.

To configure SQL Native Client, navigate to the ODBC Data Source Administrator in the Control Panel. Select SQL Native Client as the data source driver instead of SQL Server and configure the data source.


2.10 Application Setup


2.10.1 Dedicated Temporary Tables
One of the ways to improve scalability and reduce the time taken for processing batch programs is to run multiple instances of the same program on subsets of the data in parallel. For example, instead of processing orders 1 to 1,000,000 in a single instance, run five concurrent instances of the same batch program for 200,000 orders each. When running more than one instance of the program concurrently, the most serious potential problem is data contention, which can result in lock waits and deadlocks. To reduce contention, PeopleSoft Application Engine provides dedicated temporary tables. These temporary tables are permanent with respect to the Application Engine program definition; only the data residing in these tables is temporary. Temporary tables also improve performance when a subset of data from a huge table is referenced multiple times within the program. Temporary tables can:
- Store intermediate data during the process.
- Minimize contention when running in parallel.
- Ensure that each process uses its own temporary table instance.

It is important to allocate the correct number of dedicated temporary tables for your environment. If the temporary tables are under-allocated, the total available instances of a particular temporary table are fewer than the total number of programs running at one time that use the temporary table. When this condition occurs, the program either uses the base temporary table and continues to run, or it aborts with an error, depending on whether the program property setting If non-shared tables cannot be assigned is set to Continue or Abort, respectively. Both options may be undesirable. The following are the drawbacks if the program uses the base (non-shared) temporary table:
- Multiple instances may read or write into the same table, causing contention.
- Selecting becomes dependent on ProcessInstance as a leading qualifier.
- DELETE is performed instead of TRUNCATE TABLE. DELETE is far slower than TRUNCATE TABLE.

Important reasons for PeopleSoft to choose regular tables as temporary tables, instead of using SQL Server's temporary tables (#tables), include the following:
- A number of Application Engine programs are restartable. This means that if a program abends in the middle of the run, data stored in the temporary tables is preserved, which enables the program to restart from the last commit point. This is not possible with #tables, because they are no longer available after the session terminates.
- In SQL Server, this regular form of temporary table offers the same performance as #tables when they are allocated correctly.


2.10.2 Statement Compilation


When an SQL statement is issued, SQL Server checks if the statement (an execution plan for the query) is already available in the SQL Server cache. If not, the query must be compiled fully. During the compilation phase SQL Server must check the statement syntactically and semantically, prepare a sequence tree, optimize based on statistics and other criteria, and generate an execution plan. This is referred to as a compile, which is expensive in terms of CPU usage. Compilation happens when SQL Server parses a query and cannot find an exact match for the query in the procedure cache. This occurs due to the inefficient sharing of SQL statements, and can be improved by using bind variables (parameters) instead of literals in queries. The number of hard parses can be identified with an Application Engine trace (128). Refer to section 5.2.2.1 for instructions on how to capture the Application Engine trace.

2.10.2.1 Use of Bind Variables


The number of SQL statement compiles can be reduced by constructing the statements to use bind variables instead of literals. Doing this ensures that similar statements where the only difference is the parameter value being passed in are treated as the same statement and do not result in a SQL compilation. Most of the PeopleSoft programs written in Application Engine, SQR, and COBOL have taken care to address this issue. You should make sure that any customizations you have implemented are also configured appropriately using the procedure explained below.

2.10.2.2 Application Engine ReUse Option


Application Engine programs use bind variables in their SQL statements, but these variables are specific to PeopleSoft. When a statement is passed to the database, Application Engine sends the statement with literal values. To instruct the Application Engine program to send bind variables, enable the ReUse option in the Application Engine step containing the statement that needs to use the bind variable.

Example

Statement in PC_PRICING.BL6100.10000001:

UPDATE PS_PC_RATE_RUN_TAO
SET RESOURCE_ID = %Sql(PC_COM_LIT_CHAR,%NEXT(LAST_RESOURCE_ID),1,20,20)
WHERE PROCESS_INSTANCE = %ProcessInstance
AND BUSINESS_UNIT = %Bind(BUSINESS_UNIT)
AND PROJECT_ID = %Bind(PROJECT_ID)
AND ACTIVITY_ID = %Bind(ACTIVITY_ID)
AND RESOURCE_ID = %Bind(RESOURCE_ID)
AND LINE_NO = %Bind(LINE_NO)

Statement without the ReUse Option


AE Trace:

-- 16.46.00 ......(PC_PRICING.BL6100.10000001) (SQL)
UPDATE PS_PC_RATE_RUN_TAO
SET RESOURCE_ID = 10000498
WHERE PROCESS_INSTANCE = 419
AND BUSINESS_UNIT = 'US004'
AND PROJECT_ID = 'PRICINGA1'
AND ACTIVITY_ID = 'ACTIVITYA1'
AND RESOURCE_ID = 'VUS004VA10114050'
AND LINE_NO = 1
/
-- Row(s) affected: 1
SQL Statement: BL6100.10000001.S
Compile:  Count 252   Time 0.6
Execute:  Count 252   Time 1.5
Fetch:    Count 0     Time 0.0
Total Time: 2.1

Statement with the ReUse Option

AE Trace:

-- 16.57.57 ......(PC_PRICING.BL6100.10000001) (SQL)
UPDATE PS_PC_RATE_RUN_TAO
SET RESOURCE_ID = :1
WHERE PROCESS_INSTANCE = 420
AND BUSINESS_UNIT = :2
AND PROJECT_ID = :3
AND ACTIVITY_ID = :4
AND RESOURCE_ID = :5
AND LINE_NO = :6
/
-- Bind variables:
-- 1) 10000751
-- 2) US004
-- 3) PRICINGA1
-- 4) ACTIVITYA1
-- 5) VUS004VA10114050
-- 6) 1

-- Row(s) affected: 1
SQL Statement: BL6100.10000001.S
Compile:  Count 1     Time 0.0
Execute:  Count 252   Time 0.4
Fetch:    Count 0     Time 0.0
Total Time: 0.4

Restrictions on Enabling the ReUse Option

It is acceptable to enable ReUse if %Bind is used to supply a value to a column in a WHERE predicate, SET clause, or INSERT VALUES list. For example:

UPDATE PS_PF_DL_GRP_EXEC
SET PF_ODS_STATUS = 'C',
PROCESS_INSTANCE = %Bind(PROCESS_INSTANCE)
WHERE PF_DL_GRP_ID = %Bind(PF_DL_GRP_ID)
AND PF_DL_ROW_NUM = %Bind(PF_DL_ROW_NUM)

Do not enable ReUse if %Bind is used to supply a column name or portion of a table name. For example:

SELECT DISTINCT KPI_ID , CALC_ID ,'' ,0 ,0 ,KP_CALC_SW ,KP_OFFCYCLE_CALC
FROM PS_%Bind(KP_CALC_AET.KP_KPI_LST1,NOQUOTES) %Bind(EPM_CORE_AET.FACT_TABLE_APPEND ,NOQUOTES)
WHERE LOOP_CNT = %Bind(KP_CALC_AET.LOOP_CNT)
AND LOOP_PROGRESSION='B'

Do not enable ReUse if %Bind appears in the SELECT list. For example:

SELECT DISTINCT %Bind(EPM_CORE_AET.PROCESS_INSTANCE) , %Bind(EPM_CORE_AET.ENGINE_ID) ,
%CurrentDateTimeIn , 10623 , 31 , 'GL_ACCOUNT' ,'' ,'' ,'' ,'' ,
A.MAP_GL_ACCOUNT ,'' ,'' ,'' ,'' , 'LEDMAP_SEQ'
FROM

Do not enable ReUse if %Bind is being used to resolve to a value other than a standard Bind value and the contents of the Bind will change each time the statement executes. For example:

%Bind(GC_EQTZ_AET.GC_SQL_STRING,NOQUOTES)

In this case, the SQL is different each time (at least from the database perspective) and therefore cannot be reused.

If the NOQUOTES modifier is used inside %Bind, it is implied to be STATIC. For dynamic SQL substitution, the %Bind has a CHAR field and NOQUOTES to insert SQL rather than a literal value. If you enable ReUse, the value of the CHAR field is substituted inline, instead of using a Bind marker (as in :1, :2, and so on). The next time the same Application Engine action executes, the SQL that it executes will be the same as before, even if the value of a static bind has changed. For example:

INSERT INTO PS_PF_ENGMSGD_S %Bind(EPM_CORE_AET.TABLE_APPEND,NOQUOTES)
(PROCESS_INSTANCE , ENGINE_ID , MESSAGE_DTTM , MESSAGE_SET_NBR , MESSAGE_NBR ,
FIELDNAME1 , FIELDNAME2 , FIELDNAME3 , FIELDNAME4 , FIELDNAME5 ,
FIELDVAL1 , FIELDVAL2 , FIELDVAL3 , FIELDVAL4 , FIELDVAL5 ,
SOURCE_TABLE)


Use the %ClearCursor function to recompile a reused statement and reset any STATIC %Bind variables. Refer to the PeopleSoft Application Engine documentation for usage.

2.10.2.3 Application Engine Bulk Insert Option


By buffering rows to be inserted, specifying a ReUse statement value of Bulk Insert can provide a considerable performance boost. PeopleSoft Application Engine offers this nonstandard SQL enhancement for Microsoft SQL Server. This feature improves performance only when an SQL INSERT statement is called multiple times and in the absence of intervening COMMIT statements.

PeopleSoft Application Engine ignores the Bulk Insert option in the following situations:
- The SQL is not an INSERT statement.
- The SQL is other than an INSERT/VALUES statement that inserts one row at a time. For example, INSERT/SELECT, UPDATE, and DELETE statements are ignored.
- The SQL does not have a VALUES clause.
- The SQL does not have a field list before the VALUES clause.

In these situations, PeopleSoft Application Engine still executes the SQL; it just does not take advantage of the performance boost associated with Bulk Insert.3

2.10.3 Statistics at Runtime for Temporary Tables


PeopleSoft batch processes use shared or dedicated temporary tables. These temporary tables have few or no rows at the beginning of the process and again few or no rows at the end: they are populated during the process and deleted or truncated at its beginning or end. Because the data in these tables changes so radically, accurate statistics on them can help the SQL statements significantly. Beginning with PeopleSoft 8, if the process is written in PeopleSoft Application Engine, the %UpdateStats meta-SQL can be used in the program after the rows are populated. This ensures that the statistics are updated before data is selected from the table. Note: A COMMIT is required prior to executing this statement. Make sure to use the COMMIT statement immediately following the previous step; if you do not, this statement will be skipped by Application Engine. For example, suppose you have the following meta-SQL command in an SQL step of an Application Engine program:
3 PeopleTools 8.14: Application Engine, Advanced Development: Re-Using Statements: Bulk Insert. PeopleSoft, Inc. http://ps8dweb1.vccs.edu:6001/sa80books/eng/psbooks/tape/chapter.htm?File=tape/htm/aecomt04.htm%23H4011

%UpdateStats(INTFC_BI_HTMP)

This meta-SQL issues the following command to the database at runtime:

UPDATE STATISTICS PS_INTFC_BI_HTMP

Make sure the temporary table statistics have been handled as shown above. If you find that the statistics created by the AUTO_UPDATE_STATISTICS option are sufficient, you can disable %UpdateStats in the program.

2.10.4 Disabling Update Statistics


Update Statistics (%UpdateStats) can be disabled in two ways.

Program level: Identify the steps that issue %UpdateStats and deactivate them. These steps can be identified with the Application Engine trace (for example, set the trace flag TraceAE=3). This is a program-specific setting.

Installation level: If there is a compelling reason to disable update statistics for all batch programs, an installation-level setting can be applied to disable %UpdateStats. Set the following parameter in the Process Scheduler configuration file psprcs.cfg:

;-------------------------------------------------------------------------
; DbFlags Bitfield
;
; Bit   Flag
; ---   ----
;   1 - Ignore metaSQL to update database statistics (shared with COBOL)
DbFlags=1

Note: Carefully consider all the positive and negative effects before setting this flag because it can adversely affect performance.

2.11 Batch Server Placement


PeopleSoft Process Scheduler executes PeopleSoft batch processes. Process Scheduler (Batch Server) can be configured on the batch server (either on the same server as the application server or on a separate server) or on the database server. There are three potential locations for the batch server:

Dedicated batch scheduler. This is the preferred configuration. One or more dedicated servers assigned to run the Process Scheduler service provide the best overall scalability for batch processing and the isolation needed to effectively tune the various tiers of the PeopleSoft infrastructure.

Database server. In this scenario, PeopleSoft Process Scheduler runs directly on the database server. The drawback of this configuration is that it consumes costly database server resources that could otherwise be utilized by SQL Server.

Application servers. PeopleSoft Process Scheduler can be collocated with the Tuxedo instances on one or more of the application servers. This is not an ideal configuration because the application servers are memory-intensive processes, and co-locating the batch server on the same system leaves less memory for the application server.

Note: If the Process Scheduler is installed on a separate batch server and not on the database server, use a high-bandwidth connection such as 1 Gbps between the batch server and database server. If a particular batch process uses extensive row-by-row processing, having the Process Scheduler on the database server may offer increased performance.


3 SQL Server 2008 Performance and Compliance Optimizations for PeopleSoft Applications
SQL Server 2008 introduces many new optimizations and enhancements at every layer of the database engine. Although a complete discussion of all the SQL Server 2008 changes is out of the scope of this white paper, the important ones related to PeopleSoft applications are discussed in the following sections. Many of these optimizations and enhancements are specific to the PeopleSoft applications and can be effectively leveraged to enhance performance and manageability.

3.1 Resource Management


PeopleSoft applications are characterized by a variety of workloads. It is fairly common for batch and online processes to run concurrently. This can at times lead to contention for memory and/or CPU, which can result in overall performance degradation. To help address this problem, the SQL Server 2008 database engine introduces a new feature called Resource Governor to manage and govern physical resources such as memory and processor utilization.

3.1.1 Resource Governor


Resource Governor is a memory and CPU resource management utility introduced in SQL Server 2008. The memory and CPU resources are governed by specifying consumption limits. In the context of PeopleSoft applications, Resource Governor can be used to solve the classic CPU and memory contention issues that arise when batch and online activity run concurrently. During such concurrent activity, the batch may consume a disproportionate amount of CPU and memory, starving the online workload of critical resources. This usually degrades performance for the online activity and makes it sluggish. In this scenario, Resource Governor can be used to cap the CPU and memory resources allocated to the batch, freeing them up for the online workload.

There are three critical components of Resource Governor:

Classifier Function: When Resource Governor is enabled, the classifier function is executed for all new connections. Based on the classifier function logic, the connection is assigned to a workload group. The classifier function is fully customizable. The following system functions can be used for classification: HOST_NAME(), APP_NAME(), SUSER_NAME(), SUSER_SNAME(), IS_SRVROLEMEMBER(), and IS_MEMBER(). As an example, you could use the APP_NAME() system function to distinguish SQR batch jobs from online Application Engine workloads for PeopleSoft applications.


For more information on the classifier function and considerations for writing one, please review the Considerations for Writing a Classifier Function topic in SQL Server 2008 Books Online.

Workload Groups: A workload group serves as a container for session requests that are similar according to the classification criteria applied to each request. Two workload groups, Internal and Default, pre-exist when Resource Governor is enabled. User-defined workload groups can be created; for instance, for PeopleSoft applications you can create two user-defined workload groups, Batch and Online. The classifier function can then use the APP_NAME() system function to allocate connections to these workload groups.

Resource Pool: A resource pool represents the allocation of the physical resources of the server (CPU and memory). You can think of it as a virtual SQL Server instance. The Internal and Default resource pools are created when Resource Governor is enabled. User-defined resource pools can be created as required; for instance, for PeopleSoft applications you can create two user-defined resource pools, Batch and Online, and allocate the appropriate CPU and memory limits to them.

For more information on the classifier function, workload groups, and resource pools, review the Resource Governor Concepts topic in SQL Server 2008 Books Online.

[Diagram: incoming connections pass through the classifier UDF and are routed to the Internal, Default, or user-defined workload groups, each of which maps to the Internal, Default, or corresponding user-defined resource pool.]

Using Resource Governor, the incoming requests from PeopleSoft applications (batch, online, and so on) can be classified by using a classifier function. The classifier function assigns the request to a workload group, the workload group is associated with a resource pool, and the resource pool is allocated minimum and maximum CPU and memory resource limits. In the diagram below, the classifier function has assigned the PeopleSoft SQR and batch workload to the Batch workload group and the PeopleSoft online activity to the OLTP workload group. Each of these workload groups is then assigned to its respective resource pool, Batch Pool or OLTP Pool. The resources (CPU and memory) are governed at the resource pool level.


[Diagram: the PeopleSoft SQR/batch workload is routed to the Batch workload group and Batch Pool, and the PeopleSoft online activity is routed to the OLTP workload group and OLTP Pool; each pool carries its own min/max CPU and memory limits.]

The example script below illustrates the use of Resource Governor to govern the CPU resource for the PeopleSoft batch process. In this example the workload from the batch process is classified using the classifier function. If the workload is batch and it is normal business hours (8:00 AM to 5:00 PM), the batch gets assigned to a production workload group which is associated with a production resource pool. In this pool the CPU is governed to a maximum of 50% during production hours, thus giving the online workload more CPU. Outside business hours, the batch is assigned to an off-hours workload group and an off-hours resource pool in which up to 80% of CPU resources are allocated for the batch. The script below presents the steps to create and configure the new resource pools and workload groups and to assign each workload group to the appropriate resource pool.
--- Create a resource pool for batch processing.
USE master
GO
CREATE RESOURCE POOL rpBatchProductionHours
WITH
(
    MAX_CPU_PERCENT = 50,
    MIN_CPU_PERCENT = 0
)
GO

--- Create a corresponding workload group for batch production processing,
--- configure the relative importance, and assign the workload group to the
--- batch production processing resource pool.
CREATE WORKLOAD GROUP wgBatchProductionHours
WITH
(
    IMPORTANCE = LOW
)
USING rpBatchProductionHours
GO

--- Create a resource pool for off-hours batch processing and set limits.
CREATE RESOURCE POOL rpBatchOffHours
WITH
(
    MAX_CPU_PERCENT = 80,
    MIN_CPU_PERCENT = 50
)
GO

--- Create a workload group for off-hours processing, configure the relative
--- importance, and assign the workload group to the off-hours processing
--- resource pool.
CREATE WORKLOAD GROUP wgBatchOffHours
WITH
(
    IMPORTANCE = MEDIUM
)
USING rpBatchOffHours
GO

--- Use the new configuration.
ALTER RESOURCE GOVERNOR RECONFIGURE
GO

Create a classifier function to classify batch based on app name and current time:
CREATE FUNCTION fnBatchClassifier()
RETURNS sysname
WITH SCHEMABINDING
AS
BEGIN
    DECLARE @EightAM time
    DECLARE @FivePM time
    DECLARE @loginTime time
    SET @EightAM = '8:00 AM'
    SET @FivePM = '5:00 PM'
    SET @loginTime = CONVERT(time, GETDATE())
    IF APP_NAME() = 'PFSTBATCH' AND (@loginTime BETWEEN @EightAM AND @FivePM)
    BEGIN
        RETURN N'wgBatchProductionHours'
    END
    -- It is not production hours
    RETURN N'wgBatchOffHours'
END
GO

Register the classifier function and update the in-memory configuration:


ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.fnBatchClassifier) ;
ALTER RESOURCE GOVERNOR RECONFIGURE ;
GO
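Once Resource Governor is enabled, you can verify the effective configuration and observe per-pool and per-group activity with the Resource Governor DMVs, for example:

-- Current resource pools and their CPU/memory limits and usage
SELECT * FROM sys.dm_resource_governor_resource_pools ;
-- Current workload groups and the pool each group is mapped to
SELECT * FROM sys.dm_resource_governor_workload_groups ;
GO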

For more information on Resource Governor please refer to the Introducing Resource Governor section in SQL Server 2008 Books Online. It is also possible to use SQL Server Management Studio to configure Resource Governor. For more information, refer to Resource Governor How To topics in SQL Server 2008 Books Online. Resource Governor is only supported in SQL Server 2008 Enterprise and Developer editions.

3.2 Backup and Storage Optimization


The SQL Server 2008 release brings two significant improvements in the area of backup and storage optimization: the ability to compress backups and the ability to compress data in tables, both of which can benefit PeopleSoft applications. Backup compression can have a significant impact for large PeopleSoft databases in terms of backup and restore time and overall I/O and backup storage efficiency. Data compression can be very effective in reducing the on-disk storage size of large PeopleSoft tables such as PS_JRNL_LN and PS_LEDGER.

3.2.1 Backup Compression


SQL Server 2008 introduces the ability to compress the backup during the backup operation. The compression reduces the overall backup size and the I/O required to write the backup and to read it during a restore. The reduction in I/O results in significant throughput improvements during the backup and restore process. The backup compression ratio varies with the data in the database; however, some tests have shown a 4X compression ratio, a reduction in backup time of 50%, and improvements in restore time of 20-30%. For an 11 GB sample PeopleSoft database, we obtained the following backup compression, backup time, and restore time.
Test               Uncompressed Backup    Compressed Backup    Difference
Size               9.6 GB                 0.9 GB               -91 %
Time to backup     320 sec                133 sec              -58 %
Time to restore    328 sec                155 sec              -53 %

Configuration

Backup compression is disabled by default for new installs. You can change the default at a server level by setting the value of the backup compression default option to 1, as shown below:
USE master;
GO
EXEC sp_configure 'backup compression default', '1';
RECONFIGURE WITH OVERRIDE ;

Alternatively, you can use SQL Server Management Studio to change this setting as well. Use the Database Settings page of the Server Properties dialog to set the backup compression default. Changing this server level option will cause all backups taken to be compressed, by default. However, you can also override the default backup compression setting for an individual backup by using keywords in the BACKUP command itself, as shown in the example below.
BACKUP DATABASE HCM849
TO DISK='Z:\PSFTBackups\HCM849.bak'
WITH COMPRESSION ;

If you are using SQL Server Management Studio to perform the backup, you can use the Set Backup Compression option on the Back Up Database Options page.

Compression Ratio

To view the compression ratio achieved by backup compression, you can query the backup-set history table, as shown below:
SELECT backup_size/compressed_backup_size AS 'compression ratio', database_name
FROM msdb..backupset ;

Performance Impact

Compression is a CPU-intensive operation and may increase CPU usage. It is important to consider the impact on concurrent operations when executing a BACKUP command with the COMPRESSION option. We recommend running the backup operation during off-peak hours. If concurrent execution is required and there is a noticeable impact on CPU usage, you can consider leveraging the Resource Governor feature, as discussed in section 3.1.1, to govern and limit the CPU usage of the BACKUP command.


It is important to note that you can only create a compressed backup when using SQL Server 2008 Enterprise or Developer editions; however, you can restore a compressed backup on any SQL Server 2008 or later edition. For more information on backup compression and the factors that can affect the compression ratio, please refer to the Backup Compression topic in SQL Server 2008 Books Online.
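No special syntax is required to restore a compressed backup; the standard RESTORE command detects and decompresses it automatically. A minimal sketch, reusing the database name and path from the earlier backup example:

RESTORE DATABASE HCM849
FROM DISK = 'Z:\PSFTBackups\HCM849.bak'
WITH REPLACE ;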

3.2.2 Data Compression


SQL Server 2008 can effectively compress data in tables and indexes to help reduce the on-disk storage size. Efficient data storage techniques such as variable-width storage, a page-level dictionary, and differential encoding between rows are leveraged to provide effective compression. Application changes are not required to enable compression: compression only changes the physical storage format in which data of the associated data type is stored, not its syntax or semantics. SQL Server 2008 supports row- and page-level compression for tables and indexes.

Row Compression: Row compression works by manipulating the row-level metadata and storage formats at a row level in the physical page. The compression techniques used at a row level are:
- Use of a variable-length storage format for numeric-based types such as integer, decimal, float, datetime, and money.
- Use of a variable-length storage format for fixed-length character strings, without storing the padding blank characters.
- Reducing the metadata overhead associated with the record. NULL and 0 values across all data types are optimized and require no bytes for storage.

For more information on row compression, please refer to the Row Compression Implementation topic in SQL Server 2008 Books Online.

Page Compression: Page compression uses the prefix and dictionary encoding techniques explained below, as well as the row compression explained above. Overall it consists of the following three operations:
1. Row compression
2. Prefix compression
3. Dictionary compression

Prefix compression identifies prefixes in the rows that can be used to optimize storage. The prefix compression steps are:
1. For each column, identify a value, such as a repeating prefix pattern, that can be used to save storage space for the values in that column.
2. Create a row based on the prefix values identified in step 1 and store it in the page's compression information (CI) structure that immediately follows the page header.
3. Replace the repeating values in the column with a reference to the corresponding prefix. A partial prefix match can be used as well.


The replacement of the repeating values by references to the CI structure results in the space savings and thereby the compression. SQL Server 2008 Books Online includes an illustration of a sample page of a table before and after prefix compression.

[Illustration from SQL Server 2008 Books Online: sample page before and after prefix compression]

Dictionary compression is the next step after prefix compression. Dictionary compression works on the entire page and replaces repeated values. The corresponding illustration, again taken from SQL Server 2008 Books Online, shows the above page after dictionary compression.

Note that unlike prefix compression, in dictionary compression the value 4b is referenced from different columns of the page. For more information on page compression, please refer to the Page Compression Implementation topic in SQL Server 2008 Books Online.

Performance Considerations


Data compression can be CPU intensive. Comparatively, row compression has lower overhead than page compression. We recommend testing your PeopleSoft application for CPU overhead when using compression. Before using compression, you can estimate the size and compression savings by using the sp_estimate_data_compression_savings system stored procedure (see the example sketch at the end of this section). Please refer to SQL Server 2008 Books Online for further information.

Configuration

PeopleSoft applications can use data compression by using the ALTER TABLE command, as shown below. (SQL Server 2008 supports data compression at table creation time as well; however, this may not be natively available in PeopleTools.)
ALTER TABLE PS_LEDGER REBUILD WITH (DATA_COMPRESSION = PAGE) ; GO

This rebuilds the entire table; it does not rebuild the related non-clustered indexes. We recommend verifying the support of data compression for your particular PeopleSoft application with PeopleSoft support before enabling it. For more information on compression commands and syntax, please refer to the Creating Compressed Tables and Indexes topic in SQL Server 2008 Books Online.

Benchmarks

We tested data compression on a 9.446 GB PeopleSoft database. The compression results are as follows:

Database                      Size (GB)    Time taken to compress
Original database             9.45         --
ROW compressed database       3.78         30 min 39 sec
PAGE compressed database      2.11         33 min 27 sec

Data compression is only supported in SQL Server 2008 Enterprise and Developer editions.
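As noted under Performance Considerations above, you can gauge the likely benefit before rebuilding a table by running sp_estimate_data_compression_savings. A minimal sketch follows; the dbo schema, the PS_LEDGER table, and PAGE compression are illustrative assumptions:

EXEC sp_estimate_data_compression_savings
    @schema_name      = 'dbo',
    @object_name      = 'PS_LEDGER',
    @index_id         = NULL,   -- all indexes on the table
    @partition_number = NULL,   -- all partitions
    @data_compression = 'PAGE' ;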

3.3 Auditing and Compliance


Today's corporations and corporate systems face many regulatory and compliance requirements. It is a fairly common requirement to be able to audit statements against specific tables and to protect and secure data using encryption, for example for PCI (Payment Card Industry) compliance.


PeopleSoft applications may also require auditing and encryption for compliance. Because it is usually not feasible to modify the application, it is desirable to provide auditing and encryption at the database layer, without requiring any change to the application itself. To meet these requirements, SQL Server 2008 introduced the Transparent Data Encryption and SQL Server Audit features described in the sections below.

3.3.1 Transparent Data Encryption (TDE)


SQL Server 2008 provides the ability to encrypt the database (data and log files and backups) without the need for application changes. The encryption is simply enabled at the database level, through a database command, and is completely transparent to the PeopleSoft application.

TDE protects data that is at rest on disk. An unauthorized attach of the data file or an unauthorized restore of the database backup will fail without the proper encryption key. However, TDE allows full access and visibility of the data to authorized users. TDE operates at the I/O level: any data that is written to the data and log files is encrypted, and snapshots and backups are also encrypted. Data in memory or in transit on the network is not encrypted and therefore not protected when using TDE.

Encryption of the database file is performed at the page level. The pages in an encrypted database are encrypted before they are written to disk and decrypted when read into memory. TDE does not increase the size of the encrypted database. The encryption uses a database encryption key (DEK), which is stored in the database boot record for availability during recovery. The DEK is a symmetric key secured by a certificate stored in the master database of the server, or by an asymmetric key protected by an EKM module. The backup files of TDE-enabled databases are also encrypted; to restore a TDE database backup, the original certificate protecting the database encryption key must be available.

Configuration

Configuring TDE on a database requires the following steps:
1. Create a master key.
2. Create or obtain a certificate protected by the master key.
3. Create a database encryption key and protect it by the certificate.
4. Set the database to use encryption.

The following example illustrates the steps required to encrypt a database using TDE:
USE master;
GO
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<UseStrongPasswordHere>' ;
GO


CREATE CERTIFICATE MyServerCert WITH SUBJECT = 'My DEK Certificate' ;
GO
USE HCM849
GO
CREATE DATABASE ENCRYPTION KEY
WITH ALGORITHM = AES_128
ENCRYPTION BY SERVER CERTIFICATE MyServerCert ;
GO
ALTER DATABASE HCM849 SET ENCRYPTION ON ;
GO
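Encryption runs as a background process; you can monitor its progress and confirm the final state by querying the sys.dm_database_encryption_keys DMV, for example:

SELECT DB_NAME(database_id) AS database_name,
       encryption_state,        -- 2 = encryption in progress, 3 = encrypted
       percent_complete
FROM sys.dm_database_encryption_keys ;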

Performance Considerations

In some of the lab tests we conducted, the typical performance impact of TDE was found to be about 3-5%. TDE is CPU intensive and is performed at the I/O level. Applications that are I/O intensive may therefore see a higher CPU impact than applications that access data mostly from memory. If your PeopleSoft application has high CPU usage and is very I/O intensive, TDE may adversely affect performance; for applications with low CPU usage, performance may not be noticeably affected by the TDE operations.

We ran some tests to measure the one-time durations taken to encrypt and decrypt an 11 GB sample PeopleSoft database. The results are as follows:

Test                 Time taken
Encrypt database     19 min 30 sec
Decrypt database     22 min 30 sec

We highly encourage you to test your application before implementing TDE in production. For more information on TDE, please refer to the Understanding Transparent Data Encryption topic in SQL Server 2008 Books Online and the TDE section in the Database Encryption in SQL Server 2008 Enterprise Edition whitepaper available at: http://msdn.microsoft.com/en-us/library/cc278098.aspx. TDE is only supported in SQL Server 2008 Enterprise and Developer editions.

3.3.2 SQL Server Audit


SQL Server Audit allows you to audit actions and statements executed against a SQL Server 2008 instance or database. Actions such as creating a new login, changing security permissions, creating or dropping tables, or issuing SQL statements to view or change data can be captured and stored by using SQL Server Audit. The database activity can be captured and stored in the following destinations:
- File
- Windows Application Log
- Windows Security Log

SQL Server Audit Components

SQL Server Audit consists of several components: the SQL Server audit, the server audit specification, the database audit specification, and the target.

SQL Server Audit: The SQL Server audit object is defined at the SQL Server instance level and is a collection of server-level or database-level actions and groups of actions to monitor. The audit destination is defined as part of the audit.

Server Audit Specification: Server-level action groups raised by the Extended Events feature are collected by the server audit specification. These actions include server operations such as management changes and logon and logoff operations.

Database Audit Specification: Database-level audit actions, such as DML and DDL changes, are part of the database audit specification.

Target: The audit results are sent to a target. The target can be a file, the Windows Security event log, or the Windows Application event log.

Configuration

The process for creating and using an audit is as follows:
1. Create an audit and define the target.
2. Create either a server audit specification or a database audit specification that maps to the audit, and enable the audit specification.
3. Enable the audit.
4. Read the audit events by using the Windows Event Viewer, Log File Viewer, or the fn_get_audit_file function.

The following example illustrates the use of an audit to capture a SELECT statement against the PSEMPLOYEE table.

Create an audit object and define the target:
-- Create the SQL Server Audit object, and send the results to a file.
CREATE SERVER AUDIT PSFT_SQL_Server_Audit
TO FILE ( FILEPATH='C:\PSFTAudit\Audit\' )
-- The queue delay is set to 1000, meaning one-second
-- intervals to write to the target.
WITH ( QUEUE_DELAY = 1000, ON_FAILURE = CONTINUE) ;
GO

Create the database audit specification and map it to the Audit object:
-- Create the database audit specification object using an audit event
-- for the HCM849.PSEMPLOYEE table.
USE HCM849 ;
GO
CREATE DATABASE AUDIT SPECIFICATION PSFT_Database_Audit_Specification
FOR SERVER AUDIT PSFT_SQL_Server_Audit
ADD (SELECT ON PSFT.EMPLOYEE BY PSFTUSER)
WITH (STATE = ON) ;
GO

Enable the audit:


-- Enable the audit.
ALTER SERVER AUDIT PSFT_SQL_Server_Audit
WITH (STATE = ON) ;
GO

You can read the audit by using the following command:


SELECT * FROM sys.fn_get_audit_file
('\\serverName\Audit\HIPPA_AUDIT.sqlaudit', default, default) ;
GO

Performance Implications

The SQL Server auditing architecture is based on Extended Events. The extended events are fired internally in the engine and usually have low overhead. The overhead and performance implications of auditing are directly related to the type and quantity of events configured for monitoring, so it is highly advisable to be selective when configuring them, especially for high-throughput systems. For more information on auditing, please refer to the Understanding SQL Server Audit and SQL Server Audit How-to Topics topics in SQL Server 2008 Books Online.

3.4 Performance Monitoring and Data Collection


Over the course of the last few years and SQL Server releases, the performance monitoring and performance data collection capabilities of SQL Server have grown significantly. SQL Server 2005 introduced the Dynamic Management Views (DMVs), which gave deep insight into the inner workings of the engine and its internal structures. DBAs developed scripts to gather and store DMV data to analyze and troubleshoot performance issues, and Microsoft released the Performance Management Dashboard reports for point-in-time performance data reporting. SQL Server 2008 further enhances this concept and introduces new features such as data collectors, the management data warehouse, and preconfigured reports to analyze and warehouse performance data and troubleshoot performance issues. This information can also be used for trend analysis and capacity planning. The sections below discuss the key new features and their applications with regard to PeopleSoft applications.

3.4.1 Data Collector and Management Data Warehouse


Performance tuning and troubleshooting is a time-consuming task that usually requires deep SQL Server skills and an understanding of the database internals. Windows System Monitor (Perfmon), SQL Server Profiler, and DMVs helped with some of this, but were often too intrusive or laborious to use, or the data was too difficult to interpret. To provide actionable performance insights, SQL Server 2008 delivers a fully extensible performance data collection and warehousing tool known as the Data Collector. The tool includes several out-of-the-box data collection agents, a centralized data repository for storing performance data called the management data warehouse (MDW), and several predefined reports to present the captured data. The Data Collector is a scalable tool that can collect and assimilate data from multiple sources, such as Dynamic Management Views (DMVs), Perfmon, and T-SQL queries, using a fully customizable data collection and assimilation frequency, and it can be extended to collect data for any measurable attribute of an application. The Data Collector uses the management data warehouse to manage and house the collected data, and performance and trend analysis reporting can be done on this data. The diagram below illustrates the interactions among the various components: the data collector, targets, the MDW, and the configuration UI and reports.



Data Collector Components and Architecture

The Data Collector architecture can be broken down into the following components:

Client Components: The user interface (UI) used to configure the data collector. SQL Server Management Studio is the main UI for configuring and managing the data collector, though all the actions can also be performed via T-SQL commands.

API Components: Enable the interaction between the UI and the data collector.

Execution Components: Components used for data collection and storage, such as SSIS and SQL Server Agent.

Storage Components: The databases that contain the configuration information and collected data. The collected data is stored in a user-defined management data warehouse (MDW) database, while the configuration is stored in the msdb system database.

PeopleSoft Application Performance Management and Tuning

You can use the Data Collector and the management data warehouse to troubleshoot typical PeopleSoft performance issues such as blocking, per-query CPU usage, missing indexes, and I/O contention.

To create a blocking-analysis collection set, you can use a T-SQL query collector type to create a custom collection set. The T-SQL query leverages the appropriate DMVs to gather blocking information such as the blocker and blockee query text, lock modes, wait types, and so on. To configure this collector set, please refer to the How to: Create a Custom Collection Set That Uses a T-SQL Query Collector Type topic in SQL Server 2008 Books Online. For a sample T-SQL query to gather blocking information, please visit the SQL Server script center at: http://www.microsoft.com/technet/scriptcenter/scripts/sql/sql2005/default.mspx?mfr=true

In addition to the custom collection sets you can create for specific troubleshooting requirements, the default system collection sets can provide valuable insight and information as well. The data collector installs three system data collection sets during the SQL Server 2008 setup process. The system collection sets provide the following information:
- Disk Usage: Disk and log usage data.
- Server Activity: SQL Server and Windows server processor and memory utilization.
- Query Statistics: Query statistics, individual query text, query plans, and specific queries.

The system data collection also provides pre-built reports to view and analyze the data:

Server Activity History Report: Overview of resource utilization and consumption and server activity.


Disk Usage Summary Report: Overview of disk space used for all databases on the server and the growth trends for the data and log file for each database.
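Returning to the blocking-analysis collection set described earlier, the custom T-SQL query collector item could be built around a query such as the following minimal sketch; production versions typically also join to sys.dm_exec_sql_text to capture the blocker and blockee statement text:

-- Sessions currently blocked by another session
SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time,
       r.wait_resource
FROM sys.dm_exec_requests AS r
WHERE r.blocking_session_id <> 0 ;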


Configuration

The full configuration of the Data Collector and the MDW is outside the scope of this paper. Please refer to the Managing Data Collection How-to Topics in SQL Server 2008 Books Online.

3.4.2 Memory Monitoring DMVs


In addition to the DMVs introduced in SQL Server 2005, SQL Server 2008 introduces five new memory monitoring DMVs:
1. sys.dm_os_memory_brokers
2. sys.dm_os_memory_nodes
3. sys.dm_os_nodes
4. sys.dm_os_process_memory
5. sys.dm_os_sys_memory

The section below discusses the two main memory-related DMVs, sys.dm_os_process_memory and sys.dm_os_sys_memory. For information on the other DMVs, please refer to SQL Server 2008 Books Online.

sys.dm_os_process_memory

This DMV can be used to get a complete picture of the process address space. The relevant memory information for PeopleSoft applications can be derived from the following columns of this DMV:

physical_memory_in_use: Process working set in KB, as reported by the operating system.

locked_page_allocations_kb: Physical memory that is allocated by using AWE APIs. This can be a good indicator for 32-bit systems using AWE.


page_fault_count: Number of page faults incurred by the SQL Server process. A large number can indicate memory pressure.

process_physical_memory_low: Indicates whether the process is responding to a low physical memory notification. This can be a good indicator of low memory conditions.

Please refer to the sys.dm_os_process_memory topic in SQL Server 2008 Books Online for a full description of this DMV.

sys.dm_os_sys_memory

This DMV reports the overall system memory usage. Specific columns such as total_physical_memory_kb and available_physical_memory_kb are good indicators of the total and available memory, and system_low_memory_signal_state can be used to determine a low memory condition. Please refer to the sys.dm_os_sys_memory topic in SQL Server 2008 Books Online for a full description of this DMV.
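A quick way to review the values described above is simply to select from the two DMVs and examine the working-set, page-fault, and low-memory columns:

-- Memory used by the SQL Server process (working set, locked pages, page faults)
SELECT * FROM sys.dm_os_process_memory ;
-- Overall system memory and the low-memory signal state
SELECT * FROM sys.dm_os_sys_memory ;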

3.4.3 Extended Events


Extended Events is an extensive event handling infrastructure built into SQL Server 2008. Extended Events gets data from execution points in the SQL Server engine and can pass it to targets outside the engine. It can capture very detailed engine-level information that can be used for performance tuning, and it provides the ability to dynamically monitor active SQL Server processes while having minimal effect on those processes. It is a very lightweight, non-intrusive, and flexible event trace system. The main advantage of Extended Events over other performance troubleshooting tools such as DMVs, SQL Trace, or Profiler is that it enables collecting detailed SQL Server execution data with very low overhead. In addition, it is extensible and can be fully configured to capture most attributes of the server.

Architecture

The main architectural components of Extended Events are described below.


Events: Monitoring points of interest in the execution of a SQL Server code path. When the point of interest is encountered, the event is fired and the state information from the event is captured. Events can be used for tracing purposes or for triggering actions; the actions can be triggered synchronously or asynchronously.

Targets: The event consumers. After an event is fired, the event data is consumed by the target. Targets can process data either synchronously or asynchronously. Extended Events can have the following targets: event bucketing, event pairing, Event Tracing for Windows (ETW), event file, synchronous event counter, and ring buffer.

Actions: The programmatic response or series of responses to an event. Some examples of actions are: stack dumper, execution plan detection (SQL Server only), T-SQL stack collection (SQL Server only), run-time statistics calculation, and gathering user input on exception.

Types: The Type object encapsulates the information required to interpret the event data.

Predicates: A set of logical rules used to evaluate events when they are processed. They can be used to selectively capture event data based on specific criteria.

Maps: A table that maps internal values to a descriptive string.

Configuration

Extended Events can be very useful for troubleshooting PeopleSoft application performance issues. The following example illustrates a code sample to:
1. Create an event session.
2. Write the target output to a file.
3. Select the event data from the file.

Create an event session and write to a target file:
create event session xsession_HighCpu on server
ADD EVENT sqlserver.sql_statement_completed
    (action (sqlserver.sql_text) WHERE duration > 0),
ADD EVENT sqlserver.sp_statement_completed
    (action (sqlserver.sql_text) WHERE duration > 0)
add target package0.asynchronous_file_target
    (SET filename=N'C:\temp\wait_stats.xel',
         metadatafile=N'C:\temp\wait_stats.xem') ;

--- Start the session
alter event session xsession_HighCpu on server state = start ;

Select Event Data from the file


select top 3
    CONVERT(xml, event_data).value('(/event/data/value)[4]','int') as 'cpu',
    CONVERT(xml, event_data).value('(/event/data/value)[5]','int') as 'duration',
    CONVERT(xml, event_data).value('(/event/data/value)[6]','int') as 'reads',
    CONVERT(xml, event_data).value('(/event/data/value)[7]','int') as 'writes',
    CONVERT(xml, event_data).value('(/event/action/value)[1]','nvarchar(max)') as 'batch'
from sys.fn_xe_file_target_read_file
    ('C:\temp\wait_stats*.xel', 'C:\temp\wait_stats*.xem', null, null)
order by CONVERT(xml, event_data).value('(/event/data/value)[4]','int') desc

Drop the Event


DROP EVENT SESSION xsession_HighCpu ON SERVER ; GO

For more information on Extended Events, please refer to the SQL Server Extended Events topic in SQL Server 2008 Books Online.

3.4.4 Query and Query Plan Hashes


SQL Server 2008 introduces query and query plan hashes, which can be used to easily identify and tune similar queries that may collectively consume a large amount of system resources.

Query Hash

The query hash is a binary hash value calculated on the query by the query optimizer during query optimization and stored in the sys.dm_exec_query_stats and sys.dm_exec_requests DMVs as the query_hash column. Similar queries have the same hash value; if two queries differ only by a literal, the hash value is still the same. The following are examples of queries that return the same hash value:
SELECT 'x' FROM PS_CUST_CONVER
WHERE SETID = 'MFG' AND CUST_ID = 'Z00000000022689' ;

and the query:


SELECT 'x' FROM PS_CUST_CONVER WHERE SETID = 'MFG' AND CUST_ID = 'Z00000000083589' ;

If queries differ in structure in any way other than literals or parameter values, the hash values will be different. In the example below, the two queries have different hash values, since the first query uses AND and the second uses OR:
SELECT 'x' FROM PS_CUST_CONVER WHERE SETID = 'MFG' AND CUST_ID = 'Z00000000022689' ;

and the query:


SELECT 'x' FROM PS_CUST_CONVER WHERE SETID = 'MFG' OR CUST_ID = 'Z00000000083589' ;

Query Plan Hash

The query plan hash is a binary hash value computed on the query execution plan during the query compilation phase. The query plan hash is calculated based on the logical and physical operators and other important operator attributes. Query plan hash values are the same for queries that have the same physical and logical operator tree structure and identical values for the important operator attributes. It is possible that for some queries with varied parameter values, the query hash is the same but the query plan hash is different. In the example below, the two queries have the same query hash value but different query plan hash values. This is because the optimizer chooses a different execution plan for each query, based on the cardinality of the data distribution for the parameter values:
SELECT BUSINESS_UNIT, RECEIVER_ID, BILL_OF_LADING
FROM PS_RECV_INQ_SRCH
WHERE BUSINESS_UNIT = 'PO001'
AND RECEIPT_DT BETWEEN '2006-01-01' AND '2006-01-05' ;

and the query:


SELECT BUSINESS_UNIT, RECEIVER_ID, BILL_OF_LADING
FROM PS_RECV_INQ_SRCH
WHERE BUSINESS_UNIT = 'PO001'
AND RECEIPT_DT BETWEEN '2006-01-01' AND '2008-01-05' ;

In the queries above, since the RECEIPT_DT values are so vastly different for the two queries, the optimizer may choose different execution plans for each. You can use the following code to find the query hash and the query plan hash for the above two queries:
-- Show the query hash and query plan hash
SELECT ST.text AS "Query Text",
       QS.query_hash AS "Query Hash",
       QS.query_plan_hash AS "Query Plan Hash"
FROM sys.dm_exec_query_stats QS
CROSS APPLY sys.dm_exec_sql_text (QS.sql_handle) ST
WHERE ST.text = 'SELECT BUSINESS_UNIT, RECEIVER_ID, BILL_OF_LADING FROM PS_RECV_INQ_SRCH WHERE BUSINESS_UNIT = ''PO001'' AND RECEIPT_DT BETWEEN ''2006-01-01'' AND ''2006-01-05'';'
   OR ST.text = 'SELECT BUSINESS_UNIT, RECEIVER_ID, BILL_OF_LADING FROM PS_RECV_INQ_SRCH WHERE BUSINESS_UNIT = ''PO001'' AND RECEIPT_DT BETWEEN ''2006-01-01'' AND ''2008-01-05'';' ;
GO

Performance Tuning Using Query Hash and Query Plan Hash

The query hash and query plan hash can be a very powerful and effective performance tuning technique. Some practical applications for performance tuning are as follows:

Cumulative Query Cost: You may at times face a high CPU utilization issue on your database server where no single large query is responsible; instead, many small queries cause a cumulatively high CPU utilization. In this scenario, the query hash can be used to group those queries together, as shown in the code below, taken from SQL Server 2008 Books Online:
-- Aggregated view of top-5 queries according to average CPU time.
SELECT TOP 5
    query_stats.query_hash AS "Query Hash",
    SUM(query_stats.total_worker_time) / SUM(query_stats.execution_count) AS "Avg CPU Time",
    MIN(query_stats.statement_text) AS "Statement Text"
FROM (SELECT QS.*,
             SUBSTRING(ST.text, (QS.statement_start_offset/2) + 1,
                 ((CASE statement_end_offset
                       WHEN -1 THEN DATALENGTH(ST.text)
                       ELSE QS.statement_end_offset
                   END - QS.statement_start_offset)/2) + 1) AS statement_text
      FROM sys.dm_exec_query_stats AS QS
      CROSS APPLY sys.dm_exec_sql_text(QS.sql_handle) as ST) as query_stats
GROUP BY query_stats.query_hash
ORDER BY 2 DESC ;
GO

The following example returns information about the top five query plans according to average CPU time. This example aggregates the queries according to their query plan hash so that queries with the same query plan hash are grouped by their cumulative resource consumption:
SELECT TOP 5
    query_plan_hash AS "Query Plan Hash",
    SUM(total_worker_time)/SUM(execution_count) AS "Avg CPU Time",
    MIN(CAST(query_plan as varchar(max))) AS "ShowPlan XML"
FROM sys.dm_exec_query_stats AS QS
CROSS APPLY sys.dm_exec_query_plan(QS.plan_handle)
GROUP BY query_plan_hash
ORDER BY 2 DESC ;
GO

Baseline Query Plan Benchmarks: The Query Hash and Query Plan Hashes could be used as an effective tool to benchmark baseline query plans. You can run a stress test and capture the query plan hashes for the important and frequently executing queries. These hash values could then be compared to hash values on the production server, if a performance issue is noticed. Another application of this is also to monitor plan changes due to configuration or hardware changes. The baseline hash value of important queries can be recorded before the change and can be compared with the hash values after the change. This would help determine if any plans got changed.

3.5 Query Performance Optimization


SQL Server 2008 builds on the significant query performance optimization improvements and new features introduced in SQL Server 2005. Features such as the snapshot isolation level are further optimized in SQL Server 2008, and new features are introduced to easily force query execution plans and to fine-tune ad hoc workloads.


3.5.1 Plan Freezing


SQL Server 2008 builds on the plan guides mechanism introduced in SQL Server 2005 in two ways: it expands the support for plan guides to cover all DML statements (INSERT, UPDATE, DELETE, MERGE), and introduces a new feature, Plan Freezing, that can be used to directly create a plan guide (freeze) for any query plan that exists in the SQL Server plan cache, for example:
sp_create_plan_guide_from_handle @name = N'MyQueryPlan', @plan_handle = @plan_handle, @statement_start_offset = @offset;
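In the fragment above, @plan_handle and @offset are placeholders for values taken from the plan cache. A fuller, illustrative sketch that looks up a cached PeopleSoft statement and freezes its current plan might look like the following; the LIKE filter is an assumption and should be replaced with the specific statement you want to freeze:

DECLARE @plan_handle varbinary(64), @offset int ;

SELECT TOP 1 @plan_handle = qs.plan_handle,
             @offset      = qs.statement_start_offset
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
WHERE st.text LIKE '%PS_RECV_INQ_SRCH%' ;

EXEC sp_create_plan_guide_from_handle
    @name = N'MyQueryPlan',
    @plan_handle = @plan_handle,
    @statement_start_offset = @offset ;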

A plan guide created by either means has database scope and is stored in the sys.plan_guides table. Plan guides are only used to influence the query plan selection process of the optimizer and do not eliminate the need for the query to be compiled. A new function, sys.fn_validate_plan_guide, has also been introduced to validate existing plan guides, which you may have created for your PeopleSoft workloads running on SQL Server 2005, and to ensure their compatibility with SQL Server 2008. Plan freezing is available in the SQL Server 2008 Standard, Enterprise, and Developer editions.

3.5.2 Optimize for Ad hoc Workloads Option


SQL Server 2008 introduces a new option called optimize for ad hoc workloads, which is used to improve the efficiency of the plan cache. When this option is set to 1, the SQL Server engine stores a small stub for the compiled ad hoc plan in the plan cache, instead of the entire compiled plan, when a batch is compiled for the first time. The compiled plan stub indicates that the ad hoc batch has been compiled before; when the batch is invoked again, the database engine compiles it, removes the compiled plan stub from the plan cache, and replaces it with the full compiled plan. This mechanism helps to relieve memory pressure by not allowing the plan cache to become filled with large compiled plans that are not reused.

Unlike the Forced Parameterization option, optimize for ad hoc workloads does not parameterize the query plan and therefore does not save processor cycles by eliminating compilations. For PeopleSoft applications, we recommend enabling the Forced Parameterization option, as discussed in section 2.6.3. When Forced Parameterization is enabled, it in effect overrides the optimize for ad hoc workloads option, since the optimizer creates a parameterized plan and caches it once; the stub will not be cached due to Forced Parameterization. By default, the optimize for ad hoc workloads option is set to 0, indicating do not optimize for ad hoc workloads. We recommend leaving this option set to 0 for PeopleSoft applications. You can check the setting of this option by using the command:


sp_configure 'optimize for ad hoc workloads';

A config_value and run_value of 0 is desirable.

3.5.3 Lock Escalation


Lock escalation has often caused blocking and sometimes even deadlocking problems for PeopleSoft applications. Previous versions of SQL Server permitted controlling lock escalation (trace flags 1211 and 1224), but only at an instance-level granularity. While this helped work around the problem, it at times had adverse side-effects on other tables. Another problem with the SQL Server 2005 lock escalation algorithm was that locks on partitioned tables were escalated directly to the table level rather than the partition level. SQL Server 2008 offers a solution for both these issues. A new ALTER TABLE option has been introduced to control lock escalation at the table level: locks can be specified to not escalate at all, or to escalate to the partition level for partitioned tables. For example:
ALTER TABLE PSOPERDEFN SET (LOCK_ESCALATION = DISABLE)
GO
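For partitioned tables, escalation can instead be directed to the partition level rather than the whole table. A minimal sketch, assuming PS_LEDGER has been partitioned as described in section 4.1:

ALTER TABLE PS_LEDGER SET (LOCK_ESCALATION = AUTO)
GO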

Both these enhancements help improve the scalability and performance without having negative side-effects on other objects in the instance. Lock escalation is supported in all editions of SQL Server 2008.

3.6 Hardware Optimizations


The SQL Server 2008 database engine introduces some hardware-related optimizations such as Hot Add CPU and NUMA support. The sections below describe some of these capabilities and recommendations for PeopleSoft applications.

3.6.1 Hot Add CPU


Hot Add CPU is the ability to dynamically add CPUs to a running system. Prior versions of SQL Server did not have this ability, and the system had to be stopped and restarted for a new CPU to be added and utilized. A CPU can be added to a system in a few different ways:
- Physically: adding a new physical CPU (hardware addition)
- Logically: online hardware partitioning
- Virtually: through virtualization technologies

SQL Server 2008 Enterprise Edition supports all three ways of adding a CPU. It is important to note that Hot Add CPU is not just a SQL Server feature; it also depends on hardware and operating system requirements, such as:
- A 64-bit edition of Windows Server 2008 Datacenter or Windows Server 2008 Enterprise Edition for Itanium-Based Systems
- Inherent hardware capability to support Hot Add CPU

For SQL Server 2008 to be able to Hot Add CPU, it cannot be configured to use soft-NUMA.

3.6.2 NUMA
Non-Uniform Memory Access (NUMA) is a memory design used with multiprocessor servers. In a NUMA system, each group of CPUs has its own local memory that it can access with low latency, while still being able to access the memory of other groups in a coherent way; this reduces memory contention and improves scalability. The main benefit of NUMA is scalability, especially for large multiprocessor machines. A full discussion of NUMA is beyond the scope of this whitepaper; please refer to the SQL Server 2008 Books Online topics Understanding Non-uniform Memory Access and How SQL Server Supports NUMA for an in-depth discussion.

SQL Server 2008 and some earlier versions (SQL Server 2000 SP3 and beyond) are NUMA aware, and some key changes were introduced in SQL Server 2005 for NUMA support. SQL Server has been designed for NUMA hardware and no configuration changes are required; it performs well on NUMA hardware without special configuration.

Hardware and Soft-NUMA Support

SQL Server supports hardware NUMA and soft-NUMA. For hardware NUMA, SQL Server configures itself during startup based on the underlying operating system and hardware configuration. For soft-NUMA, SQL Server needs to be explicitly configured before it can use soft-NUMA; please refer to the SQL Server 2008 Books Online topic How to: Configure SQL Server to Use Soft-NUMA for soft-NUMA configuration.
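You can confirm how SQL Server has mapped schedulers and memory to NUMA nodes on a given server by querying sys.dm_os_nodes (one of the DMVs listed in section 3.4.2), for example:

SELECT node_id,
       node_state_desc,
       memory_node_id
FROM sys.dm_os_nodes ;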


4 Database Maintenance
The following sections discuss issues of database maintenance, such as managing indexes, detecting and reducing fragmentation, using database statistics, and controlling locking behavior.

4.1 Table and Index Partitioning


For large PeopleSoft installations, table and index sizes can grow over time to become unmanageable, easily reaching tens of millions of rows. SQL Server 2008 offers table and index partitioning, which can help partition this data. The data in partitioned tables and indexes is horizontally partitioned into units that can be spread across one or more filegroups in the database. You can choose an appropriate partitioning scheme to make large tables and indexes more manageable and scalable.

For PeopleSoft applications, table and index partitioning can reduce the time for management and maintenance and may also help improve scalability and performance. Maintenance operations, such as index reorganizations and rebuilds, can be performed on a specific partition, thereby optimizing the maintenance process. With an efficient partitioning design, maintenance operations can be significantly reduced. In this situation, the partitioning design would most likely create partitions based on the static (data that does not change or seldom changes) and dynamic (data that is affected or changed very frequently) nature of the data. All static data is stored in specific partition(s) and dynamic data is stored in other partition(s). Maintenance operations are performed only against the dynamic data partition, affecting only a very small portion of the data in the table. Backups can also be performed for the dynamic partition data only.

Note: You cannot directly back up a partition; however, you can place a partition on a specific filegroup and back up just that filegroup.

Performance and Scalability

Queries with an equi-join on two or more partitioned tables can show improvements in performance if their partitioning columns are the same as the columns on which the tables are joined. The SQL Server query optimizer can process the join faster, because the partitions themselves can be joined. It is important to note that the partitioning function should be the same for the tables. Performance may also be improved for queries that access only a specific partition of the table, assuming that all the data required to process the query is contained within the same partition. Because the index size and the index tree depth are smaller compared to a similar-sized non-partitioned table, the query execution will be more efficient.

What follows is an example of how to partition a table or index for PeopleSoft applications. The example explains the steps required to create partitions; the choice of tables and the columns to partition on will depend on your PeopleSoft application and specific scenario.

Step 1 - Create a Partition Function

A partition function specifies how the table or index is partitioned. The function helps divide the data into a set of partitions. The following example maps the rows of a table or index into partitions based on the values of a specified column:
CREATE PARTITION FUNCTION AcctRangePF1 (char(10)) AS RANGE LEFT FOR VALUES ( '1000', '2000', '3000', '4000') ;

Based on this function, the table to which it is applied will be divided into five partitions as shown below (col1 is the partitioning column):

Partition | Values
1 | col1 <= '1000'
2 | col1 > '1000' AND col1 <= '2000'
3 | col1 > '2000' AND col1 <= '3000'
4 | col1 > '3000' AND col1 <= '4000'
5 | col1 > '4000'

Step 2 - Create a Partition Scheme

A partition scheme maps the partitions produced by a partition function to a set of filegroups that you define. The following example creates a partition scheme that specifies the filegroups to hold each of the five partitions. This example assumes the filegroups already exist in the database.
CREATE PARTITION SCHEME AcctRangePS1 AS PARTITION AcctRangePF1 TO (HR1fg, HR2fg, HR3fg, HR4fg, HR5fg) ;

Step 3 - Create a Table or Index Using the Partition Scheme

The example below creates the PS_LEDGER table using the partition scheme defined in Step 2.
CREATE TABLE PS_LEDGER (
    [BUSINESS_UNIT] [char](5) COLLATE Latin1_General_BIN NOT NULL,
    [LEDGER] [char](10) COLLATE Latin1_General_BIN NOT NULL,
    [ACCOUNT] [char](10) COLLATE Latin1_General_BIN NOT NULL,
    [ALTACCT] [char](10) COLLATE Latin1_General_BIN NOT NULL
    -- (remaining PS_LEDGER columns omitted for brevity)
) ON AcctRangePS1 (ACCOUNT) ;
GO

The PS_LEDGER table will be created on the five partitions based on the partitioning function and the scheme created in Steps 1 and 2, respectively. The following table shows the partitions for PS_LEDGER based on the previous examples:

FileGroup | Partition | Values
HR1fg | 1 | ACCOUNT <= '1000'
HR2fg | 2 | ACCOUNT > '1000' AND ACCOUNT <= '2000'
HR3fg | 3 | ACCOUNT > '2000' AND ACCOUNT <= '3000'
HR4fg | 4 | ACCOUNT > '3000' AND ACCOUNT <= '4000'
HR5fg | 5 | ACCOUNT > '4000'
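Once the table is loaded, the partition layout can be verified directly. The following is a sketch, assuming the AcctRangePF1 function and PS_LEDGER table from the example above:

-- Which partition would a given ACCOUNT value fall into?
SELECT $PARTITION.AcctRangePF1('2500') AS partition_number ;
GO

-- Row counts per partition for PS_LEDGER (index_id 0 or 1 covers the heap or clustered index).
SELECT partition_number, rows
FROM sys.partitions
WHERE object_id = OBJECT_ID('PS_LEDGER')
  AND index_id IN (0, 1) ;
GO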

It is strongly recommended that you evaluate whether to partition at all, and which tables to partition, based on your specific PeopleSoft application scenario and requirements. In most scenarios, partitioning is primarily beneficial for management and maintenance; in some specific scenarios it can yield a performance improvement as well. However, if the tables involved in a query are not joined on the partitioning key and are not partitioned by the same partitioning function, or, for a single-table query, if all the data required for the query is not co-located on the same partition, performance may be negatively impacted.

For more information about table and index partitioning in SQL Server 2008, refer to the topic Partitioned Tables and Indexes in SQL Server 2008 Books Online.

4.2 Managing Indexes


SQL Server maintains indexes automatically. However, indexes can become fragmented over time. Fragmentation can be of two types: internal and external.

Internal fragmentation occurs when large amounts of data are deleted and pages are left less full. A certain amount of free space on index pages is beneficial because it leaves room for future inserts without having to split a page; however, when the free space is excessive, page density decreases and more pages must be read to retrieve the same data.

External fragmentation occurs when the next logical page of an index is not the next physical page. This may impact performance when SQL Server is doing an ordered scan of all or part of a table or an index: access by a range of values is no longer sequential, limiting the ability of the storage engine to issue large I/O requests.

4.2.1 Parallel Index Operations


For PeopleSoft applications, creating or maintaining indexes on large tables can benefit tremendously when the index creation or maintenance operation is performed on multiple processors in parallel. SQL Server 2008 provides the MAXDOP query hint to manually specify the number of processors used to run the index statement. When used, the MAXDOP value specified on the query overrides the instance-wide max degree of parallelism configuration value.

Parallel index execution and the MAXDOP index option can be used for the following operations:
o CREATE INDEX
o ALTER INDEX REBUILD
o DROP INDEX (for clustered indexes only)
o ALTER TABLE ADD (index) CONSTRAINT
o ALTER TABLE DROP (clustered index) CONSTRAINT

Note: Parallel index operations are available only in SQL Server 2008 Enterprise and Developer editions.

Following is an example of using the MAXDOP index option with an ALTER INDEX statement:
ALTER INDEX PSALEDGER ON dbo.PS_LEDGER REBUILD WITH (MAXDOP = 4) ;

As mentioned in Chapter 2, for PeopleSoft applications it is recommended to set the server-wide MAXDOP setting to 1. However, for better performance and CPU resource utilization during index maintenance operations, this setting should either be temporarily increased or overridden by using the MAXDOP query hint as shown above.

4.2.2 Index-Related Dynamic Management Views


SQL Server 2008 provides the ability to query and return server and database state information to monitor the health of a server instance, diagnose problems, and tune performance through dynamic management views. The following sections describe the index-specific dynamic management views that can be used for monitoring and tuning indexes.

4.2.2.1 Identify Frequently Used Indexes


The sys.dm_db_index_usage_stats dynamic management view returns information on the usage frequency of an index. The following example query can be used to return information on the frequency of seeks, scans, and lookups by user queries on all indexes for all user tables in a specific database:
select db_name(database_id) as 'DB Name',
       object_name(isu.object_id) as 'Table Name',
       si.name as 'Index Name',
       user_seeks as 'Seeks',
       user_scans as 'Scans',
       user_lookups as 'Lookups'
from sys.dm_db_index_usage_stats isu
inner join sys.indexes si
        on si.index_id = isu.index_id and si.object_id = isu.object_id
inner join sys.objects so
        on so.object_id = si.object_id and so.type = 'U' ;

For more information on this dynamic management view, see the topic sys.dm_db_index_usage_stats in SQL Server 2008 Books Online. The sys.dm_db_index_usage_stats dynamic management view or the query using it can be used in PeopleSoft applications to retrieve information on the index usage statistics of the database. This information can help to evaluate index usage and plan for index maintenance operations as well. It is important to note that the information in this dynamic management view is cleared out when the SQL Server service is started.

4.2.2.2 Identify Missing Indexes


The SQL Server 2008 query optimizer has the ability to identify missing indexes for queries. When the query optimizer generates a query plan, it analyzes the best indexes for a particular filter condition. If the best indexes do not exist, the query optimizer generates a query plan based on available indexes, but stores information about the desired missing indexes. This information can be retrieved from the dynamic management views and analyzed to improve indexing and query performance. The following four dynamic management views help identify missing index information:

sys.dm_db_missing_index_group_stats: Returns summary information about missing index groups, for example, the performance improvements that could be gained by implementing a specific group of missing indexes.

sys.dm_db_missing_index_groups: Returns information about a specific group of missing indexes, such as the group identifier and the identifiers of all missing indexes that are contained in that group.

sys.dm_db_missing_index_details: Returns detailed information about a missing index; for example, it returns the name and identifier of the table where the index is missing, and the columns and column types that should make up the missing index.

sys.dm_db_missing_index_columns: Returns information about the database table columns that are missing an index.


The information in these dynamic management views is reset when the SQL Server service is restarted. For more details about these dynamic management views and the Missing Indexes feature, see the topic About the Missing Indexes Feature in SQL Server 2008 Books Online. The following example query can be used to identify missing index information for PeopleSoft applications:
select d.*,
       s.avg_total_user_cost,
       s.avg_user_impact,
       s.last_user_seek,
       s.unique_compiles
from sys.dm_db_missing_index_group_stats s,
     sys.dm_db_missing_index_groups g,
     sys.dm_db_missing_index_details d
where s.group_handle = g.index_group_handle
  and d.index_handle = g.index_handle
order by s.avg_user_impact desc
go

-- suggested index columns & usage
declare @handle int

select @handle = d.index_handle
from sys.dm_db_missing_index_group_stats s,
     sys.dm_db_missing_index_groups g,
     sys.dm_db_missing_index_details d
where s.group_handle = g.index_group_handle
  and d.index_handle = g.index_handle

select *
from sys.dm_db_missing_index_columns(@handle)
order by column_id ;

It is highly recommended for PeopleSoft applications that you do a thorough analysis of the missing index data before creating any new indexes. Adding indexes may help improve query performance; however, keep in mind that adding indexes, especially on highly volatile columns, can have a significant negative impact on performance because of the extra processing needed to maintain them. It is recommended to use only PeopleSoft Application Designer to create any new indexes.

4.2.2.3 Identify Indexes Not Used to a Point in Time


Indexes that are created in the database, but have not been used by any query until a point in time can be identified with the sys.dm_db_index_usage_stats dynamic management view. The following example query identifies unused indexes:
select object_name(i.object_id), i.name
from sys.indexes i, sys.objects o
where i.index_id NOT IN (select s.index_id
                         from sys.dm_db_index_usage_stats s
                         where s.object_id = i.object_id
                           and i.index_id = s.index_id
                           and database_id = db_id('PSFTDB'))
  and o.type = 'U'
  and o.object_id = i.object_id
order by object_name(i.object_id) asc ;

For PeopleSoft applications, it is important to understand that some indexes could be used quite infrequently, yet still be critical for the performance of some specific functionality. For example, a batch process could run monthly, quarterly, or even annually. An index identified by the previous example query may appear never to be used, but a batch process that is scheduled to run could be using it. Deleting such an index could have adverse effects on the performance of scheduled (but not yet run) batch processes. Though deleting indexes that are never used may lower overhead, a thorough analysis should be made before deleting any index. If in doubt, it is best to disable an unused index, as explained below, rather than delete it.

Note: The dynamic management view used to identify unused indexes is cleared out when SQL Server is restarted; therefore, the information revealed by the previous query covers the period from the last time the SQL Server instance was restarted to the point in time the dynamic management view query is executed. To get an accurate view of the indexes that are not used over a longer period of time, and across SQL Server restarts, it is recommended to store snapshots of the query output over a period of time and then analyze the aggregated data.

4.2.3 Disabling Indexes


SQL Server 2008 provides the ability to disable a non-clustered or clustered index which is very useful for maintenance and performance troubleshooting purposes. When an index is disabled, the index definition metadata and index statistics are retained. Disabling a non-clustered index physically deletes the index data. The disk space made available when data is deleted can be used for subsequent index rebuilds or other operations. When a non-clustered index is disabled, the rebuild operation requires enough temporary disk space to store both the old and new index.

Disabling a clustered index on a table prevents access to the data; the data still remains in the table, but is unavailable for DML operations until the index is dropped or rebuilt. Since this would prevent the PeopleSoft application from accessing the table, it is not advisable to disable clustered indexes.

For PeopleSoft applications, you can disable non-clustered indexes for the following reasons:
o To correct I/O errors and then rebuild an index.
o To temporarily remove the index for performance troubleshooting purposes.
o To optimize space while rebuilding other indexes.

The following example shows how to disable an index:
ALTER INDEX PSCLEDGER ON dbo.PS_LEDGER DISABLE ; GO

To re-enable a disabled index, rebuild it. For example:

ALTER INDEX PSCLEDGER ON dbo.PS_LEDGER REBUILD ;
GO

or re-create it with the DROP_EXISTING option (the full index definition must be specified):

CREATE INDEX index_name ON table_name (column_list) WITH (DROP_EXISTING = ON) ;
GO

Make sure to evaluate each index carefully before disabling it. Some indexes may be required for monthly, quarterly, or year-end processes. Disabling infrequently used indexes could cause performance issues for those processes.

4.3 Detecting Fragmentation


In SQL Server 2008 fragmentation can be detected by using the index-related dynamic management view sys.dm_db_index_physical_stats. When the dynamic management view is executed, the fragmentation level of an index or heap is shown in the avg_fragmentation_in_percent column. For heaps, the value represents the extent fragmentation of the heap. For indexes, the value represents the logical fragmentation of the index. The following example query demonstrates how the dynamic management view returns fragmentation information:
SELECT database_id,object_id, index_id, index_type_desc, index_depth, index_level, avg_fragmentation_in_percent, fragment_count, avg_fragment_size_in_pages FROM sys.dm_db_index_physical_stats (DB_ID(N'YourPSFTdbName'), OBJECT_ID(N'PSFTtableName'), NULL, NULL , NULL) ;

The sys.dm_db_index_physical_stats dynamic management function replaces the DBCC SHOWCONTIG statement found in earlier versions of SQL Server. Unlike DBCC SHOWCONTIG, its fragmentation calculation algorithms consider storage that spans multiple files and are therefore more accurate. As a result, the fragmentation values it reports may appear to be higher. For PeopleSoft applications, the value for avg_fragmentation_in_percent should ideally be as close to zero as possible for maximum performance. However, values up to 15 percent are acceptable.


For more information about sys.dm_db_index_physical_stats, refer to the topic sys.dm_db_index_physical_stats in SQL Server 2008 Books Online.

4.4 Reducing Fragmentation


In SQL Server 2008, there are three ways to reduce fragmentation:
o Drop and re-create the index
o Reorganize the index
o Rebuild the index

The CREATE INDEX ... WITH (DROP_EXISTING = ON) statement can be used to drop and re-create the index. The following example demonstrates using the CREATE INDEX statement to drop and re-create an index:
CREATE CLUSTERED INDEX PS_LEDGER ON PS_LEDGER(BUSINESS_UNIT, OPERATING_UNIT, FISCAL_YEAR, ACCOUNTING_PERIOD, LEDGER, ACCOUNT, ALTACCT, DEPTID, PRODUCT, PROJECT_ID, AFFILIATE, CURRENCY_CD, STATISTICS_CODE, FUND_CODE, CLASS_FLD, MSSCONCATCOL) WITH (DROP_EXISTING = ON) ;

Use the DROP_EXISTING option to change the characteristics of an index or to rebuild indexes without having to drop the index and re-create it. The benefit of using the DROP_EXISTING option is that you can modify indexes created with PRIMARY KEY or UNIQUE constraints. This option performs the following:
o Removes all fragmentation.
o Reestablishes FILLFACTOR/PAD_INDEX.
o Recalculates index statistics.

The second method to reduce fragmentation is to reorganize the index. To reorganize the index, use the ALTER INDEX REORGANIZE statement. It is the replacement for DBCC INDEXDEFRAG, and it will reorder the leaf-level pages of the index in a logical order. Use this option to perform online logical index defragmentation. This operation can be interrupted without losing work that has already been completed. The drawback in this method is that it does not do as good a job of reorganizing the data as an index rebuild operation and it does not update statistics. The following example demonstrates the ALTER INDEX REORGANIZE statement:
ALTER INDEX PS_LEDGER ON PS_LEDGER REORGANIZE; GO


The third method to reduce fragmentation is to rebuild the index. To do so, use the ALTER INDEX REBUILD statement. It is the replacement for DBCC DBREINDEX and it will rebuild the index online or offline. Use this option to:
o Remove heavy fragmentation.
o Rebuild the physical index online or offline.

Unlike a reorganize, an index rebuild also updates the index statistics. The following example demonstrates the ALTER INDEX REBUILD statement:
ALTER INDEX PS_LEDGER ON PS_LEDGER REBUILD; GO

In general, when the avg_fragmentation_in_percent value is between 5 and 30 percent, the ALTER INDEX REORGANIZE statement can be used to remove fragmentation. For heavy fragmentation (more than 30 percent) the ALTER INDEX REBUILD or CREATE INDEX WITH DROP_EXISTING statements are recommended. Use the following guidelines to decide between the two options.

Functionality | ALTER INDEX REBUILD | CREATE INDEX WITH DROP_EXISTING
Index definition can be changed by adding or removing key columns, changing column order, or changing the column sort order.* | No | Yes
Index options can be set or modified. | Yes | Yes
More than one index can be rebuilt in a single transaction. | Yes | No
Most index types can be rebuilt online without blocking running queries or updates. | Yes | Yes
Partitioned index can be repartitioned. | No | Yes
Index can be moved to another filegroup. | No | Yes
Additional temporary disk space is required. | Yes | Yes
Rebuilding a clustered index rebuilds associated non-clustered indexes. | No, unless the keyword ALL is specified. | No, unless the index definition is changed.
Indexes enforcing PRIMARY KEY and UNIQUE constraints can be rebuilt without dropping and re-creating the constraints. | Yes | Yes**
Single index partition can be rebuilt. | Yes | No


* A non-clustered index can be converted to a clustered index type by specifying CLUSTERED in the index definition. This operation must be performed with the ONLINE option set to OFF. Conversion from clustered to non-clustered is not supported regardless of the ONLINE setting.

** If the index is re-created by using the same name, columns and sort order, the sort operation may be omitted. The rebuild operation checks that the rows are sorted while building the index. [4]

Fragmentation alone is not a sufficient reason to reorganize or rebuild an index. The main effect of fragmentation is that it slows down page read-ahead throughput during index scans, which causes slower response times. It is also not recommended to remove fragmentation of 5 percent or less, since depending on the index size, the cost may outweigh the benefit.

4.4.1 Online Index Reorganization


For most PeopleSoft applications, high application uptime and availability are expected. Database maintenance operations can sometimes affect the uptime requirement, because the database may have to be taken offline for database maintenance. SQL Server 2008 provides capabilities for database maintenance, specifically index reorganizations to be done online without affecting application uptime and availability.

4.4.1.1 Online Operations


In SQL Server 2008 you can create, rebuild, or drop indexes online. The ONLINE option allows concurrent user access to the underlying table or clustered index data and any associated non-clustered indexes during these index operations. When the indexes are being built with the ONLINE option, concurrent user access to query and modify the underlying table data is permitted. The ONLINE option is available in the following T-SQL statements:
o CREATE INDEX
o ALTER INDEX
o DROP INDEX
o ALTER TABLE (to add or drop UNIQUE or PRIMARY KEY constraints with the CLUSTERED index option)

The following example demonstrates an online index rebuild operation:
ALTER INDEX PS_LEDGER ON PS_LEDGER REBUILD WITH (ONLINE = ON) ; GO

[4] Some text in this section, including the table and associated notes, is taken from the following source: Microsoft SQL Server 2008 Books Online, Reorganizing and Rebuilding Indexes, Microsoft Corporation. http://msdn2.microsoft.com/en-us/library/ms189858.aspx

In the following example, all indexes on the PS_LEDGER table are rebuilt online.
ALTER INDEX ALL ON PS_LEDGER REBUILD WITH (ONLINE = ON) ; GO

When you perform online index operations, the following guidelines apply:
o The underlying table cannot be modified, truncated, or dropped while an online index operation is in process.
o Clustered indexes must be created, rebuilt, or dropped offline when the underlying table contains large object (LOB) data types: image, ntext, text, varchar(max), nvarchar(max), varbinary(max), and xml.
o Non-unique non-clustered indexes can be created online when the table contains LOB data types, but only if none of these columns are used in the index definition as either key or non-key (included) columns. Non-clustered indexes defined with LOB data type columns must be created or rebuilt offline.
o You can perform concurrent online index operations on the same table only when doing the following:
  - Creating multiple non-clustered indexes.
  - Reorganizing different indexes on the same table.
  - Reorganizing different indexes while rebuilding non-overlapping indexes on the same table.

All other online index operations performed at the same time fail. For example, you cannot rebuild two or more indexes on the same table concurrently, or create a new index while rebuilding an existing index on the same table.

4.4.1.2 Disk Space Considerations


Additional temporary disk space is required for online operations. If a clustered index is created, rebuilt, or dropped online, a temporary non-clustered index is created to map old bookmarks to new bookmarks. If the SORT_IN_TEMPDB option is set to ON, this temporary index is created in tempdb. If SORT_IN_TEMPDB is set to OFF, the same filegroup or partition scheme as the target index is used. The temporary mapping index contains one record for each row in the table, and its content is the union of the old and new bookmark columns, including uniqueifiers and record identifiers, and including only a single copy of any column used in both bookmarks. Online index operations use row versioning to isolate the index operation from the effects of modifications made by other transactions. This avoids the need for requesting share locks on rows that have been read.
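As an illustration of the SORT_IN_TEMPDB option discussed above, the following is a sketch of an online rebuild that places the sort and mapping work in tempdb, using the PS_LEDGER table from the earlier examples:

-- Rebuild all indexes online and let the intermediate sort and mapping index use tempdb.
ALTER INDEX ALL ON PS_LEDGER REBUILD WITH (ONLINE = ON, SORT_IN_TEMPDB = ON) ;
GO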


4.4.1.3 Performance Considerations


Online index operations are typically slower than equivalent offline index operations due to the extra steps required to allow online operations. Heavy update activity may further impede the online index operation as well. Online index operations are fully logged, potentially leading to decreased performance compared to bulk-logged operations. Because both the source and target structures are maintained during the online index operation, the resource usage for insert, update, and delete transactions is increased, potentially up to double the amount of usage. This could cause a decrease in performance and greater resource usage, especially CPU time, during the index operation. For PeopleSoft applications that have high uptime requirements, online index operations are recommended. However, they should be scheduled during times of low user or batch activity to avoid performance degradation. If the maintenance is being performed during a period when the application is offline it is recommended to use the offline index operations as in most cases they are faster.

4.4.1.4 Transaction Log Considerations


Online index operations require a large amount of transaction log space. The space required depends on the size of the index. In addition to the index operations, concurrent online operations also consume log space. From a performance perspective, it is important to consider the extra transaction log space usage and pre-size the log file accordingly to avoid expansion during index or user operations.
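To gauge whether the log has been sized adequately, log space usage can be checked before and after a maintenance window; the following standard command is shown here only as a convenience:

-- Report log file size and percentage used for every database on the instance.
DBCC SQLPERF (LOGSPACE) ;
GO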

4.4.2 Program to Defragment


SQL Server 2008 Books Online includes a sample program that dynamically defragments an index based on its fragmentation level. For more information, refer to the topic sys.dm_db_index_physical_stats in SQL Server 2008 Books Online. Under the Examples section of that topic, see example D, Using sys.dm_db_index_physical_stats in a Script to Rebuild or Reorganize Indexes, for a sample script that rebuilds indexes based on fragmentation level. This script can be modified to suit specific needs or to perform online index defragmentation.
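The following is not the Books Online sample, but a much smaller sketch of the same idea: it generates REORGANIZE or REBUILD commands for the indexes of a single table using the 5/30 percent guidelines from section 4.4 (the PS_LEDGER table name is used only for illustration):

-- Generate a maintenance command per fragmented index on PS_LEDGER:
-- REORGANIZE for moderate fragmentation, REBUILD above 30 percent.
SELECT 'ALTER INDEX ' + QUOTENAME(i.name)
       + ' ON ' + QUOTENAME(OBJECT_NAME(ps.object_id))
       + CASE WHEN ps.avg_fragmentation_in_percent > 30
              THEN ' REBUILD ;'
              ELSE ' REORGANIZE ;'
         END AS maintenance_command
FROM sys.dm_db_index_physical_stats
         (DB_ID(), OBJECT_ID(N'PS_LEDGER'), NULL, NULL, 'LIMITED') AS ps
INNER JOIN sys.indexes AS i
        ON i.object_id = ps.object_id
       AND i.index_id = ps.index_id
WHERE ps.avg_fragmentation_in_percent > 5
  AND ps.index_id > 0 ;
GO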

4.5 Statistics
Statistics are details about the uniqueness (or density) of data values, including a histogram consisting of an even sampling of the values for the index key (or the first column of the key for a composite index) based on the current data. They also include the number of pages in the table or index. SQL Server uses a cost-based optimizer, which means that statistics that are not relatively current can mislead the optimizer and result in poor execution plans.


4.5.1 AUTO_CREATE_STATISTICS and AUTO_UPDATE_STATISTICS


As described in sections 2.6.4 and 2.6.5, for PeopleSoft applications it is recommended that you enable the AUTO_CREATE_STATISTICS and AUTO_UPDATE_STATISTICS database options at the database level. When these options are enabled, SQL Server automatically updates statistics when the query optimizer determines that they are out of date. The AUTO_UPDATE_STATISTICS option automatically updates the statistics for a table when a specific change threshold has been reached. The rowmodctr column in the sys.sysindexes catalog view maintains a running total of all relevant modifications to a table. This counter is updated each time any of the following events occurs:
o A row is inserted into the table.
o A row is deleted from the table.
o An indexed column is updated.

Every database is created with the database options AUTO_CREATE_STATISTICS and AUTO_UPDATE_STATISTICS set to TRUE. For PeopleSoft applications, it is recommended that you leave these options set unless there is a compelling reason to disable them. If the options must be disabled in exceptional cases, they should only be disabled at the index or table level. The following sections describe the methods to disable statistics at a table level.
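To see how many modifications have accumulated against a table, the rowmodctr column mentioned above can be queried directly. The following is a sketch; PS_BO is used to match the examples that follow, and in SQL Server 2005 and later the value is derived and should be treated as approximate:

-- Approximate modification counts per index/statistic on PS_BO since the last statistics update.
SELECT name, rowmodctr
FROM sys.sysindexes
WHERE id = OBJECT_ID('PS_BO') ;
GO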

4.5.2 Disabling AUTO_UPDATE_STATISTICS at the Table Level


If you do not want statistics to be automatically updated during the normal operational hours for a specific table, you can disable the option at the table or index level. You then take the responsibility of maintaining statistics for that table or index by explicitly updating statistics. At a table level, the AUTO_UPDATE_STATISTICS option can be disabled using either the sp_autostats stored procedure or the UPDATE STATISTICS statement with the WITH NORECOMPUTE option. Use the sp_autostats procedure to indicate that statistics should or should not be updated automatically for a table. For example, to disable automatic updating of statistics for all the indexes on the table PS_BO:
sp_autostats PS_BO, 'OFF' ; GO

For example, to disable automatic updating of statistics for a specific index on the table PS_BO:
sp_autostats PS_BO, 'OFF', PSABO ; GO


Alternatively, use the UPDATE STATISTICS statement with the WITH NORECOMPUTE option. This indicates that statistics should not be automatically recomputed in the future. Running UPDATE STATISTICS again without the WITH NORECOMPUTE option enables automatic updates again. For example:
UPDATE STATISTICS PS_BO WITH NORECOMPUTE ; GO

Note: Setting the AUTO_UPDATE_STATISTICS database option to FALSE overrides any individual table settings.
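Before or after changing these settings, it is worth confirming what is currently in effect. The following is a quick check, shown as a sketch; PSFTDB and PS_BO are the sample names used elsewhere in this paper:

-- Database-level setting: returns 1 if AUTO_UPDATE_STATISTICS is ON for the database.
SELECT DATABASEPROPERTYEX('PSFTDB', 'IsAutoUpdateStatistics') AS IsAutoUpdateStatistics ;
GO

-- Table-level settings: called without a flag, sp_autostats reports the current
-- AUTO UPDATE STATISTICS setting for each index and statistic on the table.
EXEC sp_autostats 'PS_BO' ;
GO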

4.5.3 User-Created Statistics


If a particular column in a table is not a leading column (the first column) in any indexes of that table, histograms will not be available on that column by default. If the AUTO_CREATE_STATISTICS database option is set to ON for the database or table, SQL Server may create statistics and histogram for that column as needed. Users can also explicitly create statistics on a table column. This creates a histogram on the first supplied column and associated density groups (collections) over the supplied column or set of columns, as the following example demonstrates:
CREATE STATISTICS BO_FIRST_NAME ON PS_BO_NAME (FIRST_NAME); GO

Statistics are used by the query optimizer to estimate the selectivity of expressions, and thus the size of intermediate and final query results. Good statistics allow the optimizer to accurately assess the cost of different query plans and choose a better query plan. User-created statistics are required only in a few advanced performance tuning scenarios; in the majority of cases the statistics created by SQL Server are sufficient for the optimizer to produce efficient execution plans.

For a detailed discussion about statistics, refer to the white paper Statistics Used by the Query Optimizer in Microsoft SQL Server 2005, available from Microsoft TechNet at http://www.microsoft.com/technet/prodtechnol/sql/2005/qrystats.mspx. Even though this paper is targeted to SQL Server 2005, the content is relevant and accurate for SQL Server 2008.

4.5.4 Updating Statistics


The SQL Server database engine automatically updates the statistics on the tables using an algorithm explained in the Maintaining Statistics in SQL Server 2005 section of the Statistics Used by the Query Optimizer in Microsoft SQL Server 2005 paper referenced above. Statistics can also be manually updated any time using the UPDATE STATISTICS statement. This statement updates information about the distribution of key values for one or more statistics groups (collections) in the specified table.


The following example updates statistics for all the indexes on PS_BO table, using a default sampling:
UPDATE STATISTICS PS_BO ; GO

Usually the default sampling is good enough. However, there were a few occasions during tuning and benchmarking of a PeopleSoft application when the SQL Server optimizer failed to produce the best execution plan for some SQL statements; further testing showed that updating statistics with FULLSCAN improved the situation. The following example updates statistics for all the indexes on the PS_BO table, using a specific sampling rate:
UPDATE STATISTICS PS_BO WITH SAMPLE 40 PERCENT ; GO

The following example updates statistics for all the indexes on PS_BO table, using all the rows:
UPDATE STATISTICS PS_BO WITH FULLSCAN ; GO

Note: Use UPDATE STATISTICS with FULLSCAN only in exceptional situations, when you believe the optimizer is not selecting a good execution plan because of inaccuracies in the sampled statistics on the index or table.

Statistics can also be updated on all user-defined tables in the current database using the sp_updatestats stored procedure, as shown below. This may take a very long time to complete, especially when run against large databases.
USE PSFTDB GO EXEC sp_updatestats ; GO

4.5.5 Viewing Statistics


The DBCC SHOW_STATISTICS command reports the distribution statistics for a specified indexed or non-indexed column. The report contains the following useful information:
o Date and time that statistics were collected.
o Number of rows in the table.
o Rows that were sampled.
o Number of steps in the histogram.
o Density for non-frequent column values.


o Average key length.
o All density.
o Distribution histogram.

The following is an example of the DBCC SHOW_STATISTICS command:
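The original output capture is not reproduced here; the command below is a representative invocation, assuming the PSABO index on the PS_BO table used in the earlier examples. The output consists of the statistics header, density vector, and histogram result sets described above.

-- Show the statistics header, density vector, and histogram for the PSABO index on PS_BO.
DBCC SHOW_STATISTICS ('PS_BO', PSABO) ;
GO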

Physical index statistics can also be viewed using the sys.dm_db_index_physical_stats dynamic management function, using the command shown below.
SELECT * FROM sys.dm_db_index_physical_stats (DB_ID('TVP'), NULL, NULL, NULL, 'DETAILED') ; GO

4.6 Controlling Locking Behavior


SQL Server 2008 uses isolation levels and locking as its method of achieving data consistency. SQL Server locks are applied at various levels of granularity in the database: locks can be acquired at the row, page, key, key-range, index, partition, table, or database level. SQL Server dynamically determines the appropriate level at which to place locks for each Transact-SQL statement. Because SQL Server uses dynamic locking, little or no configuration is needed for it to achieve isolation and concurrency. The aspects of SQL Server locking that particularly affect performance include isolation levels, lock granularity, and lock escalation.


4.6.1 Isolation Levels


Transaction isolation levels control the following in a SQL Server database:
o Whether locks are acquired when data is read, and the type of locks acquired.
o The duration of the shared read lock.
o Whether a read operation referencing rows modified by another transaction:
  - Blocks until the exclusive lock on the row is freed.
  - Retrieves the committed version of the row that existed at the time the statement or transaction started.
  - Reads the uncommitted data modification.

Choosing a transaction isolation level does not affect the locks acquired to protect data modifications. A transaction always gets an exclusive lock on any data it modifies, and holds that lock until the transaction completes, regardless of the isolation level set for that transaction. For read operations, transaction isolation levels primarily define the level of protection from the effects of modifications made by other transactions.

As described in section 2.6.1 Read-Committed Snapshot, the recommended isolation level for PeopleSoft applications is the read-committed snapshot isolation level. Refer to section 2.6.1 in this document for more information about how to set and use this isolation level.

Warning! Ensure that the version of PeopleTools you are using supports the read-committed snapshot isolation level. You can only use it if it is supported by PeopleTools.

Under the read-committed snapshot isolation level, blocking and deadlocking issues due to lock contention are greatly reduced. Read operations acquire only a Sch-S lock at the table level; no page or row S locks are acquired, so read operations do not block transactions that are modifying data. For a detailed description of isolation levels, refer to the topic Isolation Levels in the Database Engine in SQL Server 2008 Books Online.

4.6.2 Lock Granularity


SQL Server 2008 supports the following basic lock granularities (levels):
o Table
o Partition
o Page
o Key
o Key Range
o Row (or RID)

These locking granularities represent the initial locking grain as determined by SQL Server's dynamic locking mechanism. When the lock grain is higher (table or page), it reduces the amount of CPU and memory resources spent on maintaining the locks.


However, it also reduces concurrency. When the lock grain is lower (key or row), the reverse is true.

In SQL Server 2008, the ALTER INDEX statement with the ALLOW_ROW_LOCKS and ALLOW_PAGE_LOCKS options can be used to customize the initial lock grain for an index or an entire table, including its indexes. These options allow (or disallow) row or page locks on the specified object. The default for these options is ON, that is, row and page locks are allowed.

Note: Row locks on non-clustered indexes refer to the key or row locator entries in the index's leaf pages. By disallowing page locks, you can increase write concurrency and reduce writer-writer deadlocks. For example:
ALTER INDEX PS_BO ON PS_BO SET (ALLOW_PAGE_LOCKS = OFF); GO

Note: In SQL Server 2008, when using the read-committed snapshot isolation level, lock contention or blocking issues due to writers blocking readers are eliminated. Therefore, for writer-reader blocking scenarios, the need to use the ALLOW_ROW_LOCKS and ALLOW_PAGE_LOCKS options to control locking, avoid blocking, and improve concurrency is greatly reduced. If an index (or table) is dropped and re-created, ALTER INDEX must be re-executed to re-establish the customized locking for the table or index.

4.6.3 Lock Escalations


Lock escalation is a technique to lower the number of locks taken by a transaction and to keep the total amount of memory used for locks under control. SQL Server automatically escalates row, key, and page locks to coarser partition locks (if the table is partitioned and the option to escalate to partition locks is selected) or table locks as appropriate. In effect, lock escalation converts many individual row or page locks into a single table lock.

Lock Escalation Triggers and Mechanism

Lock escalation is triggered when the lock count for one transaction exceeds 5000, or the lock count for one index or table exceeds 765. The lock manager determines how much memory is allocated for locks. If more than 40 percent of the memory pool is used for locks, SQL Server attempts to escalate multiple page, key, or RID locks to table locks. SQL Server tries to find a table that is partially locked by the transaction, holds the largest number of locks, has not already been escalated, and is capable of escalation. If any other process holds an incompatible lock on any rows, pages, or keys of the table, lock escalation cannot be done on that table.


When the threshold is reached (that is, lock memory is greater than 40 percent of SQL Server memory), SQL Server attempts to escalate locks to control the amount of memory used for locks. It identifies a table within the transaction that holds the maximum number of locks as a good candidate for lock escalation. However, if it finds any incompatible locks held on the table, it skips that table. If the table lock requested cannot be granted, escalation is not blocked; the transaction continues, and escalation is requested again after the next multiple of 1250 locks has been acquired by the transaction.

Lock Escalation Hierarchy

Lock escalation never converts row locks to page locks; it always converts them to partition locks (if the table is partitioned and the option to escalate to partition locks is selected) or table locks. The escalation is always directly from row or page to a partition or table lock.

Lock Escalation and Performance

Though lock escalation may at times result in blocking and deadlocks, it is not their only cause. Often blocking and deadlocks happen because of the application and the nature of its usage, even without any lock escalation. If lock escalation is causing performance issues through excessive blocking or deadlocking, it can be prevented. Use the following techniques to prevent lock escalation:
o Use SQL Server Profiler and monitor lock escalations to find out how frequently lock escalation occurs and on what tables.
o Ensure that SQL Server has sufficient memory. Use the SQL Server Performance Monitor to monitor Total Server Memory (KB) and Lock Memory.
o Determine if the transaction provides a way to control the commit frequency. If yes, increase the commit frequency. A good example is Global Payroll PAYCALC.
o Selectively disable lock escalation on one or more tables using the method explained below (see the example at the end of this section).

Controlling Lock Escalation

SQL Server 2008 introduces a new option to control table lock escalation. Using the ALTER TABLE command, as explained in section 3.5.3, locks can be specified not to escalate, or to escalate to the partition level for partitioned tables. Both of these enhancements help improve scalability and performance without negative side effects on other objects in the instance. Controlling lock escalation is done at the database table level and does not require any PeopleSoft application change.

In SQL Server 2008, the read-committed snapshot isolation level is the recommended setting for PeopleSoft applications. This isolation level has no direct effect on lock escalation; however, it does alleviate lock contention or blocking problems caused by lock escalation. For instance, if an UPDATE statement causes lock escalation and the entire table is locked, under the read-committed snapshot isolation level a concurrent read transaction on the table would not be blocked.

Warning! Ensure that the version of PeopleTools you are using supports the read-committed snapshot isolation level. You can only use it if it is supported by PeopleTools.
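As a reference for the technique mentioned under Controlling Lock Escalation above, the following is a minimal sketch of the SQL Server 2008 table-level option; PS_LEDGER is used only as an illustrative table name, and section 3.5.3 covers the full discussion:

-- Disable lock escalation entirely for this table.
ALTER TABLE PS_LEDGER SET (LOCK_ESCALATION = DISABLE) ;
GO

-- Or allow escalation to the partition level for a partitioned table (the default behavior is AUTO).
ALTER TABLE PS_LEDGER SET (LOCK_ESCALATION = AUTO) ;
GO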

4.6.4 Lock Escalation Trace Flags


In very specific customer cases, trace flags 1211 and 1224 were sometimes used to suppress the request for table locks. While this was the case with earlier versions of SQL Server, with the introduction of the table-level lock control mechanism explained above, these trace flags are not advisable for PeopleSoft applications. You can use the command below to check the status of the trace flags enabled on the instance of SQL Server.
DBCC TRACESTATUS ; GO

4.6.5 Deadlocks
In SQL Server 2008, with the introduction of the new read-committed snapshot isolation level, lock contention and blocking problems are greatly reduced. Read-committed snapshot isolation eliminates writers blocking readers and readers blocking writers. This, in turn, also eliminates read-write deadlock scenarios, where an UPDATE, INSERT, or DELETE and a SELECT transaction deadlock. However, deadlocks caused by two concurrent UPDATE, INSERT, or DELETE transactions and other writer-writer scenarios may still exist. The information that follows will help you analyze and debug these deadlocks.

A deadlock occurs when two processes are each waiting for a resource and neither can advance because the other prevents it from getting that resource. Knowing the following information is a starting point to resolve a deadlock:
o Processes that caused the deadlock.
o Deadlock trace.
o SQL statements that caused the deadlock.
o SQL statements within the transaction where the deadlock happened.
o Execution plan of each SQL statement that resulted in a deadlock.

Deadlocks can be monitored in one of three ways: using trace flag 1204, using trace flag 1222, or using SQL Server Profiler.

Using Trace Flag 1204 or Trace Flag 1222

When deadlocks occur, trace flag 1204 and trace flag 1222 return information that is captured in the SQL Server 2008 error log. Trace flag 1204 reports deadlock information formatted by each node involved in the deadlock. Trace flag 1222 formats deadlock information first by processes and then by resources, in an XML-formatted output. It is possible to enable both trace flags to obtain two representations of the same deadlock event.

SQL Server can be started with trace flag 1204 or 1222 enabled. To do so:
1. Open SQL Server Configuration Manager, and select SQL Server Services.
2. Right-click the SQL Server service corresponding to the PeopleSoft database instance and select Properties.
3. On the Advanced tab, append the following trace flags to the end of the Startup Parameters entry: -T1204 -T3605

Make sure not to accidentally change any of the existing Startup Parameters values. The output of the deadlock trace is logged to the SQL Server error log specified by the -e parameter in Startup Parameters. The 3605 flag causes the output to go to the error log rather than the screen; this is the default in SQL Server 2008. Alternatively, deadlock tracing can be enabled with the following command:
DBCC TRACEON (1204, 3605,-1) ; GO

When trace flag 1204 is enabled, sample deadlock output is written to the SQL Server error log.

The output contains the following information: The object involved in the deadlock. In this example, it is 10:2045302396:2, where 10 is the database ID, 2045302396 is the object ID, and the final 2 is the index ID. Entering the following gives you the name of the database where the deadlock occurred:
SELECT DB_NAME(captured_database_id) ;


From the deadlocked database, entering the following shows the table involved:
SELECT OBJECT_NAME(captured_object_id) ;

Entering the following shows the index involved:


SELECT name FROM sys.indexes WHERE object_id = captured_object_id AND index_id = captured_index_id ;

The statement type shows what kind of statement it is (for example, INSERT or UPDATE). Input buf: (input buffer) shows the actual statement. However, in a PeopleSoft environment you see either sp_prepexec or sp_cursorexec. This is not very useful in identifying the SQL statement.

For more information about the trace flags, see Detecting and Ending Deadlocks in SQL Server 2008 Books Online.

Using SQL Server Profiler

A better alternative is to enable SQL Server Profiler. The list of events and data columns required is specified in Troubleshooting Tips within section 5.2.3, Using SQL Server Profiler. The profiler captures output showing the deadlock chain and the statements involved.

To use this SQL Profiler output to determine the cause of a deadlock:
1. Save the output into a trace table. From the File menu, select Save As, and choose Trace Table.
2. Use the following T-SQL statement to find the list of Deadlock Chain events.
SELECT * FROM DLTRACE1 WHERE EventClass=59 ; GO


DLTRACE1 is the trace table and EventClass 59 is for deadlocks. From the output you can determine which SPID is involved in the deadlock and note down the row number for this Deadlock Chain event.
3. Substitute the values in the following query and you will find all the T-SQL statements used by that process as part of the deadlocked transaction.
DECLARE @LastCommitPoint int, @DLSpid int, @DLChainRowNumber int

/* Set the Deadlock SPID and the Deadlock Chain's row number */
SET @DLSpid = 134
SET @DLChainRowNumber = 159501

SELECT @LastCommitPoint = max(RowNumber)
FROM DLTRACE1
WHERE SPID = @DLSpid
  AND RowNumber < @DLChainRowNumber
  AND EventClass = 41   -- SQL:StmtCompleted
  AND TextData like 'COMMIT TRAN%'

SELECT *
FROM DLTRACE1
WHERE SPID = @DLSpid
  AND RowNumber < @DLChainRowNumber
  AND RowNumber > @LastCommitPoint
  AND EventClass = 45 ; -- SP:StmtCompleted
GO

4. Repeat the previous steps for the other Deadlock Chain events.

These SQL statements will present a clear picture of how the deadlock happened. The following EventClass classes and their corresponding IDs are relevant to the PeopleSoft environment:
/*
RPC:Completed       - 10
Show Plan Text      - 96
Execution Plan      - 68
RPC:Starting        - 11
Lock:Escalation     - 60
Lock:Deadlock       - 25
Lock:Deadlock Chain - 59
SP:StmtStarting     - 44
SP:StmtCompleted    - 45
SQL:StmtStarting    - 40
SQL:StmtCompleted   - 41 (COMMIT TRAN)
*/

The Appendix section of this document includes an example of a procedure that automates this process. You can use this procedure as a model and modify it for your purposes.

4.6.5.1 Eliminating a Deadlock


The following actions can help eliminate a deadlock:


o Determine whether read-committed snapshot isolation is enabled. (Only applicable to versions of PeopleTools that support the read-committed snapshot isolation level.)
o Determine whether the table has up-to-date statistics.
o Check the missing index dynamic management views explained in section 4.2.2.2 for any missing indexes on the tables involved in the deadlock. Create any additional indexes that could help resolve the deadlock.
o Review the execution plans of the SQL statements that caused the deadlock and determine if they do an index scan. If they do, see if creating an additional index changes the access path for the SQL statement from index scan to index seek. For example, examine the following SQL statement:
SELECT DISTINCT EOEW_MAP_OBJ FROM PS_EOEW_RUN_PROC WHERE RUN_CNTL_ID LIKE :1 %CONCAT '.%'

The SQL statement does a clustered index scan because the leading key of the existing index is OPRID, but the SQL statement does not use OPRID as part of the WHERE clause. The solution is to add another index with RUN_CNTL_ID as a leading key:
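For illustration only, such an index might look like the following; the index name is hypothetical, and in practice the index should be defined through PeopleSoft Application Designer, as the note below explains.

-- Illustrative only: a non-clustered index with RUN_CNTL_ID as the leading key.
CREATE NONCLUSTERED INDEX PSBEOEW_RUN_PROC
    ON PS_EOEW_RUN_PROC (RUN_CNTL_ID) ;
GO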

Note: PeopleSoft applications are delivered with the indexes required for an application and its performance under typical usage. They are not delivered with all possible indexes, because an index creates unnecessary overhead (on INSERT, UPDATE, and DELETE) if it is not useful for an implementation. Your implementation (data and business processes) may warrant some additional indexes.

o Adding columns to a non-clustered index so that it covers the query could also help resolve the deadlock. In the previous example, the SQL statement would use the new index first, but to get the EOEW_MAP_OBJ column it has to go to the table, using the available clustered index to perform this task. If EOEW_MAP_OBJ is also added to the new non-clustered index, the query becomes a covered query; in other words, SQL Server can build the result set entirely by reading the index. If the column you are trying to add to a non-clustered index is part of the clustered index, there is no need to add that column to the non-clustered index for the purpose of index cover.

o Pay attention to lock escalations. If the deadlocks are being reported at the PAGE level in the SQL Server error log output, use the INDEXPROPERTY function to determine whether page locks are disallowed on a table. For example:
SELECT INDEXPROPERTY(OBJECT_ID('PS_BO'), 'PS0BO', 'IsPageLockDisallowed'); GO

A return value of 0 means that page locks are allowed; a value of 1 means page locks are disallowed. If needed you can use ALTER INDEX to disallow page locks.
ALTER INDEX PS_BO ON PS_BO SET (ALLOW_PAGE_LOCKS = OFF); GO

4.7 Dedicated Administrator Connection (DAC)


SQL Server 2008 provides a special guaranteed-access connection for administrators for situations when the SQL Server database cannot be accessed through a regular connection to the instance because the system resources have been exhausted by some rogue T-SQL or process. This diagnostic connection permits a database administrator to access SQL Server to execute diagnostic queries and troubleshoot problems. The DAC is available through the sqlcmd utility and SQL Server Management Studio.

To connect to a server using the DAC:
1. In SQL Server Management Studio, with no other DACs open, on the toolbar, click Database Engine Query.
2. In the Connect to Database Engine dialog box, in the Server name box, type ADMIN: followed by the name of the server instance. For example, to connect to a server instance named HRSrvr\HRProd, type ADMIN:HRSrvr\HRProd.
3. Complete the Authentication section, providing credentials for a member of the sysadmin group, and then click Connect.

For more information about the DAC and how to configure and use it, refer to the topic Using a Dedicated Administrator Connection in SQL Server 2008 Books Online.
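From the command line, the same connection can be opened with sqlcmd; the -A switch requests the dedicated administrator connection. The instance name below matches the example above, and -E uses Windows authentication:

REM Open a dedicated administrator connection to the HRSrvr\HRProd instance.
sqlcmd -S HRSrvr\HRProd -E -A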


5 Performance Monitoring and Troubleshooting


5.1 PeopleSoft Architecture
PeopleSoft Enterprise Pure Internet Architecture is a four-tier architecture. For optimum performance, every tier should work well as the overall performance of the system will be dictated by the performance of the worst performing tier. Refer to other PeopleSoft red papers for information about tuning the application server and Web server.

5.2 Narrowing Down the Cause of a Performance Issue


It is important first to identify the problem area before starting to fine-tune a T-SQL statement or a database parameter. A first step is to find out which of the four tiers is causing the performance issue.

5.2.1 Using System Monitor


If your Web and application servers are running the Windows operating system, you can use Windows System Monitor to monitor system usage and performance. In Microsoft Windows Server 2008, Windows Server 2003, and Windows Server 2000, System Monitor (commonly known as Perfmon) provides not only Windows counters but also SQL Server 2008 counters. These counters monitor system characteristics, such as current processor utilization, the SQL Server buffer cache hit ratio, and Average Disk sec/Read and Average Disk sec/Write, which help you determine the health of your system.

Following are some tips for using System Monitor:
o Run System Monitor on the server that is least busy, or run it on a different server.
o Include counters for all the servers in one place. This can be done by selecting the server you want to monitor (for example, \\SS-APP1), selecting the object that you want to monitor (for example, Processor), then selecting the corresponding counter and the instance (for example, % Processor Time, Total).
o The default update frequency is one second. This may be too frequent for some counters, like Processor or Memory. You can set this to a higher value, such as 30 seconds, without losing any valuable information. You can change the value by right-clicking the counter and selecting Properties.
o It is always a good idea to create Counter Logs and save the data to a file.


o System Monitor enables you to capture system as well as SQL Serverspecific information. The following table summarizes some of the useful System Monitor counters. Processor Counters Performance Counter Object Processor % Privileged Time

% Privileged Time is the percentage of non-idle processor time spent in privileged mode. (Privileged mode is a processing mode designed for operating system components and hardware-manipulating drivers. It allows direct access to hardware and all memory.) % Privileged Time includes time servicing interrupts and DPCs. A high rate of privileged time might be attributable to a large number of interrupts generated by a failing device. This counter displays the average busy time as a percentage of the sample time.

Processor: % User Time
% User Time is the percentage of non-idle processor time spent in user mode. (User mode is a restricted processing mode designed for applications, environment subsystems, and integral subsystems.) This counter displays the average busy time as a percentage of the sample time.

Memory Counters

Memory: Available MBytes
Available MBytes is the amount of physical memory available to processes running on the computer, in megabytes (bytes/1,048,576).

Memory: Committed Bytes
Committed Bytes is the amount of committed virtual memory, in bytes. (Committed memory is physical memory for which space has been reserved on the disk paging file, in case it needs to be written back to disk.) This counter displays the last observed value only; it is not an average.

Memory: Page Faults/sec
Page Faults/sec is the overall rate at which faulted pages are handled by the processor, measured in pages faulted per second. A page fault occurs when a process requires code or data that is not in its working set (its space in physical memory). This counter includes both hard faults (those that require disk access) and soft faults (where the faulted page is found elsewhere in physical memory). Most processors can handle large numbers of soft faults without consequence. However, hard faults can cause significant delays. This counter displays the difference between the values observed in the last two samples, divided by the duration of the sample interval.

Physical Disk Counters

Logical Disk: Avg. Disk Queue Length
Avg. Disk Queue Length is the average number of both read and write requests that were queued for the selected disk during the sample interval. This value should be <= 2 per disk.

Logical Disk: % Disk Time
This counter measures how busy a physical array is (not logical partitions or individual disks in an array); it is a good indicator of the I/O load on each array on your server.

Logical Disk: Avg. Disk Sec/Read
Avg. Disk Sec/Read is the average time, in seconds, of a read of data from the disk. This value should ideally be less than 11 milliseconds at all times.

Logical Disk: Avg. Disk Sec/Write
Avg. Disk Sec/Write is the average time, in seconds, of a write of data to the disk. This value should ideally be less than 11 milliseconds at all times for the disks hosting the data files. For the SQL Server database transaction log the value should preferably be less than 5 msec.

Network Counters

Network Interface: Bytes Total/Sec
Bytes Total/sec is the rate at which bytes are sent and received on the interface, including framing characters.

Network Interface: Bytes Sent/Sec
Bytes Sent/sec is the rate at which bytes are sent on the interface, including framing characters.

Network Interface: Bytes Received/Sec
Bytes Received/sec is the rate at which bytes are received on the interface, including framing characters.

SQL Server Counters

SQL Server: Buffer Manager: Buffer Cache Hit Ratio
Percentage of pages that were found in memory, thus not requiring a physical I/O operation. This is your indicator of how well the SQL Server buffer cache is performing. The higher the number the better; you should typically see a value greater than 95%.

SQL Server: Buffer Manager: Page Life Expectancy
Estimated number of seconds a page will stay in the buffer pool before it is written out (if not referenced). Low values (less than 180) may be a sign of an insufficient memory condition.

SQL Server: Databases: Active Transactions
Number of active transactions currently executing in the database.

SQL Server: Databases: Transactions/Sec
Number of transactions per second for this database. This counter shows how much activity is occurring in the system. The higher the value, the more activity is occurring.

SQL Server: Memory Manager: Lock Memory (KB)
Total amount of memory, in kilobytes, that is allocated to locks.

SQL Server: Memory Manager: Total Server Memory (KB)
Total amount of dynamic memory, in kilobytes, that the server is currently consuming. SQL Server dynamically allocates and de-allocates memory based on how much memory is available in the system. This counter offers you a view of the memory that is currently being used.

SQL Server: Access Methods: Full Scans/Sec
Number of full table or index scans per second. Since PeopleSoft applications do not use heap tables, you will not see any explicit table scans; clustered index scans should be treated as full table scans. If this counter shows a non-zero value (>1), it is an indication that some queries can be optimized. This could be an opportunity for more efficient indexing.

SQL Server: General Statistics: User Connections
This counter shows the number of user connections, not the number of users, currently connected to SQL Server. If this counter exceeds 255, you may want to increase the SQL Server configuration setting max worker threads to a number higher than 255. If the number of connections exceeds the number of available worker threads, SQL Server begins to share worker threads, which can hurt performance.

SQL Server: SQL Statistics: Batch Requests/sec
Number of SQL Server batch requests executed per second. A batch can be a single T-SQL statement or a group of T-SQL statements. For most PeopleSoft applications the batches are executed as single T-SQL statements.

SQL Server: SQL Statistics: SQL Compilations/sec
Number of SQL Server query compilations per second. This value should be lower than 20. For values higher than that, you may want to consider enabling the PARAMETERIZATION FORCED option.

SQL Server: SQL Statistics: SQL Re-Compilations/sec
Number of SQL Server query re-compilations per second. For PeopleSoft applications, recompilations are primarily caused by the statistics on a table changing and thereby invalidating existing cached plans. This number is usually less than 10.
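Many of the SQL Server counters above can also be read from inside the database engine through the sys.dm_os_performance_counters dynamic management view, which is convenient when a System Monitor session is not already running. The following is a minimal sketch and is not part of the original counter list; note that rate counters such as Batch Requests/sec are exposed as cumulative values, so two samples taken a known interval apart must be differenced to obtain a per-second rate.

-- Sample a few of the counters discussed above from T-SQL.
SELECT RTRIM(object_name)  AS object_name,
       RTRIM(counter_name) AS counter_name,
       cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name IN ('Page life expectancy',
                       'Batch Requests/sec',
                       'SQL Compilations/sec',
                       'User Connections') ;
GO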

5.2.2 Capturing Traces


Tracing can be enabled at various levels to provide relevant information and to help in troubleshooting. Traces can be captured from the PeopleSoft application, or you can use SQL Server Profiler to capture a trace at the database level. The following settings are recommended for capturing traces to identify problems. Be sure to set the values back to zero after capturing the trace.

Note: Running the production environment with these settings can cause performance issues because of the overhead introduced by tracing.

5.2.2.1 Application Engine Trace


Modify psprcs.cfg as follows:

;----------------------------------------------------------------------
; AE Tracing Bitfield
;
; Bit    Type of tracing
; ---    ---------------
; 1      - Trace STEP execution sequence to AET file
; 2      - Trace Application SQL statements to AET file
; 4      - Trace Dedicated Temp Table Allocation to AET file
; 8      - not yet allocated
; 16     - not yet allocated
; 32     - not yet allocated
; 64     - not yet allocated
; 128    - Timings Report to AET file
; 256    - Method/BuiltIn detail instead of summary in AET Timings Report
; 512    - not yet allocated
; 1024   - Timings Report to tables
; 2048   - DB optimizer trace to file
; 4096   - DB optimizer trace to tables
;TraceAE=(1+2+128)
TraceAE=131

Note: For some batch programs, setting the TraceAE flag to 131 can generate a huge file (by including the SQL Statements option). For those cases, using TraceAE=128 might help. Also, setting TraceSQL=128 works well for collecting statistics for COBOL programs.


Online Trace

Modify psappsrv.cfg as follows:

;----------------------------------------------------------------------
; SQL Tracing Bitfield
;
; Bit    Type of tracing
; ---    ---------------
; 1      - SQL statements
; 2      - SQL statement variables
; 4      - SQL connect, disconnect, commit and rollback
; 8      - Row Fetch (indicates that it occurred, not data)
; 16     - All other API calls except ssb
; 32     - Set Select Buffers (identifies the attributes of columns to be selected)
; 64     - Database API specific calls
; 128    - COBOL statement timings
; 256    - Sybase Bind information
; 512    - Sybase Fetch information
; 4096   - Manager information
; 8192   - Mapcore information
; Dynamic change allowed for TraceSql and TraceSqlMask
TraceSql=3
TraceSqlMask=12319

Note: TraceSql=3 captures the SQL information with relatively low overhead. PeopleTools development uses a value of 63 for SQL debugging.

;----------------------------------------------------------------------
; PeopleCode Tracing Bitfield
;
; Bit    Type of tracing
; ---    ---------------
; 1      - Trace entire program
; 2      - List the program
; 4      - Show assignments to variables
; 8      - Show fetched values
; 16     - Show stack
; 64     - Trace start of programs
; 128    - Trace external function calls
; 256    - Trace internal function calls
; 512    - Show parameter values
; 1024   - Show function return value
; 2048   - Trace each statement in program
; Dynamic change allowed for TracePC and TracePCMask
TracePC=456
TracePCMask=0


5.2.2.2 Using SQL Server Profiler


PeopleSoft applications make extensive use of the ODBC API cursor prepare/execute model in both online and batch applications. When tracing PeopleSoft activity using SQL Server Profiler, it is easier to start with tracing RPC events rather than SQL statement or SQL batch events. A significant portion of the database processing issued by PeopleSoft is performed by cursors. PeopleSoft applications use the fast forward-only cursor with the autofetch (FFAF) option when possible, and achieve relatively good performance through this cursor model. RPC events can show SQL statements of the following types:

sp_prepexec
sp_cursorprepexec
sp_cursorprepare
sp_cursorexecute

SQL Server Profiler can be used to detect inefficient SQL queries, deadlocks, and other events that cause performance problems. Following are some highlights of SQL Server Profiler:

o Unlike dynamic management views, which offer only aggregated information about query execution statistics, SQL Server Profiler traces provide a chronologically ordered, event-by-event record.
o Displays statement-level resource consumption.
o Helps to monitor and drill down into queries.
o Organizes data into Events (rows) and Data Columns (columns).
o Saves commonly used Event/DataColumn/Filter combinations as Trace Templates.
o Allows you to save a trace as a Trace Table (Save As). It saves data into a database user table and permits querying with regular Transact-SQL. For example:

SELECT Max(Duration) FROM TrcDB.dbo.HRDB ;
GO

o Allows you to look for specific information, such as queries involving a particular table, with filters (note that the search is case-sensitive). For example:
  ObjectID - Equals - 1977058079
  OR
  TextData - Like - %PS_BO_REL_CAT_ITEM%
o Allows you to search upward for a cursor number to find the SQL statement. For example, you see a command such as sp_cursorexecute 41992. If this step shows a performance problem, such as high reads or a high duration, search upward for the cursor number 41992 to find the corresponding prepare statement.
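A saved trace file can also be loaded into a table with the fn_trace_gettable function and analyzed with ordinary Transact-SQL. The sketch below assumes a hypothetical trace file path and target table name; note that in trace files the Duration column is recorded in microseconds, even though SQL Server Profiler displays milliseconds.

-- Load a saved trace file into a table (file path and table name are examples only).
SELECT IDENTITY(int, 1, 1) AS RowNumber, *
INTO TrcDB.dbo.PSFT_Trace
FROM sys.fn_trace_gettable('C:\Temp\PSFT_Trace.trc', default) ;
GO

-- Find the 20 longest-running statements captured in the trace
-- (Duration is in microseconds in the trace file).
SELECT TOP 20 Duration, Reads, Writes, CPU, TextData
FROM TrcDB.dbo.PSFT_Trace
WHERE Duration IS NOT NULL
ORDER BY Duration DESC ;
GO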


To monitor SQL statements in the PeopleSoft environment, some specific events need to be captured. You can include these events and save them as a trace template for future use. The following lists summarize some potentially useful events on a PeopleSoft database.

Lock Events

Lock: Deadlock
Indicates that two concurrent transactions have deadlocked each other by trying to obtain incompatible locks on the resources that the other transaction owns.

Lock: Deadlock Chain
Produced for each of the events leading up to the deadlock. For example, if three transactions are involved in a deadlock, three processes corresponding to the three transactions are listed as a Deadlock Chain.

Lock: Escalation
A finer-grained lock has been converted to a coarser-grained lock. SQL Server lock escalation always converts row or page level locks into table level locks.

Database Events

Database: Data File Autogrow
Indicates that the data file grew automatically. This event is not generated if the data file is grown explicitly through ALTER DATABASE. Performance is severely impacted during the autogrowth of a database, so the database should be sized properly so that this event never occurs on a production database. Capturing this event has very low overhead.

Database: Log File Autogrow
Indicates that the log file grew automatically. This event is not generated if the log file is grown explicitly through ALTER DATABASE. Performance is severely impacted during the autogrowth of a log file, so the log should be sized properly so that this event never occurs on a production database. Capturing this event has very low overhead.

Performance Events

Performance: Showplan All
Displays the query plan of the SQL statement with full compile-time details.

Performance: Showplan Statistics Profile
Displays the query plan with full run-time details (including the actual and estimated number of rows passing through each operation) of the statement that was executed. It requires that the BinaryData column be included.

Stored Procedure Events

Stored Procedures: RPC:Starting
Occurs when a remote procedure call has started. PeopleSoft applications extensively use the stored procedure (type) ODBC calls, such as sp_cursorprepare, sp_cursorexecute, and sp_cursorprepexec. They all fall under the Stored Procedures event category.

Stored Procedures: RPC:Completed
Indicates when the stored procedure call is completed.

Stored Procedures: SP:StmtStarting
Indicates when a statement within the stored procedure is starting.

Stored Procedures: SP:StmtCompleted
Indicates when a statement within the stored procedure has completed.

Transact-SQL Events

TSQL: SQL:StmtStarting
Occurs when a Transact-SQL statement is starting.

TSQL: SQL:StmtCompleted
Occurs when a Transact-SQL statement has completed.

Data Columns

The following SQL Server Profiler data columns are required in order to capture the relevant information for the events suggested above.

EventClass
Type of event class captured.

SPID
Server process ID assigned by SQL Server to the process associated with the client.

CPU
Amount of processor time (in milliseconds) used by the event.

Duration
Amount of time (in milliseconds) used by the event.

TextData
Text value dependent on the event class captured in the trace. This column is important if you want to apply a filter based on the query text, or if you save the trace into a table and run Transact-SQL queries against the table.

BinaryData
Binary value dependent on the event class captured in the trace. For some events, such as Showplan Statistics Profile, it is necessary to include this data column. This column is readable only using SQL Server Profiler, as it stores the binary form of the data.

StartTime
Time at which the event started, when available. For filtering, expected formats are YYYY-MM-DD and YYYY-MM-DD HH:MM:SS.

EndTime
Time at which the event ended. This column is not populated for starting event classes, such as SP:StmtStarting or RPC:Starting. For filtering, expected formats are YYYY-MM-DD and YYYY-MM-DD HH:MM:SS.

IndexID
ID for the index on the object affected by the event. To determine the index ID for an object, use the index_id column of the sys.indexes catalog view.

ObjectID
System-assigned ID of the object.

Reads
Number of logical reads performed by the server on behalf of the event. This column is not populated for starting event classes, such as SP:StmtStarting or RPC:Starting.

Writes
Number of physical writes performed by the server on behalf of the event. This column is not populated for starting event classes, such as SP:StmtStarting or RPC:Starting.

Note: Though traces captured with these events and data columns provide comprehensive information, the trace files or tables may become huge.

Troubleshooting Tips

The following summarizes common problems, and their possible causes, that can result in SQL Server Profiler not producing the desired output.

Issue: An event is captured but no relevant data is displayed.
Possible cause: The correct columns are not selected, for example, Showplan Statistics Profile without the BinaryData column. In SQL Server 2008, the relevant columns are automatically selected for an event in SQL Server Profiler.

Issue: Setting a filter does not filter out all unrelated data.
Possible cause: Rows with NULL values are not filtered.

Issue: A search yields no matches even when values exist.
Possible cause: The Profiler search is case-sensitive.

Issue: Heavy processor usage and disk activity.
Possible cause: Too much data is being captured. Reduce the amount of data being captured; log to faster disks.

Issue: A warning message about events not being captured appears.
Possible cause: SQL Server Profiler is unable to write all the trace information in time. Reduce the amount of data being captured; log to a faster disk subsystem.


5.2.3 Using Dynamic Management Views


SQL Server exposes internal performance and execution-related information through dynamic management views and functions, which return server state information that can be used to monitor the health of a server instance, diagnose problems, and tune performance. Dynamic management view data can be leveraged to identify and tune performance bottlenecks for demanding applications such as PeopleSoft applications. There are many dynamic management views relating to query execution, I/O, indexes, and other SQL Server features and functionality. Some dynamic management views by themselves reveal important performance-related information; others have to be joined to other dynamic management views to get specific information. The following sections illustrate sample code that uses dynamic management views to retrieve common performance-related information. These queries can also be used as Data Collectors, as explained in Chapter 3. For more information about each dynamic management view, see Dynamic Management Views and Functions in SQL Server 2008 Books Online.

Currently Executing Queries

The dynamic management views sys.dm_exec_requests and sys.dm_exec_sql_text can be used together to find all currently executing SQL statements. The following query shows all currently executing SQL statements:
select r.session_id
    ,r.status
    ,substring(qt.text, r.statement_start_offset/2,
        (case when r.statement_end_offset = -1
              then len(convert(nvarchar(max), qt.text))*2
              else r.statement_end_offset
         end - r.statement_start_offset)/2) as query_text  -- the statement executing right now
    ,qt.dbid
    ,qt.objectid
    ,r.cpu_time
    ,r.total_elapsed_time
    ,r.reads
    ,r.writes
    ,r.logical_reads
    ,r.scheduler_id
from sys.dm_exec_requests r
cross apply sys.dm_exec_sql_text(r.sql_handle) as qt
where r.session_id > 50
order by r.scheduler_id, r.status, r.session_id ;
GO

Top 10 Processor Consumers

The dynamic management views sys.dm_exec_query_stats and sys.dm_exec_sql_text can be used to identify the top processor consumers. The following query retrieves the top 10 processor consumers:

SELECT TOP 10
    qs.total_worker_time/qs.execution_count as [Avg CPU Time],
    SUBSTRING(qt.text, qs.statement_start_offset/2,
        (case when qs.statement_end_offset = -1
              then len(convert(nvarchar(max), qt.text))*2
              else qs.statement_end_offset
         end - qs.statement_start_offset)/2) as query_text,
    qt.dbid,
    dbname = db_name(qt.dbid),
    qt.objectid
FROM sys.dm_exec_query_stats qs
cross apply sys.dm_exec_sql_text(qs.sql_handle) as qt
ORDER BY [Avg CPU Time] DESC ;
GO

Top 10 I/O Consumers

The dynamic management views sys.dm_exec_query_stats and sys.dm_exec_sql_text can be used to identify the top I/O consumers. The following query retrieves the top 10 I/O consumers:

SELECT TOP 10
    (qs.total_logical_reads + qs.total_logical_writes)/qs.execution_count as [Avg IO],
    SUBSTRING(qt.text, qs.statement_start_offset/2,
        (case when qs.statement_end_offset = -1
              then len(convert(nvarchar(max), qt.text))*2
              else qs.statement_end_offset
         end - qs.statement_start_offset)/2) as query_text,
    qt.dbid,
    dbname = db_name(qt.dbid),
    qt.objectid,
    qs.sql_handle,
    qs.plan_handle
FROM sys.dm_exec_query_stats qs
cross apply sys.dm_exec_sql_text(qs.sql_handle) as qt
ORDER BY [Avg IO] DESC ;
GO

Top 10 Queries and Query Plans for Duration



The dynamic management views sys.dm_exec_query_stats, sys.dm_exec_sql_text, and sys.dm_exec_query_plan can be joined to retrieve the top 10 queries and their execution plans by elapsed duration. The elapsed duration is in microseconds. The query plan is returned in XML format.

SELECT TOP 10
    qs.last_elapsed_time as 'Elapsed Time',
    SUBSTRING(qt.text, qs.statement_start_offset/2,
        (case when qs.statement_end_offset = -1
              then len(convert(nvarchar(max), qt.text))*2
              else qs.statement_end_offset
         end - qs.statement_start_offset)/2) as query_text,
    qt.dbid,
    dbname = db_name(qt.dbid),
    qt.objectid,
    qp.query_plan
FROM sys.dm_exec_query_stats qs
cross apply sys.dm_exec_sql_text(qs.sql_handle) as qt
cross apply sys.dm_exec_query_plan(qs.plan_handle) as qp
ORDER BY [Elapsed Time] DESC ;
GO

System Waits

The dynamic management view sys.dm_os_waiting_tasks lists all the current waiting tasks and the wait types associated with them. This dynamic management view is useful for getting an overall feel for the current system waits. The following code retrieves the current waiting tasks:

select session_id
    ,exec_context_id
    ,wait_type
    ,wait_duration_ms
    ,blocking_session_id
from sys.dm_os_waiting_tasks
where session_id > 50
order by session_id, exec_context_id ;
GO

For historical wait statistics in the system, or in other words, for statistical information on waits that have already been completed, use the sys.dm_os_wait_stats dynamic management view as follows:
SELECT * from sys.dm_os_wait_stats ORDER BY wait_time_ms DESC ; GO
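Many of the wait types reported by sys.dm_os_wait_stats are benign background or idle waits. A common refinement, sketched below with an illustrative (not exhaustive) exclusion list, is to filter those out so that the waits worth investigating rise to the top.

-- Exclude some common idle/background wait types; extend the list as needed.
SELECT TOP 20 wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN ('SLEEP_TASK', 'BROKER_TASK_STOP', 'BROKER_TO_FLUSH',
                        'SQLTRACE_BUFFER_FLUSH', 'LAZYWRITER_SLEEP',
                        'CHECKPOINT_QUEUE', 'REQUEST_FOR_DEADLOCK_SEARCH',
                        'XE_TIMER_EVENT', 'WAITFOR', 'LOGMGR_QUEUE')
ORDER BY wait_time_ms DESC ;
GO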


It is important to note that these statistics are not persisted across SQL Server restarts, and all data is cumulative since the last time the statistics were reset or the server was started. To reset wait counts you can use the following command:
DBCC SQLPERF ('sys.dm_os_wait_stats', CLEAR) ; GO

Blocking-Related Dynamic Management Views

The dynamic management views sys.dm_tran_locks, sys.dm_os_waiting_tasks, sys.dm_exec_requests, sys.dm_exec_sql_text, and sys.sysprocesses can be used to retrieve the blocker and blocked SQL text and the requested lock modes as follows:

select t1.resource_type
    ,db_name(resource_database_id) as [database]
    ,t1.resource_associated_entity_id as [blk object]
    ,t1.request_mode
    ,t1.request_session_id                          -- spid of waiter
    ,(select text from sys.dm_exec_requests as r    -- get sql for waiter
        cross apply sys.dm_exec_sql_text(r.sql_handle)
        where r.session_id = t1.request_session_id) as waiter_text
    ,t2.blocking_session_id                         -- spid of blocker
    ,(select text from sys.sysprocesses as p        -- get sql for blocker
        cross apply sys.dm_exec_sql_text(p.sql_handle)
        where p.spid = t2.blocking_session_id) as blocker_text
from sys.dm_tran_locks as t1, sys.dm_os_waiting_tasks as t2
where t1.lock_owner_address = t2.resource_address ;
GO

I/O-Related Dynamic Management Views

Average I/O Stalls

The dynamic management view sys.dm_io_virtual_file_stats can be used to identify I/O stalls as follows:

select database_id, file_id
    ,io_stall_read_ms
    ,num_of_reads
    ,cast(io_stall_read_ms/(1.0+num_of_reads) as numeric(10,1)) as 'avg_read_stall_ms'
    ,io_stall_write_ms
    ,num_of_writes
    ,cast(io_stall_write_ms/(1.0+num_of_writes) as numeric(10,1)) as 'avg_write_stall_ms'
    ,io_stall_read_ms + io_stall_write_ms as io_stalls
    ,num_of_reads + num_of_writes as total_io
    ,cast((io_stall_read_ms+io_stall_write_ms)/(1.0+num_of_reads+num_of_writes) as numeric(10,1)) as 'avg_io_stall_ms'
from sys.dm_io_virtual_file_stats(null,null)
order by avg_io_stall_ms desc ;
GO

5.2.4 Finding a Showplan


Execution plans can be produced in a text format or in the newly introduced XML format, with or without additional performance estimates, or in a graphical representation. The following SQL is shown with all of these representations:
SELECT EMPLID FROM PS_DEDUCTION_BAL B1 WHERE B1.EMPLID = 'PA100000001' AND B1.COMPANY = 'GBI' AND B1.BALANCE_ID = 'CY' AND B1.BALANCE_YEAR = 2000 AND B1.BALANCE_PERIOD = ( SELECT MAX(DB2.BALANCE_PERIOD) FROM PS_DEDUCTION_BAL DB2 WHERE DB2.EMPLID = B1.EMPLID AND DB2.COMPANY = B1.COMPANY AND DB2.BALANCE_ID = B1.BALANCE_ID AND DB2.BALANCE_YEAR = B1.BALANCE_YEAR AND DB2.DEDCD = B1.DEDCD AND DB2.DED_CLASS = B1.DED_CLASS AND DB2.BENEFIT_RCD_NBR = B1.BENEFIT_RCD_NBR AND DB2.BALANCE_PERIOD = 4 ) AND B1.DED_YTD <> 0 ; GO

SHOWPLAN_TEXT or SHOWPLAN_XML shows all of the steps involved in processing the query, including the order of table access, mode of access, types of joins used, and so on, as in the following example.
SET SHOWPLAN_TEXT ON ;   -- (or you can use SET SHOWPLAN_XML ON)
GO
SELECT EMPLID FROM PS_DEDUCTION_BAL B1
WHERE B1.EMPLID = 'PA100000001'
  AND B1.COMPANY = 'GBI'
  AND B1.BALANCE_ID = 'CY'
  AND B1.BALANCE_YEAR = 2000
  AND B1.BALANCE_PERIOD = (
      SELECT MAX(DB2.BALANCE_PERIOD)
      FROM PS_DEDUCTION_BAL DB2
      WHERE DB2.EMPLID = B1.EMPLID
        AND DB2.COMPANY = B1.COMPANY
        AND DB2.BALANCE_ID = B1.BALANCE_ID
        AND DB2.BALANCE_YEAR = B1.BALANCE_YEAR
        AND DB2.DEDCD = B1.DEDCD
        AND DB2.DED_CLASS = B1.DED_CLASS
        AND DB2.BENEFIT_RCD_NBR = B1.BENEFIT_RCD_NBR
        AND DB2.BALANCE_PERIOD = 4 )
  AND B1.DED_YTD <> 0 ;
GO
SET SHOWPLAN_TEXT OFF ;
GO

Here, the inner query is resolved first by Clustered Index Seek on PS_DEDUCTION_BAL. The outer query is resolved next using Clustered Index Seek on PS_DEDUCTION_BAL. The two result sets are merged using a Hash Match join. SHOWPLAN_ALL provides the same information as SHOWPLAN_TEXT, plus estimates of number of rows that are expected to meet the search criteria, estimated size of the result rows, estimated processor time, total cost estimate, and so on, as in the following example:
SET SHOWPLAN_ALL ON ;
GO
SELECT EMPLID FROM PS_DEDUCTION_BAL B1
WHERE B1.EMPLID = 'PA100000001'
  AND B1.COMPANY = 'GBI'
  AND B1.BALANCE_ID = 'CY'
  AND B1.BALANCE_YEAR = 2000
  AND B1.BALANCE_PERIOD = (
      SELECT MAX(DB2.BALANCE_PERIOD)
      FROM PS_DEDUCTION_BAL DB2
      WHERE DB2.EMPLID = B1.EMPLID
        AND DB2.COMPANY = B1.COMPANY
        AND DB2.BALANCE_ID = B1.BALANCE_ID
        AND DB2.BALANCE_YEAR = B1.BALANCE_YEAR
        AND DB2.DEDCD = B1.DEDCD
        AND DB2.DED_CLASS = B1.DED_CLASS
        AND DB2.BENEFIT_RCD_NBR = B1.BENEFIT_RCD_NBR
        AND DB2.BALANCE_PERIOD = 4 )
  AND B1.DED_YTD <> 0 ;
GO

SET SHOWPLAN_ALL OFF ; GO
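The estimated plans above do not execute the statement. If actual run-time row counts are needed, the statement can be executed with SET STATISTICS PROFILE ON (or SET STATISTICS XML ON); the following is a small sketch of the pattern rather than an example taken from the sections above.

SET STATISTICS PROFILE ON ;   -- or SET STATISTICS XML ON for an XML plan
GO
-- Run the query of interest here. The statement executes, and the plan is
-- returned as an extra result set that includes the actual Rows and Executes
-- values alongside the SHOWPLAN_ALL-style estimates.
SET STATISTICS PROFILE OFF ;
GO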

A graphical Showplan can be obtained in SQL Server Management Studio by selecting Display Estimated Execution Plan from the Query menu or by pressing Ctrl+L. In this case, the query is not executed. Alternatively, the query can be executed and the actual execution plan obtained by selecting Include Actual Execution Plan from the Query menu (Ctrl+M) before running the query. The following is the graphical plan for the same SQL:

5.2.4.1 Getting an Execution Plan Using Bind Variables


PeopleSoft applications often use bind variables, instead of literals, to reduce compilations on repetitive SQL. When analyzing SQL that uses bind variables, use parameters to get an accurate Showplan. If you use literals instead of parameters, it may result in a different execution plan. For example:
SET SHOWPLAN_TEXT ON ;   -- (or SET SHOWPLAN_XML ON)
GO
DECLARE @P1 CHAR(11), @P2 CHAR(3), @P3 CHAR(2), @P4 smallint, @P5 smallint
SELECT EMPLID FROM PS_DEDUCTION_BAL B1
WHERE B1.EMPLID = @P1
  AND B1.COMPANY = @P2
  AND B1.BALANCE_ID = @P3
  AND B1.BALANCE_YEAR = @P4
  AND B1.BALANCE_PERIOD = (
      SELECT MAX(DB2.BALANCE_PERIOD)
      FROM PS_DEDUCTION_BAL DB2
      WHERE DB2.EMPLID = B1.EMPLID
        AND DB2.COMPANY = B1.COMPANY
        AND DB2.BALANCE_ID = B1.BALANCE_ID
        AND DB2.BALANCE_YEAR = B1.BALANCE_YEAR
        AND DB2.DEDCD = B1.DEDCD
        AND DB2.DED_CLASS = B1.DED_CLASS
        AND DB2.BENEFIT_RCD_NBR = B1.BENEFIT_RCD_NBR
        AND DB2.BALANCE_PERIOD = @P5 )
  AND B1.DED_YTD <> 0 ;
GO
SET SHOWPLAN_TEXT OFF ;
GO

Note: You can use sp_help nameOfYourTable to determine the data types for the required columns.

5.2.4.2 Getting an Execution Plan Using a Stored Statement


Most of the SQL statements used by PeopleSoft applications are executed as prepared statements. Using the method shown in this section is a more accurate way to find an execution plan, and it results in an execution plan that is the same as, or closer to, the one produced when the same statement is called by the PeopleSoft application. For the following example, first turn on Include Actual Execution Plan by pressing Ctrl+M:
declare @P0 INT, @P1 INT, @P2 INT, @P3 INT, @P4 INT
set @P1=0
set @P2=4104   -- (Cursor Type) 8 Static + 4096 Parameterized
set @P3=8193   -- (Cursor Concurrency) 1 READ_ONLY + 8192 ALLOW_DIRECT
exec sp_cursorprepexec @P0 output, @P1 output,
    N'@P7 CHAR(11), @P8 CHAR(5), @P9 CHAR(2) ',
    N'SELECT EMPLID FROM PS_DEDUCTION_BAL B1
      WHERE B1.EMPLID = @P7 AND B1.COMPANY = @P8 AND B1.BALANCE_ID = @P9
        AND B1.BALANCE_YEAR = 2000
        AND B1.BALANCE_PERIOD = (SELECT MAX(DB2.BALANCE_PERIOD)
                                 FROM PS_DEDUCTION_BAL DB2
                                 WHERE DB2.EMPLID = B1.EMPLID
                                   AND DB2.COMPANY = B1.COMPANY
                                   AND DB2.BALANCE_ID = B1.BALANCE_ID
                                   AND DB2.BALANCE_YEAR = B1.BALANCE_YEAR
                                   AND DB2.DEDCD = B1.DEDCD
                                   AND DB2.DED_CLASS = B1.DED_CLASS
                                   AND DB2.BENEFIT_RCD_NBR = B1.BENEFIT_RCD_NBR
                                   AND DB2.BALANCE_PERIOD = 4 )
        AND B1.DED_YTD <> 0
      ORDER BY PLAN_TYPE, BENEFIT_PLAN, DEDCD, DED_CLASS',
    @P2 output, @P3 output, @P4 output,
    'PA100000001', 'GBI', 'CY'
exec sp_cursorfetch @P1

Note: In this example, @P2 defines the cursor type, @P3 defines cursor concurrency, @P7, @P8, and @P9 are the user-defined parameters used in the query.

5.2.5 Finding Current Users and Processes


From the early versions of SQL Server through SQL Server 2008, database administrators have often used the system stored procedure sp_who to get information about current SQL Server users and processes. Alternatively, you can use its close variant sp_who2, which provides some additional information and formats the results better. When the submitted command is more than 32 characters, neither sp_who nor sp_who2 shows the complete command. The command DBCC INPUTBUFFER(spid) offers some help and shows the full text of the command, as in the following example:
DBCC INPUTBUFFER (69) ;

In the PeopleSoft environment, because most of the SQL statements are executed as RPCs, neither sp_who nor DBCC would help find the actual command. Also, because the application server masks the actual user ID, it is difficult to find the SPID corresponding to a user. The context_info and sql_handle columns of the sys.dm_exec_requests dynamic management view, and the sys.dm_exec_sql_text dynamic management view can be used to get the actual SQL, as in the following example:
select session_id, cast(context_info as varchar(max)), qt.text from sys.dm_exec_requests cross apply sys.dm_exec_sql_text(sql_handle) qt ; GO

Zero-cost plans are not cached by default. If you would like to retrieve the SQL statement for those plans, use trace flag 2861, which instructs SQL Server to cache zero-cost plans that it would otherwise not cache. However, this trace flag should only be used on development or test systems, because caching all query plans may add significant overhead and memory pressure, and it should be disabled as soon as your investigation is complete. To temporarily enable zero-cost plan caching, use the DBCC TRACEON statement as follows:
DBCC TRACEON (2861); GO

To disable zero cost plan caching use the TRACEOFF statement as follows:
DBCC TRACEOFF (2861); GO


You can use DBCC TRACESTATUS to determine the status of a particular trace flag, or of all the trace flags enabled in the SQL Server instance, using one of the following commands:
DBCC TRACESTATUS (2861) -- Used to view status of traceflag 2861.

or
DBCC TRACESTATUS (-1) -- Used to view status of all traceflags.

Note: If you turn on trace flag 2861 instance-wide with DBCC TRACEON (2861, -1), system performance can be affected severely. You can use this on test servers, but it is recommended that you never use it on a production server. See Appendix B, SP_PSWHO, for a sample stored procedure that reveals much more information and can be used as an alternative to sp_who in PeopleSoft environments.

5.2.6 Decoding the Object Blocking a Process


Sometimes it is necessary to troubleshoot blocking issues that do not involve deadlocks. The stored procedures sp_who and SP_PSWHO are helpful. You can also query the system views to find this information directly, as in the following procedure. 1. Issue the following command to determine whether a process is blocked by another process:
select session_id, blocking_session_id from sys.dm_exec_requests where blocking_session_id <> 0 ; GO

The output indicates that process 87 is blocked by process 122.


session_id   blocking_session_id
----------   -------------------
87           122

2. To find the ID of the blocking object, issue the following command:


sp_lock

This produces the following output and indicates that process 87 is waiting on object id 1359395962.
spid   dbid   ObjId        IndId  Type  Resource   Mode    Status
------ ------ -----------  ------ ----  ---------  ------  ------
87     7      0            0      DB               S       GRANT
87     7      0            0      PAG   3:6086     IS      GRANT
87     7      1359395962   0      RID   3:6086:0   S       GRANT
87     7      893962261    9      PAG   1:569718   IS      GRANT
87     7      1359395962   0      RID   3:6086:1   S       WAIT
87     7      1058974999   0      TAB              Sch-S   GRANT

3. To decode the object name from the object id, issue the following command:
select name, type from sys.objects where object_id = 1359395962; GO

This produces the following output, indicating that you are waiting for PS_TSE_JHDR_FLD. The type is U, indicating it is a user table.
name                 type
-------------------  ----
PS_TSE_JHDR_FLD      U
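The same lock information is available through the sys.dm_tran_locks dynamic management view, which can save the separate decoding step. The following is an assumption-laden sketch, to be run in the database of interest: for OBJECT resources the associated entity ID is the object_id itself, while for PAGE, RID, and KEY resources it is a hobt_id that sys.partitions can map back to an object.

-- Decode the object behind each waiting lock request via sys.dm_tran_locks.
select tl.request_session_id,
       tl.resource_type,
       tl.request_mode,
       tl.request_status,
       coalesce(object_name(p.object_id),
                object_name(case when tl.resource_type = 'OBJECT'
                                 then tl.resource_associated_entity_id end)) as object_name
from sys.dm_tran_locks tl
left join sys.partitions p
       on p.hobt_id = tl.resource_associated_entity_id
where tl.request_status = 'WAIT' ;
GO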

5.2.7 Selected DBCC Commands


The following list shows DBCC commands that are relevant to performance tuning.

DBCC CACHESTATS
Displays cache statistics (hit ratio, object count, and so on) for each object.

DBCC TRACEON (trace flag #)
Enables the specified trace flag.

DBCC TRACEOFF (trace flag #)
Disables the specified trace flag(s).

DBCC TRACESTATUS (trace flag #)
Displays the status of trace flags.

For more information about DBCC commands and their usage, see DBCC in SQL Server 2008 Books Online.

5.2.8 Using Hints


PeopleSoft applications are developed to be database platform agnostic, implying that the code, or at least a major part of it, is completely independent of the database. Because hints are database-specific, they are never used in the code by default. However, there may be some situations where hints could help. For such cases, one or more hints can be specified in the query, or applied through a plan guide, as explained below. SQL Server provides hints for query optimization. These hints are useful when you are certain how a particular statement must be executed in your environment because of your data or business procedure. You can explicitly direct the optimizer to favor a particular index, use a certain type of join, or follow a particular query plan.


As an alternate to query hints, you can also directly fix the query execution plan using the Plan Freezing feature explained in Chapter 3.

5.2.8.1 Query Processing Hints


The OPTION clause comes in handy for queries that do not follow ANSI-style syntax. Hints can be specified as part of the OPTION clause of a SELECT or UPDATE statement. This specifies that the indicated query hint should be used throughout the query. Each query hint can be specified only once, although multiple query hints are permitted. The OPTION clause must be specified with the outermost query of the statement. The query hint affects all operators in the statement.

OPTION Syntax:

[ OPTION ( <query_hint> [ ,...n ] ) ]
<query_hint> ::=
{   { HASH | ORDER } GROUP
  | { CONCAT | HASH | MERGE } UNION
  | { LOOP | MERGE | HASH } JOIN
  | FAST number_rows
  | FORCE ORDER
  | MAXDOP number_of_processors
  | OPTIMIZE FOR ( @variable_name { UNKNOWN | = literal_constant } [ ,...n ] )
  | OPTIMIZE FOR UNKNOWN
  | PARAMETERIZATION { SIMPLE | FORCED }
  | RECOMPILE
  | ROBUST PLAN
  | KEEP PLAN
  | KEEPFIXED PLAN
  | EXPAND VIEWS
  | MAXRECURSION number
  | USE PLAN N'xml_plan'
  | FORCESEEK
  | INDEX
}

For a description and usage of each query hint, see "Query Hint" in SQL Server 2008 Books Online. The following example demonstrates how to use a hint with the OPTION clause:
SELECT 75, 'ABMD', GETDATE(), 10623, 21, 'ABC_ACT_ID', ' ', ' ', ' ', ' ',
       A.ABC_OBJ_ID, ' ', ' ', ' ', ' ', 'ACT_TBL'
FROM PS_ACT_T2 A
WHERE A.ABC_TAR_OBJ = 'Y'
  AND A.ABC_SRC_OBJ = 'Y'
  AND NOT EXISTS (SELECT 'X'
                  FROM PS_DRIVER_T2 B, PS_DRIVER_TAR2_S2 C
                  WHERE B.ABC_DRIVER_ID = C.ABC_DRIVER_ID
                    AND C.ABC_OBJ_ID = A.ABC_OBJ_ID
                    AND B.ABC_DRIVER_SOURCE = 'A')
  AND EXISTS (SELECT 'X'
              FROM PS_DRIVER_T2 B1, PS_DRIVER_TAR2_S2 C1
              WHERE B1.ABC_DRIVER_ID = C1.ABC_DRIVER_ID
                AND C1.ABC_OBJ_ID = A.ABC_OBJ_ID
                AND B1.ABC_DRIVER_TARGET = 'A')
OPTION (MERGE JOIN) ;
GO

If you need to use query hints and cannot directly modify the query, you can use the plan guides feature explained in section 5.2.8.4.

5.2.8.2 Index Hints


One or more specific indexes can be used by naming them directly in the FROM clause. The following example shows how to force the query to use the index named PS_LEDGER.
SELECT LEDGER, ACCOUNT, DEPTID FROM PS_LEDGER (INDEX (PS_LEDGER)) WHERE BUSINESS_UNIT like 'BU%' ; GO

If you need to use index hints and cannot directly modify the query, you can use the plan guides feature explained in section 5.2.8.4.

5.2.8.3 OPTIMIZE FOR Query Hint


For queries with parameters, this query hint can be useful to optimize the query for a specific parameter value specified in the OPTIMIZE FOR clause. The hint can be used in scenarios when the parameter has a known value at compile time, but you want a different value to be used for compilation to yield better performance, because the cached plan may be optimized for a non-representative parameter value. The other scenario in which this hint could be used is when the parameter value is known only during runtime, but, for better performance, the query plan should be optimized using a fixed specific value. This option is useful in cases when there is just one optimal value for a parameter and it is easily identifiable and known. The following is an example of the OPTIMIZE FOR query hint. In this example, it is beneficial to optimize the query for a specific Business Unit, because this Business Unit is the most frequently queried Business Unit.
SELECT DISTINCT BUSINESS_UNIT, RECEIVER_ID, BILL_OF_LADING, BUSINESS_UNIT_PO,
       DESCR254_MIXED, INV_ITEM_ID, ORIGIN, PO_ID, SHIPTO_ID, SHIPMENT_NO,
       VENDOR_ID, VENDOR_NAME1, RECEIPT_DT, (CONVERT(CHAR(10),RECEIPT_DT,121)),
       RECV_INQ_STATUS
FROM PS_RECV_INQ_SRCH
WHERE BUSINESS_UNIT = @BU
  AND PO_ID LIKE 'MPO%'
  AND RECEIPT_DT BETWEEN '2006-01-01' AND '2006-08-25'
ORDER BY PO_ID, RECEIPT_DT, BUSINESS_UNIT, RECEIVER_ID DESC
OPTION ( OPTIMIZE FOR (@BU = ' PO001') ) ;
GO

The OPTIMIZE FOR hint can also be used with plan guides explained in the next section.

5.2.8.4 Plan Guides


The plan guides feature presents a method to inject query hints into T-SQL statements without requiring any modification to the query itself. This is particularly useful for PeopleSoft application queries that cannot be easily modified, for example, compiled COBOL jobs. The plan guides feature uses an internal lookup table (sys.plan_guides) to map the original query to a substitute query or template. Every T-SQL query statement or batch is first compared against the optimizer's cached plan store to check for a match. If one exists, the cached query plan is used to execute the query. If not, the query or batch is checked against the sys.plan_guides table for a match. If an active plan guide exists for the statement or batch, the original matching statement is substituted with the one from the plan guide. Once the statement is substituted, the query plan is compiled and cached, and the query is executed. The following flowchart depicts the flow of operations involved in the query mapping process.


You can use plan guides to specify any query hint individually, or in valid combinations. Plan guides are administered using two stored procedures: sp_create_plan_guide creates plan guides, and sp_control_plan_guide drops, disables, or enables them. Even though you can view the plan guides in the sys.plan_guides table if you have the correct access privileges, they should never be modified directly; you should always use the stored procedures provided to administer them. See Designing and Implementing Plan Guides in SQL Server 2008 Books Online for more information. The fabricated example below depicts a plan guide created for an SQL statement originating in the PeopleSoft Enterprise Human Capital Management (HCM) application that is used to inject an OPTIMIZE FOR query hint without modifying the query in the application. Note: This example is simply provided to explain the plan guides feature. The query hint specified is not really required by the query and does not help resolve any issue. The original query contained in the API cursor-based query is shown in the following Microsoft SQL Server Profiler output:
declare @P1 int
set @P1=15
declare @P2 int
set @P2=180150008
declare @P3 int
set @P3=8
declare @P4 int
set @P4=1
declare @P5 int
set @P5=1
exec sp_cursorprepexec @P1 output, @P2 output,
    N'@P1 decimal(4,1)',
    N'SELECT MAX(MESSAGE_SEQ) FROM PS_MESSAGE_LOG WHERE PROCESS_INSTANCE = @P1 ',
    @P3 output, @P4 output, @P5 output, 322.0;
GO


You can create a plan guide to inject the OPTIMIZE FOR query hint using the DDL that follows to optimize the query for a value of @P1 = 14.0.
sp_create_plan_guide
    @name = N'MSGS_SEQ_PlanGuide',
    @stmt = N'SELECT MAX(MESSAGE_SEQ) FROM PS_MESSAGE_LOG WHERE PROCESS_INSTANCE = @P1',
    @type = N'SQL',
    @module_or_batch = NULL,
    @params = N'@P1 decimal(4,1)',
    @hints = N'OPTION (OPTIMIZE FOR (@P1 = 14.0))' ;
GO
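Once created, a plan guide can be disabled, re-enabled, or dropped with the sp_control_plan_guide stored procedure. The following brief sketch reuses the plan guide name from the example above.

EXEC sp_control_plan_guide N'DISABLE', N'MSGS_SEQ_PlanGuide' ;   -- temporarily turn it off
EXEC sp_control_plan_guide N'ENABLE',  N'MSGS_SEQ_PlanGuide' ;   -- turn it back on
EXEC sp_control_plan_guide N'DROP',    N'MSGS_SEQ_PlanGuide' ;   -- remove it entirely
GO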

Plan guides can also be used to match a set of queries, where the only difference among them is the value of the literal being passed in. This is done using a plan guide template. Once you create a plan guide template, the template will match all invocations of the specific query irrespective of the literal values. For example, to specify the PARAMETERIZATION FORCED query hint for all invocations of the following sample query:
SELECT MAX(MESSAGE_SEQ) FROM PS_MESSAGE_LOG WHERE PROCESS_INSTANCE = 28 ;
GO

You can create the following plan guide template using the sp_get_query_template stored procedure.

DECLARE @stmt nvarchar(max) ;
DECLARE @params nvarchar(max) ;
EXEC sp_get_query_template
    N'SELECT MAX(MESSAGE_SEQ) FROM PS_MESSAGE_LOG WHERE PROCESS_INSTANCE = 28',
    @stmt OUTPUT,
    @params OUTPUT
EXEC sp_create_plan_guide
    N'TemplateBased_PG',
    @stmt,
    N'TEMPLATE',
    NULL,
    @params,
    N'OPTION (PARAMETERIZATION FORCED)' ;
GO

Note: For more information about this stored procedure, see sp_get_query_template in SQL Server 2008 Books Online. Plan guides are scoped to a particular database and can be viewed by querying the sys.plan_guides table. For example, the following statement lists all the plan guides in the HR90 database, as shown in the figure:
USE HR90 ; GO SELECT * FROM sys.plan_guides ; GO


The creation of a plan guide does not guarantee its use for a particular query. You should always make sure that the plan guides you create are applied to the particular query, and that the actions specified in the query hint have the desired effects.
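One way to check whether existing plan guides are still usable, for example after an index or schema change, is the sys.fn_validate_plan_guide function introduced in SQL Server 2008; a plan guide that validates successfully returns no rows. The exact output columns are not enumerated here, so this sketch simply returns everything the function reports.

-- List plan guides that fail validation (valid plan guides return no rows).
SELECT pg.plan_guide_id, pg.name, v.*
FROM sys.plan_guides AS pg
CROSS APPLY sys.fn_validate_plan_guide(pg.plan_guide_id) AS v ;
GO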

5.2.8.5 USE PLAN Query Hint


SQL Server generates optimal query plans for most queries; however, there are times, especially with complex applications like PeopleSoft applications, when certain queries benefit from user intervention and some level of hand tuning. In earlier versions of SQL Server, you had to adopt a trial-and-error approach to force a query plan using one or more hints, a process that was tedious and time consuming. SQL Server 2005 introduced the USE PLAN query hint, which guides the optimizer to create a query plan based on a specified XML query plan, giving users virtually complete control over query plan generation. The USE PLAN query hint is specified in an OPTION clause along with an XML Showplan. In the following example, Transact-SQL originating from the PeopleSoft HCM application is hinted with the USE PLAN query hint. The original query is as follows:


SELECT MAX(MESSAGE_SEQ) FROM PS_MESSAGE_LOG WHERE PROCESS_INSTANCE = @P1 ;

The following is the same query with the USE PLAN query hint specified:
SELECT MAX(MESSAGE_SEQ) FROM PS_MESSAGE_LOG WHERE PROCESS_INSTANCE = @P1 OPTION (USE PLAN N' <ShowPlanXML xmlns="http://schemas.microsoft.com/sqlserver/2004/07/showp lan" Version="1.0" Build="9.00.2047.00"> <BatchSequence> <Batch> <Statements> <StmtSimple StatementText="DECLARE @P1 decimal(4,1)&#xA;SELECT MAX(MESSAGE_SEQ) FROM PS_MESSAGE_LOG WHERE PROCESS_INSTANCE = @P1" StatementId="1" StatementCompId="1" StatementType="SELECT" StatementSubTreeCost="0.00328942" StatementEstRows="1" StatementOptmLevel="FULL" StatementOptmEarlyAbortReason="GoodEnoughPlanFound"> <StatementSetOptions QUOTED_IDENTIFIER="false" ARITHABORT="true" CONCAT_NULL_YIELDS_NULL="false" ANSI_NULLS="false" ANSI_PADDING="false" ANSI_WARNINGS="false" NUMERIC_ROUNDABORT="false" /> <QueryPlan CachedPlanSize="9"> <RelOp NodeId="0" PhysicalOp="Stream Aggregate" LogicalOp="Aggregate" EstimateRows="1" EstimateIO="0" EstimateCPU="1.1e-006" AvgRowSize="11" EstimatedTotalSubtreeCost="0.00328942" Parallel="0" EstimateRebinds="0" EstimateRewinds="0"> <OutputList> <ColumnReference Column="Expr1004" /> </OutputList> <StreamAggregate> <DefinedValues> <DefinedValue> <ColumnReference Column="Expr1004" /> <ScalarOperator ScalarString="MAX([HR90].[dbo].[PS_MESSAGE_LOG].[MESSAGE_SE Q])"> <Aggregate Distinct="0" AggType="MAX"> <ScalarOperator> <Identifier> <ColumnReference Database="[HR90]" Schema="[dbo]" Table="[PS_MESSAGE_LOG]" Column="MESSAGE_SEQ" /> </Identifier> </ScalarOperator> </Aggregate>


</ScalarOperator> </DefinedValue> </DefinedValues> <RelOp NodeId="1" PhysicalOp="Table Scan" LogicalOp="Table Scan" EstimateRows="1" EstimateIO="0.003125" EstimateCPU="0.0001614" AvgRowSize="20" EstimatedTotalSubtreeCost="0.0032864" Parallel="0" EstimateRebinds="0" EstimateRewinds="0"> <OutputList> <ColumnReference Database="[HR90]" Schema="[dbo]" Table="[PS_MESSAGE_LOG]" Column="MESSAGE_SEQ" /> </OutputList> <TableScan Ordered="0" ForcedIndex="0" NoExpandHint="0"> <DefinedValues> <DefinedValue> <ColumnReference Database="[HR90]" Schema="[dbo]" Table="[PS_MESSAGE_LOG]" Column="MESSAGE_SEQ" /> </DefinedValue> </DefinedValues> <Object Database="[HR90]" Schema="[dbo]" Table="[PS_MESSAGE_LOG]" /> <Predicate> <ScalarOperator ScalarString="[HR90].[dbo].[PS_MESSAGE_LOG].[PROCESS_INSTAN CE]=[@P1]"> <Compare CompareOp="EQ"> <ScalarOperator> <Identifier> <ColumnReference Database="[HR90]" Schema="[dbo]" Table="[PS_MESSAGE_LOG]" Column="PROCESS_INSTANCE" /> </Identifier> </ScalarOperator> <ScalarOperator> <Identifier> <ColumnReference Column="@P1" /> </Identifier> </ScalarOperator> </Compare> </ScalarOperator> </Predicate> </TableScan> </RelOp> </StreamAggregate> </RelOp> </QueryPlan> </StmtSimple> </Statements>

</Batch> </BatchSequence> </ShowPlanXML>') ;

In this example, the USE PLAN query hint and <xml showplan> are specified via the OPTION clause following the original SELECT query. While this is a somewhat trivial example shown in order to introduce the feature, the true power of this feature lies in being able to force the query plan for more complex queries that involve multiple table joins with multiple predicates and aggregate clauses. While the USE PLAN query hint provides a powerful option to influence the execution of a query, it should be used selectively and only by experienced users as a last resort in query tuning. Once specified, it locks down the query plan and prevents the optimizer from adapting to changing data shapes, new indexes, and improved query execution algorithms in successive SQL Server releases, service packs, and quick-fix engineering (QFE) changes. The USE PLAN query hint should always be specified via a plan guide and never be directly coded into the PeopleSoft application code. The corresponding plan guide that specifies the USE PLAN query hint for the previous SQL statement is as follows:
sp_create_plan_guide @name = N'UsePlan_PG', @stmt = N'SELECT MAX(MESSAGE_SEQ) FROM PS_MESSAGE_LOG WHERE PROCESS_INSTANCE = @P1', @type = N'SQL', @module_or_batch = NULL, @params = N'@P1 decimal(4,1)', @hints = N'OPTION (USE PLAN N'' <ShowPlanXML xmlns="http://schemas.microsoft.com/sqlserver/2004/07/showp lan" Version="1.0" Build="9.00.2047.00"> <BatchSequence> <Batch> <Statements> ... </Statements> </Batch> </BatchSequence> </ShowPlanXML>'')';

For readability purposes, a large section of the XML query plan has been replaced by the ellipsis. For an in-depth explanation for USE PLAN, including the procedure to capture the XML Showplan and its usage restrictions, see Plan Forcing Scenarios and Examples in SQL Server 2008 Books Online.


5.2.9 Correlating a Trace with Windows Performance Log Data


Troubleshooting and tuning line-of-business applications such as PeopleSoft often require combining performance data from various sources to draw definitive conclusions. It is a fairly common practice to combine data from SQL Server Profiler and System Monitor to troubleshoot performance. If you are troubleshooting a slow query, it is beneficial to analyze the reads, writes, processor, and duration data from SQL Server Profiler while reviewing the physical disk and processor counters (for example) from System Monitor. In SQL Server 2008 you can open a Windows performance log, choose the counters you want to correlate with a trace, and display the selected performance counters alongside the trace in the SQL Server Profiler graphical user interface. When you select an event in the trace window, a vertical red bar in the System Monitor data window pane of SQL Server Profiler indicates the performance log data that correlates with the selected trace event.

To correlate a trace with Windows performance log data, perform the following steps:

1. Start SQL Server Profiler.
2. Open a saved SQL Server Profiler trace file or trace table. The trace must contain both the StartTime and EndTime data columns for accurate correlation with System Monitor data. Note: you cannot correlate a running trace that is still collecting event data.
3. On the SQL Server Profiler File menu, click Import Performance Data.
4. In the Open dialog box, select a file that contains a performance log. The performance log data must have been captured during the same time period in which the trace data was captured.
5. In the Performance Counters Limit dialog box, select the check boxes that correspond to the System Monitor objects and counters that you want to display alongside the trace. Click OK.
6. Select an event in the trace events window, or navigate through several adjacent rows in the trace events window by using the arrow keys. The vertical red bar in the System Monitor data window indicates the performance log data that is correlated with the selected trace event.
7. Click a point of interest in the System Monitor graph. The corresponding trace row that is nearest in time is selected. To zoom in on a time range, press and drag the mouse pointer in the System Monitor graph.

The screen shot below shows a sample correlation between SQL Server Profiler and Windows System Monitor data.


5.3 Common Performance Problems


This section describes some common performance problems related to PeopleSoft applications. The general steps for troubleshooting and problem resolution are discussed below.

5.3.1 High Processor Utilization


The probable causes for high processor utilization in PeopleSoft applications are discussed below, along with some troubleshooting steps. One of these causes, or a combination of them, could lead to high processor utilization.

Intra-Query Parallelism

A parallel query plan could use all the processor resources and could potentially lead to high processor consumption. Use the following query to review overall wait statistics and identify wait types associated with parallelism:
select * from sys.dm_os_wait_stats ; GO


A wait type of CXPACKET associated with a high wait_time_ms could be an indication of this problem. For currently running tasks, use the following query to look for CXPACKET wait types; a high wait_duration_ms associated with the CXPACKET wait type indicates processor consumption and performance degradation due to parallelism.
Select session_id , exec_context_id , wait_type , wait_duration_ms , blocking_session_id from sys.dm_os_waiting_tasks where session_id > 50 order by session_id, exec_context_id ; GO

To resolve this issue, set the max degree of parallelism (MAXDOP) server configuration option to a lower value. For PeopleSoft applications it is recommended to set this value to 1; see section 2.7.4 for additional details.
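The following is a minimal sketch of the instance-wide configuration change, assuming the recommendation of 1 applies to your environment (section 2.7.4 discusses the trade-offs).

EXEC sp_configure 'show advanced options', 1 ;
RECONFIGURE ;
EXEC sp_configure 'max degree of parallelism', 1 ;
RECONFIGURE ;
GO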
Inefficient Query Plan

An inefficient query plan leading to a full table scan or excessive reads can cause high processor utilization. To solve this problem, it is first important to identify the query that causes excessive processor consumption. You can use the following dynamic management view code to identify the query that is currently consuming the most processor resources:

select er.cpu_time, st.text
from sys.dm_exec_requests er
cross apply sys.dm_exec_sql_text(er.sql_handle) st
order by er.cpu_time desc ;
GO

Run the previous query a few times; if you notice that the value of cpu_time for a statement is constantly increasing, that statement is most likely the culprit. For cached queries you can use the following code to find the top processor consumers:
SELECT TOP 50
    qs.total_worker_time/qs.execution_count as [Avg CPU Time],
    SUBSTRING(qt.text, qs.statement_start_offset/2,
        (case when qs.statement_end_offset = -1
              then len(convert(nvarchar(max), qt.text))*2
              else qs.statement_end_offset
         end - qs.statement_start_offset)/2) as query_text,
    qt.dbid,
    dbname = db_name(qt.dbid),
    qt.objectid
FROM sys.dm_exec_query_stats qs
cross apply sys.dm_exec_sql_text(qs.sql_handle) as qt
ORDER BY [Avg CPU Time] DESC ;
GO

Identify and analyze the query and add appropriate indexes as required.

Excessive Compilations

Excessive SQL compilations can cause high processor usage in PeopleSoft applications. The key counters to look for in System Monitor are as follows:

SQL Server: SQL Statistics: Batch Requests/sec
SQL Server: SQL Statistics: SQL Compilations/sec
SQL Server: SQL Statistics: SQL Re-Compilations/sec

If SQL Compilations/sec or SQL Re-Compilations/sec is excessively high, processor usage can be elevated, and you may want to consider parameterizing the queries as explained in sections 2.6.3 and 2.10.2.2.
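As a supplement to the System Monitor counters, the plan cache itself can be inspected to gauge how much of it is consumed by single-use ad hoc plans, which typically accompany high compilation rates. The following is a hedged sketch using sys.dm_exec_cached_plans, not a query from the original text.

-- Summarize the plan cache by object type; a large single-use share often
-- indicates that forced parameterization may help.
SELECT objtype,
       COUNT(*)                                        AS plan_count,
       SUM(CASE WHEN usecounts = 1 THEN 1 ELSE 0 END)  AS single_use_plans,
       SUM(CAST(size_in_bytes AS bigint)) / 1048576    AS cache_mb
FROM sys.dm_exec_cached_plans
GROUP BY objtype
ORDER BY cache_mb DESC ;
GO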

5.3.2 Disk I/O Bottlenecks


This problem typically shows up as low processor utilization combined with poor response times for queries. It is important to note that an I/O subsystem causing a bottleneck is a different problem from a query not using proper indexing. An improperly indexed query can cause excessive I/O utilization and exhibit slow performance; however, that is clearly an indexing problem and not an I/O problem. When you have slow performing queries, first ensure that optimal indexing is in place. If the problem persists, an I/O bottleneck could be a possibility. The following steps can help you troubleshoot I/O bottlenecks.

1. Check the SQL Server error log for error message 833, as described in section 5.4.4 Long I/O Requests. If there are I/O stalls of 15 seconds or higher, there might be some very severe problems with the I/O disk subsystem, and these will be logged in the SQL Server error log.

2. However, not having the 833 error message does not rule out I/O bottlenecks; you could still have a slow performing I/O subsystem. Use the query below to retrieve the average I/O stall times:
select database_id, file_id
    ,io_stall_read_ms
    ,num_of_reads
    ,cast(io_stall_read_ms/(1.0+num_of_reads) as numeric(10,1)) as 'avg_read_stall_ms'
    ,io_stall_write_ms
    ,num_of_writes
    ,cast(io_stall_write_ms/(1.0+num_of_writes) as numeric(10,1)) as 'avg_write_stall_ms'
    ,io_stall_read_ms + io_stall_write_ms as io_stalls
    ,num_of_reads + num_of_writes as total_io
    ,cast((io_stall_read_ms+io_stall_write_ms)/(1.0+num_of_reads+num_of_writes) as numeric(10,1)) as 'avg_io_stall_ms'
from sys.dm_io_virtual_file_stats(null,null)
order by avg_io_stall_ms desc ;
GO

Refer to section 2.1.2 Typical I/O Performance Recommended Range for the recommended I/O ranges. If the I/O values are not within those ranges, you most likely have an I/O bottleneck; engage the storage team to troubleshoot it.

3. You can also use System Monitor counters to detect I/O bottlenecks, as explained in sections 2.1.2 and 5.2.1.

5.3.3 Memory Bottlenecks


For large PeopleSoft application workloads, the lack of sufficient memory can be a bottleneck and can lead to slow performance. At a high level, check the following System Monitor counters (SQL Server: Buffer Manager object) for memory pressure:

o Low Buffer Cache Hit Ratio. A value of 95 or lower can indicate memory pressure.
o Low Page Life Expectancy. A value of 180 seconds or lower can indicate memory pressure.
o A high number of Checkpoint Pages/sec.
o A high number of Lazy Writes/sec.

If the system is under memory pressure, physically adding more memory to the server will help alleviate the problem. Moving to a 64-bit server is a possible solution as well.
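These values can also be sampled from T-SQL through sys.dm_os_performance_counters. The sketch below compares Total Server Memory with Target Server Memory, a counter not listed above (a sustained gap where the target exceeds the total suggests SQL Server wants more memory than it can get), along with Page life expectancy.

-- Quick memory-pressure snapshot from the performance counter DMV.
SELECT RTRIM(counter_name) AS counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name IN ('Total Server Memory (KB)',
                       'Target Server Memory (KB)',
                       'Page life expectancy') ;
GO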

5.3.4 Blocking and Deadlocking Issues


Blocking and deadlocking in PeopleSoft applications are fairly common issues leading to performance degradation. They are usually triggered by one or more of the following conditions:

o The read-committed snapshot isolation level is not enabled
o A suboptimally coded query
o No suitable index is available to perform an index seek operation
o Missing or stale statistics
o Suboptimal hardware configuration, such as disks or processors

It is best to eliminate these possible causes before investigating other possibilities in depth. Refer to sections 4.6.5 Deadlocks and 5.2.6 Decoding the Object Blocking a Process for more information about resolving blocking and deadlocking.

5.3.5 ODBC API Server Cursor Performance Enhancements


PeopleSoft applications use ODBC for database connectivity to the SQL Server database. The ODBC API server cursors are used extensively throughout the PeopleSoft application suite for result set processing. In previous versions of SQL Server, there were several known issues with ODBC API server cursors. For example, the ODBC API server cursors could change the cursor type requested by the application to another cursor type based on a well-defined list of conditions, such as whether the query contained references to text, ntext, or image columns, and numerous other documented conditions. This is defined as cursor conversion, also known as cursor degradation. Typically, when these conversions occurred, the cursor type degraded to a more expensive cursor type. Generally, a fast forward-only cursor performs best, followed by the dynamic cursor and the keyset-driven cursor; the static cursor is the lowest performing cursor. Sometimes these conversions caused performance problems, particularly when the resulting cursor type was static or keyset-driven, because these two cursor types require that the entire result set (static) or the keys be populated in a work table before the first row can be returned to the application. Whether this is problematic is directly related to the number of rows the cursor must gather. When the cursor created a very large result set, the application slowed when retrieving the initial set of rows. This can be problematic for applications that tend to join many tables with many target rows but only plan to use a small number of rows from the beginning of the result set. To address some of the above-mentioned issues and to further enhance the API server cursors, two enhancements have been made to the API server cursor model in SQL Server 2008: implicit cursor conversion improvements and real-time cursor tracking with dynamic management views.

5.3.5.1 Implicit Cursor Conversion


In SQL Server 2008, in most cases the requested cursor type will be the resulting cursor type. Cursor degradation is now very limited and happens in only a few cases, as documented in SQL Server 2008 Books Online under the topic Implicit Cursor Conversions (ODBC). Because the majority of cases that caused cursor degradation have been eliminated, applications that use ODBC API server cursors should behave more consistently in SQL Server 2008. Since this is an optimizer improvement in SQL Server 2008, no manual steps are required to leverage it.

5.3.5.2 Real-Time Cursor Tracking with Dynamic Management Views


Dynamic management views in SQL Server 2008 can be used effectively to track and monitor the ODBC API server cursors opened by PeopleSoft applications. The sys.dm_exec_cursors dynamic management object is the cursor-specific dynamic management view. Querying this object returns information about the open cursors in the database, such as:

Cursor name
Properties - declaration interface, cursor type, cursor concurrency, cursor scope, cursor nesting level
Sql_handle - handle to the text of the cursor, which can be used with the sys.dm_exec_sql_text(sql_handle) function to return the exact cursor text
Creation time
Reads and writes performed by the cursor
Fetch buffer size

In the following example, the sys.dm_exec_cursors dynamic management view is joined with sys.dm_exec_sql_text to find the session_id, properties, reads, writes, creation time, and the exact cursor code currently executing, for all cursors:
select c.session_id, c.properties, c.reads, c.writes, c.creation_time,
       substring(qt.text, c.statement_start_offset/2,
           (case when c.statement_end_offset = -1
                 then len(convert(nvarchar(max), qt.text)) * 2
                 else c.statement_end_offset
            end - c.statement_start_offset)/2) as cursor_text
from sys.dm_exec_cursors(0) c
cross apply sys.dm_exec_sql_text(c.sql_handle) qt;
GO

To find information about a specific cursor, replace the 0 with a session_id as the input parameter for the sys.dm_exec_cursors dynamic management view. The sys.dm_exec_cursors dynamic management view gives you significantly improved capabilities for diagnosing cursor-based applications compared with previous versions of SQL Server. For example, you can determine whether the cursors are truly the cursor type that the application requested, or see whether a keyset or static cursor is currently being asynchronously populated. For additional information on the sys.dm_exec_cursors dynamic management view, refer to the SQL Server 2008 Books Online topic, sys.dm_exec_cursors.
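For example, the following illustrative snippet (session_id 57 is arbitrary) returns only the cursors opened by one session:

-- Cursors belonging to a single session; replace 57 with the session_id
-- of interest (for example, one obtained from sys.dm_exec_sessions).
SELECT name, properties, creation_time, reads, writes, is_open
FROM sys.dm_exec_cursors(57);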

5.4 Database I/O


An incorrectly or sub-optimally configured disk subsystem (DAS, SAN, etc.) is the most common cause of performance problems. It is strongly recommended that you thoroughly test the performance of the disk subsystem end to end before deployment. The following two tools are recommended for performance and stress testing your disk subsystem.

5.4.1 SQLIO Disk Performance Test Tool


SQLIO is a tool provided by Microsoft that can be used to determine the I/O performance of a disk subsystem. For configuration and usage instructions, and to download the tool, refer to the following website: http://www.microsoft.com/downloads/details.aspx?FamilyID=9a8b005b-84e4-4f24-8d65-cb53442d9e19&DisplayLang=en. The Predeployment I/O Best Practices whitepaper (http://www.microsoft.com/technet/prodtechnol/sql/bestpractice/pdpliobp.mspx) also explains the different I/O measurements and how to interpret the data. SQLIO is one of the surest ways to determine the end-to-end performance of the disk subsystem.
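As a rough illustration only (verify each switch against the documentation that ships with the tool), a SQLIO run for a five-minute random-read test with 8 KB blocks and 8 outstanding I/Os against the files listed in a parameter file might look like the following:

rem Illustrative SQLIO invocation -- confirm the switches against the
rem readme that accompanies the tool before relying on the results.
sqlio -kR -s300 -frandom -b8 -o8 -LS -Fparam.txt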

5.4.2 SQLIOSim Disk Stress Test Tool


The SQLIOSim utility simulates the read and write patterns of a heavily loaded server that is running SQL Server, and it uses a Write Ahead Logging (WAL) protocol similar to the protocol that SQL Server uses. These patterns include heavy page insert/split simulations, inserts, updates, checkpoint stress scenarios, read-aheads, sorts, hashes, and backup scan activities that include large and varied scatter and gather I/O requests. The simulation also imposes heavy data file activity that requires high transaction log activity. For configuration and usage instructions and to download the tool, see How to use the SQLIOSim utility to simulate SQL Server activity on a disk subsystem at the following website: http://support.microsoft.com/default.aspx?scid=kb;en-us;231619.

5.4.3 Instant File Initialization


For large PeopleSoft installations, the new instant file initialization feature is very useful in avoiding the long waits associated with data file expansion. Data and log files are normally initialized by filling the files with zeros when you perform one of the following operations:
o Create a database.
o Add files, log or data, to an existing database.
o Increase the size of an existing file (including autogrow operations).
o Restore a database or filegroup.

Initializing a data or log file with zeros usually leads to waits and stalls if a large expansion is required.

In SQL Server 2008, data files can be initialized instantaneously, allowing fast execution of the file operations listed above. Instant file initialization reclaims used disk space without filling that space with zeros; instead, disk content is overwritten as new data is written to the files. Log files cannot be initialized instantaneously.

Instant file initialization is enabled when the SQL Server (MSSQLSERVER) service logon account has been granted SE_MANAGE_VOLUME_NAME. This privilege is granted by default and no specific action is required to use this feature.

Security Considerations
Because the data file is not zeroed out on initialization and any previously deleted disk content is overwritten only as new data is written to the files, the deleted content might potentially be accessed by an unauthorized user. While the database file is attached to the instance of SQL Server, this information disclosure threat is reduced by the discretionary access control list (DACL) on the file, which allows file access only to the SQL Server service account and the local administrator. However, when the file is detached, it may be accessed by a user or service that does not have the SE_MANAGE_VOLUME_NAME privilege. A similar threat exists when the database is backed up: the deleted content can become available to an unauthorized user or service if the backup file is not protected with an appropriate DACL. If the potential for disclosing deleted content is a concern, you should do one or both of the following:

Disable instant file initialization for the instance of SQL Server by revoking SE_MANAGE_VOLUME_NAME from the SQL Server service logon account.
Always make sure that any detached data files and backup files have restrictive DACLs.

Note: Disabling instant file initialization only affects files that are created or increased in size after the user right is revoked.

For PeopleSoft applications, instant file initialization is recommended from a performance perspective. However, evaluate the performance gain against the possible security risk. If your security policy does not allow for this risk, do not use instant file initialization; you can disable it by revoking SE_MANAGE_VOLUME_NAME from the SQL Server service account.
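One way to confirm whether instant file initialization is actually in effect is to temporarily enable trace flags 3004 and 3605, which write file-zeroing messages to the error log, and then create a throwaway database. The following is a hedged sketch intended for a test instance only; the database name IFI_Probe is a placeholder. If the error log shows zeroing only for the log (LDF) file and not for the data (MDF) file, instant initialization is active.

DBCC TRACEON (3004, 3605, -1);   -- log file-zeroing activity to the error log
CREATE DATABASE IFI_Probe;       -- placeholder test database
EXEC sp_readerrorlog;            -- inspect the "Zeroing ..." messages
DROP DATABASE IFI_Probe;
DBCC TRACEOFF (3004, 3605, -1);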

5.4.4 Long I/O Requests


In SQL Server 2008, the buffer manager reports on any I/O request that has been outstanding for at least 15 seconds. This helps the system administrator distinguish between SQL Server problems and I/O subsystem problems. Error message 833 is reported and appears in the SQL Server error log as follows:


SQL Server has encountered %d occurrence(s) of I/O requests taking longer than %d seconds to complete on file [%ls] in database [%ls] (%d). The OS file handle is 0x%p. The offset of the latest long I/O is: %#016I64x.

A long I/O may be either a read or a write. While long I/O messages are warnings rather than errors, they are often symptomatic of functional issues in the disk subsystem or of loads far exceeding the reasonable service capabilities of the disk subsystem. For PeopleSoft applications, I/O error message 833 can be very useful from a reactive I/O maintenance and monitoring perspective, so it is recommended that you monitor the SQL Server error log for these messages. For I/O performance tuning, however, it is recommended that you use the I/O-related dynamic management views and System Monitor counters. For more information, see section 5.2.3 Using Dynamic Management Views and section 5.2.1 Using System Monitor in this document. For the recommended I/O performance range, see section 2.1.2 Typical I/O Performance Recommended Range in this paper.
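As a starting point for such tuning, the following sketch (illustrative, not prescriptive) uses the sys.dm_io_virtual_file_stats dynamic management function to rank database files by cumulative I/O stall time. Because the counters accumulate from instance startup, comparing two snapshots taken over an interval gives a better picture than a single reading.

SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       vfs.num_of_reads, vfs.num_of_writes,
       vfs.io_stall_read_ms, vfs.io_stall_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id
 AND mf.file_id = vfs.file_id
ORDER BY vfs.io_stall_read_ms + vfs.io_stall_write_ms DESC;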


Appendix A - SQLAPI and TRC2API


Description
SQLAPI is provided as a tool to reproduce the PeopleTools database API calls generated when running SQL scripts. TRC2API is a tool that converts PeopleTools SQL trace files into input scripts for SQLAPI. Trace files should be generated using TraceSql=63 in the configuration files for the Process Scheduler/Application Server, or using the following flags in Configuration Manager:
SQL Statement
SQL Statement Variables
Connect, Disconnect, Rollback and Commit
Row Fetch
All Other API calls besides SSBs
Set Select Buffers

Utilization
When debugging database issues with PeopleTools, use these tools to rule out overhead or problems introduced by other PeopleTools modules. Users can give their PeopleTools SQL trace files to a developer to reproduce and analyze the problem without having to run long and complicated application scenarios or reproduce the full environment setup.

SQLAPI Syntax
SQLAPI uses the following command line parameters:
sqlapi <SQL platform> [ <input file> [ <output file> ] ]
sqlapi help
ODBC should be used for the SQL platform when connecting to SQL Server. If the output file is omitted, standard output is used. If the input file is omitted, standard input is used. You may also use a hyphen to specify standard I/O. For example, "sqlapi odbc - test.out" reads commands from standard input and writes the results to a file called test.out. The "help" argument shows the syntax for each PeopleTools database API routine.

SQLAPI also includes eight non-API routines to aid in debugging. A script can PAUSE for x seconds, which can be helpful with deadlocking and other resource issues. A script can also contain REMarks, which clarify what the script is doing and can also be used to temporarily disable one or more calls to API routines. A script can IGNORE the error status of the following command and continue execution. A script can REPEAT fetches until end of data or an error occurs. A script can INCLUDE another script that contains additional SQLAPI commands. TIMEON and TIMEOFF toggle whether timing information is displayed. A WHILE/ENDW loop may be placed around a group of statements to have them repeated until the first statement fails; the first call after the WHILE must be a SQLFET call. The UNICODE flag tells SQLAPI to display, in Unicode mode, values read by SQLRLO.

SQLAPI uses the concept of "cursors", where each connection has a separate handle number to track which SQL statements are associated with which connection. SQLAPI represents each cursor with a dollar sign followed by the cursor handle number.

String data in SQLAPI scripts can span multiple lines and can also include binary data. A binary byte in a data string is preceded by '\x'. If you need to include the backslash character in a data string, use two back-to-back backslashes (for example, "here is how one includes a backslash \\"). You can use the '\x22' binary value to include a double quote '"' in string data. Unicode 2-byte characters can be represented with the '\U0000' syntax, which is helpful when using non-ASCII characters in scripts that run on multiple platforms.

TRACE to API
TRC2API reads a PeopleTools SQL trace and writes a SQLAPI-compatible script.
TRC2API Syntax:
trc2api [ -u ] [ input_file [ output_file ] ]
If the input_file is not specified, standard input is used; the same applies to the output file. The -u option specifies that the input file is in Unicode format. A few trace statements are not yet supported and are reported to standard error as TRC2API encounters them.
Three known problems exist with the trace file output that can alter the SQLAPI behavior from the original scenario:
1. If a process binds a variable once to a SQL statement and then executes that statement multiple times while changing the variable internally, the trace file does not show the changed variable data.
2. The PeopleTools SQL trace does not show the password used for the database connection.


3. SQL Server requires a sqlset call before its first connection is made, but this is not traced.
Using database-specific tracing can help resolve the first issue, and manually adding the password resolves the second. The third can be solved by manually adding a "sqlset $0 1 0" line before the first sqlcnc call.
The executables are part of the regular builds as of PeopleTools 8.4 and can be found under %PSHOME%\bin\client\winx86.

Examples
The following is an example of an input script for SQLAPI:

rem Every script must start with a sqlini and end with a sqldon
sqlini
rem This is our first connection,
rem so it will use the cursor handle # '1'
rem In 8.4 and beyond, if the password is omitted, the user will be
rem prompted for it via stdin/stdout.
sqlcnc "DBNAME/OPRID/PASSWORD"
rem Here we show an SQL statement split over multiple lines, how to
rem bind select and WHERE clause variables, and how to fetch until
rem end-of-data.
sqlcex $1 "SELECT EMPLID, COMPANY, EMPL_RCD#, PLAN_TYPE from PS_LEAVE_ACCRUAL WHERE EMPLID = :1"
sqlbnn $1 1 "FG1202" 6 0 1
sqlssb $1 1 2 20 0
sqlssb $1 2 2 20 0
sqlssb $1 3 18 2 0
sqlssb $1 4 2 20 0
repeat sqlfet $1
rem Since the EMPL_RCD# column is a 2 byte integer (data type 18),
rem we use the binary representation of 7 in the data string.
sqlcom $1 "UPDATE PS_LEAVE_ACCRUAL SET EMPL_RCD# = :1 WHERE EMPLID = :2"
sqlbnn $1 1 "\x07\x00" 2 0 18
sqlbnn $1 2 "FG1202" 6 0 1
rem Doing a Fetch followed by reading the BLOB column. This also
rem shows how a while loop is used to fetch multiple rows with LOB
rem columns.
sqlcex $1 "SELECT STMT_TEXT FROM PS_SQLSTMT_TBL WHERE PGM_NAME='PSPACCPR' AND STMT_TYPE='S' AND STMT_NAME='ACCRUAL'"
while
sqlfet $1
sqlgls $1 1
sqlrlo $1 1 761
sqlelo $1
endw
rem The following statement might fail because the table doesn't
rem exist, but we wish to continue after the failure, so we use
rem IGNORE
sqlset $1 3018 0
ignore
sqlcom $1 "SELECT 'PS_DOES_TABLE_EXIST' FROM SOME_TABLE"
sqlset $1 3018 2
rem Commit, disconnect and end the session.
sqlcmt $1
sqldis $1
sqldon
rem End of script!

Example. Reproducing a problem with SQLAPI.


The following example shows how to reproduce an Application Engine problem using TRC2API and SQLAPI.
If running the program on the client, turn on SQL tracing in Configuration Manager using the following options:
SQL Statement
SQL Statement Variables
Connect, Disconnect, Rollback, and Commit
Row Fetch
All Other API Calls besides SSBs
Set Select Buffers (SSBs)
If running the program on the server, turn on SQL tracing in the application server configuration file using the following value: TraceSql=63.
In the following example the trace file is called pssql_trace.txt.
Run the AE program, in this case AEMINITEST. Running it from the command line looks like this:


T:\bin\client\winx86> psae -CT ODBC -CD DB_NAME -CO OPR_ID -CP OPR_PSWD -R 0 -AI AEMINITEST -I 0
PeopleTools 8.49 - Application Engine
Copyright (c) 1988-2009 PeopleSoft, Inc.
All Rights Reserved
PeopleTools SQL Trace value: 63 (0x3f): pssql_trace.txt
Application Engine program AEMINITEST ended normally

You will end up with a trace file similar to this:

T:\bin\client\winx86> head -15 pssql_trace.txt
PeopleTools 8.49 Client Trace - 2009-03-03
PID-Line Time Elapsed Trace Data...
-------- -------- ---------- -------------------->
1-1 17.38.19 Tuxedo session opened {oprid='QEDMO', appname='TwoTier', addr='//TwoTier:7000', open at 024A6950, pid=772}
1-2 17.38.19 0.035000 Cur#0.772.QE849TS RC=0 Dur=0.035000 --- router PSORA load succeeded
1-3 17.38.20 0.149000 Cur#0.772.QE849TS RC=0 Dur=0.149000 INI
1-4 17.38.20 0.148000 Cur#1.772.QE849TS RC=0 Dur=0.147000 Connect=Primary/DB_NAME/CONN_ID/
1-5 17.38.20 0.000000 Cur#1.772.QE849TS RC=0 Dur=0.000000 GET type=1003 dbtype=4
1-6 17.38.20 0.000000 Cur#1.772.QE849TS RC=0 Dur=0.000000 GET type=1004 release=10
1-7 17.38.20 0.001000 Cur#1.772.QE849TS RC=0 Dur=0.000000 COM Stmt=SELECT OWNERID FROM PS.PSDBOWNER WHERE DBNAME=:1
1-8 17.38.20 0.000000 Cur#1.772.QE849TS RC=0 Dur=0.000000 SSB column=1 type=2 length=9 scale=0
1-9 17.38.20 0.000000 Cur#1.772.QE849TS RC=0 Dur=0.000000 Bind-1 type=2 length=7 value=DB_NAME
1-10 17.38.20 0.001000 Cur#1.772.QE849TS RC=0 Dur=0.001000 EXE
1-11 17.38.20 0.000000 Cur#1.772.QE849TS RC=0 Dur=0.000000 Fetch

Convert the trace file using trc2api:

T:\bin\client\winx86> trc2api < pssql_trace.txt > sqlapi_input.txt
Say what? PeopleTools 8.49 Client Trace - 2009-03-03
...

Ignore the lines that begin with 'Say what?'. These are extra trace lines that trc2api does not understand, and they are not important for SQLAPI. The resulting file should look like the following:

t:\bin\client\winx86> head sqlapi_input.txt
sqlini
unicode 0
sqlcnc "Primary/DB_NAME/CONN_ID/"
sqlget $1 1003


sqlget $1 1004
sqlcom $1 "SELECT OWNERID FROM PS.PSDBOWNER WHERE DBNAME=:1"
sqlssb $1 1 2 9 0
sqlbnn $1 1 "DB_NAME" 7 0 2
sqlexe $1
sqlfet $1

Open the sqlapi_input.txt file in an editor:

T:\bin\client\winx86> notepad sqlapi_input.txt

First, search for any lines that consist of 'rem ignore'. Look at the line after the 'rem ignore' line; if it is a command whose failure should be ignored, remove the 'rem' in front of the 'ignore'. "SELECT 'PS_DOES_TABLE_EXIST' FROM table_XXX" is a good example of this; ignoring duplicates in an INSERT is another.

If you are using a SQL Server database, add the following line after the sqlini command at the top of the file:

sqlset $0 1 0

Trace files do not include database passwords, so you have a choice here. You may leave the 'sqlcnc' commands unmodified and provide the passwords at runtime, or you may modify the 'sqlcnc' commands by adding the passwords after the last slash:

Before: sqlcnc Primary/DB_NAME/ACCESSID/
After: sqlcnc Primary/DB_NAME/ACCESSID/PASSWORD

Run the SQLAPI script. The following command shows how to run SQLAPI on a SQL Server database using sqlapi_input.txt as input and sqlapi_output.txt as output:

t:\bin\client\winx86> sqlapi ODBC sqlapi_input.txt sqlapi_output.txt

Your results should look like this:

t:\bin\client\winx86> head -13 sqlapi_output.txt
REM SQLAPI, Unicode version
SQLINI
SQLSET $0 1 0
UNICODE 0
SQLCNC "Primary/DB_NAME/CONN_ID/PASSWORD"
REM cursor = 1
SQLGET $1 1003
REM dbtype=4
SQLGET $1 1004
REM dbver=10


SQLCOM $1 "SELECT OWNERID FROM PS.PSDBOWNER WHERE DBNAME=:1"
SQLSSB $1 1 2 9 0
SQLBNN $1 1 "DB_NAME" 7 0 2
SQLEXE $1
SQLFET $1
REM Row found
REM Column 1: 6 ACCESSID "ACCESSID"

The preceding is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle.

