PeopleSoft on SQL Server 2008
Including:
- Setup procedures
- Microsoft SQL Server 2008 new features and performance optimizations
- Maintaining a high-performance database
- Performance monitoring and troubleshooting
September 2008
Authors: Sudhir Gajre & Burzin Patel
Contributors: Miguel Lerma & Ganapathi Sadasivam
Table of Contents
1 INTRODUCTION
  1.1 Structure of This White Paper
  1.2 Related Materials
2 Setup and Configuration
  2.1 Input/Output (I/O) Configuration
    2.1.1 RAID Type Recommendations
    2.1.2 Typical I/O Performance Recommended Range
  2.2 Files, Filegroups, and Object Placement Strategies
  2.3 Tempdb Placement and Tuning
  2.4 Data and Log File Sizing
  2.5 Recovery Models
    2.5.1 Simple Recovery Model
    2.5.2 Full Recovery Model
    2.5.3 Bulk-Logged Recovery Model
  2.6 Database Options
    2.6.1 Read-Committed Snapshot
    2.6.2 Asynchronous Statistics Update
    2.6.3 Parameterization
    2.6.4 Auto Update Statistics
    2.6.5 Auto Create Statistics
  2.7 SQL Server Configurations
    2.7.1 Installation Considerations
    2.7.2 Hyper-Threading
    2.7.3 Memory Tuning
    2.7.4 Important sp_configure Parameters
  2.8 Network Protocols and Pagefile
  2.9 SQL Native Client
  2.10 Application Setup
    2.10.1 Dedicated Temporary Tables
    2.10.2 Statement Compilation
    2.10.3 Statistics at Runtime for Temporary Tables
    2.10.4 Disabling Update Statistics
  2.11 Batch Server Placement
3 SQL Server 2008 Performance and Compliance Optimizations for PeopleSoft Applications
  3.1 Resource Management
    3.1.1 Resource Governor
  3.2 Backup and Storage Optimization
    3.2.1 Backup Compression
    3.2.2 Data Compression
  3.3 Auditing and Compliance
    3.3.1 Transparent Data Encryption (TDE)
    3.3.2 SQL Server Audit
  3.4 Performance Monitoring and Data Collection
    3.4.1 Data Collector and Management Data Warehouse
    3.4.2 Memory Monitoring DMVs
    3.4.3 Extended Events
    3.4.4 Query and Query Plan Hashes
  3.5 Query Performance Optimization
    3.5.1 Plan Freezing
    3.5.2 Optimize for Ad hoc Workloads Option
    3.5.3 Lock Escalation
  3.6 Hardware Optimizations
    3.6.1 Hot Add CPU
    3.6.2 NUMA
4 Database Maintenance
  4.1 Table and Index Partitioning
  4.2 Managing Indexes
    4.2.1 Parallel Index Operations
    4.2.2 Index-Related Dynamic Management Views
    4.2.3 Disabling Indexes
  4.3 Detecting Fragmentation
  4.4 Reducing Fragmentation
    4.4.1 Online Index Reorganization
    4.4.2 Program to Defragment
  4.5 Statistics
    4.5.1 AUTO_CREATE_STATISTICS and AUTO_UPDATE_STATISTICS
    4.5.2 Disabling AUTO_UPDATE_STATISTICS at the Table Level
    4.5.3 User-Created Statistics
    4.5.4 Updating Statistics
    4.5.5 Viewing Statistics
  4.6 Controlling Locking Behavior
    4.6.1 Isolation Levels
    4.6.2 Lock Granularity
    4.6.3 Lock Escalations
    4.6.4 Lock Escalation Trace Flags
    4.6.5 Deadlocks
  4.7 Dedicated Administrator Connection (DAC)
5 Performance Monitoring and Troubleshooting
  5.1 PeopleSoft Architecture
  5.2 Narrowing Down the Cause of a Performance Issue
    5.2.1 Using System Monitor
    5.2.2 Capturing Traces
    5.2.3 Using Dynamic Management Views
    5.2.4 Finding a Showplan
    5.2.5 Finding Current Users and Processes
    5.2.6 Decoding the Object Blocking a Process
    5.2.7 Selected DBCC Commands
    5.2.8 Using Hints
    5.2.9 Correlating a Trace with Windows Performance Log Data
  5.3 Common Performance Problems
    5.3.1 High Processor Utilization
    5.3.2 Disk I/O Bottlenecks
    5.3.3 Memory Bottlenecks
    5.3.4 Blocking and Deadlocking Issues
    5.3.5 ODBC API Server Cursor Performance Enhancements
  5.4 Database I/O
    5.4.1 SQLIO Disk Performance Test Tool
    5.4.2 SQLIOSim Disk Stress Test Tool
    5.4.3 Instant File Initialization
    5.4.4 Long I/O Requests
Appendix A - SQLAPI and TRC2API
  Description
  Utilization
  TRACE to API
  Examples
  Example. Reproducing a problem with SQLAPI.
1 INTRODUCTION
This white paper is a practical guide for database administrators and programmers who implement, maintain, or develop PeopleSoft applications. It presents guidelines for improving the performance of PeopleSoft applications running on Microsoft SQL Server 2008. Much of the information in this document is based on findings from real-world customer deployments and from PeopleSoft benchmark testing. The issues discussed here are those that have proven to be the most common or troublesome for PeopleSoft customers.
available. To maintain continuous operation, you must implement fault tolerance for these objects.

Note: You should isolate the database transaction log from all other I/O activity; no other files should exist on the drives that contain the log file. This ensures that, with the exception of transaction log backups and the occasional rollback, nothing disturbs the sequential nature of transaction log activity.

Overall, RAID 10 affords the best performance, making it the preferred choice for all database files. The following table summarizes the RAID level recommendations for PeopleSoft applications:

RAID type | Data files | Log files | tempdb¹ | System databases and SQL Server binaries
RAID 10   | Recommended for PeopleSoft database data files; more spindles will yield better performance. | Recommended for log files; isolate from all other I/O activity. | Recommended. | N/A
RAID 1    | N/A | N/A | N/A | Recommended.
¹ Refer to section 2.3, Tempdb Placement and Tuning, for more information about tempdb.
Set FILEGROWTH for tempdb to 50 MB. This prevents tempdb from expanding too frequently, which can affect performance. Set the tempdb database to auto grow, but rely on this option only to provide disk space for unplanned exceptions.

When the READ_COMMITTED_SNAPSHOT database option is ON, logical copies are maintained for all data modifications performed in the database. Every time a row is modified by a specific transaction, the instance of the Database Engine stores a version of the previously committed image of the row in tempdb until the transaction that modified the row is committed. The tempdb database should therefore be sized with sufficient capacity to store these row versions as well as the other objects that are usually stored in tempdb.

Set the file growth increment to a reasonable size to prevent the tempdb database files from growing by too small a value. If the file growth increment is too small compared to the amount of data being written to tempdb, tempdb may have to expand constantly, which will affect performance. See the following general guidelines for setting the FILEGROWTH increment for tempdb files:

tempdb file size  | FILEGROWTH increment
Less than 1 GB    | 50 MB
Greater than 1 GB | 10%
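A minimal sketch of applying the 50 MB increment (the logical file name tempdev is the default for the primary tempdb data file in a standard install; adjust it to your installation):

USE master ;
GO
-- Set a fixed 50 MB growth increment on the primary tempdb data file.
ALTER DATABASE tempdb
MODIFY FILE ( NAME = tempdev, FILEGROWTH = 50MB ) ;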
Note: Monitor and avoid automatic file growth, as it impacts performance. Every time SQL Server is started, the tempdb file is re-created with the default size. While tempdb can grow, it takes resources to perform this task. To reduce the overhead of tempdb growth, you may want to permanently increase the default size of tempdb after carefully monitoring its growth. Also, consider adding multiple data files to the tempdb filegroup². Using multiple files reduces tempdb contention and yields significantly better scalability. As a general rule of thumb, create one data file for each processor core on the server (accounting for any affinity mask settings); for example, a 4-processor dual-core server would be configured with 8 tempdb data files. To add multiple data files, use the ALTER DATABASE statement with the ADD FILE clause. For example:
ALTER DATABASE tempdb
ADD FILE (
    NAME = tempdev2,
    FILENAME = 'C:\tempdb2.ndf',
    SIZE = 100MB,
    FILEGROWTH = 50MB
) ;
² In SQL Server, the tempdb database can only have a single filegroup.
Make each data file the same size; this allows for optimal proportional-fill performance.
Use the following query to identify whether a database is currently set to use the read-committed snapshot isolation level:
select name, is_read_committed_snapshot_on
from sys.databases
where name = '<YourDatabaseName>' ;
A value of 1 in the is_read_committed_snapshot_on column indicates that the read-committed snapshot isolation level is set. For PeopleSoft applications, the recommendation is to enable the read-committed snapshot isolation level. PeopleSoft workloads typically have concurrent online and batch processing activities, and contention between the online and batch activity can cause blocking and deadlocking, which usually manifests as performance degradation due to lock contention. The read-committed snapshot isolation level will alleviate most of the lock contention and blocking issues.

Warning! Check that the version of PeopleTools you are using supports the read-committed snapshot isolation level. You can only use this isolation level if PeopleTools supports it.
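A minimal sketch of enabling the option (note that this ALTER DATABASE statement requires that no other connections are active in the database while it runs):

ALTER DATABASE <YourDatabaseName> SET READ_COMMITTED_SNAPSHOT ON ;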
2.6.3 Parameterization
When this option is set to FORCED, the database engine parameterizes any literal value that appears in a SELECT, INSERT, UPDATE, or DELETE statement submitted in any form. The exception is when a query hint of RECOMPILE or OPTIMIZE FOR is used in the query. Use the following ALTER DATABASE statement to enable forced parameterization:
ALTER DATABASE <YourDatabaseName> SET PARAMETERIZATION FORCED ;
To determine the current setting of this option, examine the is_parameterization_forced column in the sys.databases catalog view as follows:
select name, is_parameterization_forced
from sys.databases
where name = '<YourDatabaseName>' ;
The default value for is_parameterization_forced is 0 (OFF); a value of 1 indicates that forced parameterization is enabled. For a PeopleSoft workload, it is recommended that you set this parameter to 1 (FORCED). Some PeopleSoft application queries pass in literals instead of parameters. For such workloads you may want to experiment with enabling the forced parameterization option and see whether it has a positive effect on the workload by way of fewer query compilations and reduced processor utilization. An example query from the PeopleSoft Financials online workload follows:
SELECT 'x' FROM PS_CUST_CONVER WHERE SETID = 'MFG' AND CUST_ID = 'Z00000000022689';
In this example, the literal value Z00000000022689 is passed to the query. When the forced parameterization option is enabled, the hard-coded literal is automatically substituted with a parameter during query compilation. The query plan is cached and reused when this query is submitted again with a different literal value for CUST_ID. Because the plan can be reused, the compilation overhead is eliminated, thereby reducing processor utilization.

NOTE: Queries that contain both literal and parameter values are not force-parameterized by the database engine. Example:
SELECT 'x' FROM PS_CUST_CONVER WHERE SETID = @P1 AND CUST_ID = 'Z00000000022689';
However, note that when the data in a table is highly skewed, forced parameterization may cause a suboptimal plan to be reused, thus degrading performance. If a new parameter value differs enough to warrant a different execution plan, the older plan is still reused from the cache, which may not be optimal from a performance perspective. It is best to experiment with this setting and use it only if necessary. Parameterization can also be specified at the query level using a query hint applied via a plan guide (explained later in this paper), as sketched below.
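As an illustrative, hedged sketch of the query-level approach (using the PS_CUST_CONVER example above; the guide name is hypothetical):

-- Derive the parameterized template for the example query, then create a
-- TEMPLATE plan guide that forces parameterization for just that query form.
DECLARE @stmt nvarchar(max) ;
DECLARE @params nvarchar(max) ;
EXEC sp_get_query_template
    N'SELECT ''x'' FROM PS_CUST_CONVER WHERE SETID = ''MFG'' AND CUST_ID = ''Z00000000022689'';',
    @stmt OUTPUT,
    @params OUTPUT ;
EXEC sp_create_plan_guide
    N'PSFT_ForceParam_Guide',
    @stmt,
    N'TEMPLATE',
    NULL,
    @params,
    N'OPTION (PARAMETERIZATION FORCED)' ;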
To determine the current setting of this option, examine the is_auto_update_stats_on column in the sys.databases catalog view as follows:
select name, is_auto_update_stats_on
from sys.databases
where name = '<YourDatabaseName>' ;
A value of 1 for is_auto_update_stats_on indicates that auto update statistics is enabled. For optimal performance in PeopleSoft applications, it is recommended that you leave the auto update statistics option enabled.
To determine the current setting of this option, examine the is_auto_create_stats_on column in the sys.databases catalog view as follows:
select name, is_auto_create_stats_on
from sys.databases
where name = '<YourDatabaseName>' ;
A value of 1 for is_auto_create_stats_on indicates that the auto create statistics option is enabled.
For optimal performance in PeopleSoft applications, it is recommended that you leave the auto create statistics option enabled.
2.7.2 Hyper-Threading
Hyper-threading is Intel's implementation of simultaneous multithreading technology. The performance benefits of hyper-threading depend on the workload. For PeopleSoft applications, it is recommended that you disable hyper-threading on the database server via the BIOS, as our lab testing has shown little or no improvement.
PAE is an Intel-provided memory address extension that enables support of up to 64 GB of physical memory for applications running on most 32-bit (IA-32) Intel Pentium Pro and later platforms. Support for PAE is provided on Windows 2000 and later versions of the Advanced Server and Datacenter Server operating systems. PAE enables most processors to expand the number of bits that can be used to address physical memory from 32 bits to 36 bits, through support in the host operating system for applications using the Address Windowing Extensions (AWE) API. PAE is enabled by specifying the /PAE switch in the boot.ini file.

AWE Memory

SQL Server 2008 can use as much memory as Windows Server allows. To use AWE memory, you must run the SQL Server 2008 database engine under a Windows account for which the Windows Lock Pages in Memory policy option has been enabled. SQL Server Setup will automatically grant the SQL Server (MSSQLServer) service account permission to use the Lock Pages in Memory option. To enable the use of AWE memory by an instance of SQL Server 2008, use SQL Server Management Studio or the sp_configure command.

PeopleSoft applications consume relatively large amounts of lock memory, so many deployments will benefit from enabling a combination of /3GB and AWE memory. It is recommended to set the max server memory option when using AWE memory. To set memory options:

1. If your installation of Microsoft Windows Server 2008, Windows Server 2003, or Windows 2000 has more than 4 GB of memory but less than 16 GB of memory, add the /3GB switch to boot.ini.
2. To enable Physical Address Extension, add the /PAE switch to boot.ini.
3. Use sp_configure to enable AWE: sp_configure 'awe enabled', 1.
4. Set the maximum amount of memory SQL Server can use with sp_configure 'max server memory'.
5. Enable the configuration changes using RECONFIGURE WITH OVERRIDE.
6. Restart the SQL Server instance.

Note: Some services, such as antivirus software, have caused instability when used on systems that have /3GB enabled, and servers are constrained to no more than 16 GB if both /3GB and /PAE are enabled.
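A minimal sketch of steps 3 through 5 (the 14336 MB value is illustrative, following the 16 GB server example later in this section; both settings are advanced options, so they must be made visible first):

USE master ;
GO
EXEC sp_configure 'show advanced options', 1 ;
RECONFIGURE ;
GO
EXEC sp_configure 'awe enabled', 1 ;            -- step 3
EXEC sp_configure 'max server memory', 14336 ;  -- step 4, value in MB
RECONFIGURE WITH OVERRIDE ;                     -- step 5
GO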
SQL Server 2008 64-bit editions can take full advantage of the large memory address space, thus eliminating the 4 GB virtual address space limit imposed by 32-bit systems. The 64-bit systems bring linear memory addressability to SQL Server, meaning that no internal memory mapping is needed for large memory access, and the buffer pool and all other memory structures of SQL Server can fully utilize the memory. For large PeopleSoft applications with high user concurrency, in the range of thousands of users, and a large database size, 64-bit systems can provide scalability and high performance. Such complex and highly concurrent PeopleSoft applications typically make heavy use of memory and can benefit from 64-bit systems in the following areas:

- Plan cache: The ad hoc and dynamic SQL from PeopleSoft applications can fully utilize the large memory space. Generated plans can stay in memory longer, promoting more reuse and fewer compilations.
- Workspace memory: Index builds and complex concurrent hash joins can be done in memory.
- Connection memory: Large numbers of concurrent connections can be easily handled.
- Thread memory: High concurrency load can be easily handled.
- Lock memory: Concurrent PeopleSoft workloads can utilize large amounts of lock memory.

For PeopleSoft applications with large scalability and memory requirements, the 64-bit platform is highly recommended. It is worth mentioning that Windows Server 2008 Standard Edition (64-bit) can only address 32 GB of memory. To address memory beyond 32 GB, all the way up to 2 TB, you should use Windows Server 2008 Enterprise Edition (64-bit). If the 32-bit platform is under memory pressure and memory is proving to be a bottleneck, migration to a 64-bit platform may help.
2. On the Group Policy console, expand Computer Configuration, and then expand Windows Settings.
3. Expand Security Settings, and then expand Local Policies.
4. Select the User Rights Assignment folder. The policies will be displayed in the details pane.
5. In the pane, double-click Lock pages in memory.
6. In the Local Security Policy Setting dialog box, click Add.
7. In the Select Users or Groups dialog box, add an account with privileges to run sqlservr.exe.

It is recommended that you set the Lock Pages in Memory option when using 64-bit operating systems. This keeps data in physical memory, preventing the system from paging the data to virtual memory on disk.
max degree of parallelism: …cycles from other users during high online usage periods. Set this parameter to 1 during peak OLTP periods. Increase the value of this parameter during periods of low OLTP and high batch processing, reporting, and query activity. Note: Index creation and re-creation can take advantage of parallelism, so it is advisable to enable parallelism through this setting when planning to build or rebuild indexes. The OPTION hint in the index creation or rebuild statements can also be used to set the maximum degree of parallelism. Performance tests on some of the batch processes showed that parallelism could result in very good performance. If you do not want to toggle this value based on the type of load, you can set the value to 1 to disable parallelism. However, you may want to explore some middle ground by setting this option to 2, which may help some complex batch jobs as well as online performance.

cost threshold for parallelism: Specifies the cost threshold, in estimated seconds, that must be met before a query is eligible to be executed with a parallel query execution plan. The default value is 5. Most of the PeopleSoft online SQL statements are simple in nature and do not require parallel query execution plans. Consider increasing the value to 60 so that only truly complex queries will be evaluated for parallel query execution plans.

cursor threshold: This option controls how cursors are populated. It is strongly recommended to leave this setting at its default value of -1.

awe enabled: Enable this parameter to take advantage of memory above 4 GB. This is primarily applicable for 32-bit operating systems, but is recommended for 64-bit servers as well.

max server memory: Specifies the maximum memory in megabytes allocated to a SQL Server instance. The default value is 2,147,483,647 MB. If you are enabling AWE, remember that AWE memory is statically allocated and non-pageable on Windows 2000, while it is dynamically allocated on Windows Server 2003 and Windows Server 2008. For a dedicated database server, plan to leave at least 1 to 2 GB for the operating system and other services on the database server. For example, if the database server has 16 GB, set max server memory to 14 GB. Monitor the Memory: Available Bytes counter to determine whether max server memory should be reduced or increased. You may need to leave additional memory if you are running other third-party applications such as performance monitoring or backup software.

min server memory: Specifies the minimum server memory, guaranteeing a minimum amount of memory available to the buffer pool of an instance of SQL Server. SQL Server does not immediately allocate the amount of memory specified in min server memory on startup. However, after memory usage has reached this value due to client load, SQL Server cannot free memory from the allocated buffer pool unless the value of min server memory is reduced. The default value is 0 MB. For dedicated database servers, you should set min server memory to 50-90% of the max server memory value.
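As a hedged sketch of the parallelism recommendations above (both are advanced options, so they must be made visible first):

USE master ;
GO
EXEC sp_configure 'show advanced options', 1 ;
RECONFIGURE ;
GO
-- Favor online (OLTP) work during peak periods: disable parallelism.
EXEC sp_configure 'max degree of parallelism', 1 ;
-- Only genuinely expensive queries qualify for parallel plans.
EXEC sp_configure 'cost threshold for parallelism', 60 ;
RECONFIGURE WITH OVERRIDE ;
GO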
Configuring the correct network protocol is critical from a performance and stability perspective. The following recommendations apply to configuring network protocols for PeopleSoft applications.

TCP/IP: For TCP/IP, data transmissions are more streamlined and have less overhead than Named Pipes. Data transmissions can also take advantage of TCP/IP Sockets performance enhancement mechanisms, such as windowing, delayed acknowledgements, and so on. This can be very helpful in high network traffic scenarios, and for PeopleSoft applications such performance differences can be significant. For best performance, install TCP/IP on the server and configure SQL Server TCP/IP to communicate with clients.

Named Pipes: The Named Pipes network protocol can be installed and used only when the application server or process scheduler is running on the same physical computer as the database engine. For the application to use Named Pipes as the first-choice network protocol, make sure that in the Client Configuration section of SQL Server Configuration Manager, Named Pipes has a higher order than TCP/IP. Ensure that SQL Server uses Named Pipes in addition to TCP/IP. For this to work, you must also configure native ODBC connections to use Named Pipes.

VIA: The VIA (Virtual Interface Adapter) protocol works only with specialized VIA hardware. Enable this protocol only if you have VIA hardware installed in the server and plan to use it; otherwise, keep it disabled.

Shared Memory: Shared Memory is a non-routable protocol and is not useful for PeopleSoft applications.
AE Trace:

-- 16.46.00 ......(PC_PRICING.BL6100.10000001) (SQL)
UPDATE PS_PC_RATE_RUN_TAO SET RESOURCE_ID = 10000498
WHERE PROCESS_INSTANCE = 419 AND BUSINESS_UNIT = 'US004'
AND PROJECT_ID = 'PRICINGA1' AND ACTIVITY_ID = 'ACTIVITYA1'
AND RESOURCE_ID = 'VUS004VA10114050' AND LINE_NO = 1
/
-- Row(s) affected: 1
SQL Statement       | Compile Count | Compile Time | Execute Count | Execute Time | Fetch Count | Fetch Time | Total Time
BL6100.10000001.S   | 252           | 0.6          | 252           | 1.5          | 0           | 0.0        | 2.1
AE Trace:

-- 16.57.57 ......(PC_PRICING.BL6100.10000001) (SQL)
UPDATE PS_PC_RATE_RUN_TAO SET RESOURCE_ID = :1
WHERE PROCESS_INSTANCE = 420 AND BUSINESS_UNIT = :2
AND PROJECT_ID = :3 AND ACTIVITY_ID = :4
AND RESOURCE_ID = :5 AND LINE_NO = :6
/
-- Bind variables:
-- 1) 10000751
-- 2) US004
-- 3) PRICINGA1
-- 4) ACTIVITYA1
-- 5) VUS004VA10114050
-- 6) 1
-- Row(s) affected: 1
SQL Statement       | Compile Count | Compile Time | Execute Count | Execute Time | Fetch Count | Fetch Time | Total Time
BL6100.10000001.S   | 1             | 0.0          | 252           | 0.4          | 0           | 0.0        | 0.4
Restrictions on Enabling the ReUse Option

It is acceptable to enable ReUse if %Bind is used to supply a value to a column in a WHERE predicate, SET clause, or INSERT VALUES list. For example:

UPDATE PS_PF_DL_GRP_EXEC
SET PF_ODS_STATUS = 'C', PROCESS_INSTANCE = %Bind(PROCESS_INSTANCE)
WHERE PF_DL_GRP_ID = %Bind(PF_DL_GRP_ID)
AND PF_DL_ROW_NUM = %Bind(PF_DL_ROW_NUM)

Do not enable ReUse if %Bind is used to supply a column name or a portion of a table name. For example:

SELECT DISTINCT KPI_ID, CALC_ID, '', 0, 0, KP_CALC_SW, KP_OFFCYCLE_CALC
FROM PS_%Bind(KP_CALC_AET.KP_KPI_LST1,NOQUOTES)%Bind(EPM_CORE_AET.FACT_TABLE_APPEND,NOQUOTES)
WHERE LOOP_CNT = %Bind(KP_CALC_AET.LOOP_CNT)
AND LOOP_PROGRESSION = 'B'

Do not enable ReUse if %Bind appears in the SELECT list. For example:

SELECT DISTINCT %Bind(EPM_CORE_AET.PROCESS_INSTANCE)
, %Bind(EPM_CORE_AET.ENGINE_ID)
, %CurrentDateTimeIn
, 10623
, 31
, 'GL_ACCOUNT'
, '', '', '', ''
, A.MAP_GL_ACCOUNT
, '', '', '', ''
, 'LEDMAP_SEQ'
FROM

Do not enable ReUse if %Bind is being used to resolve to something other than a standard bind value and the contents of the bind will change each time the statement executes. For example:

%Bind(GC_EQTZ_AET.GC_SQL_STRING,NOQUOTES)

In this case, the SQL is different each time (at least from the database perspective) and therefore cannot be reused.

If the NOQUOTES modifier is used inside %Bind, it is implied to be STATIC. For dynamic SQL substitution, the %Bind has a CHAR field and NOQUOTES to insert SQL rather than a literal value. If you enable ReUse, the value of the CHAR field is substituted inline, instead of using a bind marker (as in :1, :2, and so on). The next time the same Application Engine action executes, the SQL that it executes will be the same as before, even if the value of a static bind has changed. For example:

INSERT INTO PS_PF_ENGMSGD_S%Bind(EPM_CORE_AET.TABLE_APPEND,NOQUOTES)
(PROCESS_INSTANCE, ENGINE_ID, MESSAGE_DTTM, MESSAGE_SET_NBR, MESSAGE_NBR,
FIELDNAME1, FIELDNAME2, FIELDNAME3, FIELDNAME4, FIELDNAME5,
FIELDVAL1, FIELDVAL2, FIELDVAL3, FIELDVAL4, FIELDVAL5, SOURCE_TABLE)
Use the %ClearCursor function to recompile a reused statement and reset any STATIC %Bind variables. Refer to the PeopleSoft Application Engine documentation for usage.
PeopleTools 8.14: Application Engine, Advanced Development: Re-Using Statements: Bulk Insert. PeopleSoft, Inc. http://ps8dweb1.vccs.edu:6001/sa80books/eng/psbooks/tape/chapter.htm?File=tape/htm/aecomt04.htm%23H4011
%UpdateStats(INTFC_BI_HTMP)

This meta-SQL issues the following command to the database at runtime:

UPDATE STATISTICS PS_INTFC_BI_HTMP

Make sure the temporary table statistics have been handled as shown above. If you find that the statistics created by AUTO_UPDATE_STATISTICS are sufficient, you can disable %UpdateStats in the program.
because the application servers are memory-intensive processes and co-locating the batch server on the same system leaves less memory for the application server. Note: If the process scheduler is installed on a separate batch server and not on the database server, use a high bandwidth connection such as 1 Gbps between the batch server and database server. If a particular batch process uses extensive row-by-row processing, having the process scheduler on the database server may offer increased performance.
3 SQL Server 2008 Performance and Compliance Optimizations for PeopleSoft Applications
SQL Server 2008 introduces many new optimizations and enhancements at every layer of the database engine. Although a complete discussion of all the SQL Server 2008 changes is beyond the scope of this white paper, the most important ones for PeopleSoft applications are discussed in the following sections. Many of these optimizations and enhancements are directly applicable to PeopleSoft applications and can be effectively leveraged to enhance performance and manageability.
For more information on the classifier function and considerations for writing one, please review the Considerations for Writing a Classifier Function topic in SQL Server 2008 Books Online.

Workload Groups: A workload group serves as a container for session requests that are similar according to the classification criteria applied to each request. Two workload groups, internal and default, pre-exist when Resource Governor is enabled, and user-defined workload groups can be created. For instance, for PeopleSoft applications you can create two user-defined workload groups, Batch and Online; the classifier function can then use the APP_NAME() system function to allocate connections to these workload groups.

Resource Pool: A resource pool represents the allocation of the physical resources of the server (CPU and memory); you can think of it as a virtual SQL Server instance. The internal and default resource pools are created when Resource Governor is enabled, and user-defined resource pools can be created as required. For instance, for PeopleSoft applications you can create two user-defined resource pools, Batch and Online, and allocate the appropriate CPU and memory limits to them.

For more information on classifier functions, workload groups, and resource pools, review the Resource Governor Concepts topic in SQL Server 2008 Books Online.
[Figure: Resource Governor classification — the classifier UDF routes incoming sessions to the internal group, the default group, or a user-defined group, each mapped to its corresponding internal, default, or user-defined resource pool.]
Using Resource Governor, the incoming requests from PeopleSoft applications (batch, online, and so on) can be classified by using a classifier function. The classifier function assigns each request to a workload group, the workload group is associated with a resource pool, and the resource pool is allocated minimum and maximum CPU and memory resource limits. In the diagram below, the classifier function has assigned the SQR and batch workload from PeopleSoft to the Batch workgroup and the PeopleSoft online activity to the OLTP workgroup. Each of these workgroups is then assigned to its resource pool, Batch Pool or OLTP Pool. The resources (CPU and memory) are governed for the resource pool.
[Figure: example resource pools — a Batch Pool and an OLTP Pool, each with min/max CPU and max memory limits.]

The example script below illustrates the use of Resource Governor to govern the CPU resource for the PeopleSoft batch process. In this example, the workload from the batch process is classified using the classifier function. If the workload is batch and it is within normal business hours (8:00 AM to 5:00 PM), the batch gets assigned to a production workload group which is associated with a production resource pool. In this pool the
CPU is governed to a maximum of 50% during production hours, thus giving the online workload more CPU. Outside business hours, the batch is assigned to an off-hours workload group and an off-hours resource pool, in which up to 80% of CPU resources are allocated for the batch. The script below presents the steps to create and configure the new resource pools and workload groups, and to assign each workload group to the appropriate resource pool.
--- Create a resource pool for batch processing.
USE master
GO
CREATE RESOURCE POOL rpBatchProductionHours
WITH (
    MAX_CPU_PERCENT = 50,
    MIN_CPU_PERCENT = 0
)
GO
--- Create a corresponding workload group for batch production processing,
--- configure the relative importance, and assign the workload group to the
--- batch production processing resource pool.
CREATE WORKLOAD GROUP wgBatchProductionHours
WITH ( IMPORTANCE = LOW )
USING rpBatchProductionHours
GO
--- Create a resource pool for off-hours batch processing and set limits.
CREATE RESOURCE POOL rpBatchOffHours
WITH (
    MAX_CPU_PERCENT = 80,
    MIN_CPU_PERCENT = 50
)
GO
--- Create a workload group for off-hours processing, configure the relative
--- importance, and assign it to the off-hours processing resource pool.
CREATE WORKLOAD GROUP wgBatchOffHours
WITH ( IMPORTANCE = MEDIUM )
USING rpBatchOffHours
GO
--- Use the new configuration.
ALTER RESOURCE GOVERNOR RECONFIGURE
GO
Create a classifier function to classify batch based on app name and current time:
CREATE FUNCTION fnBatchClassifier()
RETURNS sysname
WITH SCHEMABINDING
AS
BEGIN
    DECLARE @EightAM time
    DECLARE @FivePM time
    DECLARE @loginTime time
    SET @EightAM = '8:00 AM'
    SET @FivePM = '5:00 PM'
    SET @loginTime = CONVERT(time, GETDATE())
    IF APP_NAME() = 'PFSTBATCH' AND (@loginTime BETWEEN @EightAM AND @FivePM)
    BEGIN
        RETURN N'wgBatchProductionHours'
    END
    -- It's not production hours
    RETURN N'wgBatchOffHours'
END
GO
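Note that the new function must also be registered as the active classifier before it takes effect; a minimal sketch, assuming the function was created in the master database under the dbo schema:

USE master ;
GO
ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.fnBatchClassifier) ;
ALTER RESOURCE GOVERNOR RECONFIGURE ;
GO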
For more information on Resource Governor, please refer to the Introducing Resource Governor section in SQL Server 2008 Books Online. It is also possible to use SQL Server Management Studio to configure Resource Governor; for more information, refer to the Resource Governor How To topics in SQL Server 2008 Books Online. Resource Governor is only supported in the SQL Server 2008 Enterprise and Developer editions.
Configuration

Backup compression is disabled by default for new installs. You can change the default at the server level by setting the value of backup compression default to 1, as shown below:
USE master; GO EXEC sp_configure 'backup compression default', '1'; RECONFIGURE WITH OVERRIDE ;
Alternatively, you can use SQL Server Management Studio to change this setting: use the Database Settings page of the Server Properties dialog to set the backup compression default. Changing this server-level option will cause all backups to be compressed by default. However, you can also override the default backup compression setting for an individual backup by using keywords in the BACKUP command itself, as shown in the example below.
BACKUP DATABASE HCM849 TO DISK='Z:\PSFTBackups\HCM849.bak' WITH COMPRESSION ;
If you are using SQL Server Management Studio to back up, you can use the Set Backup Compression option on the Back Up Database Options page.

Compression Ratio

To view the compression ratio achieved by backup compression, you can query the backup-set history table, as shown below:
SELECT backup_size/compressed_backup_size as 'compression ratio', database_name FROM msdb..backupset ;
Performance Impact

Compression is a CPU-intensive operation and may increase CPU usage, so it is important to consider the impact on concurrent operations when executing a BACKUP command with the COMPRESSION option. We recommend running the backup operation during off-peak hours. If concurrent execution is required and there is a noticeable impact on CPU usage, you can consider leveraging the Resource Governor feature, as discussed in section 3.1.1, to govern and limit the CPU usage of the BACKUP command.
It is important to note that you can only create a compressed backup when using the SQL Server 2008 Enterprise or Developer editions; however, you can restore a compressed backup on any SQL Server 2008 or later edition. For more information on backup compression and the factors that can affect compression, please refer to the Backup Compression topic in SQL Server 2008 Books Online.
The replacement of the repeating values by references to the CI (compression information) structure results in the space savings and thereby the compression. The following illustration from SQL Server 2008 Books Online shows a sample page of a table before and after prefix compression.

[Illustration: sample page before and after prefix compression]
Dictionary compression is the next step after prefix compression. Dictionary compression works on the entire page and replaces repeated values. The following illustration, again taken from SQL Server 2008 Books Online, shows the same page after dictionary compression.
Note that unlike prefix compression, in dictionary compression the value 4b is referenced from different columns of the page. For more information on page compression, please refer to the Page Compression Implementation topic in SQL Server 2008 Books Online.

Performance Considerations
Data compression can be CPU-intensive; comparatively, row compression has lower overhead than page compression. We recommend testing your PeopleSoft application for CPU overhead when using compression. Before using compression, you can estimate the size and compression savings by using the sp_estimate_data_compression_savings system stored procedure (a sketch follows the configuration example below); please refer to SQL Server 2008 Books Online for further information.

Configuration

PeopleSoft applications can use the data compression option by using the ALTER TABLE command, as shown below. (SQL Server 2008 also supports data compression at table creation time; however, this may not be natively available in PeopleTools.)
ALTER TABLE PS_LEDGER REBUILD WITH (DATA_COMPRESSION = PAGE) ; GO
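Before committing to a rebuild, the expected savings can be estimated; the following is a hedged sketch using the estimation procedure against the same PS_LEDGER example (the dbo schema is an assumption for illustration):

EXEC sp_estimate_data_compression_savings
    @schema_name = 'dbo',        -- assumed schema
    @object_name = 'PS_LEDGER',
    @index_id = NULL,            -- NULL = all indexes on the table
    @partition_number = NULL,    -- NULL = all partitions
    @data_compression = 'PAGE' ;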
The ALTER TABLE ... REBUILD statement above rebuilds the entire table. It does not rebuild the related non-clustered indexes (see the sketch after the benchmark results below). We recommend verifying support of data compression for your particular PeopleSoft application with PeopleSoft support before enabling it. For more information on compression commands and syntax, please refer to the Creating Compressed Tables and Indexes topic in SQL Server 2008 Books Online.

Benchmarks

We tested data compression on a 9.446 GB PeopleSoft database. The compression results are as follows:

Database                 | Size (GB) | Time taken to compress
Original database        | 9.45      | -
ROW compressed database  | 3.78      | 30 min 39 sec
PAGE compressed database | 2.11      | 33 min 27 sec
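Non-clustered indexes, which the table rebuild above leaves uncompressed, can be rebuilt separately; a minimal sketch against the same sample table:

ALTER INDEX ALL ON PS_LEDGER
REBUILD WITH (DATA_COMPRESSION = PAGE) ;
GO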
Data compression is only supported in SQL Server 2008 Enterprise and Developer editions.
PeopleSoft applications may also require auditing and encryption for compliance. Because it is generally not possible to modify the application itself, it is most desirable to provide auditing and encryption at the database layer, without requiring any change to the application. To meet these requirements, SQL Server 2008 introduces the Transparent Data Encryption and SQL Server Audit features described in the sections below.
USE master ;
GO
-- A database master key must already exist in master before the
-- certificate can be created (that step precedes this excerpt).
CREATE CERTIFICATE MyServerCert
WITH SUBJECT = 'My DEK Certificate' ;
GO
USE HCM849 ;
GO
CREATE DATABASE ENCRYPTION KEY
WITH ALGORITHM = AES_128
ENCRYPTION BY SERVER CERTIFICATE MyServerCert ;
GO
ALTER DATABASE HCM849 SET ENCRYPTION ON ;
GO
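To confirm the encryption state (and watch the background encryption scan complete), a minimal sketch using the encryption-keys DMV:

SELECT DB_NAME(database_id) AS database_name,
       encryption_state    -- 2 = encryption in progress, 3 = encrypted
FROM sys.dm_database_encryption_keys ;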
Performance Considerations

In some of the lab tests we conducted, the typical performance impact of TDE was found to be about 3-5%. TDE is CPU-intensive and is performed at the I/O level. Applications that are I/O-intensive may therefore see a higher CPU impact than applications that access data mostly from memory. If your PeopleSoft application has high CPU usage and is very I/O-intensive, TDE may adversely affect performance; for applications with low CPU usage, the impact of the TDE operations may be modest. We ran some tests to measure the one-time durations taken to encrypt and decrypt an 11 GB sample PeopleSoft database. The results are as follows:

Test             | Time taken
Encrypt database | 19 min 30 sec
Decrypt database | 22 min 30 sec
We highly encourage you to test your application before implementing TDE in production. For more information on TDE, please refer to the Understanding Transparent Data Encryption topic in SQL Server 2008 Books Online and the TDE section in the Database Encryption in SQL Server 2008 Enterprise Edition whitepaper available at: http://msdn.microsoft.com/en-us/library/cc278098.aspx. TDE is only supported in SQL Server 2008 Enterprise and Developer editions.
change data can be captured and stored by using SQL Server Audit. The database activity can be captured and stored in the following destinations:

- File
- Windows Application log
- Windows Security log

SQL Server Audit Components

SQL Server Audit consists of several components: the SQL Server audit, the server audit specification, the database audit specification, and the target.

SQL Server Audit: The SQL Server audit object is defined at the SQL Server instance level and is a collection of server-level or database-level actions and groups of actions to monitor. The audit destination is defined as part of the audit.

Server Audit Specification: Server-level action groups raised by the Extended Events feature are collected by the server audit specification. These actions include server operations such as management changes and logon and logoff operations.

Database Audit Specification: Database-level audit actions, such as DML and DDL changes, are part of the database audit specification.

Target: The audit results are sent to a destination, called the target. The target can be a file, the Windows Security event log, or the Windows Application event log.

Configuration

The process for creating and using an audit is as follows:
1. Create an audit and define the target.
2. Create either a server audit specification or a database audit specification that maps to the audit. Enable the audit specification.
3. Enable the audit.
4. Read the audit events by using the Windows Event Viewer, the Log File Viewer, or the fn_get_audit_file function.

The following example illustrates the use of an audit to capture SELECT statements against the PSEMPLOYEE table. Create an audit object and define the target:
-- Create the SQL Server Audit object, and send the results to a file.
CREATE SERVER AUDIT PSFT_SQL_Server_Audit
TO FILE ( FILEPATH = 'C:\PSFTAudit\Audit\' )
-- The queue delay is set to 1000, meaning one-second
-- intervals to write to the target.
WITH ( QUEUE_DELAY = 1000,
       ON_FAILURE = CONTINUE ) ;
Create the database audit specification and map it to the Audit object:
-- Create the database audit specification object, using an audit event
-- for the HCM849 PSEMPLOYEE table.
USE HCM849 ;
GO
CREATE DATABASE AUDIT SPECIFICATION PSFT_Database_Audit_Specification
FOR SERVER AUDIT PSFT_SQL_Server_Audit
ADD (SELECT ON PSFT.PSEMPLOYEE BY PSFTUSER)
WITH (STATE = ON) ;
GO
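Two remaining steps from the list above are not shown in the original listing: enabling the server audit itself (step 3) and reading the captured events (step 4). A minimal sketch of both, with the file path matching the audit definition:

USE master ;
GO
-- Step 3: enable the audit.
ALTER SERVER AUDIT PSFT_SQL_Server_Audit
WITH (STATE = ON) ;
GO
-- Step 4: read the captured audit events back from the target file(s).
SELECT event_time, action_id, session_server_principal_name, statement
FROM sys.fn_get_audit_file ('C:\PSFTAudit\Audit\*', DEFAULT, DEFAULT) ;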
Performance Implications

The SQL Server auditing architecture is based on Extended Events. The extended events are fired internally in the engine and usually have low overhead. The overhead and performance implications of auditing are directly related to the type and quantity of events configured for monitoring, so it is highly advisable to be selective when configuring them, especially for high-throughput systems. For more information on auditing, please refer to the Understanding SQL Server Audit and SQL Server Audit How-to Topics topics in SQL Server 2008 Books Online.
performance data reporting. SQL Server 2008 further enhances this concept and introduces new features such as the data collector, the management data warehouse, and preconfigured reports to analyze and warehouse performance data and troubleshoot performance issues. This information can also be used for trend analysis and capacity planning. The sections below discuss the key new features and their application to PeopleSoft environments.
Data Collector Components and Architecture

The data collector architecture can be broken down into the following components:

- Client components: The user interface (UI) used to configure the data collector. SQL Server Management Studio is the main UI for configuring and managing the data collector, though all the actions can also be performed via T-SQL commands.
- API components: Enable the interaction between the UI and the data collector.
- Execution components: Components used for data collection and storage, such as SSIS and SQL Server Agent.
- Storage components: The databases that contain the configuration information and the collected data. The collected data is stored in a user-defined management data warehouse (MDW) database, while the configuration is stored in the msdb system database.
PeopleSoft Application Performance Management and Tuning

You can use the data collector and the management data warehouse to troubleshoot typical PeopleSoft performance issues such as blocking, CPU usage by query, missing indexes, and I/O contention.

To create a blocking-analysis collection set, you can use a T-SQL query collector type to create a custom collection set. The T-SQL query will leverage the appropriate DMVs to gather blocking information such as the blocker and blockee query text, lock modes, wait types, and so on. To configure this collector set, please refer to the How to: Create a Custom Collection Set That Uses a T-SQL Query Collector Type topic in SQL Server 2008 Books Online. For a sample T-SQL query to gather blocking information, please visit the SQL Server script center at:
http://www.microsoft.com/technet/scriptcenter/scripts/sql/sql2005/default.mspx?mfr=true

In addition to the custom collection sets you can create for specific troubleshooting requirements, the default system collection sets can provide valuable insight and information as well. The data collector installs three system data collection sets during the SQL Server 2008 setup process, which provide the following information:

- Disk Usage: Disk and log usage data.
- Server Activity: SQL Server and Windows server processor and memory utilization.
- Query Statistics: Query statistics, individual query text, query plans, and specific queries.

The system data collection also provides pre-built reports to view and analyze the data:

- Server Activity History Report: Overview of resource utilization and consumption and server activity, as shown in the snapshot below.

[Screenshot: Server Activity History report]
- Disk Usage Summary Report: Overview of disk space used by all databases on the server and the growth trends for the data and log files of each database.
Configuration

The full configuration of the data collector and the MDW is outside the scope of this paper. Please refer to the Managing Data Collection How-to Topics in SQL Server 2008 Books Online.
- page_fault_count: The number of page faults incurred by the SQL Server process. A large number can indicate memory pressure.
- process_physical_memory_low: Indicates that the process is responding to low physical memory notification. This can be a good indicator of low-memory conditions.

Please refer to the sys.dm_os_process_memory topic in SQL Server 2008 Books Online for a full description of this DMV.

sys.dm_os_sys_memory

This DMV reports overall system memory usage. Specific columns such as total_physical_memory_kb and available_physical_memory_kb are good indicators of the total and available memory, and system_low_memory_signal_state can be used to determine a low-memory condition. Please refer to the sys.dm_os_sys_memory topic in SQL Server 2008 Books Online for a full description of this DMV.
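A minimal sketch querying both DMVs for a quick memory health check:

-- Process-level view: SQL Server's own working set and page faults.
SELECT physical_memory_in_use_kb, page_fault_count,
       process_physical_memory_low
FROM sys.dm_os_process_memory ;

-- System-level view: total vs. available physical memory.
SELECT total_physical_memory_kb, available_physical_memory_kb,
       system_memory_state_desc
FROM sys.dm_os_sys_memory ;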
- Events: Monitoring points of interest in the execution of a SQL Server code path. When a point of interest is encountered, the event fires and its state information is captured. Events can be used for tracing purposes or for triggering actions, and the actions can be triggered synchronously or asynchronously.
- Targets: The event consumers. After an event fires, the event data is consumed by the target, which can process data either synchronously or asynchronously. Extended Events supports the following targets: event bucketing, event pairing, Event Tracing for Windows (ETW), event file, synchronous event counter, and ring buffer.
- Actions: A programmatic response or series of responses to an event. Some examples of actions are: stack dumper, execution plan detection (SQL Server only), T-SQL stack collection (SQL Server only), run-time statistics calculation, and gathering user input on exception.
- Types: The type object encapsulates the information required to interpret the event data.
- Predicates: A set of logical rules used to evaluate events when they are processed. They can be used to selectively capture event data based on specific criteria.
- Maps: A table that maps internal values to descriptive strings.

Configuration

Extended Events can be very useful for troubleshooting PeopleSoft application performance issues. The following example illustrates a code sample to:
1. Create an event session.
2. Write the target output to a file.
3. Select the event data from the file (see the sketch after the session definition).

Create an event session and write to a target file:
create event session xsession_HighCpu on server
ADD EVENT sqlserver.sql_statement_completed
    (action (sqlserver.sql_text) WHERE duration > 0),
ADD EVENT sqlserver.sp_statement_completed
    (action (sqlserver.sql_text) WHERE duration > 0)
add target package0.asynchronous_file_target
    (SET filename = N'C:\temp\wait_stats.xel',
         metadatafile = N'C:\temp\wait_stats.xem') ;
GO
--- Start the session
alter event session xsession_HighCpu on server state = start ;
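Step 3, reading the captured event data back from the file target, is not shown in the original listing; a minimal sketch using the file-target read function with the same file paths:

-- Each row returns one captured event as XML in the event_data column.
select CAST(event_data as xml) as event_data
from sys.fn_xe_file_target_read_file
    ('C:\temp\wait_stats*.xel', 'C:\temp\wait_stats*.xem', null, null) ;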
For more information on Extended Events, please refer to the SQL Server Extended Events topic in SQL Server 2008 Books Online.
If queries differ in structure in any way other than their literal or parameter values, the hash values will be different. In the example below, the two queries will have different hash values, since the first query uses AND and the second uses OR:
SELECT 'x' FROM PS_CUST_CONVER
WHERE SETID = 'MFG' AND CUST_ID = 'Z00000000022689' ;

SELECT 'x' FROM PS_CUST_CONVER
WHERE SETID = 'MFG' OR CUST_ID = 'Z00000000022689' ;
Query Plan Hash

Query plan hash is a binary hash value computed on the query execution plan during the query compilation phase. It is calculated from the logical and physical operators and other important operator attributes. Query plan hash values will be the same for queries that have the same physical and logical operator tree structure and identical values for the important operator attributes. It is likely that for some queries with varied parameter values, the query hash will be the same but the query plan hash different. In the example below, the two queries have the same query hash value but different query plan hash values, because the optimizer chooses a different execution plan for each query based on the cardinality of the data distribution for the parameter values:
SELECT BUSINESS_UNIT, RECEIVER_ID, BILL_OF_LADING
FROM PS_RECV_INQ_SRCH
WHERE BUSINESS_UNIT = 'PO001'
AND RECEIPT_DT BETWEEN '2006-01-01' AND '2006-01-05' ;

SELECT BUSINESS_UNIT, RECEIVER_ID, BILL_OF_LADING
FROM PS_RECV_INQ_SRCH
WHERE BUSINESS_UNIT = 'PO001'
AND RECEIPT_DT BETWEEN '2006-01-01' AND '2008-01-05' ;
In the queries above, because the RECEIPT_DT ranges are so vastly different, the optimizer may choose a different execution plan for each. You can use the following code to find the query hash and the query plan hash for the two queries:
-- Show the query hash and query plan hash
SELECT ST.text AS "Query Text",
       QS.query_hash AS "Query Hash",
       QS.query_plan_hash AS "Query Plan Hash"
FROM sys.dm_exec_query_stats QS
CROSS APPLY sys.dm_exec_sql_text (QS.sql_handle) ST
WHERE ST.text = 'SELECT BUSINESS_UNIT, RECEIVER_ID, BILL_OF_LADING FROM PS_RECV_INQ_SRCH WHERE BUSINESS_UNIT = ''PO001'' AND RECEIPT_DT BETWEEN ''2006-01-01'' AND ''2006-01-05'';'
   OR ST.text = 'SELECT BUSINESS_UNIT, RECEIVER_ID, BILL_OF_LADING FROM PS_RECV_INQ_SRCH WHERE BUSINESS_UNIT = ''PO001'' AND RECEIPT_DT BETWEEN ''2006-01-01'' AND ''2008-01-05'';' ;
GO
Performance Tuning Using Query Hash and Query Plan Hash

Query hash and query plan hash can be very powerful and effective performance tuning tools. Some practical applications are as follows:

Cumulative Query Cost: You may at times face a high CPU utilization issue on your database server. It is quite possible that no single large query is responsible; instead, many small queries may be causing a cumulatively high CPU utilization. In this scenario, the query hash can be used to group those queries together, as shown in the code below, taken from SQL Server 2008 Books Online:
-- Aggregated view of top-5 queries according to average CPU time
SELECT TOP 5
       query_stats.query_hash AS "Query Hash",
       SUM(query_stats.total_worker_time) / SUM(query_stats.execution_count) AS "Avg CPU Time",
       MIN(query_stats.statement_text) AS "Statement Text"
FROM (SELECT QS.*,
             SUBSTRING(ST.text, (QS.statement_start_offset/2) + 1,
                 ((CASE statement_end_offset
                     WHEN -1 THEN DATALENGTH(ST.text)
                     ELSE QS.statement_end_offset
                   END - QS.statement_start_offset)/2) + 1) AS statement_text
      FROM sys.dm_exec_query_stats AS QS
      CROSS APPLY sys.dm_exec_sql_text(QS.sql_handle) AS ST) AS query_stats
GROUP BY query_stats.query_hash
ORDER BY 2 DESC ;
GO
The following example returns information about the top five query plans according to average CPU time. This example aggregates the queries according to their query plan hash so that queries with the same query plan hash are grouped by their cumulative resource consumption:
SELECT TOP 5
       query_plan_hash AS "Query Plan Hash",
       SUM(total_worker_time)/SUM(execution_count) AS "Avg CPU Time",
       MIN(CAST(query_plan AS varchar(max))) AS "ShowPlan XML"
FROM sys.dm_exec_query_stats AS QS
CROSS APPLY sys.dm_exec_query_plan(QS.plan_handle)
GROUP BY query_plan_hash
ORDER BY 2 DESC ;
GO
Baseline Query Plan Benchmarks: Query hash and query plan hash can also be used to benchmark baseline query plans. You can run a stress test and capture the query plan hashes for the important and frequently executed queries. These hash values can then be compared to the hash values on the production server if a performance issue is noticed. Another application is to monitor plan changes due to configuration or hardware changes: the baseline hash values of important queries can be recorded before the change and compared with the hash values after the change, which helps determine whether any plans changed.
A plan guide created by either means has database scope and is stored in the sys.plan_guides catalog view. Plan guides only influence the query plan selection process of the optimizer and do not eliminate the need for the query to be compiled. A new function, sys.fn_validate_plan_guide, has also been introduced to validate existing plan guides, which you may have created for your PeopleSoft workloads running on SQL Server 2005, and to ensure their compatibility with SQL Server 2008. Plan freezing is available in the SQL Server 2008 Standard, Enterprise, and Developer editions.
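As a sketch of how such a validation might be run, the function can be applied to every plan guide in the database; it returns one row for each problem found and no rows for guides that are valid:

SELECT pg.name, v.msgnum, v.severity, v.message
FROM sys.plan_guides AS pg
CROSS APPLY sys.fn_validate_plan_guide(pg.plan_guide_id) AS v ;
GO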
Both of these enhancements help improve scalability and performance without negative side effects on other objects in the instance. Lock escalation is supported in all editions of SQL Server 2008.
o 64-bit edition of Windows Server 2008 Datacenter or Windows Server 2008 Enterprise Edition for Itanium-Based Systems
o Inherent hardware capability to support Hot Add CPU
For SQL Server 2008 to be able to hot add CPUs, it must not be configured to use soft-NUMA.
3.6.2 NUMA
Non-Uniform Memory Access (NUMA) is a memory design used in multiprocessor servers. In NUMA, memory is grouped into nodes associated with groups of CPUs; each CPU can still access memory associated with the other groups in a coherent way, while access to its local node is fastest. This reduces contention on a single shared memory bus and improves scalability, especially for large multiprocessor machines. A full discussion of NUMA is beyond the scope of this white paper; please refer to the SQL Server 2008 Books Online topics Understanding Non-uniform Memory Access and How SQL Server Supports NUMA for an in-depth discussion.

SQL Server 2008 and some earlier versions (SQL Server 2000 SP3 and later) are NUMA aware, and some key changes for NUMA support were introduced in SQL Server 2005. SQL Server has been designed for NUMA hardware and performs well on it without special configuration.

Hardware and Soft-NUMA Support

SQL Server supports both hardware NUMA and soft-NUMA. For hardware NUMA, SQL Server configures itself during startup based on the underlying operating system and hardware configuration. Soft-NUMA must be explicitly configured before SQL Server can use it; please refer to the SQL Server 2008 Books Online topic How to: Configure SQL Server to Use Soft-NUMA for soft-NUMA configuration.
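To see how SQL Server has mapped itself onto the NUMA hardware, the sys.dm_os_nodes dynamic management view can be inspected. A minimal sketch follows; one row is returned per SQLOS node, plus a hidden node reserved for the dedicated administrator connection:

SELECT node_id, node_state_desc, memory_node_id, online_scheduler_count
FROM sys.dm_os_nodes ;
GO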
4 Database Maintenance
The following sections discuss issues of database maintenance, such as managing indexes, detecting and reducing fragmentation, using database statistics, and controlling locking behavior.
partitions. The choice of tables and the columns to partition on will depend on your PeopleSoft application and specific scenario. Step 1 - Create a Partition Function A partition function specifies how the table or index is partitioned. The function helps divide the data into a set of partitions. The following example maps the rows of a table or index into partitions based on the values of a specified column:
CREATE PARTITION FUNCTION AcctRangePF1 (char(10)) AS RANGE LEFT FOR VALUES ( '1000', '2000', '3000', '4000') ;
Based on this function, a table to which it is applied will be divided into five partitions, as shown below:

Partition | Values
1 | col1 <= '1000'
2 | col1 > '1000' AND col1 <= '2000'
3 | col1 > '2000' AND col1 <= '3000'
4 | col1 > '3000' AND col1 <= '4000'
5 | col1 > '4000'
Step 2 - Create a Partition Scheme A partition scheme maps the partitions produced by a partition function to a set of filegroups that you define. The following example creates a partition scheme that specifies the filegroups to hold each one of the five partitions. This example assumes the filegroups already exist in the database.
CREATE PARTITION SCHEME AcctRangePS1 AS PARTITION AcctRangePF1 TO (HR1fg, HR2fg, HR3fg, HR4fg, HR5fg) ;
Step 3 - Create a Table or Index Using the Partition Scheme

The example below creates the PS_LEDGER table using the partition scheme defined in Step 2.
CREATE TABLE PS_LEDGER (
    [BUSINESS_UNIT] [char](5) COLLATE Latin1_General_BIN NOT NULL,
    [LEDGER] [char](10) COLLATE Latin1_General_BIN NOT NULL,
    [ACCOUNT] [char](10) COLLATE Latin1_General_BIN NOT NULL,
    -- ... remaining PS_LEDGER columns elided ...
) ON AcctRangePS1 (ACCOUNT) ;
The PS_LEDGER table will be created across five partitions based on the partitioning function and the scheme created in Steps 1 and 2, respectively. Based on the previous examples, the partitions for PS_LEDGER are:

Partition | Values
1 | ACCOUNT <= '1000'
2 | ACCOUNT > '1000' AND ACCOUNT <= '2000'
3 | ACCOUNT > '2000' AND ACCOUNT <= '3000'
4 | ACCOUNT > '3000' AND ACCOUNT <= '4000'
5 | ACCOUNT > '4000'
It is strongly recommended that you evaluate whether to partition at all, and which tables to partition, based on your specific PeopleSoft application scenario and requirements. For most scenarios, partitioning is most beneficial for manageability and maintenance; for some specific scenarios it can yield performance improvements as well. Performance may be negatively impacted if the tables involved in a query are not joined on the partitioning key and are not partitioned by the same partitioning function, or, for a single-table query, if the data required by the query is not co-located on the same partition. For more information about table and index partitioning in SQL Server 2008, refer to the topic Partitioned Tables and Indexes in SQL Server 2008 Books Online.
Fragmentation interferes with scanning all or part of a table or an index: access by a range of values is no longer sequential, limiting the ability of the storage engine to issue large I/O requests.
As mentioned in Chapter 2, for PeopleSoft applications it is recommended to set the server-wide MAXDOP setting to 1. However, for better performance and CPU resource utilization during index maintenance operations, this setting can either be temporarily increased or overridden by using the MAXDOP query hint, as in the sketch below.
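A minimal sketch of overriding the server-wide setting for a single maintenance operation follows; the value of 4 is illustrative and should be chosen based on the CPUs available during the maintenance window:

ALTER INDEX ALL ON PS_LEDGER
REBUILD WITH (MAXDOP = 4) ;
GO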
The following query uses the sys.dm_db_index_usage_stats dynamic management view to retrieve information on the frequency of seeks, scans, and lookups by user queries on all indexes of all user tables in a specific database:
select db_name(database_id) as 'DB Name',
       object_name(isu.object_id) as 'Table Name',
       si.name as 'Index Name',
       user_seeks as 'Seeks',
       user_scans as 'Scans',
       user_lookups as 'Lookups'
from sys.dm_db_index_usage_stats isu
inner join sys.indexes si
   on si.index_id = isu.index_id and si.object_id = isu.object_id
inner join sys.objects so
   on so.object_id = si.object_id and so.type = 'U' ;
For more information on this dynamic management view, see the topic sys.dm_db_index_usage_stats in SQL Server 2008 Books Online. The sys.dm_db_index_usage_stats view, or the query above, can be used in PeopleSoft applications to retrieve index usage statistics for the database. This information can help evaluate index usage and plan index maintenance operations. It is important to note that the information in this dynamic management view is cleared when the SQL Server service is restarted.
The information in these dynamic management views is reset when the SQL Server service is restarted. For more details about these dynamic management views and the Missing Indexes feature, see the topic About the Missing Indexes Feature in SQL Server 2008 Books Online. The following example query can be used to identify missing index information for PeopleSoft applications.
select d.*,
       s.avg_total_user_cost,
       s.avg_user_impact,
       s.last_user_seek,
       s.unique_compiles
from sys.dm_db_missing_index_group_stats s,
     sys.dm_db_missing_index_groups g,
     sys.dm_db_missing_index_details d
where s.group_handle = g.index_group_handle
  and d.index_handle = g.index_handle
order by s.avg_user_impact desc
go

--- suggested index columns & usage
declare @handle int
select @handle = d.index_handle
from sys.dm_db_missing_index_group_stats s,
     sys.dm_db_missing_index_groups g,
     sys.dm_db_missing_index_details d
where s.group_handle = g.index_group_handle
  and d.index_handle = g.index_handle

select *
from sys.dm_db_missing_index_columns(@handle)
order by column_id ;
It is highly recommended for PeopleSoft applications that you do a thorough analysis of the missing index data before creating any new indexes. Adding indexes may help improve query performance; however, keep in mind that adding indexes, especially on highly volatile columns, can significantly degrade performance due to the extra processing needed to maintain them. It is recommended to use only PeopleSoft Application Designer to create any new indexes.
The following example query identifies indexes on user tables that do not appear in the index usage statistics, that is, indexes that have not been used since the last restart:

select object_name(i.object_id) as 'Table Name',
       i.name as 'Index Name'
from sys.indexes i, sys.objects o
where i.index_id not in
      (select s.index_id
       from sys.dm_db_index_usage_stats s
       where s.object_id = i.object_id
         and i.index_id = s.index_id
         and database_id = db_id('PSFTDB'))
  and o.type = 'U'
  and o.object_id = i.object_id
order by object_name(i.object_id) asc ;
For PeopleSoft applications, it is important to understand that some indexes may be used quite infrequently yet still be critical to the performance of specific functionality. For example, a batch process could run monthly, quarterly, or even annually. An index identified by the previous example query may never have been used yet, but a scheduled batch process could depend on it; deleting such an index could have adverse effects on the performance of scheduled (but not-yet-run) batch processes. Though deleting never-used indexes may lower overhead, a thorough analysis should be made before deleting any index. If in doubt, it is best to disable an unused index, as explained below, rather than delete it.

Note: The dynamic management view used to identify unused indexes is cleared when SQL Server is restarted; therefore, the information revealed by the previous query covers only the period from the last SQL Server instance restart to the point in time the query is executed. To get an accurate view of the indexes that are not used over a longer period of time, and across SQL Server restarts, it is recommended to store snapshots of the query output over a period of time and then analyze the aggregated data.
o To temporarily remove the index for performance troubleshooting purposes.
o To optimize space while rebuilding other indexes.

The following example shows how to disable an index:
ALTER INDEX PSCLEDGER ON dbo.PS_LEDGER DISABLE ; GO
An index disabled in this way can be re-enabled by rebuilding it. One way is CREATE INDEX WITH DROP_EXISTING:

CREATE INDEX PSCLEDGER ON dbo.PS_LEDGER (LEDGER)   -- illustrative column list
WITH (DROP_EXISTING = ON) ;
GO
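Alternatively, ALTER INDEX REBUILD re-enables the index without restating the column list; a minimal sketch:

ALTER INDEX PSCLEDGER ON dbo.PS_LEDGER REBUILD ;
GO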
Make sure to evaluate each index carefully before disabling it. Some indexes may be required for monthly, quarterly, or year-end processes. Disabling infrequently used indexes could cause performance issues for those processes.
The sys.dm_db_index_physical_stats dynamic management function replaces the DBCC SHOWCONTIG statement of earlier versions of SQL Server. Unlike DBCC SHOWCONTIG, its fragmentation calculation algorithms consider storage that spans multiple files and are therefore more accurate; as a result, the fragmentation values may appear higher. For PeopleSoft applications, the value of avg_fragmentation_in_percent should ideally be as close to zero as possible for maximum performance; however, values up to 15 percent are acceptable.
For more information about sys.dm_db_index_physical_stats, refer to the topic sys.dm_db_index_physical_stats in SQL Server 2008 Books Online.
Use the DROP_EXISTING option to change the characteristics of an index or to rebuild indexes without having to drop the index and re-create it. The benefit of using the DROP_EXISTING option is that you can modify indexes created with PRIMARY KEY or UNIQUE constraints. This option performs the following:
o Removes all fragmentation.
o Reestablishes FILLFACTOR/PAD_INDEX.
o Recalculates index statistics.

The second method to reduce fragmentation is to reorganize the index, using the ALTER INDEX REORGANIZE statement. It is the replacement for DBCC INDEXDEFRAG, and it reorders the leaf-level pages of the index into logical order. Use this option to perform online logical index defragmentation. This operation can be interrupted without losing the work that has already been completed. The drawbacks of this method are that it does not do as good a job of reorganizing the data as an index rebuild operation, and it does not update statistics. The following example demonstrates the ALTER INDEX REORGANIZE statement:
ALTER INDEX PS_LEDGER ON PS_LEDGER REORGANIZE; GO
The third method to reduce fragmentation is to rebuild the index, using the ALTER INDEX REBUILD statement. It is the replacement for DBCC DBREINDEX, and it can rebuild the index online or offline. Use this option to:
o Remove heavy fragmentation.
o Rebuild the physical index online or offline.

The ALTER INDEX REBUILD statement also updates the index statistics as part of the rebuild, so a separate statistics update is not required. The following example demonstrates the ALTER INDEX REBUILD statement:
ALTER INDEX PS_LEDGER ON PS_LEDGER REBUILD; GO
In general, when the avg_fragmentation_in_percent value is between 5 and 30 percent, the ALTER INDEX REORGANIZE statement can be used to remove fragmentation. For heavy fragmentation (more than 30 percent), the ALTER INDEX REBUILD or CREATE INDEX WITH DROP_EXISTING statements are recommended. Use the following guidelines to decide between the two options.

Functionality | ALTER INDEX REBUILD | CREATE INDEX WITH DROP_EXISTING
Index definition can be changed by adding or removing key columns, changing column order, or changing the column sort order.* | No | Yes**
Index options can be set or modified. | Yes | Yes
More than one index can be rebuilt in a single transaction. | Yes | No
Most index types can be rebuilt online without blocking running queries or updates. | Yes | Yes
Partitioned index can be repartitioned. | No | Yes
Index can be moved to another filegroup. | No | Yes
Additional temporary disk space is required. | Yes | Yes
Rebuilding a clustered index rebuilds associated non-clustered indexes. | No, unless the keyword ALL is specified. | No, unless the index definition is changed.
Indexes enforcing PRIMARY KEY and UNIQUE constraints can be rebuilt without dropping and re-creating the constraints. | Yes | Yes
Single index partition can be rebuilt. | Yes | No
* A non-clustered index can be converted to a clustered index type by specifying CLUSTERED in the index definition. This operation must be performed with the ONLINE option set to OFF. Conversion from clustered to non-clustered is not supported, regardless of the ONLINE setting.

** If the index is re-created by using the same name, columns, and sort order, the sort operation may be omitted. The rebuild operation checks that the rows are sorted while building the index.

Fragmentation alone is not a sufficient reason to reorganize or rebuild an index. The main effect of fragmentation is that it slows down page read-ahead throughput during index scans, which causes slower response times. It is also not recommended to remove fragmentation of 5 percent or less, since, depending on the index size, the cost may outweigh the benefit.
Some text in this section, including the table and associated notes, is taken from the following source: Microsoft SQL Server 2008 Books Online, Reorganizing and Rebuilding Indexes, Microsoft Corporation. http://msdn2.microsoft.com/en-us/library/ms189858.aspx
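Putting the thresholds above into practice, the following sketch lists a suggested action for each index in the current database; the 5 and 30 percent cut-offs are the guidelines discussed above, not hard rules:

SELECT OBJECT_NAME(ips.object_id) AS 'Table Name',
       si.name AS 'Index Name',
       ips.avg_fragmentation_in_percent,
       CASE
         WHEN ips.avg_fragmentation_in_percent <= 5 THEN 'No action needed'
         WHEN ips.avg_fragmentation_in_percent <= 30 THEN 'ALTER INDEX ... REORGANIZE'
         ELSE 'ALTER INDEX ... REBUILD'
       END AS 'Suggested Action'
FROM sys.dm_db_index_physical_stats (DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS si
  ON si.object_id = ips.object_id AND si.index_id = ips.index_id
WHERE ips.index_id > 0   -- exclude heaps
ORDER BY ips.avg_fragmentation_in_percent DESC ;
GO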
In the following example, all indexes on the PS_LEDGER table are rebuilt online.
ALTER INDEX ALL ON PS_LEDGER REBUILD WITH (ONLINE = ON) ; GO
When you perform online index operations, the following guidelines apply:
o The underlying table cannot be modified, truncated, or dropped while an online index operation is in process.
o Clustered indexes must be created, rebuilt, or dropped offline when the underlying table contains large object (LOB) data types: image, ntext, text, varchar(max), nvarchar(max), varbinary(max), and xml.
o Non-unique non-clustered indexes can be created online when the table contains LOB data types, provided none of those columns is used in the index definition as either a key or non-key (included) column.
o Non-clustered indexes defined with LOB data type columns must be created or rebuilt offline.

You can perform concurrent online index operations on the same table only when doing the following:
o Creating multiple non-clustered indexes.
o Reorganizing different indexes on the same table.
o Reorganizing different indexes while rebuilding non-overlapping indexes on the same table.

All other online index operations performed at the same time fail. For example, you cannot rebuild two or more indexes on the same table concurrently, or create a new index while rebuilding an existing index on the same table.
4.5 Statistics
Statistics are details about the uniqueness (or density) of data values, including a histogram consisting of an even sampling of the values of the index key (or the first column of the key for a composite index) based on the current data. Statistics also include the number of pages in the table or index. SQL Server uses a cost-based optimizer, which means that stale statistics mislead the optimizer and can result in poor execution plans.
For example, to disable automatic updating of statistics for a specific index on the table PS_BO:
EXEC sp_autostats 'PS_BO', 'OFF', 'PSABO' ;
GO
Alternatively, use the UPDATE STATISTICS statement with the WITH NORECOMPUTE option. This indicates that statistics should not be automatically recomputed in the future. Running UPDATE STATISTICS again without the WITH NORECOMPUTE option enables automatic updates again. For example:
UPDATE STATISTICS PS_BO WITH NORECOMPUTE ; GO
Note: Setting the AUTO_UPDATE_STATISTICS database option to FALSE overrides any individual table settings.
Statistics are used by the query optimizer to estimate the selectivity of expressions, and thus the size of intermediate and final query results. Good statistics allow the optimizer to accurately assess the cost of different query plans and choose a better plan. User-created statistics are required only in a few advanced performance tuning scenarios; in the majority of cases, the statistics created by SQL Server are sufficient for the optimizer to produce efficient execution plans. For a detailed discussion about statistics, refer to the white paper Statistics Used by the Query Optimizer in Microsoft SQL Server 2005, available from Microsoft TechNet at http://www.microsoft.com/technet/prodtechnol/sql/2005/qrystats.mspx. Even though this paper targets SQL Server 2005, the content is relevant and accurate for SQL Server 2008.
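For the rare case where a user-created statistic is warranted, a minimal sketch follows; the statistic name and the column are hypothetical and for illustration only:

CREATE STATISTICS psbo_descr_stats
ON PS_BO (DESCR)   -- hypothetical column
WITH FULLSCAN ;
GO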
The following example updates statistics for all the indexes on PS_BO table, using a default sampling:
UPDATE STATISTICS PS_BO ; GO
Usually the default sampling is good enough. However, there were a few occasions during tuning and benchmarking of PeopleSoft applications when the SQL Server optimizer failed to produce the best execution plan for some SQL statements, and further testing showed that updating statistics with FULLSCAN improved the situation. The following example updates statistics for all the indexes on the PS_BO table, using a specific sampling:
UPDATE STATISTICS PS_BO WITH SAMPLE 40 PERCENT ; GO
The following example updates statistics for all the indexes on PS_BO table, using all the rows:
UPDATE STATISTICS PS_BO WITH FULLSCAN ; GO
Note: Treat UPDATE STATISTICS with FULLSCAN as an exception, to be used only if you believe the optimizer is not selecting a good execution plan due to inaccuracies in the sampled statistics on the index or table. Statistics can also be updated on all user-defined tables in the current database using the sp_updatestats stored procedure, as shown below. This may take a very long time to complete, especially when run against large databases.
USE PSFTDB GO EXEC sp_updatestats ; GO
o Average key length.
o All density.
o Distribution histogram.

The following is an example of the DBCC SHOW_STATISTICS command, shown here against the PS_BO table and the PSABO index used in the earlier sp_autostats example:
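DBCC SHOW_STATISTICS ('PS_BO', PSABO) ;   -- displays the statistics header, density vector, and histogram
GO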
Physical statistics and size information for indexes can also be viewed using the sys.dm_db_index_physical_stats dynamic management function, using the command shown below.
SELECT * FROM sys.dm_db_index_physical_stats (DB_ID('TVP'), NULL, NULL, NULL, 'DETAILED') ; GO
A coarser lock grain (page or table) reduces locking overhead; however, it reduces concurrency. When the lock grain is lower (key or row), the reverse is true. In SQL Server 2008, the ALTER INDEX statement with the ALLOW_ROW_LOCKS and ALLOW_PAGE_LOCKS options can be used to customize the initial lock grain for an index or an entire table, including its indexes. These options allow (or disallow) row or page locks on the specified object. The default for these options is ON; that is, row and page locks are allowed. Note: Row locks on non-clustered indexes refer to the key or row locator entries in the index's leaf pages. By disallowing page locks, you can increase write concurrency and reduce writer/writer deadlocks. For example:
ALTER INDEX PS_BO ON PS_BO SET (ALLOW_PAGE_LOCKS = OFF); GO
Note: In SQL Server 2008, when using the read-committed snapshot isolation level, lock contention or blocking issues due to writers blocking readers are eliminated. Therefore, the need to use the ALLOW_ROW_LOCKS and ALLOW_PAGE_LOCKS options to control locking, avoid blocking, and improve concurrency is greatly reduced for writer/reader blocking scenarios. If an index (or table) is dropped and re-created, ALTER INDEX will need to be re-executed to reestablish the customized locking for the table or index.
When the threshold is reached (that is, lock memory is greater than 40 percent of SQL Server memory), SQL Server attempts to escalate locks to control the amount of memory used for them. It identifies a table within the transaction that holds the largest number of locks as a good candidate for lock escalation; however, if it finds any incompatible locks held on the table, it skips that table. If the requested table lock cannot be granted, escalation is not blocked; the transaction continues, and escalation is requested again when the transaction has acquired the next multiple of 1,250 locks.

Lock Escalation Hierarchy

Lock escalation never converts row locks to page locks; it always converts them to partition locks (if the table is partitioned and the option to escalate to partition locks is selected) or table locks. The escalation is always directly from row or page to a partition or table lock.

Lock Escalation and Performance

Though lock escalation may at times result in blocking and deadlocks, it is not their only cause. Often blocking and deadlocks happen because of the application and the nature of its usage, even without any lock escalation. If lock escalation is causing performance issues through excessive blocking or deadlocking, it can be prevented. Use the following techniques to prevent lock escalation:
o Use SQL Server Profiler to monitor lock escalations and find out how frequently lock escalation occurs and on what tables.
o Ensure that SQL Server has sufficient memory. Use Performance Monitor to watch Total Server Memory (KB) and Lock Memory (KB).
o Determine whether the transaction provides a way to control the commit frequency; if so, increase the commit frequency. A good example is Global Payroll PAYCALC.
o Selectively disable lock escalation on one or more tables using the method explained below.

Controlling Lock Escalation

SQL Server 2008 introduces a new option to control table lock escalation. Using the ALTER TABLE command, as explained in section 3.5.3, locks can be specified to never escalate, or to escalate only to the partition level for partitioned tables, as in the sketch below. Both of these enhancements help improve scalability and performance without negative side effects on other objects in the instance. Controlling lock escalation is done at the database table level and does not require any PeopleSoft application change.
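A minimal sketch of the two variants, using tables from the earlier examples:

-- Prevent lock escalation on PS_BO altogether
ALTER TABLE PS_BO SET (LOCK_ESCALATION = DISABLE) ;
GO
-- For a partitioned table such as PS_LEDGER, escalate only to the partition level
ALTER TABLE PS_LEDGER SET (LOCK_ESCALATION = AUTO) ;
GO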
In SQL Server 2008, the read-committed snapshot isolation level is the recommended setting for PeopleSoft applications. This isolation level has no direct effect on lock escalation; however, it does alleviate lock contention or blocking problems caused by lock escalation. For instance, if an UPDATE statement causes lock escalation and the entire table is locked, under the read-committed snapshot isolation level a concurrent read transaction on the table would not be blocked.

Warning! Ensure that the version of PeopleTools you are using supports the read-committed snapshot isolation level. You can only use it if it is supported by PeopleTools.
4.6.5 Deadlocks
In SQL Server 2008, with the introduction of the new read-committed snapshot isolation level, lock contention and blocking problems are greatly reduced. Read-committed snapshot isolation eliminates writers blocking readers and readers blocking writers. This, in turn, also eliminates the reader/writer deadlock scenarios, where an UPDATE, INSERT, or DELETE transaction and a SELECT transaction deadlock. However, deadlocks caused by two concurrent UPDATE, INSERT, or DELETE transactions and other writer/writer scenarios may still exist. The information that follows will help you analyze and debug these deadlocks.

A deadlock occurs when two processes are waiting for a resource and neither process can advance because the other process prevents it from getting the resource. Knowing the following information is a starting point for resolving a deadlock:
o Processes that caused the deadlock.
o Deadlock trace.
o SQL statements that caused the deadlock.
o SQL statements within the transaction where the deadlock happened.
o Execution plan of each SQL statement that resulted in the deadlock.
Deadlocks can be monitored in one of three ways: using trace flag 1204, using trace flag 1222, or using SQL Server Profiler.

Using Trace Flag 1204 or Trace Flag 1222

When deadlocks occur, trace flag 1204 and trace flag 1222 return information that is captured in the SQL Server 2008 error log. Trace flag 1204 reports deadlock information formatted by each node involved in the deadlock. Trace flag 1222 formats deadlock
information, first by processes and then by resources, in an XML formatted output. It is possible to enable both trace flags to obtain two representations of the same deadlock event. SQL Server can be started with trace flag 1204 or 1222 enabled. To do so:
1. Open SQL Server Configuration Manager and select SQL Server Services.
2. Right-click the SQL Server service corresponding to the PeopleSoft database instance and select Properties.
3. On the Advanced tab, append the following trace flags to the end of the Startup Parameters entry:
-T1204 -T3605
Make sure not to accidentally change any of the existing Startup Parameters values. The output of the deadlock trace is logged to the SQL Server error log specified by the -e parameter in Startup Parameters. The 3605 flag causes the output to go to the error log rather than the screen; this is the default in SQL Server 2008. Alternatively, deadlock tracing can be enabled with the following command:
DBCC TRACEON (1204, 3605,-1) ; GO
The output contains the following information: The object involved in the deadlock. In this example it is 10:2045302396:2, where 10 is the database ID, 2045302396 is the object ID, and the final 2 is the index ID. Entering the following, with the database ID from the trace output, gives you the name of the database where the deadlock occurred:
SELECT DB_NAME(10) ;   -- 10 is the database ID from the deadlock trace
From the deadlocked database, entering the following shows the table involved:
SELECT OBJECT_NAME(2045302396) ;   -- 2045302396 is the object ID from the deadlock trace
The statement type shows what kind of statement it is (for example, INSERT or UPDATE). Input buf (the input buffer) shows the actual statement; however, in a PeopleSoft environment you see either sp_prepexec or sp_cursorexec, which is not very useful in identifying the SQL statement.
For more information about the trace flags, see Detecting and Ending Deadlocks in SQL Server 2008 Books Online.

Using SQL Server Profiler

A better alternative is to use SQL Server Profiler. The list of events and data columns required is specified in Troubleshooting Tips within section 5.2.3, Using SQL Server Profiler.
To use this SQL Profiler output to determine the cause of a deadlock:
1. Save the output into a trace table. From the File menu, select Save As, and choose Trace Table.
2. Use the following T-SQL statement to find the list of Deadlock Chain events.
SELECT * FROM DLTRACE1 WHERE EventClass=59 ; GO
DLTRACE1 is the trace table, and EventClass 59 identifies Deadlock Chain events. From the output you can determine which SPID is involved in the deadlock; note down the row number of the Deadlock Chain event.
3. Substitute the values in the following query to find all the T-SQL statements used by that process as part of the deadlocked transaction.
DECLARE @LastCommitPoint int, @DLSpid int, @DLChainRowNumber int /* Set the Deadlock SPID and the Deadlock Chain's rownumber */ SET @DLSpid = 134 SET @DLChainRowNumber = 159501 SELECT @LastCommitPoint = max(RowNumber) FROM DLTRACE1 WHERE SPID = @DLSpid AND RowNumber < @DLChainRowNumber AND EventClass = 41 -- SQL:StmtCompleted AND TextData like 'COMMIT TRAN%' SELECT * FROM DLTRACE1 WHERE SPID = @DLSpid AND RowNumber < @DLChainRowNumber AND RowNumber > @LastCommitPoint AND EventClass = 45; -- SP:StmtCompleted GO
4. Repeat the previous steps for the other Deadlock Chain events. These SQL statements will present a clear picture of how the deadlock happened. The following event classes and their corresponding IDs are relevant to the PeopleSoft environment.
/* Event class IDs relevant to the PeopleSoft environment:
   RPC:Completed        - 10
   RPC:Starting         - 11
   Lock:Deadlock        - 25
   SQL:StmtStarting     - 40
   SQL:StmtCompleted    - 41  (COMMIT TRAN)
   SP:StmtStarting      - 44
   SP:StmtCompleted     - 45
   Lock:Deadlock Chain  - 59
   Lock:Escalation      - 60
   Execution Plan       - 68
   Showplan Text        - 96
*/
The Appendix of this document includes an example of a procedure that automates this process. You can use this procedure as a model and modify it for your purposes.
o Determine whether read-committed snapshot isolation is enabled. (Only applicable to those versions of PeopleTools that support the read-committed snapshot isolation level.)
o Determine whether the table has up-to-date statistics.
o Check the missing index dynamic management views explained in section 4.2.2.2 for any missing indexes on the tables involved in the deadlock. Create any additional indexes that could help resolve the deadlock.
o Review the execution plans of the SQL statements that caused the deadlock and determine whether they perform an index scan. If they do, see if creating an additional index changes the access path for the SQL statement from an index scan to an index seek. For example, examine the following SQL statement:
SELECT DISTINCT EOEW_MAP_OBJ FROM PS_EOEW_RUN_PROC
WHERE RUN_CNTL_ID LIKE :1 %CONCAT '.%'
The SQL statement does a clustered index scan because the leading key of the existing index is OPRID, and the SQL statement does not use OPRID in the WHERE clause. The solution is to add another index with RUN_CNTL_ID as a leading key, as in the sketch below:
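A hypothetical sketch of such an index follows; the index name is illustrative only, and in practice the index should be defined through PeopleSoft Application Designer, as the Note below explains:

CREATE INDEX PSZEOEW_RUN_PROC   -- hypothetical index name
ON PS_EOEW_RUN_PROC (RUN_CNTL_ID) ;
GO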
Note: PeopleSoft applications are delivered with the indexes required for an application and its performance under typical usage. They are not delivered with all possible indexes, because an index creates unnecessary overhead (on INSERT, UPDATE, and DELETE) if it is not useful for an implementation. Your implementation (data and business processes) may warrant some additional indexes.

Extending the new non-clustered index to cover the query could also help resolve the deadlock. In the previous example, the SQL statement would use the new index
first, but to get the EOEW_MAP_OBJ value it has to go to the table, using the available clustered index to perform this task. If EOEW_MAP_OBJ is also added to the new non-clustered index, the query becomes a covered query; in other words, SQL Server can build the result set entirely by reading the index. If the column you are trying to add to a non-clustered index is part of the clustered index, there is no need to add that column to the non-clustered index for the purpose of index cover.

Pay attention to lock escalations. If the deadlocks are being reported at the PAGE level in the SQL Server error log output, use the INDEXPROPERTY function to determine whether page locks are disallowed on a table. For example:
SELECT INDEXPROPERTY(OBJECT_ID('PS_BO'), 'PS0BO', 'IsPageLockDisallowed'); GO
A return value of 0 means that page locks are allowed; a value of 1 means page locks are disallowed. If needed, you can use ALTER INDEX to disallow page locks.
ALTER INDEX PS_BO ON PS_BO SET (ALLOW_PAGE_LOCKS = OFF); GO
o System Monitor enables you to capture system as well as SQL Server-specific information. The following tables summarize some of the useful System Monitor counters.

Processor Counters

Performance Object | Counter | Description
Processor | % Privileged Time | The percentage of non-idle processor time spent in privileged mode. (Privileged mode is a processing mode designed for operating system components and hardware-manipulating drivers; it allows direct access to hardware and all memory.) % Privileged Time includes time servicing interrupts and DPCs. A high rate of privileged time might be attributable to a large number of interrupts generated by a failing device. This counter displays the average busy time as a percentage of the sample time.
Processor | % User Time | The percentage of non-idle processor time spent in user mode. (User mode is a restricted processing mode designed for applications, environment subsystems, and integral subsystems.) This counter displays the average busy time as a percentage of the sample time.
Memory Counters

Performance Object | Counter | Description
Memory | Available MBytes | The amount of physical memory available to processes running on the computer, in megabytes (bytes/1,048,576).
Memory | Committed Bytes | The amount of committed virtual memory, in bytes. (Committed memory is physical memory for which space has been reserved on the disk paging file, in case it needs to be written back to disk.) This counter displays the last observed value only; it is not an average.
Memory | Page Faults/sec | The overall rate at which faulted pages are handled by the processor, measured in pages faulted per second. A page fault occurs when a process requires code or data that is not in its working set (its space in physical memory). This counter includes both hard faults (those that require disk access) and soft faults (where the faulted page is found elsewhere in physical memory). Most processors can handle large numbers of soft faults without consequence; however, hard faults can cause significant delays. This counter displays the difference between the values observed in the last two samples, divided by the duration of the sample interval.

Physical Disk Counters

Performance Object | Counter | Description
Logical Disk | Avg. Disk Queue Length | The average number of both read and write requests that were queued for the selected disk during the sample interval. This value should be <= 2 per disk.
Logical Disk | % Disk Time | Measures how busy a physical array is (not logical partitions or individual disks in an array); it is a good indicator of the I/O load for each array on your server.
Logical Disk | Avg. Disk Sec/Read | The average time, in seconds, of a read of data from the disk. This value should ideally be less than 11 milliseconds at all times.
Logical Disk | Avg. Disk Sec/Write | The average time, in seconds, of a write of data to the disk. This value should ideally be less than 11 milliseconds at all times for the disks hosting the data files. For the SQL Server database transaction log, the value should preferably be less than 5 milliseconds.

Network Counters

Performance Object | Counter | Description
Network Interface | Bytes Total/sec | The rate at which bytes are sent and received on the interface, including framing characters.
Network Interface | Bytes Sent/sec | The rate at which bytes are sent on the interface, including framing characters.
Network Interface | Bytes Received/sec | The rate at which bytes are received on the interface, including framing characters.
SQL Server Counters

Performance Object | Counter | Description
SQL Server: Buffer Manager | Buffer Cache Hit Ratio | Percentage of pages that were found in memory, thus not requiring a physical I/O operation. This is your indicator of how well the SQL Server buffer cache is performing. The higher the number the better; you should typically see this value greater than 95%.
SQL Server: Buffer Manager | Page Life Expectancy | Estimated number of seconds a page will stay in the buffer pool before it is written out (if not referenced). Low values (less than 180) may be a sign of an insufficient memory condition.
SQL Server: Databases | Active Transactions | Number of active transactions currently executing in the database.
SQL Server: Databases | Transactions/sec | Number of transactions per second for this database. This counter shows how much activity is occurring in the system. The higher the value, the more activity is occurring.
SQL Server: Memory Manager | Lock Memory (KB) | Total amount of memory, in kilobytes, that is allocated to locks.
SQL Server: Memory Manager | Total Server Memory (KB) | Total amount of dynamic memory, in kilobytes, that the server is currently consuming. SQL Server dynamically allocates and de-allocates memory based on how much memory is available in the system. This counter offers you a view of the memory that is currently being used.
SQL Server: Access Methods | Full Scans/sec | Number of full table or index scans per second. Since PeopleSoft applications do not use heap tables, you will not see any explicit table scans; clustered index scans should be treated as full table scans. If this counter shows a non-zero value (>1), it is an indication that some queries can be optimized. This could be an opportunity for more efficient indexing.
SQL Server: General Statistics | User Connections | The number of user connections, not the number of users, currently connected to SQL Server. If this counter exceeds 255, you may want to increase the SQL Server configuration setting max worker threads to a number higher than 255. If the number of connections exceeds the number of available worker threads, SQL Server begins to share worker threads, which can hurt performance.
SQL Server: SQL Statistics | Batch Requests/sec | Number of SQL Server batch requests executed per second. A batch can be a single T-SQL statement or a group of T-SQL statements. For most PeopleSoft applications the batches are executed as single T-SQL statements.
SQL Server: SQL Statistics | SQL Compilations/sec | Number of SQL Server query compilations per second. This value should be lower than 20. For higher values you may want to consider enabling the PARAMETERIZATION FORCED option.
SQL Server: SQL Statistics | SQL Re-Compilations/sec | Number of SQL Server query re-compilations per second. For PeopleSoft applications, recompilations are primarily caused by statistics changes on tables invalidating existing cached plans. This number is usually less than 10.
Online Trace

Modify psappsrv.cfg as follows:

;----------------------------------------------------------------------
; SQL Tracing Bitfield
;
; Bit   Type of tracing
; ---   ---------------
; 1     - SQL statements
; 2     - SQL statement variables
; 4     - SQL connect, disconnect, commit and rollback
; 8     - Row Fetch (indicates that it occurred, not data)
; 16    - All other API calls except ssb
; 32    - Set Select Buffers (identifies the attributes of columns to be selected)
; 64    - Database API specific calls
; 128   - COBOL statement timings
; 256   - Sybase Bind information
; 512   - Sybase Fetch information
; 4096  - Manager information
; 8192  - Mapcore information
; Dynamic change allowed for TraceSql and TraceSqlMask
TraceSql=3
TraceSqlMask=12319

Note: TraceSql=3 captures the SQL information with relatively low overhead. PeopleTools development uses a value of 63 for SQL debugging.

;----------------------------------------------------------------------
; PeopleCode Tracing Bitfield
;
; Bit   Type of tracing
; ---   ---------------
; 1     - Trace entire program
; 2     - List the program
; 4     - Show assignments to variables
; 8     - Show fetched values
; 16    - Show stack
; 64    - Trace start of programs
; 128   - Trace external function calls
; 256   - Trace internal function calls
; 512   - Show parameter values
; 1024  - Show function return value
; 2048  - Trace each statement in program
; Dynamic change allowed for TracePC and TracePCMask
TracePC=456
TracePCMask=0
o Allows you to look for specific information, such as queries involving a particular table, with filters (note that the search is case-sensitive). For example:
ObjectID - Equals - 1977058079
OR TextData - Like - %PS_BO_REL_CAT_ITEM%
o Allows you to search upward for a cursor number to find the SQL statement. For example, you see a command such as sp_cursorexecute 41992. If this step shows a performance problem, such as high reads or a long duration, search upward for the cursor number 41992 to find the corresponding prepare statement.
To monitor SQL statements in the PeopleSoft environment, some specific events need to be captured. You can include these events and save them as a trace template for future use. The following tables summarize some potentially useful events on a PeopleSoft database.

Lock Events

Category of Event | Specific Event | Explanation/Remarks
Lock | Deadlock | Indicates that two concurrent transactions have deadlocked each other by trying to obtain incompatible locks on the resources that the other transaction owns.
Lock | Deadlock Chain | Produced for each of the events leading up to the deadlock. For example, if three transactions are involved in a deadlock, three processes corresponding to the three transactions are listed as a Deadlock Chain.
Lock | Escalation | A finer-grained lock has been converted to a coarser-grained lock. SQL Server lock escalation always converts row or page level locks into table level locks.
Database Events

Category of Event | Specific Event | Explanation/Remarks
Database | Data File Auto Grow | Indicates that the data file grew automatically. This event is not generated if the data file is grown explicitly through ALTER DATABASE. Performance is severely impacted during the autogrowth of a database. The database should be sized properly so that this event never occurs on a production database. Capturing this event has very low overhead.
Database | Log File Auto Grow | Indicates that the log file grew automatically. This event is not generated if the log file is grown explicitly through ALTER DATABASE. Performance is severely impacted during the autogrowth of a log file. The log should be sized properly so that this event never occurs on a production database. Capturing this event has very low overhead.
Performance Events

Category of Event | Specific Event | Explanation/Remarks
Performance | Showplan All | Displays the query plan of the SQL statement with full compile-time details.
Performance | Showplan Statistics Profile | Displays the query plan with full run-time details (including actual and estimated number of rows passing through each operation) of the statement that was executed. It requires that the Binary Data column be included.

Stored Procedure Events

Category of Event | Specific Event | Explanation/Remarks
Stored Procedures | RPC:Starting | Occurs when a remote procedure call has started. PeopleSoft applications extensively use the stored procedure (type) ODBC calls, such as sp_cursorprepare, sp_cursorexecute, and sp_cursorprepexec. They all fall under the Stored Procedures event category.
Stored Procedures | RPC:Completed | Indicates when the stored procedure is completed.
Stored Procedures | SP:StmtStarting | Indicates when a statement within the stored procedure is starting.
Stored Procedures | SP:StmtCompleted | Indicates when a statement within the stored procedure has completed.

TSQL Events

Category of Event | Specific Event | Explanation/Remarks
TSQL | SQL:StmtStarting | Occurs when a Transact-SQL statement is starting.
TSQL | SQL:StmtCompleted | Occurs when a Transact-SQL statement has completed.
Data Columns

The following SQL Profiler data columns are required in order to capture the relevant information for the events suggested above.

Column | Explanation/Remarks
EventClass | Type of event class captured.
SPID | Server process ID assigned by SQL Server to the process associated with the client.
CPU | Amount of processor time (in milliseconds) used by the event.
Duration | Amount of time (in milliseconds) used by the event.
TextData | Text value dependent on the event class captured in the trace. This column is important if you want to apply a filter based on the query text, or if you save the file into a table and run Transact-SQL queries against the table.
BinaryData | Binary value dependent on the event class captured in the trace. For some events, such as Performance: Showplan Statistics, it is necessary to include this data column. This column is readable only using SQL Server Profiler, as it stores the binary form of the data.
StartTime | Time at which the event started, when available. For filtering, expected formats are YYYY-MM-DD and YYYY-MM-DD HH:MM:SS.
EndTime | Time at which the event ended. This column is not populated for starting event classes, such as SP:StmtStarting or RPC:Starting. For filtering, expected formats are YYYY-MM-DD and YYYY-MM-DD HH:MM:SS.
IndexID | ID for the index on the object affected by the event. To determine the index ID for an object, use the index_id column of the sys.indexes catalog view.
ObjectID | System-assigned ID of the object.
Reads | Number of logical reads performed by the server on behalf of the event. This column is not populated for starting event classes, such as SP:StmtStarting or RPC:Starting.
Writes | Number of physical writes performed by the server on behalf of the event. This column is not populated for starting event classes, such as SP:StmtStarting or RPC:Starting.
Note: Though SQL Server Profiler traces captured with these events and data columns provide comprehensive information, the trace files or tables may become huge.

Troubleshooting Tips

The following table summarizes common problems and the possible causes that can result in SQL Server Profiler not producing the desired output.

Issue | Possible Cause
Event is captured but no relevant data displayed. | The correct columns are not selected; for example, Performance: Showplan Statistics without the BinaryData column. In SQL Server 2008, the relevant columns are automatically selected for an event in SQL Server Profiler.
Setting a filter does not filter out all unrelated data. | Rows with NULL values are not filtered.
Search yields no matches even when values exist. | Profiler search is case-sensitive.
Heavy processor usage and disk activity. | Reduce the amount of data being captured; log to faster disks.
Warning message about events not captured appears. | Unable to write all the trace information in time. Reduce the amount of data being captured; log to a faster disk subsystem.
Top 10 Processor Consumers

The dynamic management views sys.dm_exec_query_stats and sys.dm_exec_sql_text can be used to identify the top processor consumers. The following dynamic management view query will retrieve the top 10 processor consumers:
SELECT TOP 10
       qs.total_worker_time/qs.execution_count AS [Avg CPU Time],
       SUBSTRING(qt.text, qs.statement_start_offset/2,
           (CASE WHEN qs.statement_end_offset = -1
                 THEN LEN(CONVERT(nvarchar(max), qt.text))*2
                 ELSE qs.statement_end_offset
            END - qs.statement_start_offset)/2) AS query_text,
       qt.dbid,
       dbname = db_name(qt.dbid),
       qt.objectid
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS qt
ORDER BY [Avg CPU Time] DESC ;
GO
Top 10 I/O Consumers

The dynamic management views sys.dm_exec_query_stats and sys.dm_exec_sql_text can be used to identify the top I/O consumers. The following dynamic management view query will retrieve the top 10 I/O consumers:
SELECT TOP 10
       (qs.total_logical_reads + qs.total_logical_writes) / qs.execution_count AS [Avg IO],
       SUBSTRING(qt.text, qs.statement_start_offset/2,
           (CASE WHEN qs.statement_end_offset = -1
                 THEN LEN(CONVERT(nvarchar(max), qt.text))*2
                 ELSE qs.statement_end_offset
            END - qs.statement_start_offset)/2) AS query_text,
       qt.dbid,
       dbname = db_name(qt.dbid),
       qt.objectid,
       qs.sql_handle,
       qs.plan_handle
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS qt
ORDER BY [Avg IO] DESC ;
GO
Top 10 Longest Running Queries

The dynamic management views sys.dm_exec_query_stats, sys.dm_exec_sql_text, and sys.dm_exec_query_plan can be joined to retrieve the top 10 queries and their execution plans by elapsed duration. The elapsed duration is in microseconds, and the query plan is in XML format.
SELECT TOP 10
       qs.last_elapsed_time AS 'Elapsed Time',
       SUBSTRING(qt.text, qs.statement_start_offset/2,
           (CASE WHEN qs.statement_end_offset = -1
                 THEN LEN(CONVERT(nvarchar(max), qt.text))*2
                 ELSE qs.statement_end_offset
            END - qs.statement_start_offset)/2) AS query_text,
       qt.dbid,
       dbname = db_name(qt.dbid),
       qt.objectid,
       qp.query_plan
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS qt
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
ORDER BY [Elapsed Time] DESC ;
GO
System Waits

The dynamic management view sys.dm_os_waiting_tasks lists all the currently waiting tasks and the wait types associated with them. This dynamic management view is useful for getting an overall feel for the current system waits. The following code retrieves the currently waiting tasks:
select session_id , exec_context_id , wait_type , wait_duration_ms , blocking_session_id from sys.dm_os_waiting_tasks where session_id > 50 order by session_id, exec_context_id ; GO
For historical wait statistics in the system, or in other words, for statistical information on waits that have already been completed, use the sys.dm_os_wait_stats dynamic management view as follows:
SELECT * from sys.dm_os_wait_stats ORDER BY wait_time_ms DESC ; GO
It is important to note that these statistics are not persisted across SQL Server restarts, and all data is cumulative since the last time the statistics were reset or the server was started. To reset wait counts you can use the following command:
DBCC SQLPERF ('sys.dm_os_wait_stats', CLEAR) ; GO
Blocking-Related Dynamic Management Views

The dynamic management views sys.dm_tran_locks, sys.dm_os_waiting_tasks, sys.dm_exec_requests, sys.dm_exec_sql_text, and sys.sysprocesses can be used to retrieve the blocker and blocked SQL text and the requested lock modes as follows:
select t1.resource_type ,db_name(resource_database_id) as [database] ,t1.resource_associated_entity_id as [blk object] ,t1.request_mode ,t1.request_session_id -- spid of waiter ,(select text from sys.dm_exec_requests as r --- get sql for waiter cross apply sys.dm_exec_sql_text(r.sql_handle) where r.session_id = t1.request_session_id) as waiter_text ,t2.blocking_session_id -- spid of blocker ,(select text from sys.sysprocesses as p --get sql for blocker cross apply sys.dm_exec_sql_text(p.sql_handle) where p.spid = t2.blocking_session_id) as blocker_text from sys.dm_tran_locks as t1, sys.dm_os_waiting_tasks as t2 where t1.lock_owner_address = t2.resource_address ; GO
I/O-Related Dynamic Management Views

Average I/O Stalls

The dynamic management view sys.dm_io_virtual_file_stats can be used to identify I/O stalls as follows:
select database_id, file_id
       ,io_stall_read_ms
       ,num_of_reads
       ,cast(io_stall_read_ms/(1.0+num_of_reads) as numeric(10,1)) as 'avg_read_stall_ms'
       ,io_stall_write_ms
       ,num_of_writes
       ,cast(io_stall_write_ms/(1.0+num_of_writes) as numeric(10,1)) as 'avg_write_stall_ms'
       ,io_stall_read_ms + io_stall_write_ms as io_stalls
       ,num_of_reads + num_of_writes as total_io
       ,cast((io_stall_read_ms+io_stall_write_ms)/(1.0+num_of_reads+num_of_writes) as numeric(10,1)) as 'avg_io_stall_ms'
from sys.dm_io_virtual_file_stats(null,null)
order by avg_io_stall_ms desc ;
GO
SHOWPLAN_TEXT or SHOWPLAN_XML shows all of the steps involved in processing the query, including the order of table access, mode of access, types of joins used, and so on, as in the following example.
SET SHOWPLAN_TEXT ON ; -- (or you can use SET SHOWPLAN_XML ON) GO SELECT EMPLID FROM PS_DEDUCTION_BAL B1 WHERE B1.EMPLID = 'PA100000001' AND B1.COMPANY = 'GBI' AND B1.BALANCE_ID = 'CY' AND B1.BALANCE_YEAR = 2000 AND B1.BALANCE_PERIOD = ( SELECT MAX(DB2.BALANCE_PERIOD) FROM PS_DEDUCTION_BAL DB2
WHERE DB2.EMPLID = B1.EMPLID AND DB2.COMPANY = B1.COMPANY AND DB2.BALANCE_ID = B1.BALANCE_ID AND DB2.BALANCE_YEAR = B1.BALANCE_YEAR AND DB2.DEDCD = B1.DEDCD AND DB2.DED_CLASS = B1.DED_CLASS AND DB2.BENEFIT_RCD_NBR = B1.BENEFIT_RCD_NBR AND DB2.BALANCE_PERIOD = 4 ) AND B1.DED_YTD <> 0 ; GO SET SHOWPLAN_TEXT OFF ; GO
Here, the inner query is resolved first by a Clustered Index Seek on PS_DEDUCTION_BAL. The outer query is resolved next, also using a Clustered Index Seek on PS_DEDUCTION_BAL. The two result sets are merged using a Hash Match join. SHOWPLAN_ALL provides the same information as SHOWPLAN_TEXT, plus estimates of the number of rows expected to meet the search criteria, the estimated size of the result rows, the estimated processor time, the total cost estimate, and so on, as in the following example:
SET SHOWPLAN_ALL ON ;
GO
SELECT EMPLID FROM PS_DEDUCTION_BAL B1
WHERE B1.EMPLID = 'PA100000001'
AND B1.COMPANY = 'GBI'
AND B1.BALANCE_ID = 'CY'
AND B1.BALANCE_YEAR = 2000
AND B1.BALANCE_PERIOD = (
    SELECT MAX(DB2.BALANCE_PERIOD) FROM PS_DEDUCTION_BAL DB2
    WHERE DB2.EMPLID = B1.EMPLID
    AND DB2.COMPANY = B1.COMPANY
    AND DB2.BALANCE_ID = B1.BALANCE_ID
    AND DB2.BALANCE_YEAR = B1.BALANCE_YEAR
    AND DB2.DEDCD = B1.DEDCD
    AND DB2.DED_CLASS = B1.DED_CLASS
    AND DB2.BENEFIT_RCD_NBR = B1.BENEFIT_RCD_NBR
    AND DB2.BALANCE_PERIOD = 4 )
AND B1.DED_YTD <> 0 ;
GO
SET SHOWPLAN_ALL OFF ;
GO
A graphical Showplan can be obtained in SQL Server Management Studio by selecting Display Estimated Execution Plan from the Query menu or by pressing Ctrl+L; in this case, the query is not executed. Alternatively, the query can be executed and the actual execution plan obtained by selecting Include Actual Execution Plan from the Query menu (Ctrl+M). When the same statement is executed by the PeopleSoft application through the cursor stored procedures, it appears in parameterized form, as in the following fragment:
        SELECT MAX(DB2.BALANCE_PERIOD)
        FROM PS_DEDUCTION_BAL DB2
        WHERE DB2.EMPLID = B1.EMPLID
          AND DB2.COMPANY = B1.COMPANY
          AND DB2.BALANCE_ID = B1.BALANCE_ID
          AND DB2.BALANCE_YEAR = B1.BALANCE_YEAR
          AND DB2.DEDCD = B1.DEDCD
          AND DB2.DED_CLASS = B1.DED_CLASS
          AND DB2.BENEFIT_RCD_NBR = B1.BENEFIT_RCD_NBR
          AND DB2.BALANCE_PERIOD = @P5 )
  AND B1.DED_YTD <> 0 ;
GO
SET SHOWPLAN_TEXT OFF ;
GO
Note: You can use sp_help nameOfYourTable to determine the data types for the required columns.
Note: In this example, @P2 defines the cursor type, @P3 defines the cursor concurrency, and @P7, @P8, and @P9 are the user-defined parameters used in the query.
In the PeopleSoft environment, because most SQL statements are executed as remote procedure calls (RPCs), neither sp_who nor DBCC INPUTBUFFER helps find the actual command. Also, because the application server masks the actual user ID, it is difficult to find the SPID corresponding to a user. The context_info and sql_handle columns of the sys.dm_exec_requests dynamic management view, combined with the sys.dm_exec_sql_text function, can be used to get the actual SQL, as in the following example:
select session_id, cast(context_info as varchar(max)), qt.text
from sys.dm_exec_requests
cross apply sys.dm_exec_sql_text(sql_handle) qt ;
GO
Zero-cost plans are not cached. If you want to retrieve the SQL statement for those plans, use trace flag 2861, which instructs SQL Server to cache zero-cost plans that it would otherwise discard. However, this trace flag should be used only on development or test systems, because caching all query plans can add significant overhead and memory pressure; disable it as soon as your investigation is complete. To temporarily enable zero-cost plan caching, use the DBCC TRACEON statement as follows:
DBCC TRACEON (2861) ;
GO
To disable zero-cost plan caching, use the DBCC TRACEOFF statement as follows:
DBCC TRACEOFF (2861) ;
GO
You can use DBCC TRACESTATUS to determine the status of a particular trace flag, or of all trace flags enabled in the SQL Server instance, using one of the following commands:
DBCC TRACESTATUS (2861) -- Used to view the status of trace flag 2861.
or
DBCC TRACESTATUS (-1) -- Used to view the status of all trace flags.
Note: If you turn on trace flag 2861 instance-wide with DBCC TRACEON (2861, -1), system performance can be affected severely. You can use this on test servers, but it is recommended that you never use it on a production server. See Appendix B, SP_PSWHO, for a sample stored procedure that reveals much more information and can be used as an alternative to sp_who in PeopleSoft environments.
This produces the following output, which indicates that process 87 is waiting on object ID 1359395962.
spid   dbid   ObjId        IndId  Type  Resource   Mode    Status
------ ------ -----------  -----  ----  ---------  ------  ------
87     7      0            0      DB               S       GRANT
87     7      0            0      PAG   3:6086     IS      GRANT
87     7      1359395962   0      RID   3:6086:0   S       GRANT
87     7      0            0      PAG   1:569718   IS      GRANT
87     7      1359395962   0      RID   3:6086:1   S       WAIT
87     7      9            0      TAB              Sch-S   GRANT
3. To decode the object name from the object id, issue the following command:
select name, type from sys.objects where object_id = 1359395962 ;
GO
This produces the following output, indicating that the process is waiting for PS_TSE_JHDR_FLD. The type U indicates a user table.
name              type
----------------- ----
PS_TSE_JHDR_FLD   U
For more information about DBCC commands and their usage, see DBCC in SQL Server 2008 Books Online.
As an alternative to query hints, you can also fix the query execution plan directly using the Plan Freezing feature explained in Chapter 3.
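For reference, the following is a minimal sketch of freezing a plan in SQL Server 2008. The LIKE filter on PS_MESSAGE_LOG is only an illustrative way to locate a cached plan, and Frozen_PG is a hypothetical plan guide name; sp_create_plan_guide_from_handle creates a plan guide from a plan that is currently in the plan cache:

-- Locate a cached plan (the LIKE filter is illustrative only)
DECLARE @plan_handle varbinary(64), @offset int ;
SELECT TOP 1 @plan_handle = qs.plan_handle,
       @offset = qs.statement_start_offset
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
WHERE st.text LIKE N'%PS_MESSAGE_LOG%' ;

-- Freeze the cached plan by creating a plan guide from the plan handle
EXEC sp_create_plan_guide_from_handle
     @name = N'Frozen_PG',              -- hypothetical plan guide name
     @plan_handle = @plan_handle,
     @statement_start_offset = @offset ;
GO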
WHERE B.ABC_DRIVER_ID = C.ABC_DRIVER_ID
  AND C.ABC_OBJ_ID = A.ABC_OBJ_ID
  AND B.ABC_DRIVER_SOURCE = 'A')
  AND EXISTS (SELECT 'X'
              FROM PS_DRIVER_T2 B1, PS_DRIVER_TAR2_S2 C1
              WHERE B1.ABC_DRIVER_ID = C1.ABC_DRIVER_ID
                AND C1.ABC_OBJ_ID = A.ABC_OBJ_ID
                AND B1.ABC_DRIVER_TARGET = 'A')
OPTION (MERGE JOIN) ;
GO
If you need to use query hints and cannot directly modify the query, you can use the plan guides feature explained in section 5.2.8.4.
WHERE BUSINESS_UNIT = @BU
  AND PO_ID LIKE 'MPO%'
  AND RECEIPT_DT BETWEEN '2006-01-01' AND '2006-08-25'
ORDER BY PO_ID, RECEIPT_DT, BUSINESS_UNIT, RECEIVER_ID DESC
OPTION (OPTIMIZE FOR (@BU = 'PO001')) ;
GO
The OPTIMIZE FOR hint can also be used with plan guides explained in the next section.
You can use plan guides to specify any query hint individually, or in valid combinations. Plan guides are administered using two stored procedures: sp_create_plan_guide creates plan guides, and sp_control_plan_guide drops, disables, or enables them. Even though you can view plan guides in the sys.plan_guides catalog view if you have the correct access privileges, they should never be modified directly; always use the stored procedures provided to administer them. See Designing and Implementing Plan Guides in SQL Server 2008 Books Online for more information.

The fabricated example below depicts a plan guide created for an SQL statement originating in the PeopleSoft Enterprise Human Capital Management (HCM) application; it is used to inject an OPTIMIZE FOR query hint without modifying the query in the application.

Note: This example is provided simply to explain the plan guides feature. The query hint specified is not actually required by the query and does not help resolve any issue.

The original API cursor-based query is shown in the following Microsoft SQL Server Profiler output:
declare @P1 int
set @P1 = 15
declare @P2 int
set @P2 = 180150008
declare @P3 int
set @P3 = 8
declare @P4 int
set @P4 = 1
declare @P5 int
set @P5 = 1
exec sp_cursorprepexec @P1 output, @P2 output, N'@P1 decimal(4,1)',
     N'SELECT MAX(MESSAGE_SEQ) FROM PS_MESSAGE_LOG WHERE PROCESS_INSTANCE = @P1 ',
     @P3 output, @P4 output, @P5 output, 322.0 ;
GO
You can create a plan guide to inject the OPTIMIZE FOR query hint using the DDL that follows to optimize the query for a value of @P1 = 14.0.
sp_create_plan_guide
  @name = N'MSGS_SEQ_PlanGuide',
  @stmt = N'SELECT MAX(MESSAGE_SEQ) FROM PS_MESSAGE_LOG WHERE PROCESS_INSTANCE = @P1',
  @type = N'SQL',
  @module_or_batch = NULL,
  @params = N'@P1 decimal(4,1)',
  @hints = N'OPTION (OPTIMIZE FOR (@P1 = 14.0))' ;
GO
Plan guides can also be used to match a set of queries whose only difference is the value of the literal being passed in. This is done using a plan guide template. Once you create a plan guide template, it matches all invocations of the specific query irrespective of the literal values. For example, to specify the PARAMETERIZATION FORCED query hint for all invocations of the following sample query:
SELECT MAX(MESSAGE_SEQ) FROM PS_MESSAGE_LOG WHERE PROCESS_INSTANCE = 28 ;
GO
You can create the following plan guide template using the sp_get_query_template stored procedure.
DECLARE @stmt nvarchar(max) ;
DECLARE @params nvarchar(max) ;
EXEC sp_get_query_template
     N'SELECT MAX(MESSAGE_SEQ) FROM PS_MESSAGE_LOG WHERE PROCESS_INSTANCE = 28',
     @stmt OUTPUT,
     @params OUTPUT ;
EXEC sp_create_plan_guide
     N'TemplateBased_PG',
     @stmt,
     N'TEMPLATE',
     NULL,
     @params,
     N'OPTION (PARAMETERIZATION FORCED)' ;
GO
Note: For more information about this stored procedure, see sp_get_query_template in SQL Server 2008 Books Online.

Plan guides are scoped to a particular database and can be viewed by querying the sys.plan_guides catalog view. For example, the following statement lists all the plan guides in the HR90 database, as shown in the figure:
USE HR90 ;
GO
SELECT * FROM sys.plan_guides ;
GO
The creation of a plan guide does not guarantee its use for a particular query. You should always verify that the plan guides you create are actually applied to the intended query, and that the actions specified in the query hint have the desired effect.
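One way to verify this, shown as a sketch assuming the MSGS_SEQ_PlanGuide example above, is to run the parameterized statement with the actual XML plan returned and inspect the StmtSimple element, which carries PlanGuideDB and PlanGuideName attributes when a guide was matched. SQL Server Profiler also provides the Plan Guide Successful and Plan Guide Unsuccessful event classes for the same purpose.

SET STATISTICS XML ON ;
GO
-- Submit the statement in the same parameterized form the guide was created for
EXEC sp_executesql
     N'SELECT MAX(MESSAGE_SEQ) FROM PS_MESSAGE_LOG WHERE PROCESS_INSTANCE = @P1',
     N'@P1 decimal(4,1)',
     @P1 = 14.0 ;
GO
SET STATISTICS XML OFF ;
GO
-- In the returned showplan XML, a matched guide appears as
-- PlanGuideName="MSGS_SEQ_PlanGuide" on the StmtSimple element.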
The following is the same query with the USE PLAN query hint specified:
SELECT MAX(MESSAGE_SEQ) FROM PS_MESSAGE_LOG WHERE PROCESS_INSTANCE = @P1
OPTION (USE PLAN N'
<ShowPlanXML xmlns="http://schemas.microsoft.com/sqlserver/2004/07/showplan"
    Version="1.0" Build="9.00.2047.00">
  <BatchSequence>
    <Batch>
      <Statements>
        <StmtSimple StatementText="DECLARE @P1 decimal(4,1)
SELECT MAX(MESSAGE_SEQ) FROM PS_MESSAGE_LOG WHERE PROCESS_INSTANCE = @P1"
            StatementId="1" StatementCompId="1" StatementType="SELECT"
            StatementSubTreeCost="0.00328942" StatementEstRows="1"
            StatementOptmLevel="FULL"
            StatementOptmEarlyAbortReason="GoodEnoughPlanFound">
          <StatementSetOptions QUOTED_IDENTIFIER="false" ARITHABORT="true"
              CONCAT_NULL_YIELDS_NULL="false" ANSI_NULLS="false"
              ANSI_PADDING="false" ANSI_WARNINGS="false"
              NUMERIC_ROUNDABORT="false" />
          <QueryPlan CachedPlanSize="9">
            <RelOp NodeId="0" PhysicalOp="Stream Aggregate" LogicalOp="Aggregate"
                EstimateRows="1" EstimateIO="0" EstimateCPU="1.1e-006"
                AvgRowSize="11" EstimatedTotalSubtreeCost="0.00328942"
                Parallel="0" EstimateRebinds="0" EstimateRewinds="0">
              <OutputList>
                <ColumnReference Column="Expr1004" />
              </OutputList>
              <StreamAggregate>
                <DefinedValues>
                  <DefinedValue>
                    <ColumnReference Column="Expr1004" />
                    <ScalarOperator ScalarString="MAX([HR90].[dbo].[PS_MESSAGE_LOG].[MESSAGE_SEQ])">
                      <Aggregate Distinct="0" AggType="MAX">
                        <ScalarOperator>
                          <Identifier>
                            <ColumnReference Database="[HR90]" Schema="[dbo]"
                                Table="[PS_MESSAGE_LOG]" Column="MESSAGE_SEQ" />
                          </Identifier>
                        </ScalarOperator>
                      </Aggregate>
                    </ScalarOperator>
                  </DefinedValue>
                </DefinedValues>
                <RelOp NodeId="1" PhysicalOp="Table Scan" LogicalOp="Table Scan"
                    EstimateRows="1" EstimateIO="0.003125" EstimateCPU="0.0001614"
                    AvgRowSize="20" EstimatedTotalSubtreeCost="0.0032864"
                    Parallel="0" EstimateRebinds="0" EstimateRewinds="0">
                  <OutputList>
                    <ColumnReference Database="[HR90]" Schema="[dbo]"
                        Table="[PS_MESSAGE_LOG]" Column="MESSAGE_SEQ" />
                  </OutputList>
                  <TableScan Ordered="0" ForcedIndex="0" NoExpandHint="0">
                    <DefinedValues>
                      <DefinedValue>
                        <ColumnReference Database="[HR90]" Schema="[dbo]"
                            Table="[PS_MESSAGE_LOG]" Column="MESSAGE_SEQ" />
                      </DefinedValue>
                    </DefinedValues>
                    <Object Database="[HR90]" Schema="[dbo]" Table="[PS_MESSAGE_LOG]" />
                    <Predicate>
                      <ScalarOperator ScalarString="[HR90].[dbo].[PS_MESSAGE_LOG].[PROCESS_INSTANCE]=[@P1]">
                        <Compare CompareOp="EQ">
                          <ScalarOperator>
                            <Identifier>
                              <ColumnReference Database="[HR90]" Schema="[dbo]"
                                  Table="[PS_MESSAGE_LOG]" Column="PROCESS_INSTANCE" />
                            </Identifier>
                          </ScalarOperator>
                          <ScalarOperator>
                            <Identifier>
                              <ColumnReference Column="@P1" />
                            </Identifier>
                          </ScalarOperator>
                        </Compare>
                      </ScalarOperator>
                    </Predicate>
                  </TableScan>
                </RelOp>
              </StreamAggregate>
            </RelOp>
          </QueryPlan>
        </StmtSimple>
      </Statements>
    </Batch>
  </BatchSequence>
</ShowPlanXML>') ;
GO
In this example, the USE PLAN query hint and the XML Showplan are specified via the OPTION clause following the original SELECT query. While this is a somewhat trivial example shown in order to introduce the feature, the true power of this feature lies in being able to force the query plan for more complex queries that involve multiple table joins with multiple predicates and aggregate clauses.

While the USE PLAN query hint provides a powerful option to influence the execution of a query, it should be used selectively, and only by experienced users as a last resort in query tuning. Once specified, it locks down the query plan and prevents the optimizer from adapting to changing data shapes, new indexes, and improved query execution algorithms in successive SQL Server releases, service packs, and quick-fix engineering (QFE) changes.

The USE PLAN query hint should always be specified via a plan guide and never be directly coded into the PeopleSoft application code. The corresponding plan guide that specifies the USE PLAN query hint for the previous SQL statement is as follows:
sp_create_plan_guide
  @name = N'UsePlan_PG',
  @stmt = N'SELECT MAX(MESSAGE_SEQ) FROM PS_MESSAGE_LOG WHERE PROCESS_INSTANCE = @P1',
  @type = N'SQL',
  @module_or_batch = NULL,
  @params = N'@P1 decimal(4,1)',
  @hints = N'OPTION (USE PLAN N''
<ShowPlanXML xmlns="http://schemas.microsoft.com/sqlserver/2004/07/showplan"
    Version="1.0" Build="9.00.2047.00">
  <BatchSequence>
    <Batch>
      <Statements>
        ...
      </Statements>
    </Batch>
  </BatchSequence>
</ShowPlanXML>'')' ;
GO
For readability, a large section of the XML query plan has been replaced by the ellipsis. For an in-depth explanation of USE PLAN, including the procedure to capture the XML Showplan and its usage restrictions, see Plan Forcing Scenarios and Examples in SQL Server 2008 Books Online.
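As a sketch of one way to capture the XML Showplan for a cached statement (the LIKE filter below is illustrative only), you can query sys.dm_exec_query_plan; remember to double every embedded single quote in the XML before placing it inside the USE PLAN hint or plan guide:

SELECT qp.query_plan
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
WHERE st.text LIKE N'%PS_MESSAGE_LOG%' ;
GO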
A CXPACKET wait type associated with a high wait_duration_ms value can indicate processor overhead and performance degradation due to excessive parallelism. For currently running tasks, use the following query to identify CXPACKET waits:
select session_id
     , exec_context_id
     , wait_type
     , wait_duration_ms
     , blocking_session_id
from sys.dm_os_waiting_tasks
where session_id > 50
order by session_id, exec_context_id ;
GO
To resolve this issue, set the max degree of parallelism (MAXDOP) server configuration option to a lower value. For PeopleSoft applications it is recommended to set this value to 1; see section 2.7.4 for additional details.

Inefficient Query Plan

An inefficient query plan leading to a full table scan or excessive reads can cause high processor utilization. To solve this problem, first identify the query that causes the excessive processor consumption. You can use the following dynamic management view query to identify the statement that is consuming the most processor resources:
select cpu_time, st.text
from sys.dm_exec_requests er
cross apply sys.dm_exec_sql_text(er.sql_handle) st
order by cpu_time desc ;
GO
Run the previous query a few times; if the cpu_time value for a statement keeps increasing, that statement is most likely the culprit. For cached queries, you can use the following code to find the highest CPU consumers:
SELECT TOP 50
    qs.total_worker_time/qs.execution_count as [Avg CPU Time],
    SUBSTRING(qt.text, qs.statement_start_offset/2,
        (case when qs.statement_end_offset = -1
              then len(convert(nvarchar(max), qt.text))*2
              else qs.statement_end_offset
         end - qs.statement_start_offset)/2) as query_text,
    qt.dbid,
    dbname = db_name(qt.dbid),
    qt.objectid
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) as qt
ORDER BY [Avg CPU Time] DESC ;
GO
Identify and analyze the query, and add appropriate indexes as required.

Excessive Compilations

Excessive SQL compilations can cause high CPU usage in PeopleSoft applications. The key counters to watch in System Monitor are as follows:

SQL Server: SQL Statistics: Batch Requests/sec
SQL Server: SQL Statistics: SQL Compilations/sec
SQL Server: SQL Statistics: SQL Recompilations/sec

If SQL Compilations/sec or SQL Recompilations/sec is excessively high, processor usage can be elevated, and you may want to consider parameterizing the queries as explained in sections 2.6.3 and 2.10.2.2, for example by enabling forced parameterization as sketched below.
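The following is a minimal sketch of the database-level option discussed in section 2.6.3, shown here against the HR90 sample database used elsewhere in this paper; evaluate it in a test environment before applying it in production:

-- Compile literal-only variants of a query into a single parameterized plan
ALTER DATABASE HR90 SET PARAMETERIZATION FORCED ;
GO
-- To revert to the default behavior:
-- ALTER DATABASE HR90 SET PARAMETERIZATION SIMPLE ;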
2. You can use the sys.dm_io_virtual_file_stats dynamic management function to identify I/O stalls, as follows:

select database_id, file_id
     , io_stall_read_ms
     , num_of_reads
     , cast(io_stall_read_ms/(1.0+num_of_reads) as numeric(10,1)) as 'avg_read_stall_ms'
     , io_stall_write_ms
     , num_of_writes
     , cast(io_stall_write_ms/(1.0+num_of_writes) as numeric(10,1)) as 'avg_write_stall_ms'
     , io_stall_read_ms + io_stall_write_ms as io_stalls
     , num_of_reads + num_of_writes as total_io
     , cast((io_stall_read_ms+io_stall_write_ms)/(1.0+num_of_reads+num_of_writes) as numeric(10,1)) as 'avg_io_stall_ms'
from sys.dm_io_virtual_file_stats(null, null)
order by avg_io_stall_ms desc ;
GO
Refer to section 2.1.2, Typical I/O Performance Recommended Range, for the recommended I/O ranges. If the I/O values are not within those ranges, you most likely have an I/O bottleneck; engage the storage team to troubleshoot it.

3. You can also use System Monitor counters to detect I/O bottlenecks, as explained in sections 2.1.2 and 5.2.1.
o No optimal index available to perform an index seek operation
o Missing or stale statistics
o Suboptimal hardware configuration, such as disks or processors

It is best to eliminate these possible causes before investigating other possibilities in depth. Refer to sections 4.6.5, Deadlocks, and 5.2.6, Decoding the Object Blocking a Process, for more information about resolving blocking and deadlocking.
to occur have been eliminated should result in a more consistent application experience when using ODBC API server cursors in SQL Server 2008. Since this is an optimizer improvement in SQL Server 2008, no manual steps are required to leverage it.
To find information about a specific cursor, replace the 0 with a session_id as the input parameter for the sys.dm_exec_cursors dynamic management function. With sys.dm_exec_cursors you have significantly improved capabilities for diagnosing cursor-based applications compared to previous versions of SQL Server. For example, you can determine whether the cursors are truly the cursor type that the application requested, or see whether a keyset or static cursor is currently being asynchronously populated. For additional information, refer to the SQL Server 2008 Books Online topic sys.dm_exec_cursors.
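For example, the following query (a sketch; the column list can be adjusted as needed) lists the open cursors across all sessions, together with the statement text that created each cursor. Passing 0 as the argument returns cursors for all sessions:

SELECT c.session_id, c.cursor_id, c.properties, c.creation_time, c.is_open, st.text
FROM sys.dm_exec_cursors(0) AS c
CROSS APPLY sys.dm_exec_sql_text(c.sql_handle) AS st ;
GO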
In SQL Server 2008, data files can be initialized instantaneously, which allows fast execution of the previously mentioned file operations. Instant file initialization reclaims used disk space without filling that space with zeros. Instead, disk content is overwritten as new data is written to the files. Log files cannot be initialized instantaneously.

Instant file initialization is enabled when the SQL Server (MSSQLSERVER) service logon account has been granted SE_MANAGE_VOLUME_NAME. This privilege is granted by default and no specific action is required to use this feature.

Security Considerations

Because the data file is not zeroed out on initialization and any previously deleted disk content is overwritten only as new data is written to the files, the deleted content might potentially be accessed by an unauthorized user. While the database file is attached to the instance of SQL Server, this information disclosure threat is reduced by the discretionary access control list (DACL) on the file, which allows file access only to the SQL Server service account and the local administrator. However, when the file is detached, it may be accessed by a user or service that does not have the SE_MANAGE_VOLUME_NAME privilege. A similar threat exists when the database is backed up: the deleted content can become available to an unauthorized user or service if the backup file is not protected with an appropriate DACL.

If the potential for disclosing deleted content is a concern, you should do one or both of the following:

o Disable instant file initialization for the instance of SQL Server by revoking SE_MANAGE_VOLUME_NAME from the SQL Server service logon account.
o Always make sure that any detached data files and backup files have restrictive DACLs.

Note: Disabling instant file initialization only affects files that are created or increased in size after the user right is revoked.

For PeopleSoft applications, instant file initialization is recommended from a performance perspective. However, evaluate the performance gain against the possible security risk. If your security policy does not allow for this risk, do not use instant file initialization; you can disable it by revoking SE_MANAGE_VOLUME_NAME from the SQL Server service account.
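If you need to confirm whether instant file initialization is actually in effect, one commonly used diagnostic (shown as a sketch; trace flags 3004 and 3605 and sp_readerrorlog are undocumented, so use this only on a test system) is to trace file-zeroing operations into the error log while creating a throwaway database:

DBCC TRACEON (3004, 3605, -1) ;  -- 3004 reports zeroing operations, 3605 routes them to the error log
GO
CREATE DATABASE IFI_Test ;       -- hypothetical throwaway database
GO
EXEC sp_readerrorlog ;           -- "Zeroing ..." messages for the data file mean IFI is NOT in effect
GO
DROP DATABASE IFI_Test ;
DBCC TRACEOFF (3004, 3605, -1) ;
GO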
SQL Server has encountered %d occurrence(s) of I/O requests taking longer than %d seconds to complete on file [%ls] in database [%ls] (%d). The OS file handle is 0x%p. The offset of the latest long I/O is: %#016I64x.

A long I/O may be either a read or a write. While long I/O messages are just warnings, not errors, they are often symptomatic of functional issues in the disk subsystem or of loads far exceeding the reasonable service capabilities of the disk subsystem.

For PeopleSoft applications, I/O error message 833 can be very useful from a reactive I/O maintenance and monitoring perspective. It is recommended that you monitor the SQL Server error log for these messages. However, for I/O performance tuning, it is recommended that you use the I/O-related dynamic management views and System Monitor counters. For more information, see section 5.2.3, Using Dynamic Management Views, and section 5.2.1, Using System Monitor, in this document. For the recommended I/O performance ranges, see section 2.1.2, Typical I/O Performance Recommended Range, in this paper.
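As a sketch of one way to scan the current error log for these messages (xp_readerrorlog is undocumented and its parameters can vary between builds; the search string below matches the start of the 833 message text):

EXEC master.dbo.xp_readerrorlog 0, 1, N'I/O requests taking longer than' ;
GO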
When debugging database issues with PeopleTools, use this tool to rule out overhead or problems introduced by other PeopleTools modules. Users can give their PeopleTools SQL trace files to a developer to reproduce and analyze the problem without having to run long and complicated application scenarios or reproduce the full environment setup.

SQLAPI Syntax

SQLAPI uses the following command line parameters:

sqlapi <SQL platform> [ <input file> [ <output file> ] ]
sqlapi help

ODBC should be used for the SQL platform when connecting to SQL Server. If the output file is omitted, standard output is used. If the input file is omitted, standard input is used. One may also use a hyphen to specify standard I/O. For example, "sqlapi odbc - test.out" would read commands from standard input and write the results to a file called test.out. The "help" argument shows the syntax for each PeopleTools database API routine.

SQLAPI also includes eight non-API routines to aid in debugging. A script can PAUSE for x seconds, which can be helpful with deadlocking and other resource issues. A script
can also contain REMarks, which can help clarify what the script is doing and can also be used to temporarily disable one or more calls to API routines. A script can IGNORE the error status of the following command and continue execution. A script can REPEAT fetches until end of data or until an error occurs. A script can INCLUDE another script that contains additional SQLAPI commands. TIMEON and TIMEOFF toggle whether timing information is displayed. A WHILE/ENDW loop may be placed around a group of statements to have them repeated until the first statement fails; the first call after the WHILE must be a SQLFET call. The UNICODE flag tells SQLAPI to display values read by SQLRLO in Unicode mode.

SQLAPI uses the concept of "cursors," where each connection has a separate handle number to track which SQL statements are associated with which connection. SQLAPI represents each cursor with a dollar sign followed by the cursor handle number.

String data in SQLAPI scripts can span multiple lines and can also include binary data. A binary byte in a data string is preceded by '\x'. If you need to include the backslash character in a data string, use two back-to-back backslashes (for example, "here is how one includes a backslash \\"). One can use the '\x22' binary value to include a double quote '"' in string data. Unicode 2-byte characters can be represented with the '\U0000' syntax, which is helpful when using non-ASCII characters in scripts that run on multiple platforms.
TRACE to API
TRC2API reads a PeopleTools SQL trace and writes a SQLAPI-compatible script.

TRC2API Syntax:

trc2api [ -u ] [ input_file [ output_file ] ]

If the input_file is not specified, standard input is used; the same applies to the output. The -u option specifies that the input file is in Unicode format. A few trace statements are not yet supported and will be reported to standard error as TRC2API encounters them.

Three known problems exist with the trace file output that can alter the SQLAPI behavior from the original scenario:
1. First, if a process binds a variable once to a SQL statement and then executes that statement multiple times while changing the variable internally, the trace file does not show the changed variable data.
2. Second, the PeopleTools SQL trace does not show the password used for the database connection.
3. Third, SQL Server now requires a sqlset call before its first connection is made, but this is not traced.

Using database-specific tracing can help resolve the first issue, and manually adding the password resolves the second. The third can be solved by manually adding a "sqlset $0 1 0" line before the first sqlcnc call.

The executables are part of the regular builds as of PeopleTools 8.4 and can be found under %PSHOME%\bin\client\winx86.
Examples
The following is an example of an input script for SQLAPI:

rem Every script must start with a sqlini and end with a sqldon
sqlini

rem This is our first connection,
rem so it will use the cursor handle # '1'
rem In 8.4 and beyond, if the password is omitted, the user will be
rem prompted for it via stdin/stdout.
sqlcnc "DBNAME/OPRID/PASSWORD"

rem Here we show an SQL statement split over multiple lines, how to
rem bind select and WHERE clause variables, and how to fetch until
rem end-of-data.
sqlcex $1 "SELECT EMPLID, COMPANY, EMPL_RCD#, PLAN_TYPE
from PS_LEAVE_ACCRUAL WHERE EMPLID = :1"
sqlbnn $1 1 "FG1202" 6 0 1
sqlssb $1 1 2 20 0
sqlssb $1 2 2 20 0
sqlssb $1 3 18 2 0
sqlssb $1 4 2 20 0
repeat sqlfet $1

rem Since the EMPL_RCD# column is a 2 byte integer (data type 18),
rem we use the binary representation of 7 in the data string.
sqlcom $1 "UPDATE PS_LEAVE_ACCRUAL SET EMPL_RCD# = :1 WHERE EMPLID = :2"
sqlbnn $1 1 "\x07\x00" 2 0 18
sqlbnn $1 2 "FG1202" 6 0 1

rem Doing a Fetch followed by reading the BLOB column. This also
rem shows how a while loop is used to fetch multiple rows with LOB
rem columns.
sqlcex $1 "SELECT STMT_TEXT FROM PS_SQLSTMT_TBL WHERE PGM_NAME='PSPACCPR'
AND STMT_TYPE='S' AND STMT_NAME='ACCRUAL'"
while
sqlfet $1
sqlgls $1 1
sqlrlo $1 1 761
sqlelo $1
endw

rem The following statement might fail because the table doesn't
rem exist, but we wish to continue after the failure, so we use
rem IGNORE
sqlset $1 3018 0
ignore
sqlcom $1 "SELECT 'PS_DOES_TABLE_EXIST' FROM SOME_TABLE"
sqlset $1 3018 2

rem Commit, disconnect and end the session.
sqlcmt $1
sqldis $1
sqldon
rem End of script!
T:\bin\client\winx86> psae -CT ODBC -CD DB_NAME -CO OPR_ID -CP OPR_PSWD -R 0 -AI AEMINITEST -I 0
PeopleTools 8.49 - Application Engine
Copyright (c) 1988-2009 PeopleSoft, Inc.
All Rights Reserved
PeopleTools SQL Trace value: 63 (0x3f): pssql_trace.txt
Application Engine program AEMINITEST ended normally

One will end up with a trace file similar to this:

T:\bin\client\winx86> head -15 pssql_trace.txt
PeopleTools 8.49 Client Trace - 2009-03-03

PID-Line  Time      Elapsed   Trace Data...
--------  --------  --------  -------------------->
1-1   17.38.19            Tuxedo session opened {oprid='QEDMO', appname='TwoTier', addr='//TwoTier:7000', open at 024A6950, pid=772}
1-2   17.38.19  0.035000  Cur#0.772.QE849TS RC=0 Dur=0.035000 --- router PSORA load succeeded
1-3   17.38.20  0.149000  Cur#0.772.QE849TS RC=0 Dur=0.149000 INI
1-4   17.38.20  0.148000  Cur#1.772.QE849TS RC=0 Dur=0.147000 Connect=Primary/DB_NAME/CONN_ID/
1-5   17.38.20  0.000000  Cur#1.772.QE849TS RC=0 Dur=0.000000 GET type=1003 dbtype=4
1-6   17.38.20  0.000000  Cur#1.772.QE849TS RC=0 Dur=0.000000 GET type=1004 release=10
1-7   17.38.20  0.001000  Cur#1.772.QE849TS RC=0 Dur=0.000000 COM Stmt=SELECT OWNERID FROM PS.PSDBOWNER WHERE DBNAME=:1
1-8   17.38.20  0.000000  Cur#1.772.QE849TS RC=0 Dur=0.000000 SSB column=1 type=2 length=9 scale=0
1-9   17.38.20  0.000000  Cur#1.772.QE849TS RC=0 Dur=0.000000 Bind-1 type=2 length=7 value=DB_NAME
1-10  17.38.20  0.001000  Cur#1.772.QE849TS RC=0 Dur=0.001000 EXE
1-11  17.38.20  0.000000  Cur#1.772.QE849TS RC=0 Dur=0.000000 Fetch

Convert the output trace using trc2api:

T:\bin\client\winx86> trc2api < pssql_trace.txt > sqlapi_input.txt
Say what? PeopleTools 8.49 Client Trace - 2009-03-03
...

Ignore the lines that begin with 'Say what?'. These are extra trace lines that trc2api doesn't understand and are not important for SQLAPI. The resulting file should look like the following:

t:\bin\client\winx86> head sqlapi_input.txt
sqlini
unicode 0
sqlcnc "Primary/DB_NAME/CONN_ID/"
sqlget $1 1003
sqlget $1 1004
sqlcom $1 "SELECT OWNERID FROM PS.PSDBOWNER WHERE DBNAME=:1"
sqlssb $1 1 2 9 0
sqlbnn $1 1 "DB_NAME" 7 0 2
sqlexe $1
sqlfet $1

Open the sqlapi_input.txt file in an editor:

T:\bin\client\winx86> notepad sqlapi_input.txt

First, search for any lines that consist of 'rem ignore'. Look at the line after the 'rem ignore' line. If that line is a command whose failure should be ignored, remove the 'rem' in front of the 'ignore'. "SELECT 'PS_DOES_TABLE_EXIST' FROM table_XXX" is a good example of this; ignoring duplicates in an INSERT is another.

If you're using a SQL Server database, add the following line after the sqlini command at the top of the file:

sqlset $0 1 0

Trace files do not include database passwords, so one has a choice here. One may leave the 'sqlcnc' commands unmodified and provide the passwords at runtime. One can also modify the 'sqlcnc' commands, adding the passwords after the last slash:

Before: sqlcnc Primary/DB_NAME/ACCESSID/
After:  sqlcnc Primary/DB_NAME/ACCESSID/PASSWORD

Run the SQLAPI script. The following command shows how to run SQLAPI on a SQL Server database using sqlapi_input.txt as the input and sqlapi_output.txt as the output:

t:\bin\client\winx86> sqlapi ODBC sqlapi_input.txt sqlapi_output.txt

Your results should look like this:

t:\bin\client\winx86> head -13 sqlapi_output.txt
REM SQLAPI, Unicode version
SQLINI
SQLSET $0 1 0
UNICODE 0
SQLCNC "Primary/DB_NAME/CONN_ID/PASSWORD"
REM cursor = 1
SQLGET $1 1003
REM dbtype=4
SQLGET $1 1004
REM dbver=10
SQLCOM $1 "SELECT OWNERID FROM PS.PSDBOWNER WHERE DBNAME=:1"
SQLSSB $1 1 2 9 0
SQLBNN $1 1 "DB_NAME" 7 0 2
SQLEXE $1
SQLFET $1
REM Row found
REM Column 1: 6 ACCESSID "ACCESSID"

The preceding is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle.