Tuning Optimization Concepts
Agenda
Overview
Analyzing Individual Objects:
o Analyzing transaction steps
o SQL Performance Analysis
o ABAP Runtime Analysis: Overview
o ABAP Debugger
Database Accesses:
o Overview
o Unsuitable Access Path
o Suitable Access Path
Internal Tables
R/3 System Analysis:
o Overview
o R/3 Workload Analysis
DBMS Architecture
Transaction STAT
Tools>>Administration>>Monitor>>Performance>>Workload>>Statistics Record
Average database times (per record) for reference:
o Sequential read: 10 ms
o Direct read: 5 ms
o Accesses that change data: 20 ms
Key figures: response time, dispatcher wait time, CPU time, DB request time.
These figures help decide whether an SQL Performance Trace or an ABAP Runtime Analysis is required.
ABAP Debugger
The debugger is used to identify errors in source code, but some of its features are useful for runtime analysis as well. It lets you run through an object step by step.

A commonly used function is checking memory usage during statements that access large internal tables. Statements such as LOOP AT ... ENDLOOP require extensive CPU time and increase object runtime while slowing down the system at the same time. Checking the memory use of an internal table is therefore useful. To see how much memory is allocated for and used by an internal table, choose Settings>>Table memory from the ABAP Debugger screen, place the cursor on the specific table, and choose Table.

To display a list of the properties of the internal tables accessed in the current program, choose Goto>>System>>System areas and enter ITAB (not the internal table name) in the Area field. The resulting list displays information such as column width and the current number of rows, which can be used to calculate memory use.

TIP: Use the debugger to investigate object performance only in the development or test system, never in the production system. The debugger may cause errors such as implicit database commits, resulting in database inconsistencies.
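As a sketch of the kind of object you would inspect this way (the report and variable names are illustrative, not from the course material):

```abap
REPORT z_itab_memory_demo.

" Illustrative example: fill a potentially large internal table,
" then inspect its memory use in the debugger.
DATA: gt_vbak  TYPE STANDARD TABLE OF vbak,
      gs_vbak  TYPE vbak,
      gv_lines TYPE i.

SELECT * FROM vbak INTO TABLE gt_vbak UP TO 10000 ROWS.

" Number of rows currently held in memory.
DESCRIBE TABLE gt_vbak LINES gv_lines.

" Set a breakpoint on the LOOP and use Settings>>Table memory
" in the debugger to see allocated vs. used memory for gt_vbak.
LOOP AT gt_vbak INTO gs_vbak.
  " ... expensive row-by-row processing ...
ENDLOOP.
```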
Database Accesses
Index
ROWID MANDT VBELN POSNR MATNR
75892 001 0000163 04006149 055123
95883 002 0000646 03429737 310529

Full table scan:
o The index is not used
o Each table block is read once only
o Max. no. of logical read accesses per execution = no. of table blocks
Nested Loop: This strategy is relevant for database views and ABAP JOINs. First, the WHERE clause is used as a basis for selecting the (outer) table to be used for access. Then, starting from the outer table, the records of the inner tables are selected according to the JOIN condition.

Sort Merge Join: First, the WHERE clause is evaluated for all tables in the join, producing a result set for each table. Each result set is sorted by the JOIN condition, and the sorted sets are then merged, also according to the JOIN condition.
Aggregate Functions
Having Clause
Array Fetch
UP TO n ROWS
SELECT ... ENDSELECT
SELECT ... UP TO n ROWS
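A minimal sketch contrasting the row-by-row SELECT ... ENDSELECT loop with an array fetch capped by UP TO n ROWS (table VBAK and the variable names are chosen for illustration):

```abap
TABLES vbak.
SELECT-OPTIONS g_vbeln FOR vbak-vbeln.

" Row-by-row transport: one record per loop pass.
DATA gs_vbak TYPE vbak.
SELECT * FROM vbak INTO gs_vbak WHERE vbeln IN g_vbeln.
  " ... process gs_vbak ...
ENDSELECT.

" Array fetch: all qualifying rows in one transport, limited to 100 rows.
DATA gt_vbak TYPE STANDARD TABLE OF vbak.
SELECT * FROM vbak
  INTO TABLE gt_vbak
  UP TO 100 ROWS
  WHERE vbeln IN g_vbeln.
```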
Implementation in ABAP
Nested Selects
Database View
SELECT vbeln kunnr adrnr
  FROM vbak_kna1
  INTO TABLE g_itab_vbak_kna1
  WHERE vbeln IN g_vbeln.

SELECT STATEMENT (Estimated costs = )
  NESTED LOOPS
    TABLE ACCESS BY INDEX ROWID VBAK
      INDEX RANGE SCAN VBAK~0
    TABLE ACCESS BY INDEX ROWID KNA1
      INDEX UNIQUE SCAN KNA1~0
SELECT t1~vbeln t1~kunnr t2~adrnr
  INTO TABLE g_itab_vbak_kna1
  FROM vbak AS t1 INNER JOIN kna1 AS t2
    ON t1~kunnr = t2~kunnr
  WHERE t1~vbeln IN g_vbeln.
Figure: sample entries of the outer table VBAK, joined to the inner table KNA1 via KUNNR.

MANDT VBELN KUNNR
001 0000120 0000100
001 0000121 0000100
001 0000122 0000101
001 0000123 0000101
001 0000124 0000102
001 0000125 0000103
001 0000126 0000103
001 0000127 0000104
SORT g_itab_kna1 BY kunnr.
DELETE ADJACENT DUPLICATES FROM g_itab_kna1.
SELECT kunnr adrnr ...

INDEX RANGE SCAN KNA1~0 (table KNA1)
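The SORT / DELETE ADJACENT DUPLICATES fragment above belongs to the driver-table technique; a hedged sketch of the full pattern (the declarations and the FOR ALL ENTRIES read are filled in here as an illustration):

```abap
TYPES: BEGIN OF ty_kunnr,
         kunnr TYPE kna1-kunnr,
       END OF ty_kunnr.
TYPES: BEGIN OF ty_adr,
         kunnr TYPE kna1-kunnr,
         adrnr TYPE kna1-adrnr,
       END OF ty_adr.
DATA: gt_kunnr    TYPE STANDARD TABLE OF ty_kunnr,
      g_itab_kna1 TYPE STANDARD TABLE OF ty_adr.

" gt_kunnr was filled beforehand, e.g. with customer numbers from VBAK.
SORT gt_kunnr BY kunnr.
DELETE ADJACENT DUPLICATES FROM gt_kunnr COMPARING kunnr.

" Guard: an empty FOR ALL ENTRIES table would select every row.
IF gt_kunnr IS NOT INITIAL.
  SELECT kunnr adrnr
    FROM kna1
    INTO TABLE g_itab_kna1
    FOR ALL ENTRIES IN gt_kunnr
    WHERE kunnr = gt_kunnr-kunnr.
ENDIF.
```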
Subquery
SELECT kunnr adrnr
  INTO TABLE g_itab_vbak_kna1
  FROM kna1
  WHERE kunnr IN ( SELECT DISTINCT kunnr
                     FROM vbak
                     WHERE vbeln IN g_vbeln ).
INDEX RANGE SCAN VBAK~0
TABLE ACCESS BY INDEX ROWID KNA1
INDEX UNIQUE SCAN KNA1~0
Logical Database
The structure of a logical database determines the node hierarchy, and thus the order in which nodes are read. The read depth depends on the GET events specified in the program: the logical database reads all nodes on the direct access path down to the deepest GET event.

A GET event resembles loop processing, since it is executed several times in the program. Because this is very much like a nested SELECT, formulating a SELECT statement within a GET event can create problems.

Program a GET event referring to a node lower down in the hierarchy only if the data above it is also required, because the logical database also reads the key fields of all superior nodes. If that data is not required, do not use a logical database; program the SELECT statements yourself.

If the fields in your table are wide and you do not require all of them in your program, use the FIELDS addition when you formulate a GET statement. The statement GET dbtab FIELDS fields is like a SELECT field list and has the same level of performance.
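A hedged sketch of a report using a GET event with the FIELDS addition (the logical database assignment and node name are assumptions; check your own hierarchy):

```abap
REPORT z_ldb_demo.
" Assumes the report is assigned (in its attributes) to a logical
" database whose hierarchy contains the node VBAK.
NODES: vbak.

" FIELDS restricts the transport to the listed fields,
" like a SELECT field list.
GET vbak FIELDS vbeln kunnr.
  WRITE: / vbak-vbeln, vbak-kunnr.
```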
Pooled and Cluster Tables
Depending on how they are physically implemented, the ABAP Dictionary has three categories of tables: transparent, pooled, and cluster.
Figure: logical view (transparent tables such as TAB_B, cluster tables CLUST_A and CLUST_B, pooled tables POOL_A and POOL_B) mapped to the physical view of the database tables.
Pooled and cluster tables group several logically defined tables from the ABAP Dictionary into one physical database table. For pooled tables, the data is located in a table pool; for cluster tables, the data is located in a table cluster.
July 10, 2010
Advantages:
o Data compression
o Less memory space
o Less network load
o Fewer tables and table fields
o Fewer different SQL statements
o Less load on the database dictionary and database buffer
o Simpler administration
o For cluster tables: fewer database accesses
Disadvantages:
o Limitations on database functionality: no views or ABAP JOINs, no secondary indexes, no GROUP BY or ORDER BY, no native SQL, no table appends
o For cluster tables: limited selection on cluster key fields
o For pooled tables: longer keys than necessary
Cluster Tables
Figure: logical cluster table TABA mapped to the physical table cluster TABAB. The cluster record consists of the key fields followed by the TIMESTAMP and PAGELG columns; the data of several logical rows is packed into the pages of one cluster record.
Cluster table BSEG (stored in table cluster RFBLG):

SELECT bukrs belnr
  FROM bseg
  INTO TABLE g_itab_bsid
  WHERE kunnr = '0000000100'.

Resulting database statement:

SELECT MANDT, BUKRS, ..
  FROM RFBLG
  WHERE MANDT = :A0
  ORDER BY MANDT, BUKRS, ..

Transparent table BSID:

SELECT bukrs belnr
  FROM bsid
  INTO TABLE g_itab_bsid
  WHERE kunnr = '0000000100'.

Resulting database statement:

SELECT BUKRS, MANDT, ..
  FROM BSID
  WHERE MANDT = :A0
    AND KUNNR = :A1.
Pooled Tables
Figure: logical pooled table TABA mapped to the physical table pool TABAB. The pool record consists of TABNAME, VARKEY (the pooled table's key fields), DATALN, and VARDATA (the remaining fields).
The WHERE conditions in the SQL statement refer to key fields of table AA005. Therefore, all conditions are transferred to the database in field VARKEY. The database interface adds an ORDER BY clause on the fields TABNAME and VARKEY, which are the key fields of the table pool.
Unselective access
o Buffer AA005 fully on the application server: repeated database reads are no longer necessary
o Or remove AA005 from table pool KAPOL and create an index for MATNR: efficient data reads become possible
Selective access
Figure: data flow from the application server across the network to the DBMS processes, the database buffer, and the database.
Types of Buffering
Degree of Invalidation
In work area mode, changes on the database are made by accessing single records. You perform single-record access with ABAP statements such as UPDATE/INSERT/MODIFY/DELETE dbtab (FROM wa) or INSERT dbtab VALUES wa.

Mass processing changes the database table in set mode, using ABAP statements such as UPDATE/INSERT/MODIFY/DELETE dbtab FROM itab, UPDATE dbtab SET field = value WHERE field = condition, or DELETE dbtab WHERE field = condition.

Invalidation rules:
o Fully buffered tables: all records are invalidated by database changes.
o Generically buffered tables: in work area mode, only those records are invalidated that have the same instance of the generic key as the work area of the executed SQL statement; in set mode, all data records are invalidated.
o Single-record buffering: in work area mode, only the changed record is invalidated; in set mode, the whole table is invalidated.

The degree of invalidation corresponds to the degree to which the table buffers must be refilled.
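A minimal sketch of the two change modes (the table ztcust and its fields are hypothetical, used only to illustrate the contrast):

```abap
" Hypothetical buffered customizing table ztcust with work area gs_cust.
DATA gs_cust TYPE ztcust.

" Work area mode: a single record is changed; with single-record
" buffering, only this record is invalidated in the buffer.
gs_cust-land1 = 'DE'.
gs_cust-flag  = 'X'.
MODIFY ztcust FROM gs_cust.

" Set mode: mass processing; for a table with single-record
" buffering, this invalidates the entire buffered table.
UPDATE ztcust SET flag = ' ' WHERE land1 = 'DE'.
```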
Generic Buffering
=> Database
SELECT * FROM t100
  INTO TABLE g_itab_t100
  WHERE sprsl = 'D'
    AND arbgb = 'BC490'
    AND msgnr = '050'.

SELECT SINGLE * FROM t100
  INTO g_wa_t100
  WHERE sprsl = 'D'
    AND arbgb = 'BC490'
    AND msgnr = '050'.
Buffering Strategy

Technical criteria:
o Small tables, usually < 1 MB
o Read often but hardly ever changed
o Temporary data inconsistency acceptable
o Access mainly via key fields
o Buffer tables > 10 MB in exceptional cases only
o Table buffers cannot be accessed via secondary indexes
o Check available memory space before buffering more tables

Semantic criteria:
o Do not buffer transaction data: big tables, frequently changed
o Avoid buffering master data: big tables, different access paths to the data
o Buffer customizing data: small tables, few changes
Define minimum requirements for entries on selection screens and search helps (end-user training, operation concept).

Customer-owned selection screen:
o Define selective fields (PARAMETERS, SELECT-OPTIONS) as required entry fields
o Check user entries for minimum requirements
o Implement appropriate SQL statements depending on user entries

Customer-owned search help:
o Create the customer-owned search help following the SAP standard
o If necessary, define input fields as required entry fields
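One way to enforce such minimum requirements on a customer-owned selection screen (the field names and the message text are illustrative assumptions):

```abap
TABLES kna1.

" Selective field declared as a required entry.
PARAMETERS p_bukrs TYPE bukrs OBLIGATORY.
SELECT-OPTIONS s_kunnr FOR kna1-kunnr.

AT SELECTION-SCREEN.
  " Reject unselective entries before any SQL statement runs.
  IF s_kunnr[] IS INITIAL.
    MESSAGE 'Enter at least one customer number'(001) TYPE 'E'.
  ENDIF.
```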
Internal Tables
Table Types
Improving performance:
o Reduce the number of lines used
o Reduce the search area
o Implement mass processing
o Reduce copy costs into a work area or header line
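A small sketch of how the table type changes the search cost (the types and key values are illustrative): a linear search on a standard table versus key access on a hashed table.

```abap
TYPES: BEGIN OF ty_mat,
         matnr TYPE matnr,
         maktx TYPE maktx,
       END OF ty_mat.

DATA: gt_std  TYPE STANDARD TABLE OF ty_mat,
      gt_hash TYPE HASHED TABLE OF ty_mat WITH UNIQUE KEY matnr,
      gs_mat  TYPE ty_mat.

" Standard table: linear search, cost grows with the number of lines.
READ TABLE gt_std INTO gs_mat WITH KEY matnr = '055123'.

" Hashed table: hash key access, cost independent of the table size.
READ TABLE gt_hash INTO gs_mat WITH TABLE KEY matnr = '055123'.
```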
No Field Transfer
Hashed Tables
Sorted Tables
Standard Tables
Thank You