Performance Tuning Overview
By PenchalaRaju.Yanamala
Complete the following tasks to improve session performance:
1. Optimize the target. Enables the Integration Service to write to the targets efficiently. For more information, see Optimizing the Target.
2. Optimize the source. Enables the Integration Service to read source data efficiently. For more information, see Optimizing the Source.
3. Optimize the mapping. Enables the Integration Service to transform and move data efficiently. For more information, see Optimizing Mappings.
4. Optimize the transformation. Enables the Integration Service to process transformations in a mapping efficiently. For more information, see Optimizing Transformations.
5. Optimize the session. Enables the Integration Service to run the session more quickly. For more information, see Optimizing Sessions.
6. Optimize the grid deployments. Enables the Integration Service to run on a grid with optimal performance. For more information, see Optimizing Grid Deployments.
7. Optimize the PowerCenter components. Enables the Integration Service and Repository Service to function optimally. For more information, see Optimizing the PowerCenter Components.
8. Optimize the system. Enables PowerCenter service processes to run more quickly. For more information, see Optimizing the System.
If you tune all the bottlenecks, you can further optimize session performance by increasing the number of pipeline partitions in the session. Adding partitions can improve performance by utilizing more of the system hardware while processing the session.
Bottlenecks Overview
The first step in performance tuning is to identify performance bottlenecks. Performance bottlenecks can occur in the source and target databases, the mapping, the session, and the system. Look for performance bottlenecks in the following order:
1. Target
2. Source
3. Mapping
4. Session
5. System
Use the following methods to identify performance bottlenecks:
Run test sessions. You can configure a test session to read from a flat file
source or to write to a flat file target to identify source and target bottlenecks.
Analyze performance details. Analyze performance details, such as
performance counters, to determine where session performance decreases.
Analyze thread statistics. Analyze thread statistics to determine the optimal
number of partition points.
Monitor system performance. You can use system monitoring tools to view
the percentage of CPU use, I/O waits, and paging to identify system
bottlenecks. You can also use the Workflow Monitor to view system resource
usage.
Target Bottlenecks
The most common performance bottleneck occurs when the Integration Service
writes to a target database. Small checkpoint intervals, small database network
packet sizes, or problems during heavy loading operations can cause target
bottlenecks.
To identify a target bottleneck, complete the following tasks:
Configure a copy of the session to write to a flat file target. If the session
performance increases significantly, you have a target bottleneck. If a session
already writes to a flat file target, you probably do not have a target bottleneck.
Read the thread statistics in the session log. When the Integration Service
spends more time on the writer thread than the transformation or reader
threads, you have a target bottleneck.
Source Bottlenecks
Performance bottlenecks can occur when the Integration Service reads from a
source database. An inefficient query or small database network packet sizes can
cause source bottlenecks.
If the session reads from a relational source, use the following methods to
identify source bottlenecks:
Filter transformation
Read test mapping
Database query
If the session reads from a flat file source, you probably do not have a source
bottleneck.
You can use a Filter transformation in the mapping to measure the time it takes
to read source data.
Add a Filter transformation after each source qualifier. Set the filter condition to
false so that no data is processed past the Filter transformation. If the time it
takes to run the new session remains about the same, you have a source
bottleneck.
You can create a read test mapping to identify source bottlenecks. A read test
mapping isolates the read query by removing the transformations in the mapping.
Run a session against the read test mapping. If the session performance is
similar to the original session, you have a source bottleneck.
To identify source bottlenecks, execute the read query directly against the source
database.
Copy the read query directly from the session log. Execute the query against the
source database with a query tool such as isql. On Windows, you can load the
result of the query in a file. On UNIX, you can load the result of the query in
/dev/null.
Measure the query execution time and the time it takes for the query to return the
first row.
Eliminating Source Bottlenecks
Complete the following tasks to eliminate source bottlenecks:
Set the number of bytes the Integration Service reads per line if the Integration
Service reads from a flat file source.
Have the database administrator optimize database performance by optimizing
the query.
Increase the database network packet size.
Configure index and key constraints.
If there is a long delay between the two time measurements in a database
query, you can use an optimizer hint.
Mapping Bottlenecks
If you determine that you do not have a source or target bottleneck, you may
have a mapping bottleneck.
To identify mapping bottlenecks, complete the following tasks:
Read the thread statistics and work time statistics in the session log. When the
Integration Service spends more time on the transformation thread than the
writer or reader threads, you have a transformation bottleneck. When the
Integration Service spends more time on one transformation, it is the bottleneck
in the transformation thread.
Analyze performance counters. High Errorrows and Rowsinlookupcache counters indicate a mapping bottleneck.
Add a Filter transformation before each target definition. Set the filter condition
to false so that no data is loaded into the target tables. If the time it takes to run
the new session is the same as the original session, you have a mapping
bottleneck.
Session Bottlenecks
If you do not have a source, target, or mapping bottleneck, you may have a
session bottleneck. Small cache size, low buffer memory, and small commit
intervals can cause session bottlenecks. To identify a session bottleneck, analyze the performance details. Performance details display information about each transformation, such as the number of input rows, output rows, and error rows.
System Bottlenecks
After you tune the source, target, mapping, and session, consider tuning the
system to prevent system bottlenecks. The Integration Service uses system
resources to process transformations, run sessions, and read and write data. The
Integration Service also uses system memory to create cache files for
transformations, such as Aggregator, Joiner, Lookup, Sorter, XML, and Rank.
You can view system resource usage in the Workflow Monitor. You can use
system tools to monitor Windows and UNIX systems.
You can view the Integration Service properties in the Workflow Monitor to see
CPU, memory, and swap usage of the system when you are running task
processes on the Integration Service. Use the following Integration Service
properties to identify performance issues:
CPU%. The percentage of CPU usage includes other external tasks running on
the system.
Memory usage. The percentage of memory usage includes other external tasks
running on the system. If the memory usage is close to 95%, check if the tasks
running on the system are using the amount indicated in the Workflow Monitor
or if there is a memory leak. To troubleshoot, use system tools to check the
memory usage before and after running the session and then compare the
results to the memory usage while running the session.
Swap usage. Swap usage is a result of paging due to possible memory leaks or
a high number of concurrent tasks.
Identifying System Bottlenecks on Windows
You can view the Performance and Processes tab in the Task Manager for
system information. The Performance tab in the Task Manager provides an
overview of CPU usage and total memory used. Use the Performance Monitor to
view more detailed information.
Use the Windows Performance Monitor to create a chart that provides the
following information:
Percent processor time. If you have more than one CPU, monitor each CPU
for percent processor time.
Pages/second. If pages/second is greater than five, you may have excessive
memory pressure (thrashing).
Physical disks percent time. The percent of time that the physical disk is busy
performing read or write requests.
Physical disks queue length. The number of users waiting for access to the
same disk device.
Server total bytes per second. The number of bytes the server has sent to and received from the network.
Identifying System Bottlenecks on UNIX
top. View overall system performance. This tool displays CPU usage, memory
usage, and swap usage for the system and for individual processes running on
the system.
iostat. Monitor the loading operation for every disk attached to the database
server. Iostat displays the percentage of time that the disk is physically active. If
you use disk arrays, use utilities provided with the disk arrays instead of iostat.
vmstat. Monitor disk swapping actions. Swapping should not occur during the
session.
sar. View detailed system activity reports of CPU, memory, and disk usage. You
can use this tool to monitor CPU loading. It provides percent usage on user,
system, idle time, and waiting time. You can also use this tool to monitor disk
swapping actions.
Eliminating System Bottlenecks
If the CPU usage is more than 80%, check the number of concurrent running
tasks. Consider changing the load or using a grid to distribute tasks to different
nodes. If you cannot reduce the load, consider adding more processors.
If swapping occurs, increase the physical memory or reduce the number of
memory-intensive applications on the disk.
If you have excessive memory pressure (thrashing), consider adding more
physical memory.
If the physical disk percent time is high, tune the cache for PowerCenter to use in-memory
cache instead of writing to disk. If you tune the cache, requests are still in
queue, and the disk busy percentage is at least 50%, add another disk device or
upgrade to a faster disk device. You can also use a separate disk for each
partition in the session.
If physical disk queue length is greater than two, consider adding another disk
device or upgrading the disk device. You also can use separate disks for the
reader, writer, and transformation threads.
Consider improving network bandwidth.
When you tune UNIX systems, tune the server for a major database system.
If the percent time spent waiting on I/O (%wio) is high, consider using other
under-utilized disks. For example, if the source data, target data, lookup, rank,
and aggregate cache files are all on the same disk, consider putting them on
different disks.
Optimizing the Target
Flat File Targets
If you use a shared storage directory for flat file targets, you can optimize session
performance by ensuring that the shared storage directory is on a machine that is
dedicated to storing and managing files, instead of performing other tasks.
If the Integration Service runs on a single node and the session writes to a flat file
target, you can optimize session performance by writing to a flat file target that is
local to the Integration Service process node.
Dropping Indexes and Key Constraints
When you define key constraints or indexes in target tables, you slow the loading
of data to those tables. To improve performance, drop indexes and key
constraints before you run the session. You can rebuild those indexes and key
constraints after the session completes.
If you decide to drop and rebuild indexes and key constraints on a regular basis, you can use the following methods to perform these operations each time you run the session:
Use pre-load and post-load stored procedures.
Use pre-session and post-session SQL commands.
Increasing Database Checkpoint Intervals
The Integration Service performance slows each time it waits for the database to
perform a checkpoint. To decrease the number of checkpoints and increase
performance, increase the checkpoint interval in the database.
Note: Although you gain performance when you reduce the number of
checkpoints, you also increase the recovery time if the database shuts down
unexpectedly.
Using Bulk Loads
You can use bulk loading to improve the performance of a session that inserts a
large amount of data into a DB2, Sybase ASE, Oracle, or Microsoft SQL Server
database. Configure bulk loading in the session properties.
When bulk loading, the Integration Service bypasses the database log, which
speeds performance. Without writing to the database log, however, the target
database cannot perform rollback. As a result, you may not be able to perform
recovery. When you use bulk loading, weigh the importance of improved session
performance against the ability to recover an incomplete session.
When bulk loading to Microsoft SQL Server or Oracle targets, define a large
commit interval to increase performance. Microsoft SQL Server and Oracle start
a new bulk load transaction after each commit. Increasing the commit interval
reduces the number of bulk load transactions, which increases performance.
Minimizing Deadlocks
Encountering deadlocks can slow session performance. To improve session performance, you can increase the number of target connection groups the Integration Service uses to write to the targets in a session. To use a different target connection group for each target in a session, use a different database connection name for each target instance.
You can also configure the Integration Service to retry target writes after a deadlock. For more information about configuring deadlock retries, see the PowerCenter Workflow Administration Guide.
Increasing Database Network Packet Size
If you write to Oracle, Sybase ASE, or Microsoft SQL Server targets, you can
improve the performance by increasing the network packet size. Increase the
network packet size to allow larger packets of data to cross the network at one
time. Increase the network packet size based on the database you write to:
Oracle. You can increase the database server network packet size in
listener.ora and tnsnames.ora. Consult your database documentation for
additional information about increasing the packet size, if necessary.
Sybase ASE and Microsoft SQL Server. Consult your database
documentation for information about how to increase the packet size.
For Sybase ASE or Microsoft SQL Server, you must also change the packet size
in the relational connection object in the Workflow Manager to reflect the
database server packet size.
Optimizing Oracle Target Databases
If the target database is Oracle, you can optimize the target database by
checking the storage clause, space allocation, and rollback or undo segments.
When you write to an Oracle database, check the storage clause for database
objects. Make sure that tables are using large initial and next values. The
database should also store table and index data in separate tablespaces,
preferably on different disks.
When you write to Oracle databases, the database uses rollback or undo
segments during loads. Ask the Oracle database administrator to ensure that the
database stores rollback or undo segments in appropriate tablespaces,
preferably on different disks. The rollback or undo segments should also have
appropriate storage clauses.
To optimize the Oracle database, tune the Oracle redo log. The Oracle database
uses the redo log to log loading operations. Make sure the redo log size and
buffer size are optimal. You can view redo log properties in the init.ora file.
If the Integration Service runs on a single node and the Oracle instance is local to
the Integration Service process node, you can optimize performance by using
IPC protocol to connect to the Oracle database. You can set up Oracle database
connection in listener.ora and tnsnames.ora.
Optimizing the Source
Optimizing the Query
If a session joins multiple source tables in one Source Qualifier, you might be
able to improve performance by optimizing the query with optimizing hints. Also,
single table select statements with an ORDER BY or GROUP BY clause may
benefit from optimization such as adding indexes.
Usually, the database optimizer determines the most efficient way to process the
source data. However, you might know properties about the source tables that
the database optimizer does not. The database administrator can create
optimizer hints to tell the database how to execute the query for a particular set
of source tables.
The query that the Integration Service uses to read data appears in the session
log. You can also find the query in the Source Qualifier transformation. Have the
database administrator analyze the query, and then create optimizer hints and
indexes for the source tables.
Use optimizing hints if there is a long delay between when the query begins
executing and when PowerCenter receives the first row of data. Configure
optimizer hints to begin returning rows as quickly as possible, rather than
returning all rows at once. This allows the Integration Service to process rows
in parallel with the query execution.
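For example, on an Oracle source you might use the FIRST_ROWS hint, which asks the optimizer to return rows as soon as they are available. The following is a sketch only, assuming a hypothetical ITEMS table; hint syntax varies by database, so have the database administrator choose the appropriate hint:
SELECT /*+ FIRST_ROWS */ ITEM_ID, ITEM_NAME, PRICE
FROM ITEMS
ORDER BY ITEM_ID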
You can also configure the source database to run parallel queries to improve
performance. For more information about configuring parallel queries, see the
database documentation.
Using Conditional Filters
A simple source filter on the source database can sometimes negatively impact
performance because of the lack of indexes. You can use the PowerCenter
conditional filter in the Source Qualifier to improve performance.
However, some sessions may perform faster if you filter the source data on the
source database. You can test the session with both the database filter and the
PowerCenter filter to determine which method improves performance.
Increasing Database Network Packet Size
If you read from Oracle, Sybase ASE, or Microsoft SQL Server sources, you can
improve the performance by increasing the network packet size. Increase the
network packet size to allow larger packets of data to cross the network at one
time. Increase the network packet size based on the database you read from:
Oracle. You can increase the database server network packet size in
listener.ora and tnsnames.ora. Consult your database documentation for
additional information about increasing the packet size, if necessary.
Sybase ASE and Microsoft SQL Server. Consult your database
documentation for information about how to increase the packet size.
For Sybase ASE or Microsoft SQL Server, you must also change the packet size
in the relational connection object in the Workflow Manager to reflect the
database server packet size.
Using Teradata FastExport
FastExport is a utility that uses multiple Teradata sessions to quickly export large
amounts of data from a Teradata database. You can create a PowerCenter
session that uses FastExport to read Teradata sources quickly. To use
FastExport, create a mapping with a Teradata source database. In the session,
use FastExport reader instead of Relational reader. Use a FastExport connection
to the Teradata tables that you want to export in a session.
Optimizing Mappings
Generally, you reduce the number of transformations in the mapping and delete
unnecessary links between transformations to optimize the mapping. Configure
the mapping with the least number of transformations and expressions to do the
most amount of work possible. Delete unnecessary links between
transformations to minimize the amount of data moved.
Optimizing Flat File Sources
If the session reads from a flat file source, you can improve session performance
by setting the number of bytes the Integration Service reads per line. By default,
the Integration Service reads 1024 bytes per line. If each line in the source file is
less than the default setting, you can decrease the line sequential buffer length in
the session properties.
If a source is a delimited flat file, you must specify the delimiter character to
separate columns of data in the source file. You must also specify the escape
character. The Integration Service reads the delimiter character as a regular
character if you include the escape character before the delimiter character. You
can improve session performance if the source flat file does not contain quotes or
escape characters.
XML files are usually larger than flat files because of the tag information. The
size of an XML file depends on the level of tagging in the XML file. More tags
result in a larger file size. As a result, the Integration Service may take longer to
read and cache XML sources.
Optimizing Pass-Through Mappings
You can optimize performance for pass-through mappings. To pass directly from
source to target without any other transformations, connect the Source Qualifier
transformation directly to the target. If you use the Getting Started Wizard to
create a pass-through mapping, the wizard creates an Expression transformation
between the Source Qualifier transformation and the target.
Optimizing Filters
If you filter rows from the mapping, you can improve efficiency by filtering early in
the data flow. Use a filter in the Source Qualifier transformation to remove the
rows at the source. The Source Qualifier transformation limits the row set
extracted from a relational source.
If you cannot use a filter in the Source Qualifier transformation, use a Filter
transformation and move it as close to the Source Qualifier transformation as
possible to remove unnecessary data early in the data flow. The Filter
transformation limits the row set sent to a target.
Avoid using complex expressions in filter conditions. To optimize Filter
transformations, use simple integer or true/false expressions in the filter
condition.
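For example, using hypothetical PRICE and ORDER_QTY ports, rather than a filter condition such as:
IIF( PRICE > 100 AND ORDER_QTY > 0, TRUE, FALSE )
use the equivalent simple true/false expression:
PRICE > 100 AND ORDER_QTY > 0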
Note: You can also use a Filter or Router transformation to drop rejected rows
from an Update Strategy transformation if you do not need to keep rejected rows.
Optimizing Expressions
You can also optimize the expressions used in the transformations. When possible, isolate slow expressions and simplify them.
Complete the following tasks to isolate the slow expressions:
1. Remove the expressions one by one from the mapping.
2. Run the mapping to determine the time it takes to run the mapping without the transformation.
If there is a significant difference in session run time, look for ways to optimize the slow expression.
Factoring Out Common Logic
If the mapping performs the same task in multiple places, reduce the number of
times the mapping performs the task by moving the task earlier in the mapping.
For example, you have a mapping with five target tables. Each target requires a
Social Security number lookup. Instead of performing the lookup five times, place
the Lookup transformation in the mapping before the data flow splits. Next, pass
the lookup results to all five targets.
Minimizing Aggregate Function Calls
When writing expressions, factor out as many aggregate function calls as possible. Each time you use an aggregate function call, the Integration Service must search and group the data. For example, in the following expression, the Integration Service reads COLUMN_A, finds the sum, then reads COLUMN_B, finds the sum, and finally finds the sum of the two sums:
SUM(COLUMN_A) + SUM(COLUMN_B)
If you factor out the aggregate function call, as below, the Integration Service adds COLUMN_A to COLUMN_B, then finds the sum of both.
SUM(COLUMN_A + COLUMN_B)
Replacing Common Expressions with Local Variables
If you use the same expression multiple times in one transformation, you can
make that expression a local variable. You can use a local variable only within
the transformation. However, by calculating the variable only once, you speed
performance. For more information about using local variables, see the
PowerCenter Transformation Guide.
Choosing DECODE Versus LOOKUP
When you use a LOOKUP function, the Integration Service must look up a table
in a database. When you use a DECODE function, you incorporate the lookup
values into the expression so the Integration Service does not have to look up a
separate table. Therefore, when you want to look up a small set of unchanging
values, use DECODE to improve performance.
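For example, the following sketch, which assumes a hypothetical STATE_CODE input port, replaces a lookup against a small, static table of state names:
DECODE( STATE_CODE,
'CA', 'California',
'NY', 'New York',
'TX', 'Texas',
'Unknown' )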
Using Operators Instead of Functions
The Integration Service reads expressions written with operators faster than
expressions with functions. Where possible, use operators to write expressions.
For example, you have the following expression that contains nested CONCAT functions:
CONCAT( CONCAT( CUSTOMERS.FIRST_NAME, ' ' ), CUSTOMERS.LAST_NAME )
You can rewrite that expression with the || string operator:
CUSTOMERS.FIRST_NAME || ' ' || CUSTOMERS.LAST_NAME
Optimizing IIF Functions
IIF functions can return a value and an action, which allows for more compact
expressions. For example, you have a source with three Y/N flags: FLG_A,
FLG_B, FLG_C. You want to return values based on the values of each flag.
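The following is a sketch of the kind of nested expression this can produce, assuming hypothetical value ports VAL_A, VAL_B, and VAL_C; the full expression would enumerate all eight flag combinations:
IIF( FLG_A = 'Y' AND FLG_B = 'Y' AND FLG_C = 'Y', VAL_A + VAL_B + VAL_C,
IIF( FLG_A = 'Y' AND FLG_B = 'Y' AND FLG_C = 'N', VAL_A + VAL_B,
IIF( FLG_A = 'Y' AND FLG_B = 'N' AND FLG_C = 'Y', VAL_A + VAL_C,
... )))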
If you take advantage of the IIF function, you can rewrite that expression as:
IIF( FLG_A = 'Y', VAL_A, 0.0 ) + IIF( FLG_B = 'Y', VAL_B, 0.0 ) + IIF( FLG_C = 'Y', VAL_C, 0.0 )
This results in three IIFs, three comparisons, two additions, and a faster session.
Evaluating Expressions
If you are not sure which expressions slow performance, evaluate the expression
performance to isolate the problem.
Optimizing External Procedures
You might want to block input data if the external procedure needs to alternate
reading from input groups. Without the blocking functionality, you would need to
write the procedure code to buffer incoming data. You can block input data
instead of buffering it which usually increases session performance.
For example, you need to create an external procedure with two input groups.
The external procedure reads a row from the first input group and then reads a
row from the second input group. If you use blocking, you can write the external
procedure code to block the flow of data from one input group while it processes
the data from the other input group. When you write the external procedure code
to block data, you increase performance because the procedure does not need
to copy the source data to a buffer. However, you could write the external
procedure to allocate a buffer and copy the data from one input group to the
buffer until it is ready to process the data. Copying source data to a buffer
decreases performance.
For more information about blocking data, see the PowerCenter Transformation
Guide.
Optimizing Transformations
Optimizing Aggregator Transformations
The Sorted Input option decreases the use of aggregate caches. When you use
the Sorted Input option, the Integration Service assumes all data is sorted by
group. As the Integration Service reads rows for a group, it performs aggregate
calculations. When necessary, it stores group information in memory.
The Sorted Input option reduces the amount of data cached during the session
and improves performance. Use this option with the Source Qualifier Number of
Sorted Ports option or a Sorter transformation to pass sorted data to the
Aggregator transformation.
You can increase performance when you use the Sorted Input option in sessions
with multiple partitions.
If you can capture changes from the source that affect less than half the target,
you can use incremental aggregation to optimize the performance of Aggregator
transformations.
When you use incremental aggregation, you apply captured changes in the
source to aggregate calculations in a session. The Integration Service updates
the target incrementally, rather than processing the entire source and
recalculating the same calculations every time you run the session.
You can increase the index and data cache sizes to hold all data in memory
without paging to disk.
Filter the data before you aggregate it. If you use a Filter transformation in the
mapping, place the transformation before the Aggregator transformation to
reduce unnecessary aggregation.
Limit the number of connected input/output or output ports to reduce the amount
of data the Aggregator transformation stores in the data cache.
Optimizing Custom Transformations
The Integration Service can pass a single row to a Custom transformation procedure or a block of rows in an array. You can increase performance when the procedure receives a block of rows:
You can decrease the number of function calls the Integration Service and
procedure make. The Integration Service calls the input row notification function
fewer times, and the procedure calls the output notification function fewer times.
You can increase the locality of memory access space for the data.
You can write the procedure code to perform an algorithm on a block of data
instead of each row of data.
Optimizing Joiner Transformations
Use the following tips to improve session performance with the Joiner
transformation:
Designate the master source as the source with fewer duplicate key
values. When the Integration Service processes a sorted Joiner transformation,
it caches rows for one hundred unique keys at a time. If the master source
contains many rows with the same key value, the Integration Service must
cache more rows, and performance can be slowed.
Designate the master source as the source with fewer rows. During a
session, the Joiner transformation compares each row of the detail source
against the master source. The fewer rows in the master, the fewer iterations of
the join comparison occur, which speeds the join process.
Perform joins in a database when possible. Performing a join in a database
is faster than performing a join in the session. The type of database join you use
can affect performance. Normal joins are faster than outer joins and result in
fewer rows. In some cases, you cannot perform the join in the database, such
as joining tables from two different databases or flat file systems.
To perform a join in a database, use the following options (a sketch of a join query follows these tips):
Create a pre-session stored procedure to join the tables in a database.
Use the Source Qualifier transformation to perform the join.
Join sorted data when possible. To improve session performance, configure
the Joiner transformation to use sorted input. When you configure the Joiner
transformation to use sorted data, the Integration Service improves performance
by minimizing disk input and output. You see the greatest performance
improvement when you work with large data sets. For an unsorted Joiner
transformation, designate the source with fewer rows as the master source.
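For example, here is a minimal sketch of a join performed in a Source Qualifier SQL override, assuming hypothetical ORDERS and CUSTOMERS tables that reside in the same database:
SELECT ORDERS.ORDER_ID, ORDERS.ORDER_DATE, CUSTOMERS.CUSTOMER_NAME
FROM ORDERS, CUSTOMERS
WHERE ORDERS.CUSTOMER_ID = CUSTOMERS.CUSTOMER_ID
Because the database performs the join, the mapping does not need a Joiner transformation for these two sources.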
Optimizing Lookup Transformations
If the lookup table is on the same database as the source table in your mapping
and caching is not feasible, join the tables in the source database rather than
using a Lookup transformation.
If you use a Lookup transformation, perform the following tasks to increase
performance:
Using Optimal Database Drivers
The Integration Service can connect to a lookup table using a native database
driver or an ODBC driver. Native database drivers provide better session
performance than ODBC drivers.
Caching Lookup Tables
The result of the Lookup query and processing is the same, whether or not you
cache the lookup table. However, using a lookup cache can increase session
performance for smaller lookup tables. In general, you want to cache lookup
tables that need less than 300 MB.
Use the following types of caches to increase performance:
Shared cache. You can share the lookup cache between multiple
transformations. You can share an unnamed cache between transformations in
the same mapping. You can share a named cache between transformations in
the same or different mappings.
Persistent cache. To save and reuse the cache files, you can configure the
transformation to use a persistent cache. Use this feature when you know the
lookup table does not change between session runs. Using a persistent cache
can improve performance because the Integration Service builds the memory
cache from the cache files instead of from the database.
Enabling Concurrent Caches
You can enable concurrent caches to improve performance. When the number of
additional concurrent pipelines is set to one or more, the Integration Service
builds caches concurrently rather than sequentially. Performance improves
greatly when the sessions contain a number of active transformations that may
take time to complete, such as Aggregator, Joiner, or Sorter transformations.
When you enable multiple concurrent pipelines, the Integration Service no longer
waits for active sessions to complete before it builds the cache. Other Lookup
transformations in the pipeline also build caches concurrently.
Optimizing Lookup Condition Matching
When the Lookup transformation matches lookup cache data with the lookup
condition, it sorts and orders the data to determine the first matching value and
the last matching value. You can configure the transformation to return any value
that matches the lookup condition. When you configure the Lookup
transformation to return any matching value, the transformation returns the first
value that matches the lookup condition. It does not index all ports as it does
when you configure the transformation to return the first matching value or the
last matching value. When you use any matching value, performance can
improve because the transformation does not index on all ports, which can slow
performance.
Reducing the Number of Cached Rows
You can reduce the number of rows included in the cache to increase
performance. Use the Lookup SQL Override option to add a WHERE clause to
the default SQL statement.
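For example, the following sketch caches only current rows, assuming a hypothetical ITEMS_DIM lookup table with a DISCONTINUED_FLAG column:
SELECT ITEM_ID, ITEM_NAME, PRICE
FROM ITEMS_DIM
WHERE DISCONTINUED_FLAG = 'N'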
Overriding the ORDER BY Statement
By default, the Integration Service generates an ORDER BY statement for a cached lookup that contains all lookup ports. To increase performance, you can suppress the default ORDER BY statement and enter an override ORDER BY with fewer columns.
The Lookup transformation includes three lookup ports used in the mapping,
ITEM_ID, ITEM_NAME, and PRICE. When you enter the ORDER BY statement,
enter the columns in the same order as the ports in the lookup condition. You
must also enclose all database reserved words in quotes.
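For example, if the lookup condition uses ITEM_ID and then PRICE, an override might look like the following sketch, again assuming the hypothetical ITEMS_DIM table:
SELECT ITEM_ID, ITEM_NAME, PRICE
FROM ITEMS_DIM
ORDER BY ITEM_ID, PRICE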
If you include more than one lookup condition, place the conditions in the
following order to optimize lookup performance:
Equal to (=)
Less than (<), greater than (>), less than or equal to (<=), greater than or equal
to (>=)
Not equal to (!=)
Filtering Lookup Rows
Create a filter condition to reduce the number of lookup rows retrieved from the
source when the lookup cache is built.
Indexing the Lookup Table
The Integration Service needs to query, sort, and compare values in the lookup
condition columns. The index needs to include every column used in a lookup
condition.
Optimizing Multiple Lookups
To determine which Lookup transformations process the most data, examine the
Lookup_rowsinlookupcache counters for each Lookup transformation. The
Lookup transformations that have a large number in this counter might benefit
from tuning their lookup expressions. If those expressions can be optimized,
session performance improves.
Related Topics:
Optimizing Expressions
If you configure a Lookup transformation to read the lookup source from a partial pipeline, the partial pipeline is a separate target load order group in session properties. You can configure multiple partitions in this pipeline to improve performance.
Optimizing Sequence Generator Transformations
The Number of Cached Values property determines the number of values the
Integration Service caches at one time. Make sure that the Number of Cached
Value is not too small. Consider configuring the Number of Cached Values to a
value greater than 1,000.
If you do not have to cache values, set the Number of Cache Values to 0.
Sequence Generator transformations that do not use cache are faster than those
that require cache.
Optimizing Sorter Transformations
Allocating Memory
For optimal performance, configure the Sorter cache size with a value less than
or equal to the amount of available physical RAM on the Integration Service
machine. Allocate at least 16 MB of physical memory to sort data using the
Sorter transformation. The Sorter cache size is set to 16,777,216 bytes by
default. If the Integration Service cannot allocate enough memory to sort data, it
fails the session.
If the amount of incoming data is greater than the amount of Sorter cache size,
the Integration Service temporarily stores data in the Sorter transformation work
directory. The Integration Service requires disk space of at least twice the
amount of incoming data when storing data in the work directory. If the amount of
incoming data is significantly greater than the Sorter cache size, the Integration
Service may require much more than twice the amount of disk space available to
the work directory.
Work Directories for Partitions
The Integration Service creates temporary files when it sorts data. It stores them
in a work directory. You can specify any directory on the Integration Service
machine to use as a work directory. By default, the Integration Service uses the
value specified for the $PMTempDir service process variable.
When you partition a session with a Sorter transformation, you can specify a
different work directory for each partition in the pipeline. To increase session
performance, specify work directories on physically separate disks on the
Integration Service nodes.
Optimizing Source Qualifier Transformations
Use the Select Distinct option for the Source Qualifier transformation if you want
the Integration Service to select unique values from a source. Use Select Distinct
option to filter unnecessary data earlier in the data flow. This can improve
performance.
Optimizing SQL Transformations
When you create an SQL transformation, you configure the transformation to use
external SQL queries or queries that you define in the transformation. When you
configure an SQL transformation to run in script mode, the Integration Service
processes an external SQL script for each input row. When the transformation
runs in query mode, the Integration Service processes an SQL query that you
define in the transformation.
Each time the Integration Service processes a new query in a session, it calls a
function called SQLPrepare to create an SQL procedure and pass it to the
database. When the query changes for each input row, it has a performance
impact.
When the transformation runs in query mode, construct a static query in the
transformation to improve performance. A static query statement does not
change, although the data in the query clause changes. To create a static query,
use parameter binding instead of string substitution in the SQL Editor. When you
use parameter binding you set parameters in the query clause to values in the
transformation input ports.
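For example, here is a minimal sketch of the two styles, assuming a hypothetical EMPLOYEE table and an EMP_ID input port. With string substitution, the query text changes for every row, so the Integration Service must prepare the statement each time:
SELECT FIRST_NAME, LAST_NAME FROM EMPLOYEE WHERE EMPLOYEE_ID = ~EMP_ID~
With parameter binding, the statement stays static and only the bound value changes:
SELECT FIRST_NAME, LAST_NAME FROM EMPLOYEE WHERE EMPLOYEE_ID = ?EMP_ID?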
When an SQL query contains commit and rollback query statements, the
Integration Service must recreate the SQL procedure after each commit or
rollback. To optimize performance, do not use transaction statements in an SQL
transformation query.
When you create the SQL transformation, you configure how the transformation
connects to the database. You can choose a static connection or you can pass
connection information to the transformation at run time.
When you configure the transformation to use a static connection, you choose a
connection from the Workflow Manager connections. The SQL transformation
connects to the database once during the session. When you pass dynamic
connection information, the SQL transformation connects to the database each
time the transformation processes an input row.
Optimizing Sessions
Grid
Pushdown Optimization
Concurrent Sessions and Workflows
Buffer Memory
Caches
Target-Based Commit
Real-time Processing
High Precision
Staging Areas
Log Files
Error Tracing
Post-Session Emails
Pushdown Optimization
To increase session performance, you can push transformation logic to the source or target database. The Integration Service translates the transformation logic into SQL queries and sends them to the database, which executes them instead of the Integration Service processing the transformations.
Buffer Memory
When the Integration Service initializes a session, it allocates blocks of memory to hold source and target data. You can configure the amount of buffer memory, or you can configure the
Integration Service to calculate buffer settings at run time. For information, see
the PowerCenter Workflow Administration Guide.
To increase the number of available memory blocks, adjust the following session
properties:
DTM Buffer Size. Increase the DTM buffer size on the Properties tab in the
session properties.
Default Buffer Block Size. Decrease the buffer block size on the Config Object
tab in the session properties.
Before you configure these settings, determine the number of memory blocks the
Integration Service requires to initialize the session. Then, based on default
settings, calculate the buffer size and the buffer block size to create the required
number of session blocks.
If you have XML sources or targets in a mapping, use the number of groups in
the XML source or target in the calculation for the total number of sources and
targets.
For example, you create a session that contains a single partition using a mapping that contains 50 sources and 50 targets. Then, you make the following calculations:
1. You determine that the session requires a minimum of 200 memory blocks:
[(total number of sources + total number of targets) * 2] = (session buffer blocks)
100 * 2 = 200
2. Based on default settings, you determine that you can change the DTM Buffer Size to 15,000,000, or you can change the Default Buffer Block Size to 54,000:
(session buffer blocks) = (.9) * (DTM Buffer Size) / (Default Buffer Block Size) * (number of partitions)
200 = .9 * 14222222 / 64000 * 1
or
200 = .9 * 12000000 / 54000 * 1
Note: For a session that contains n partitions, set the DTM Buffer Size to at least
n times the value for the session with one partition. The Log Manager writes a
warning message in the session log if the number of memory blocks is so small
that it causes performance degradation. The Log Manager writes this warning
message even if the number of memory blocks is enough for the session to run
successfully. The warning message also gives a suggestion for the proper value.
If you modify the DTM Buffer Size, increase the property by multiples of the
buffer block size.
The DTM Buffer Size setting specifies the amount of memory the Integration
Service uses as DTM buffer memory. The Integration Service uses DTM buffer
memory to create the internal data structures and buffer blocks used to bring
data into and out of the Integration Service. When you increase the DTM buffer
memory, the Integration Service creates more buffer blocks, which improves
performance during momentary slowdowns.
Note: Reducing the DTM buffer allocation can cause the session to fail early in
the process because the Integration Service is unable to allocate memory to the
required processes.
To increase the DTM buffer size, open the session properties and click the
Properties tab. Edit the DTM Buffer Size property in the Performance settings.
The default for DTM Buffer Size is 12,000,000 bytes. Increase the property by
multiples of the buffer block size, and then run and time the session after each
increase.
Optimizing the Buffer Block Size
Depending on the session source data, you might need to increase or decrease
the buffer block size.
If the machine has limited physical memory and the mapping in the session
contains a large number of sources, targets, or partitions, you might need to
decrease the buffer block size.
If you are manipulating unusually large rows of data, increase the buffer block
size to improve performance. If you do not know the approximate size of the
rows, determine the configured row size by completing the following steps:
1. In the Mapping Designer, open the mapping for the session.
2. Open the target instance.
3. Click the Ports tab.
4. Add the precision for all columns in the target.
5. If you have more than one target in the mapping, repeat steps 2 to 4 for each additional target to calculate the precision for each target.
6. Repeat steps 2 to 5 for each source definition in the mapping.
7. Choose the largest precision of all the source and target precisions for the total precision in the buffer block size calculation.
The total precision represents the total bytes needed to move the largest row of
data. For example, if the total precision equals 33,000, then the Integration
Service requires 33,000 bytes in the buffers to move that row. If the buffer block
size is 64,000 bytes, the Integration Service can move only one row at a time.
To increase the buffer block size, open the session properties and click the
Config Object tab. Edit the Default Buffer Block Size property in the Advanced
settings.
Increase the DTM buffer block setting in relation to the size of the rows. As with
DTM buffer memory allocation, increasing buffer block size should improve
performance. If you do not see an increase, buffer block size is not a factor in
session performance.
Caches
The Integration Service uses the index and data caches for XML targets and
Aggregator, Rank, Lookup, and Joiner transformations. The Integration Service
stores transformed data in the data cache before returning it to the pipeline. It
stores group information in the index cache. Also, the Integration Service uses a
cache to store data for Sorter transformations.
To configure the amount of cache memory, use the cache calculator or specify
the cache size. You can also configure the Integration Service to calculate cache
memory settings at run time. For more information, see the PowerCenter
Workflow Administration Guide.
If the allocated cache is not large enough to store the data, the Integration
Service stores the data in a temporary disk file, a cache file, as it processes the
session data. Performance slows each time the Integration Service pages to a
temporary file. Examine the performance counters to determine how often the
Integration Service pages to a file.
For transformations that use data cache, limit the number of connected
input/output and output only ports. Limiting the number of connected input/output
or output ports reduces the amount of data the transformations store in the data
cache.
Cache Directory Location
If you run the Integration Service on a grid and only some Integration Service nodes have fast access to the shared cache file directory, configure each session with a large cache to run on the nodes with fast access to the directory. To configure a session to run on a node with fast access to the directory, complete the following steps:
1. Create a PowerCenter resource.
2. Make the resource available to the nodes with fast access to the directory.
3. Assign the resource to the session.
If all Integration Service processes in a grid have slow access to the cache files,
set up a separate, local cache file directory for each Integration Service process.
An Integration Service process may have faster access to the cache files if it runs
on the same machine that contains the cache directory.
Note: You may encounter performance degradation when you cache large
quantities of data on a mapped or mounted drive.
Increasing the Cache Sizes
You configure the cache size to specify the amount of memory allocated to
process a transformation. The amount of memory you configure depends on how
much memory cache and disk cache you want to use. If you configure the cache
size and it is not enough to process the transformation in memory, the Integration
Service processes some of the transformation in memory and pages information
to cache files to process the rest of the transformation. Each time the Integration
Service pages to a cache file, performance slows.
You can examine the performance details of a session to determine when the
Integration Service pages to a cache file. The Transformation_readfromdisk or
Transformation_writetodisk counters for any Aggregator, Rank, or Joiner
transformation indicate the number of times the Integration Service pages to disk
to process the transformation.
If the session contains a transformation that uses a cache and you run the
session on a machine with sufficient memory, increase the cache sizes to
process the transformation in memory.
Using the 64-bit Version of PowerCenter
If you process large volumes of data or perform memory-intensive transformations, you can use the 64-bit PowerCenter version to increase session performance. The 64-bit version provides a larger memory space that can significantly reduce or eliminate disk input and output. This can improve session performance in the following areas:
Caching. With a 64-bit platform, the Integration Service is not limited to the 2
GB cache limit of a 32-bit platform.
Data throughput. With a larger available memory space, the reader, writer, and
DTM threads can process larger blocks of data.
Target-Based Commit
The commit interval setting determines the point at which the Integration Service
commits data to the targets. Each time the Integration Service commits,
performance slows. Therefore, the smaller the commit interval, the more often
the Integration Service writes to the target database, and the slower the overall
performance.
If you increase the commit interval, the number of times the Integration Service
commits decreases and performance improves.
When you increase the commit interval, consider the log file limits in the target
database. If the commit interval is too high, the Integration Service may fill the
database log file and cause the session to fail.
Therefore, weigh the benefit of increasing the commit interval against the
additional time you would spend recovering a failed session.
Click the General Options settings in the session properties to review and adjust
the commit interval.
Real-time Processing
Flush Latency
Flush latency determines how often the Integration Service flushes real-time data
from the source. The lower you set the flush latency interval, the more frequently
the Integration Service commits messages to the target. Each time the
Integration Service commits messages to the target, the session consumes more
resources and throughput drops.
High Precision
If a session runs with high precision enabled, disabling high precision might
improve session performance.
When you disable high precision, the Integration Service converts data to a
double. The Integration Service reads the Decimal row
3900058411382035317455530282 as 390005841138203 x 10^13.
Staging Areas
When you use a staging area, the Integration Service performs multiple passes
on the data. When possible, remove staging areas to improve performance. The
Integration Service can read multiple sources with a single pass, which can
alleviate the need for staging areas.
Log Files
A workflow runs faster when you do not configure it to write session and workflow
log files. Workflows and sessions always create binary logs. When you configure
a session or workflow to write a log file, the Integration Service writes logging
events twice. You can access the binary session and workflow logs in the Administration Console.
Error Tracing
If you need to debug the mapping and you set the tracing level to Verbose, you
may experience significant performance degradation when you run the session.
Do not use Verbose tracing when you tune performance.
Post-Session Emails
When you attach the session log to a post-session email, enable flat file logging.
If you enable flat file logging, the Integration Service gets the session log file from
disk. If you do not enable flat file logging, the Integration Service gets the log
events from the Log Manager and generates the session log file to attach to the
email. When the Integration Service retrieves the session log from the log
service, workflow performance slows, especially when the session log file is large
and the log service runs on a different node than the master DTM. For optimal
performance, configure the session to write to log file when you configure post-
session email to attach a session log.
When you run a session on a grid with Sequence Generator transformations, increase the Number of Cached Values property to reduce the communication required between the master and worker DTM processes.
For example, you have 150,000 rows of data and seven Sequence Generator
transformations. The number of cached values is 10. The master and worker
DTM communicate 15,000 times. If you increase the number of cached values to
15,000, the master and worker DTM communicate ten times.
Optimizing the System
Slow disk access on source and target databases, source and target file
systems, and nodes in the domain can slow session performance. Have the
system administrator evaluate the hard disks on the machines.
After you determine from the system monitoring tools that you have a system
bottleneck, make the following global changes to improve the performance of all
sessions:
If you use flat file as a source or target in a session and the Integration Service
runs on a single node, store the files on the same machine as the Integration
Service to improve performance. When you store flat files on a machine other
than the Integration Service, session performance becomes dependent on the
performance of the network connections. Moving the files onto the Integration
Service process system and adding disk space might improve performance.
If you use relational source or target databases, try to minimize the number of
network hops between the source and target databases and the Integration
Service process. Moving the target database onto a server system might improve
Integration Service performance.
When you run sessions that contain multiple partitions, have the network
administrator analyze the network and make sure it has enough bandwidth to
handle the data moving across the network from all partitions.
Configure the system to use more CPUs to improve performance. Multiple CPUs
allow the system to run multiple sessions in parallel as well as multiple pipeline
partitions in parallel.
Reducing Paging
Paging occurs when the Integration Service process operating system runs out of
memory for a particular operation and uses the local disk for memory. You can
free up more memory or increase physical memory to reduce paging and the
slow performance that results from paging. Monitor paging activity using system
tools.
If you cannot free up memory, you might want to add memory to the system.
Using Processor Binding
In a Sun Solaris environment, the system administrator can create and manage a
processor set using the psrset command. The system administrator can then use
the pbind command to bind the Integration Service to a processor set so the
processor set only runs the Integration Service. The Sun Solaris environment
also provides the psrinfo command to display details about each configured
processor and the psradm command to change the operational status of
processors. For more information, see the system administrator and Sun Solaris
documentation.