Informatica Cloud Data Integration Transformations
August 2022
© Copyright Informatica LLC 2006, 2022
This software and documentation are provided only under a separate license agreement containing restrictions on use and disclosure. No part of this document may be
reproduced or transmitted in any form, by any means (electronic, photocopying, recording or otherwise) without prior consent of Informatica LLC.
U.S. GOVERNMENT RIGHTS Programs, software, databases, and related documentation and technical data delivered to U.S. Government customers are "commercial
computer software" or "commercial technical data" pursuant to the applicable Federal Acquisition Regulation and agency-specific supplemental regulations. As such,
the use, duplication, disclosure, modification, and adaptation is subject to the restrictions and license terms set forth in the applicable Government contract, and, to the
extent applicable by the terms of the Government contract, the additional rights set forth in FAR 52.227-19, Commercial Computer Software License.
Informatica, Informatica Cloud, Informatica Intelligent Cloud Services, PowerCenter, PowerExchange, and the Informatica logo are trademarks or registered trademarks
of Informatica LLC in the United States and many jurisdictions throughout the world. A current list of Informatica trademarks is available on the web at https://
www.informatica.com/trademarks.html. Other company and product names may be trade names or trademarks of their respective owners.
Portions of this software and/or documentation are subject to copyright held by third parties. Required third party notices are included with the product.
The information in this documentation is subject to change without notice. If you find any problems in this documentation, report them to us at
[email protected].
Informatica products are warranted according to the terms and conditions of the agreements under which they are provided. INFORMATICA PROVIDES THE
INFORMATION IN THIS DOCUMENT "AS IS" WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING WITHOUT ANY WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND ANY WARRANTY OR CONDITION OF NON-INFRINGEMENT.
Chapter 1: Transformations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
Active and passive transformations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
Transformation types. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
Licensed transformations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
Incoming fields. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
Field name conflicts. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
Field rules. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
Data object preview. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
Variable fields. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
Expression macros. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
Macro types. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
Macro input fields. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
Vertical macros. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
Horizontal macros. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
Hybrid macros. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
File lists. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
Manually created file lists. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
File list commands. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
Using a file list in a Source transformation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
Using a file list in a Lookup transformation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
Multibyte hierarchical data. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
Custom queries. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
Source filtering and sorting. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
Web service sources. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
Web service operations for sources. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
Request messages. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
Field mapping for web service sources. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
Partitions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
Partitioning rules and guidelines. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
Partitioning examples. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
Reading hierarchical data in an elastic mapping. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
Multibyte hierarchical data. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
Source fields. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
Editing native data types in complex file sources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
Editing transformation data types. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
Nested aggregate functions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
Conditional clauses. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
Advanced properties. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
Hierarchical data in an elastic mapping. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
Example. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
Data Masking transformation example. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
Selecting the fields to map. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
Advanced properties. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
Multibyte hierarchical data. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
Hierarchy Builder transformation example. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
Configure data sources. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
Configure join conditions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
Configure filter conditions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
Configure group by fields. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
Configure order by fields. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
Running a mapping with JSON data. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
Multibyte hierarchical data. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
Field restrictions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
Hierarchy Processor transformation examples. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
Hierarchical to relational example. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
Relational to hierarchical example. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180
Hierarchical to hierarchical example. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 187
Hierarchical to flattened example. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192
Viewing the full class code. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213
Troubleshooting a Java transformation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 214
Finding the source of compilation errors. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 214
Identifying the error type. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 214
Java transformation example. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
Create the source file. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
Configure the mapping. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216
Configure the Java code snippets. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 217
Compile the code and run the mapping. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219
Lookup source filter. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 246
Dynamic lookup cache. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 247
Static and dynamic lookup comparison. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 248
Dynamic cache updates. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 248
Inserts and updates for insert rows. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 248
Dynamic cache and lookup source synchronization. . . . . . . . . . . . . . . . . . . . . . . . . . . . 249
Dynamic cache and target synchronization. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 250
Field mapping. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 250
Ignore fields in comparison. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251
Dynamic lookup query overrides. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251
Persistent lookup cache. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 252
Rebuilding the lookup cache. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 252
Unconnected lookups. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 253
Configuring an unconnected Lookup transformation. . . . . . . . . . . . . . . . . . . . . . . . . . . 254
Calling an unconnected lookup from another transformation. . . . . . . . . . . . . . . . . . . . . . 254
Connected Lookup example. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 255
Dynamic Lookup example. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 255
Unconnected Lookup example. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 256
Chapter 21: Normalizer transformation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 274
Normalized fields. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 274
Occurs configuration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 274
Unmatched groups of multiple-occurring fields. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 275
Generated keys. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 275
Normalizer field mapping. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 276
Normalizer field mapping options. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 276
Advanced properties. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 277
Target configuration for Normalizer transformations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 277
Normalizer field rule for parameterized sources. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 277
Mapping example with a Normalizer and Aggregator. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 278
Defining rank groups. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 303
Advanced properties. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 304
Hierarchical data in an elastic mapping. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 305
Rank transformation example. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 305
Static SQL queries. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 339
Dynamic SQL queries. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 340
Passive mode configuration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 342
SQL statements that you can use in queries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 342
Rules and guidelines for query processing. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 343
SQL transformation configuration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 344
Configuring the SQL type. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 345
SQL transformation field mapping. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 346
SQL transformation output fields. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 347
Advanced properties. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 349
Chapter 34: Velocity transformation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 375
Velocity transformation input format. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 375
Source configuration for file sources. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 376
Velocity template. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 377
Testing the template. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 378
Velocity transformation output. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 379
Target configuration for file targets. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 379
Velocity transformation parsers. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 379
Examples. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 379
XML conversion example. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 380
JSON conversion example. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 382
Index. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 404
Preface
Refer to Transformations for information about the transformations that you can include in mappings and
mapplets. Learn how to transform your data when you move it from source to target.
Informatica Resources
Informatica provides you with a range of product resources through the Informatica Network and other online
portals. Use the resources to get the most from your Informatica products and solutions and to learn from
other Informatica users and subject matter experts.
Informatica Documentation
Use the Informatica Documentation Portal to explore an extensive library of documentation for current and
recent product releases. To explore the Documentation Portal, visit https://docs.informatica.com.
If you have questions, comments, or ideas about the product documentation, contact the Informatica
Documentation team at [email protected].
Informatica Intelligent Cloud Services Community
https://network.informatica.com/community/informatica-network/products/cloud-integration
Developers can learn more and share tips at the Cloud Developer community:
https://network.informatica.com/community/informatica-network/products/cloud-integration/cloud-developers
Informatica Marketplace
https://marketplace.informatica.com/
Data Integration connector documentation
You can access documentation for Data Integration Connectors at the Documentation Portal. To explore the
Documentation Portal, visit https://docs.informatica.com.
Informatica Knowledge Base
To search the Knowledge Base, visit https://search.informatica.com. If you have questions, comments, or
ideas about the Knowledge Base, contact the Informatica Knowledge Base team at
[email protected].
Informatica Intelligent Cloud Services Trust Center
Subscribe to the Informatica Intelligent Cloud Services Trust Center to receive upgrade, maintenance, and
incident notifications. The Informatica Intelligent Cloud Services Status page displays the production status
of all the Informatica cloud products. All maintenance updates are posted to this page, and during an outage,
it will have the most current information. To ensure you are notified of updates and outages, you can
subscribe to receive updates for a single component or all Informatica Intelligent Cloud Services
components. Subscribing to all components is the best way to be certain you never miss an update.
Informatica Global Customer Support
For online support, click Submit Support Request in Informatica Intelligent Cloud Services. You can also use
Online Support to log a case. Online Support requires a login. You can request a login at
https://network.informatica.com/welcome.
The telephone numbers for Informatica Global Customer Support are available from the Informatica web site
at https://www.informatica.com/services-and-training/support-services/contact-us.html.
Chapter 1
Transformations
Transformations are a part of a mapping that represent the operations that you want to perform on data.
Transformations also define how data enters each transformation.
Each transformation performs a specific function. For example, a Source transformation reads data from a
source, and an Expression transformation performs row-level calculations.
Active and passive transformations
An active transformation can change the number of rows that pass through the transformation. For example,
the Filter transformation is active because it removes rows that do not meet the filter condition.
A passive transformation does not change the number of rows that pass through the transformation.
You can connect multiple branches to a downstream passive transformation when all transformations in the
branches are passive.
You cannot connect multiple active transformations or an active and a passive transformation to the same
downstream transformation or transformation input group because Data Integration might not be able to
concatenate the rows. An active transformation can change the number of rows, so its output might not match
the number of rows from another transformation.
For example, one branch in a mapping contains an Expression transformation, which is passive, and another
branch contains an Aggregator transformation, which is active. The Aggregator transformation performs
aggregations on groups, such as sums, and reduces the number of rows. If you connect the branches, Data
Integration cannot combine the rows from the Expression transformation with the different number of rows
from the Aggregator transformation. Use a Joiner transformation to join the two branches.
Transformation types
After you add a transformation to a mapping, you can define transformation details. Each transformation
type has a unique set of options that you can configure.
Note: The transformations that you can use in a mapping depend on the mapping type.
The following table provides a brief description of each transformation:
Transformation Description
Cleanse A passive transformation that adds a cleanse asset that you created in Data Quality to a
mapping or mapplet. Use a cleanse asset to standardize the form and content of your data.
Data Masking A passive transformation that masks sensitive data as realistic test data for nonproduction
environments.
Deduplicate An active transformation that adds a deduplicate asset that you created in Data Quality to a
mapping or mapplet. Use a deduplicate asset to find instances of duplicate identities in a data
set and optionally to consolidate the duplicates into a single record.
Filter An active transformation that filters data from the data flow.
Hierarchy Builder An active transformation that converts relational input into hierarchical output.
Hierarchy Parser A passive transformation that converts hierarchical input into relational output.
Hierarchy Processor An active transformation that converts hierarchical input into relational output, relational
input into hierarchical output, hierarchical input into hierarchical output of a different
schema, or hierarchical input into flattened denormalized output.
Input A passive transformation that passes data into a mapplet. Can be used in a mapplet, but not in
a mapping.
Labeler A passive transformation that adds a labeler asset that you created in Data Quality to a
mapping or mapplet. Use a labeler asset to identify the types of information in an input field
and to assign labels for each type to the data.
Lookup Looks up data from a lookup object. Defines the lookup object and lookup connection. Also
defines the lookup condition and the return values.
A passive lookup transformation returns one row. An active lookup transformation returns
more than one row.
Machine Learning Runs a machine learning model and returns predictions to the mapping.
Mapplet Inserts a mapplet into a mapping or another mapplet. A mapplet contains transformation logic
that you can create and use to transform data before it is loaded into the target.
Can be active or passive based on the transformation logic in the mapplet.
Normalizer An active transformation that processes data with multiple-occurring fields and returns a row
for each instance of the multiple-occurring data.
Output A passive transformation that passes data from a mapplet to a downstream transformation.
Can be used in a mapplet, but not in a mapping.
Parse A passive transformation that adds a parse asset that you created in Data Quality to a mapping
or mapplet. Use a parse asset to parse the words or strings in an input field into one or more
discrete output fields based on the types of information that the words or strings contain.
Python Runs Python code that defines transformation functionality. Can be active or passive.
Router An active transformation that you can use to apply a condition to incoming data.
Rule Specification A passive transformation that adds a rule specification asset that you created in Data Quality
to a mapping or mapplet. Use a rule specification asset to apply the data requirements of a
business rule to a data set.
Sorter A passive transformation that sorts data in ascending or descending order, according to a
specified sort condition.
Structure Parser A passive transformation that analyzes unstructured data from a flat file source and writes the
data in a structured format.
Transaction Control An active transformation that commits or rolls back sets of rows during a mapping run.
Union An active transformation that merges data from multiple input groups into a single output
group.
Velocity A passive transformation that executes a Velocity script to convert JSON or XML hierarchical
input from one format to another without flattening the data.
Verifier A passive transformation that adds a verifier asset that you created in Data Quality to a
mapping or mapplet. Use a verifier asset to verify and enhance postal address data.
Web Services An active transformation that connects to a web service as a web service client to access,
transform, or deliver data.
Licensed transformations
The transformations that you can use in a mapping vary based on your organization's licenses.
In the Mapping Designer, you can use the licensed transformations icon to show all transformations available
in Data Integration or only those that your organization has licenses for. By default, the transformation
palette shows the licensed transformations. To see all transformations, click the licensed transformations
icon.
The licensed transformations icon appears at the bottom of the transformation palette as shown in the
following image:
• Show Licensed. The palette only displays transformations available for your organization.
• Show All. The palette displays all transformations available in Data Integration. Unlicensed
transformations are disabled and cannot be used in the mapping.
If a transformation license expires, you must renew the license to validate, run, or import any mapping or task
that contains the transformation.
Incoming fields
An incoming field is a field that enters a transformation from an upstream transformation.
By default, a transformation inherits all incoming fields from an upstream transformation. However, you
might want to change the default. For example, you might not need all of the fields from an upstream
transformation, or you might need to rename fields from an upstream transformation.
A field rule defines how data enters a transformation from an upstream transformation. You can create field
rules to specify which incoming fields to include or exclude and to rename incoming fields as required.
A field name conflict occurs when fields come from multiple transformations and have the same name. To
resolve a field name conflict caused by fields from an upstream transformation, you can create a field name
conflict resolution to rename incoming fields in bulk.
The following list shows the order of events that take place as fields enter and move through a
transformation:
1. Field name conflict resolution rules run and rename all fields from the chosen upstream transformation.
2. Field rules run and determine which incoming fields to include, exclude, or rename.

Field name conflicts
To resolve a field name conflict, you can create a field rule to rename fields. If you create a field rule to
resolve a field name conflict, you create the field rule in the upstream transformation.
Alternatively, field name conflict error messages contain a link that you can use to create a field name
conflict rule to resolve the field name conflict. A field name conflict rule renames all of the fields from the
upstream transformation, not just the fields that cause a conflict.
Field name conflict rules take effect before field rules take effect. Field name conflict rules are only
applicable to incoming fields from upstream transformations. Field name conflicts that occur after incoming
fields first enter a transformation cannot be corrected by field name conflict rules. For example, you cannot
use field name conflict rules to correct field name conflicts that occur due to field rules or activities such as
lookup fields. Instead, modify the field rules or transformations that cause the conflict.
To create a field name conflict rule from the error message, perform the following steps:
1. Click the link in the error message to access the Resolve Field Name Conflict dialog box.
2. Select the upstream transformation that contains the fields you want to rename in bulk.
3. In the Bulk Rename Options column, specify whether you want to rename by adding a prefix or by adding
a suffix.
4. Enter the text to add to the field names, then click OK.
Field rules
Configure a field rule based on incoming fields from an upstream transformation. Then configure the field
selection criteria and naming convention for the fields.
When you configure a field rule, you perform the following steps:
1. Choose the incoming fields that you want to include or exclude. To improve processing time and keep a
clean set of data, you can include only the incoming fields that you need.
2. Configure the field selection criteria to determine which incoming fields apply to the rule. If you use the
Named Fields selection criteria, you can use a parameter for the incoming fields.
3. Optionally, choose to rename the fields. To distinguish fields that come from different sources or to
avoid field name conflicts, you can rename incoming fields. If you use the pattern option, you can create
a parameter to rename fields in bulk.
4. Verify the order of execution. If you configure multiple rules, you can change the order in which the
mapping task applies them.
Note: You cannot configure field rules on Source transformations or Mapplet transformations that contain
sources.
The include/exclude operator works in conjunction with field selection criteria to determine which incoming
fields a field rule affects.
For example, you want a transformation to exclude all binary fields. You select the exclude operator to
indicate that the incoming fields that meet the field selection criteria do not pass into the current
transformation. Then you specify the binary data type for the field selection criteria.
You can choose one of the following field selection criteria:

All Fields
Includes all of the incoming fields. You can rename the incoming fields in bulk when you use this option
in combination with the Include operator.
Named Fields
Includes or excludes the incoming fields that you specify. Use the Named Fields selection criteria to
specify individual incoming fields to rename or to include or exclude from the incoming transformation.
When you enter the field selection criteria details, you can review all of the incoming fields and select the
fields to include or exclude. You can add a field that exists in the source if it does not display in the list.
You can also create a parameter to represent a field to include or exclude.
Fields by Data Types
Includes or excludes incoming fields with the data types that you specify. When you enter the field
selection criteria details, you can select the data types that you want to include or exclude.
Fields by Text or Pattern
Includes or excludes incoming fields by prefix, suffix, or pattern. You can use this option to select fields
that you renamed earlier in the data flow. When you enter the field selection criteria details, you can
select a prefix, suffix, or pattern, and define the rule to use.
When you select the prefix option or suffix option, you enter the text to use as the prefix or suffix. For
example, to find all fields that start with the string, "Cust," enter Cust as the prefix.
When you select the pattern option, you can enter a regular expression or you can use a parameter for
the pattern. The expression must use perl compatible regular expression syntax. For example, to find all
fields that start with the strings "Cust" or "Addr," enter the pattern Cust.*|Addr.*. To find all fields that
contain the string "Cust" or "CUST" anywhere in the field name, enter the pattern .*Cust.*|.*CUST.*. For
more information about perl compatible regular expression syntax, see the help for the REG_EXTRACT
function in Function Reference.
The following image shows the selection of the Fields by Data Types field selection criteria:
The following image shows the selection of the date/time data type for the field selection criteria details:
Renaming fields
You can rename fields individually or in bulk. When you rename fields individually, you select the fields you
want to rename from a list of incoming fields. Then you specify the name for each of the selected fields.
When you rename in bulk, you can rename all fields by adding a prefix, suffix, or pattern. When you rename
fields with a prefix or suffix, you enter the text string to use as a prefix or suffix. For example, you can specify
to rename all fields as FF_<field name>.
When you rename fields by pattern, you enter a regular expression to represent the pattern or use a
parameter to define the pattern in the task. You can create a simple expression to add a prefix or suffix to all
field names or you can create an expression to replace a particular pattern with particular text.
To replace a pattern with text, use a regular expression in the following syntax, where a forward slash
separates the pattern to match and the text the pattern will be replaced with:
<pattern to match>/<replacement text>
The following table provides a few examples of using regular expressions when you rename fields in bulk:
Goal Expression
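Illustrative entries (these examples simply apply the <pattern to match>/<replacement text> syntax described
above; the field name strings are examples only):
• Goal: Replace the string "Address" with "Addr" in all field names. Expression: Address/Addr
• Goal: Replace the string "Cust" with "Customer" in all field names. Expression: Cust/Customer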
The following image shows the Configure Field Rules dialog box with the Pattern bulk renaming option
selected and a pattern specified to use:
Carefully construct field renaming rules to ensure that the rules do not introduce issues such as field name
conflicts. If a field renaming rule causes field name conflicts, you can edit the rule.
Tip: If the upstream transformation is a source where you cannot rename in bulk, you can add an Expression
transformation to rename the fields.
To review the order in which the rules run, you can view the rules in the Field Rules area. The mapping task
runs the rules in the order in which the rules appear. If the order of the field rules is incorrect, you can
rearrange the order.
You also can preview the incoming fields for the transformation based on the rules that you have created in
the Preview Fields table. The Preview Fields table lists all included and excluded fields. For example, if you
create a field rule that excludes binary fields, the Excluded Fields list shows the binary fields as excluded
from the transformation.
If the Source transformation in the mapping uses a connection parameter or a data object parameter, the
Preview Fields table does not display the transformation incoming fields.
Field rule configuration example
You learn that multiple fields from the upstream transformation have the same names as fields in a source
transformation. To avoid field name conflicts, you decide to change the field names for all incoming fields.
You decide to rename the fields so that the source is distinguishable throughout the mapping.
To increase performance, you want to ensure that the data set only includes required data. You determine
that information regarding transaction dates is not required, so you decide that the date fields are not
necessary for the mapping.
To change the names of all of the incoming fields, you create a field rule to rename all fields with the
SalesForce_ prefix.
To exclude date fields, you create a rule to exclude fields with a date/time data type.
You review the order in which the rules appear. You realize that you want the rule to rename the fields to run
after the rule to exclude the date/time fields. You move the rule to remove date/time fields so that it appears
before the renaming rule.
Creating a field rule
1. On the Incoming Fields tab, in the Field Rules area, insert a row for the rule based on the order in which
the rules must run. In the Actions column for a rule that you want to run before or after the new rule,
select either Insert above or Insert below.
2. To specify whether the rule includes or excludes fields, from the Operator column, choose either Include
or Exclude.
3. In the Field Selection Criteria column, choose one of the following methods:
• To rename all of the incoming fields in bulk, select All Fields.
• To apply the rule to the fields that you specify, select Named Fields.
• To apply the rule based on the data type of each field, select Fields by Data Types.
• To apply the rule to all fields that contain a specific prefix, suffix, or pattern, select Fields by Text or
Pattern.
4. To provide the field selection details, in the Detail column, click the Configure or Rename link. The
Rename link appears if the field selection criteria is All Fields.
5. In the Configure Field Rules dialog box, select the fields to apply to the rule, based on the chosen field
selection criteria. Alternatively, click Parameters to add a parameter so fields can be selected in the
mapping task.
6. To rename fields, click the Rename Fields tab and choose to rename fields individually or in bulk.
If you want to rename all fields, you must rename in bulk. If you want to rename fields in bulk by pattern,
you can create a parameter to specify the pattern in the mapping task.
7. To ensure that the field rules run in a logical order, in the Field Rules area, review the order in which the
rules display. In the Included Fields and Excluded Fields lists, review the results of the rules. Move field
rules to the appropriate location if required.
8. To delete a rule, in the Actions column, select Delete.
Data object preview
This data preview feature is different from mapping data preview. For information on mapping data preview,
see Mappings in Cloud Data Integration.
To preview the data object, open the Source, Target, or Lookup Object tab of the Properties panel, and click
Preview Data.
When you preview data, Data Integration displays the first 10 rows. By default, Data Integration displays the
fields in native order. To display the fields in alphabetical order, enable the Display source fields in
alphabetical order option.
If the source, target, or lookup object is a flat file, you can also configure the formatting options. The
following table describes the formatting options for flat files:
Field Labels
For delimited files, determines whether the task generates the field labels or imports them from the
source file. If you import them from the source file, enter the row number that contains the field labels.

Fixed Width File Format
File format to use for fixed-width files. If there are no available fixed-width file formats, select
New > Components > Fixed-Width File Format to create one.
Note: Other formatting options might be available based on the connection type. For more information, see
the help for the appropriate connector.
Variable fields
A variable field defines calculations and stores data temporarily. You can use variable fields in the
Expression and Aggregator transformations.
For example, you want to generate a mailing list by concatenating first and last names, and then merging the
name with the address data. To do this, you might create a variable field, FullName, that concatenates the
First and Last fields. Then, you create an expression field, NameAddress, to concatenate the FullName
variable field with the Address field.
The results of a variable field do not pass to the data flow. To use data from a variable field in the data
flow, create an expression field for the variable field output. In the preceding example, to pass the
concatenated first and last name to the data flow, create a FullName_out expression field. And then, use the
FullName variable field as the expression for the field.
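For example, a minimal sketch of these fields in the expression language (the field names First, Last, and
Address are illustrative):
FullName (variable field): First || ' ' || Last
NameAddress (expression field): FullName || ', ' || Address
FullName_out (expression field): FullName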
Expression macros
An expression macro is a macro that you use to create repetitive or complex expressions in mappings.
You can use an expression macro to perform calculations across a set of fields or constants. For example,
you might use an expression macro to replace null values in a set of fields or to label items based on a set of
sales ranges.
In an expression macro, one or more input fields represent source data for the macro. An expression
represents the calculations that you want to perform. And an output field represents the results of the
calculations.
At run time, the task expands the expression to include all of the input fields and constants, and then writes
the results to the output fields.
You can create expression macros in Expression and Aggregator transformations, but you cannot combine an
expression macro and an in-out parameter in an Expression transformation.
Macro types
You can create the following types of macros:
Vertical
A vertical macro expands an expression vertically. The vertical macro generates a set of similar
expressions to perform the same calculation on multiple incoming fields.
Horizontal
A horizontal macro expands an expression horizontally. The horizontal macro generates one extended
expression that includes a set of fields or constants.
Hybrid
A hybrid macro expands an expression both vertically and horizontally. A hybrid macro generates a set
of vertical expressions that also expand horizontally.
Macro input fields
A macro input field in a horizontal macro can represent a set of incoming fields or a set of constants. You
can create multiple macro input fields in a horizontal macro to define multiple sets of constants.
For example, you want to apply an expression to a set of address fields. You create a macro input field
named %AddressFields% and define a field rule to indicate the incoming fields to use. When you configure
the expression, you use %AddressFields% to represent the incoming fields.
Vertical macros
Use a vertical macro to apply a macro expression to a set of incoming fields.
The macro input field in a vertical macro represents the incoming fields. The expression represents the
calculations that you want to perform on all incoming fields. And the macro output field represents a set of
output fields that passes the results of the calculations to the rest of the mapping. You configure the macro
expression in the macro output field.
The macro output field represents the output fields of the macro, but the names of the output fields are not
explicitly defined in the mapping. To include the results of a vertical macro in the mapping, configure a field
rule in the downstream transformation to include the output fields that the macro generates.
To write the results of a vertical macro to the target, link the output fields to target fields in the Target
transformation.
When the task runs, the task generates multiple expressions to perform calculations on each field that the
macro input field represents. The task also replaces the macro output field with actual output fields, and then
uses the output fields to pass the results of the calculations to the rest of the mapping.
Note: The macro output field does not pass any data.
Example
The following vertical macro expression trims leading and trailing spaces from fields that the %Addresses%
macro input field represents:
LTRIM(RTRIM(%Addresses%))
At run time, the task generates the following set of expressions to trim spaces from the fields that
%Addresses% represents:
LTRIM(RTRIM(Street))
LTRIM(RTRIM(City))
LTRIM(RTRIM(State))
LTRIM(RTRIM(ZipCode))
Configuring a vertical macro
You can configure a vertical macro on the Expression tab of the Expression transformation or the Aggregate
tab of the Aggregator transformation.
When you create a macro input field, define a name for the macro input field, and then use field rules to
define the incoming fields that you want to use. At run time, the macro input field expands to represent the
selected fields.
You can use the following field rules when you configure a macro input field:
• All Fields
• Named Fields
• Fields by Text or Pattern
The following image shows a Named Fields field rule that includes the Q1 to Q4 fields:
When you configure a macro output field, you select the macro input field to use and define a naming
convention for the output fields. You can customize a prefix or suffix for the naming convention. By default,
the macro output field uses the following naming convention for output fields: <macro_input_field>_out.
You can define the data type, precision, and scale of the output fields. Or, you can configure the macro output
field to use the datatype, precision, and scale of the incoming fields. Use the datatype of incoming fields
when the incoming fields include more than one datatype and when the expression does not change the
datatype of incoming data.
At run time, the task generates output fields based on the macro output field configuration. The task creates
an output field for each incoming field that the macro input field represents, and then writes the results of the
expression to the output fields.
For example, the following image shows a macro output field that creates output fields based on the
incoming fields that %QuarterlyData% represents:
If the %QuarterlyData% macro input field represents the Q1 to Q4 fields, then the task creates the following
output fields at run time: Q1_out, Q2_out, Q3_out, Q4_out. The output fields have the same datatype as the
incoming fields.
Note that you cannot define the precision and scale after you select the Input Field Type datatype.
Because an expression macro represents fields that are not explicitly defined until run time, you need to
configure the downstream transformation to include the output fields of a vertical macro. There are two ways
to do this:
• Create named fields in the downstream transformation. On the Incoming Fields tab, create a named field
rule and create a new incoming field for each output field of the vertical macro. You can use these fields
in downstream transformations.
• Alternatively, if your Target transformation is directly downstream from the macro, completely
parameterize the target field mapping. When you configure the mapping task, Data Integration creates the
macro output fields in the target. Map the incoming fields to the target fields.
Example
A macro input field named %InputDates% represents the following source fields for a macro that converts the
data to the Date data type:
OrderDate
ShipDate
PaymentReceived
The macro output field uses the default naming convention: <macro input field>_out. To use the Date fields
that the macro generates, create a Named Field rule in the downstream transformation. Create the following
fields:
OrderDate_out
ShipDate_out
PaymentReceived_out
Configure the field rule to include the fields that you create.
After you create the field rule, you can use the fields in expressions and field mappings in downstream
transformations.
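A downstream Expression transformation could then use the generated Date fields, for example (a sketch; the
DaysToShip output field name is illustrative):
DaysToShip = DATE_DIFF(ShipDate_out, OrderDate_out, 'DD')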
Vertical macro example
The Aggregator transformation uses the store ID field as the group by field. A %QuarterlyData% macro input
field reads sales data from the following input fields: Q1, Q2, Q3, and Q4.
A %QuarterlyData%_out macro output field is based on the %QuarterlyData% macro input field. To find the
sum of sales for each quarter, the macro output field includes the following expression: SUM(%QuarterlyData
%).
In the Target transformation, a field rule includes the following output fields in the incoming fields list:
Q1_out, Q2_out, Q3_out, Q4_out. In the target field mapping, the Qx_out fields are mapped to the target.
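Based on this configuration, the macro would expand at run time along the following lines (a sketch of the
generated expressions):
Q1_out = SUM(Q1)
Q2_out = SUM(Q2)
Q3_out = SUM(Q3)
Q4_out = SUM(Q4)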
The following image shows the vertical expression macro in an Aggregator transformation:
Horizontal macros
Use a horizontal macro to generate a single complex expression that includes a set of incoming fields or a
set of constants.
In a horizontal macro, a macro input field can represent a set of incoming fields or a set of constants.
In a horizontal macro, the expression represents calculations that you want to perform with the incoming
fields or constants. The expression must include a horizontal expansion function.
A horizontal macro expression produces one result, so a transformation output field passes the results to the
rest of the mapping. You configure the horizontal macro expression in the transformation output field.
The results of the expression pass to the downstream transformation with the default field rule. You do not
need additional field rules to include the results of a horizontal macro in the mapping.
To write the results of a horizontal macro to the target, connect the transformation output field to a target
field in the Target transformation.
Example
For example, a horizontal macro can check for null values in the fields represented by the %AllFields% macro
input field. When a field is null, it returns 1. And then, the %OPR_SUM% horizontal expansion function returns
the total number of null fields.
Horizontal expansion functions
Horizontal expansion functions use the following naming convention: %OPR_<function_type>%. Horizontal
expansion functions use square brackets ([ ]) instead of parentheses.
In the Field Expression dialog box, the functions appear in the Horizontal Expansion group of the functions
list.
%OPR_CONCAT%
Uses the CONCAT function and expands an expression in an expression macro to concatenate multiple
fields. %OPR_CONCAT% creates calculations similar to the following expression:
FieldA || FieldB || FieldC...
%OPR_CONCATDELIM%
Uses the CONCAT function and expands an expression in an expression macro to concatenate multiple
fields, and adds a comma delimiter. %OPR_CONCATDELIM% creates calculations similar to the following
expression:
FieldA || ", " || FieldB || ", " || FieldC...
%OPR_IIF%
Uses the IIF function and expands an expression in an expression macro to evaluate a set of IIF
statements. %OPR_IIF% creates calculations similar to the following expression:
IIF(<field> >= <constantA>, <constant1>,
IIF(<field> >= <constantB>, <constant2>,
IIF(<field> >= <constantC>, <constant3>, 'out of range')))
%OPR_SUM%
Uses the SUM function and expands an expression in an expression macro to return the sum of all fields.
%OPR_SUM% creates calculations similar to the following expression:
FieldA + FieldB + FieldC...
For more information about horizontal expansion functions, see Function Reference.
Configuring a horizontal macro
Configure a horizontal macro based on whether you want to use incoming fields or constants in the macro
expression.
When you create a macro input field, define a name for the macro input field, and then use field rules to
define the incoming fields that you want to use. At run time, the macro input field expands to represent the
selected fields.
You can use the following field rules when you configure a macro input field:
• All Fields
• Named Fields
• Fields by Text or Pattern
The following image shows a Named Fields field rule that includes the Q1 to Q4 fields:
When you create a macro input field, define a name for the macro input field, and then define the constants
that you want to use. At run time, the macro input field expands to represent the constants and uses them in
the listed order.
When you create multiple macro input fields with corresponding sets of constants, the task evaluates each
set of constants in the listed order.
The following image shows a macro input field that represents constants:
At run time, the macro input field expands and uses the constants in the following order: 50000, 100000,
150000.
When you create a transformation output field, you define the name and datatype for the field. You also
configure the expression for the macro. In the expression, include a horizontal expansion function and any
macro input fields that you want to use.
The default field rule passes the transformation output field to the downstream transformation. You can use
any field rule that includes the transformation output field to pass the results of a horizontal macro to the
mapping.
Horizontal macro example
In an Expression transformation, macro input fields define the constants to use in the expression.
%IncomeMin% defines the low end of each salary range and %IncomeMax% defines the high end of each
salary range. %EmployeeType% lists the job category that corresponds to each range.
The EmployeeStatus transformation output field passes the results to the mapping and includes the
following horizontal macro expression:
%OPR_IIF[ (EMP_SALARY>=%IncomeMin%) AND (EMP_SALARY<%IncomeMax%), %EmployeeType%,
'unknown' ]%
In the Target transformation, the default field rule includes the EmployeeStatus transformation output field in
the incoming fields list. In the target field mapping, the EmployeeStatus is mapped to the target.
The horizontal macro expression expands as follows when you run the task:
IIF(Salary>=5000 AND Salary<50000, 'IndividualContributor',
IIF(Salary>=50000 AND Salary<100000, 'Manager',
IIF(Salary>=100000 AND Salary<150000, 'SeniorManager', 'unknown')))
Note that the expression uses the first value of each macro input field in the first IIF expression and
continues with each subsequent set of constants.
Hybrid macros
A hybrid macro expands an expression both vertically and horizontally. A hybrid macro generates a set of
vertical expressions that also expand horizontally.
Configure a hybrid macro based on your business requirements. Use the configuration guidelines for vertical
and horizontal macros to create a hybrid macro.
Example
For example, the following expression uses the %OPR_IIF% horizontal expansion function to convert the
format of the date fields represented by the %dateports% macro input field to the 'mm-dd-yyyy' format:
%OPR_IIF[IsDate(%dateports%,%fromdateformat%),To_String(To_Date(%dateports%,%fromdateformat%),'mm-dd-yyyy'),%dateports%]%
The %fromdateformat% macro input field defines the different date formats used in the date fields:
mm/dd/yy and mm/dd/yyyy.
At run time, the application expands the expression vertically and horizontally, as follows:
IIF(IsDate(StartDate,'mm/dd/yy'),To_String(To_Date(StartDate,'mm/dd/yy'),'mm-dd-yyyy'),
IIF(IsDate(StartDate,'mm/dd/yyyy'),To_String(To_Date(StartDate,'mm/dd/yyyy'),'mm-dd-yyyy'), StartDate))
IIF(IsDate(EndDate,'mm/dd/yy'),To_String(To_Date(EndDate,'mm/dd/yy'),'mm-dd-yyyy'),
IIF(IsDate(EndDate,'mm/dd/yyyy'),To_String(To_Date(EndDate,'mm/dd/yyyy'),'mm-dd-yyyy'), EndDate))
The expression expands vertically to create an expression for the StartDate and EndDate fields that
%dateports% represents. The expression also expands horizontally to use the constants that
%fromdateformat% represents to evaluate the incoming fields.
File lists
You can use a file list as a source for flat file connections. A file list is a file that contains the names and
directories of each source file that you want to use in a mapping. Use a file list to enable a task to read
multiple source files for one source object in a mapping.
For example, you might want to use a file list if your organization collects data for multiple locations that you
want to process through the same mapping.
You configure a source object such as a Source transformation or Lookup transformation to read the file list.
You can also write the source file name to each target row. When you run a mapping that uses a file list, the
task reads rows of data from the different source files in the file list.
Use the following rules and guidelines when you create a file list:
• Each file in the list must use the user-defined code page that is configured in the connection.
• Each file in the list must share the same file properties as configured in the connection.
• If you do not specify a path for a file, the task assumes that the file is in the same directory as the file list.
• Each path must be local to the Secure Agent.
You can create a file list manually or you can use a command to create the file list.
Manually created file lists
When you create the file list, enter one file name or one file path and file name on each line. Data Integration
extracts the field names from the first file in the file list.
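For example, a manually created file list for three hypothetical source files might contain the following entries. The first two files are read from the directory that contains the file list, and the third file is read from the specified path:
western_sales.csv
eastern_sales.csv
/home/dsmith/flatfile/sales/central_sales.csv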
File list commands
You can use a command to generate a list of source files for a mapping. You can use a valid DOS or UNIX
command, batch file, or shell script. Data Integration reads each file in the list when the task runs.
Use a command to generate a file list when the list of source files changes often or when you want to
generate a file list based on specific conditions. For example, you can use a command to generate a file list
from all files in a directory or based on the file names.
Use the following guidelines when you generate a file list through a command:
• You must enter Windows commands that use parameters such as "/b" in a batch file.
• You must enter fully qualified file paths in each command, batch file, and shell script.
• You cannot use an in-out parameter for the file list command.
For example, you enter the following command in the Source transformation:
/home/dsmith/flatfile/parts/parts.sh
The parts.sh shell script contains the following lines, which generate a list of the .txt files in the parts directory:
cd /home/dsmith/flatfile/parts
ls *.txt
On Windows, a batch file that generates a file list might contain a command such as the following:
echo C:\sources\source.csv
Command sample file
When you generate a file list through a command, you select a sample file that Data Integration uses to
extract the field names. Data Integration does not extract data from the sample file unless the sample file is
included in the generated file list.
If a file in the generated file list does not contain all fields in the sample file, Data Integration sets the record
values for the missing fields to null. If a file in the file list contains fields that are not in the sample file, Data
Integration ignores the extra fields.
For example, the sample file that you select contains the fields CustID, NameLast, and NameFirst. One file in
the generated file list does not contain the NameFirst field. When Data Integration reads data from the file, it
sets the first names for each record in the file to null.
Another file in the generated file list contains the fields CustID, NameLast, NameFirst, and PhoneNo. Data
Integration does not import records for the PhoneNo field because the field is not in the sample file. If you
want to import the phone numbers, either select a sample file that contains the PhoneNo field, or add a field
for the phone numbers in the transformation.
Using a file list in a Source transformation
To use a file list in a Source transformation, create the text file, batch file, or shell script that creates the file list. Then configure the Source transformation to use the file list.
1. Create the text file, batch file, or shell script that creates the file list and install it locally to the Secure Agent.
2. In the Mapping Designer, select the Source transformation.
3. On the Source tab, select a flat file connection.
4. To use a manually created file list, perform the following steps:
a. In the Source Type field, select File List.
b. In the Object field, select the text file that contains the file list.
c. On the Fields tab, verify the incoming fields for the Source transformation.
Data Integration extracts source fields from the first file in the file list. If the source fields are not
correct, you can add or remove fields.
5. To use a file list that is generated from a command, perform the following steps:
a. In the Source Type field, select Command.
b. In the Sample Object field, select the sample file from which Data Integration extracts source fields.
You can use one of the files you use to generate the file list as the sample file or select a different
file.
c. In the Command field, enter the command that you use to generate the file list, for example, /home/dsmith/flatfile/parts/parts.sh.
d. On the Fields tab, verify the incoming fields for the Source transformation.
If the source fields are not correct, you can add or remove fields, or click the Source tab and select a
different sample file.
6. Optionally, to write the source file name to each target row, click the Fields tab, and enable the Add
Currently Processed Filename field option.
The CurrentlyProcessedFileName field is added to the fields table.
Using a file list in a Lookup transformation
To use a file list in a Lookup transformation, create the text file, batch file, or shell script that creates the file
list. Then configure the Lookup transformation to use the file list.
1. Create the text file, batch file, or shell script that creates the file list and install it locally to the Secure
Agent.
2. In the Mapping Designer, select the Lookup transformation.
3. On the Lookup Object tab, select a flat file connection.
4. To use a manually created file list, perform the following steps:
a. In the Source Type field, select File List.
b. In the Lookup Object field, select the text file that contains the file list.
c. On the Return Fields tab, verify the return fields for the Lookup transformation.
Data Integration extracts the return fields from the first file in the file list. If the return fields are not
correct, you can add or remove fields.
5. To use a file list that is generated from a command, perform the following steps:
a. In the Source Type field, select Command.
b. In the Lookup Object field, select the sample file from which Data Integration extracts return fields.
You can use one of the files you use to generate the file list as the sample file or select a different
file.
c. In the Command field, enter the command that you use to generate the file list, for example, /home/dsmith/flatfile/parts/parts.sh.
d. On the Return Fields tab, verify the return fields for the Lookup transformation.
If the return fields are not correct, you can add or remove fields, or click the Lookup Object tab and
select a different sample file.
Source transformation
A Source transformation extracts data from a source. When you add a Source transformation to a mapping,
you define the source connection, source objects, and source properties related to the connection type. For
some connection types, you can use multiple source objects within a Source transformation.
You can use a Source transformation to read data from the following source types:
• File. The Source transformation can read data from a single source file or a file list.
• Database. The Source transformation can read data from a single source table or multiple source tables.
• Web service. The Source transformation can read data from a single web service operation.
• Informatica Cloud Data Integration connectors. The Source transformation can read data from a single
source object, a multi-group source object, or multiple source objects based on the connection type.
For more information about sources for individual connectors, see the Connectors section of the online
help. If you create an elastic mapping, refer to Data Integration Elastic Administration in the Administrator
help for information about supported connectors.
You can use one or more Source transformations in a mapping. If you use two Source transformations in a
mapping, you can use a Joiner transformation to join the data. If you use multiple Source transformations
with the same structure, you can use a Union transformation to merge the data into a single pipeline.
In a Source transformation, the source properties that appear vary based on the connection type. For
example, when you select a Salesforce connection, you can use multiple related source objects and configure
the Salesforce API advanced source property. In contrast, when you select a flat file connection, you specify
the file type and configure the file formatting options.
Source object
Select the source object for the Source transformation on the Source tab of the Properties panel.
The properties that you configure for the source object vary based on the connection type and the mapping
type. Your organization's licenses can also determine the source properties that appear when the Source
transformation is part of a mapplet.
The following image shows the Source tab for a relational source:
1. Source details where you configure the source connection, source type, and source object.
2. Select the source object from the mapping inventory.
In the Details area, select the source connection, source type, and source object. For some source types,
you can select multiple source objects. You can also create a new connection.
The source type varies based on the connection type. For example, for relational sources you can select
a single object, multiple related objects, or an SQL query. For flat file sources, you can select a single
object, file list, or file list command.
If your organization administrator has configured Enterprise Data Catalog integration properties, and you
have added objects to the mapping from the Data Catalog page, you can select the source object from
the Inventory panel. If your organization administrator has not configured Enterprise Data Catalog
integration properties or you have not performed data catalog discovery, the Inventory panel is empty.
For more information about data catalog discovery, see Mappings.
Use a parameter.
You can use input parameters to define the source connection and source object when you run the
mapping task. For more information about parameters, see Mappings.
File sources
File sources include flat files and FTP/SFTP files. When you configure a file source, you specify the
connection, source type, and formatting options. Configure file source properties on the Source tab of the
Properties panel.
You can configure the following file source properties:
Source Type
Source type. The source type can be single object, file list, command, or parameter.
Object
If the source type is a single object, this property specifies the file source, for example, Customers.csv.
If the source type is a file list, this property specifies the text file that contains the file list, for example, SourceList.txt.
If the source type is a command, this property specifies the sample file from which Data Integration imports the source fields.
In an elastic mapping, the object name cannot contain the dollar sign character, $. The dollar sign is a reserved character for parameters.
Command
If the source type is a command, this property specifies the command that generates the source file list, for example, ItemSourcesCmd.bat.
Parameter
If the source type is a parameter, this property specifies the source parameter.
Formatting Options
Flat file format options. Opens the Formatting Options dialog box to define the format of the file. You can choose either a delimited or fixed-width file type. Default is delimited.
For a delimited flat file type, configure the following file format options:
- Delimiter. Delimiter character. Can be a comma, tab character, colon, semicolon, nonprintable control character, or a single-byte or multibyte character that you specify.
- Text Qualifier. Character to qualify text.
- Escape character. Escape character.
- Field labels. Determines if the task generates field labels or imports labels from the source file.
- First data row. The first row of data. The task starts the read at the row number that you enter.
You can use a tab, space, or any printable special character as a delimiter. The delimiter can have a maximum of 10 characters. The delimiter must be different from the escape character and text qualifier.
For a fixed-width flat file type, select the fixed-width file format to use. If you do not have a fixed-width file format, go to New > Components > Fixed-Width File Format to create one.
For more information about file lists and commands, see “File lists” on page 38. For more information about
parameters and file formats, see Mappings.
You can configure the following advanced properties for a file source:
Tracing Level
Detail level of error and status messages that Data Integration writes in the session log. You can choose terse, normal, verbose initialization, or verbose data. Default is normal.
Thousand Separator
Thousand separator character. Can be none, comma, or period. Cannot be the same as the decimal separator or the delimiter character. The field type must be Number. You might also need to update the field precision and scale. Default is None.
Decimal Separator
Decimal character. Can be a comma or period. Cannot be the same as the thousand separator or delimiter character. The field type must be Number. You might also need to update the field precision and scale. Default is Period.
Source File Directory
Name of the source directory for a flat file source. By default, the mapping task reads source files from the source connection directory. You can also use an input parameter to specify the file directory.
If you use the service process variable directory $PMSourceFileDir, the task reads source files from the configured path for the system variable. To find the configured path of a system variable, see the pmrdtm.cfg file located at the following directory:
<Secure Agent installation directory>\apps\Data_Integration_Server\<Data Integration Server version>\ICS\main\bin\rdtm
You can also find the configured path for the $PMSourceFileDir variable in the Data Integration Server system configuration details in Administrator.
Source File Name
File name, or file name and path of the source file.
Database sources
Database sources include relational sources such as Oracle, MySQL, and Microsoft SQL Server. When you
configure a Source transformation for a database source, you can use a single source table or multiple
source tables. If you use multiple tables as a source, you can select related tables or create a relationship
between tables.
To configure a Source transformation for a database source, you configure the source properties, select related objects or define relationships if you use multiple source tables, and optionally define query options such as filters and sort conditions. The following topics describe these tasks.
Database source properties
Configure properties for database sources such as the database connection, source type, and source
objects. You can also specify filter and sort conditions, pre- and post-SQL commands, and whether the output
is deterministic or repeatable.
You can configure the following database source properties:
Add Related Objects
For multiple sources. Displays objects related to the selected source object. Select an object with an existing relationship or click Custom Relationship to create a custom relationship with another object.
Select Distinct Rows Only
Reads unique rows from the source. Adds SELECT DISTINCT to the SQL query.
Define Query
For a custom query. Displays the Edit Custom Query dialog box. Enter a valid custom query and click OK.
Tracing Level
Detail level of error and status messages that Data Integration writes in the session log. You can choose terse, normal, verbose initialization, or verbose data. Default is normal.
Pre SQL
SQL command to run against the source before reading data from the source. You can enter a command of up to 5000 characters.
Post SQL
SQL command to run against the source after writing data to the target. You can enter a command of up to 5000 characters.
SQL Query
SQL query to override the default query that Data Integration uses to read data from the source. You can enter an SQL statement supported by the source database. See the example after this list.
Output is deterministic
Relational source or transformation output that does not change between mapping runs when the input data is consistent between runs. When you configure this property, the Secure Agent does not stage source data for recovery if transformations in the pipeline always produce repeatable data.
Output is repeatable
Relational source or transformation output that is in the same order between session runs when the order of the input data is consistent. When output is deterministic and output is repeatable, the Secure Agent does not stage source data for recovery.
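For example, an SQL query override for a hypothetical CUSTOMERS table might look like the following statement. The table and column names are illustrative only:
SELECT CUST_ID, CUST_NAME, REGION
FROM CUSTOMERS
WHERE REGION = 'WEST'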
Existing relationships
You can use relationships defined in the source system to join related objects. You can join objects with
existing relationships for the following connection types:
• Database
• Salesforce
• Some Informatica Cloud Data Integration connectors
To join related objects, you select a primary object. Then you select a related object from a list of related
objects.
For example, after you add Opportunity as a primary Salesforce source object, you can add any related
objects, such as Account.
The following image shows a list of Salesforce objects with existing relationships with the Opportunity
object:
Custom relationships
You can create custom relationships to join objects in the same source system. To create a custom
relationship, select a primary object, select another object from the source system, and then select a field
from each source to use in the join condition. You must also specify the join type and join operator.
You can select one of the following join types:
Inner
Performs a normal join. Includes rows with matching join conditions. Discards all rows that do not
match, based on the condition.
Left
Performs a left outer join. Includes all rows for the source to the left of the join syntax and the rows from
both tables that meet the join condition. Discards the unmatched rows from the right source.
Right
Performs a right outer join. Includes all rows for the source to the right of the join syntax and the rows
from both tables that meet the join condition. Discards the unmatched rows from the left source.
For example, the following image shows a custom relationship that uses an inner join to join the EMPLOYEE
and MANAGER database tables when the EMPLOYEE.E_MANAGERID and MANAGER.M_ID fields match:
Advanced relationships
You can create an advanced relationship for database sources when the source object in the mapping is
configured for multiple sources.
To create an advanced relationship, you add the primary source object in the Objects and Relationships
table. Then you select fields and write the SQL statement that you want to use. Use an SQL statement that is
valid for the source database. You can also add additional objects from the source.
You can also convert a custom relationship to an advanced relationship. To do this, create a custom
relationship, and then select Advanced Relationship from the menu above the Objects and Relationships
table. You can edit the relationship that Data Integration creates.
When you create an advanced relationship, the wizard converts any relationships that you defined to an SQL
statement that you can edit.
Custom queries
Create a custom query when you want to use a database source that you cannot configure using the single-
or multiple-object source options. You might create a custom query to perform a complicated join of multiple
tables or to reduce the number of fields that enter the data flow in a very large source.
To use a custom query as a source, select Query as the source type, and then click Define Query. When you
define the query, use SQL that is valid for the source database. You can use database-specific functions in
the query.
You can also use a custom query as a lookup source. For information about using a custom query in a
Lookup transformation, see “Custom queries” on page 239.
When you create a custom query, enter an SQL SELECT statement to select the source columns you want to
use. Data Integration uses the SQL statement to retrieve source column information. You can edit the
datatype, precision, or scale of each column before you save the custom query.
For example, you might create a custom query based on a TRANSACTIONS table that includes transactions from 2016. Assuming that the table contains a TRANSACTION_DATE column, the SQL statement might look like the following statement:
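SELECT TRANSACTION_ID, CUSTOMER_ID, AMOUNT, TRANSACTION_DATE
FROM TRANSACTIONS
WHERE TRANSACTION_DATE >= '2016-01-01' AND TRANSACTION_DATE < '2017-01-01'
-- Illustrative only. The column names and the date literal format depend on your table and source database.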
Data Integration ensures that custom query column names are unique. If an SQL statement returns a
duplicate column name, Data Integration adds a number to the duplicate column name as follows:
<column_name><number>
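For example, the following hypothetical query joins CUSTOMERS and ORDERS tables and returns two columns named CUSTOMER_ID. Data Integration might expose the second column under a name such as CUSTOMER_ID1:
SELECT c.CUSTOMER_ID, o.CUSTOMER_ID, o.ORDER_TOTAL
FROM CUSTOMERS c
INNER JOIN ORDERS o ON c.CUSTOMER_ID = o.CUSTOMER_ID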
When you change a custom query in a saved mapping, at design time Data Integration replaces the field
metadata with metadata using the revised query. Typically, this is the desired behavior. However, if the
mapping uses a relational source and you want to retain the original metadata, use the Retain existing field
metadata option. When you use this option, Data Integration doesn't refresh the field metadata during design
time. Data Integration maps the existing fields with the fields from the revised query at run time. Fields that
can't be mapped will cause run time failure.
Tip: Test the SQL statement you want to use on the source database before you create a custom query. Data
Integration does not display specific error messages for invalid SQL statements.
Configure the query options on the Source tab of the Source transformation. Expand the Query Options
section, and configure the filter and sort conditions.
Filter
Filter source data to limit the amount of source data that enters the data flow.
When you configure a filter, you select the source field and configure the operator and value to use in the
filter. When you configure more than one filter, the task applies the filter expressions in the listed order.
You can use a parameter for a filter expression and define the filter expression in the task.
You can also configure an advanced filter to create a filter expression using the expression editor. You
can use an input parameter for one of the fields, to be selected when the task runs. You can reuse the
same parameter in an Expression transformation to create a field expression and in the Target
transformation.
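For example, assuming hypothetical STATE and SALES source fields, an advanced filter expression might look like the following expression:
STATE = 'CA' AND SALES > 1000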
Sort
You can sort source data to provide sorted data to the mapping. For example, you can improve task
performance when you provide sorted data to an Aggregator transformation that uses sorted data.
When you sort data, you select one or more source fields. When you select more than one source field,
the task sorts the fields in the listed order. Data in each field is sorted in ascending order.
You can use parameters for the sort fields and define the sort fields in the task.
Web service sources
Data that comes from a web service typically has a hierarchical structure. For example, when you use a Workday v2 source connection, the data passes as XML with a hierarchical structure.
When you select a web service connection for a Source transformation, you configure the web service operation, customize the request message if needed, and map the hierarchical response to a relational structure.
For example, you want to include worker information from Workday in a mapping with a relational database
target. You create a Source transformation and select the Workday connection. You select the Get_Workers
operation, which pulls the worker data in a defined XML structure. You define an advanced filter so that only
name and contact information enters the data flow. You define a relational structure for the worker data and
then map the fields to fields in the target database.
When you define the source properties for a web service connection, you select the web service operation.
The available operations are determined by the connection. For example, for a Workday connection,
Get_Workers is an operation.
The request message is in XML format. To customize the request message, you can begin with a template
that includes the necessary formatting for the message. The request message template shows the contents
for the selected operation.
Copy and paste the template into the Request Message editor pane and then revise the message.
You can parameterize the request message using in-out parameters. For example, instead of using specific
Effective_From and Effective_Through dates in the message, you can use $$Effective_From and $
$Effective_Through parameters. You need to create the in-out parameters in the Parameters panel before you
can use them in the request message.
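For example, a fragment of a request message that uses the in-out parameters instead of literal dates might look like the following lines. The element names are illustrative and depend on the operation that you select:
<Effective_From>$$Effective_From</Effective_From>
<Effective_Through>$$Effective_Through</Effective_Through>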
For more information about in-out parameters, see the "Parameters" section in Mappings.
Be sure that you use well-formed XML formatting in the request message. You can validate the message to be sure that the XML matches the structure that the operation expects.
The Field Mapping tab shows the hierarchical structure of the data that comes from the source.
When you select fields in the Response area, the fields appear in the Output Fields area in a relational structure with generated primary keys and foreign keys. For example, in the Response Fields area you select First_Name and Last_Name, and then you select Email_Address, which is located under a different parent in the hierarchy.
Consider cardinality when you map response fields to the relational structure. Cardinality imposes
constraints on the number of times a field or group can occur at a specific point in the XML structure. A
cardinality of 0-many means the field or group can have zero to many occurrences. A cardinality of 1-1
means a field or group is required and can only occur once.
If you map a field with 0-1 or 1-1 cardinality, the first parent node that has 0 to more than 1 cardinality is also
mapped. If a parent group with 0 to more than 1 cardinality does not exist, the system creates a group. For
example, if you map Email_Comment, which has cardinality of 0-1, the Email_Address_Data group, which has
cardinality of 0-many, is automatically mapped.
Packed fields
You can pack fields to reduce the number of output groups for a request message. You can mark fields to be
packed when you configure field mapping. When you run the mapping task, the task packs the element and
its children into a single XML string.
Fields can come from the source already marked for packing. The Pack icon displays next to elements
marked to be packed. To pack a field, click the Pack icon, as shown in the following image:
You can use XPath expressions to mark multiple fields for packing or unpacking. In the Response Fields area,
click the arrow and select Mark Packed Structures, as shown in the following image:
In the following image, all fields that have ID as a child are marked to pack:
Partitions
If a mapping task processes large data sets or includes transformations that perform complicated
calculations, the task can take a long time to process. When you use multiple partitions, the mapping task
divides data into partitions and processes the partitions concurrently, which can optimize performance. Not
all source types support partitioning.
Enable partitioning when you configure the Source transformation in the Mapping Designer. When you
configure partitions in the Source transformation, partitioning occurs throughout the mapping.
To enable partitioning for a source, select a partitioning method on the Partitions tab. The partitioning
methods that you can select vary based on the source type. For more information about partitioning different
types of sources, see the help for the appropriate connector.
You can select one of the following partitioning methods based on the source type:
None
The mapping task processes all data in a single partition. This is the default option.
Fixed
The mapping task distributes rows of data based on the number of partitions that you specify. You can
specify up to 64 partitions.
Use this method for a source type that does not allow key range partitioning such as a flat file source, or
when the mapping includes a transformation that does not support key range partitioning.
Consider the number of records to be passed in the mapping to determine an appropriate number of
partitions for the mapping. For a small number of records, partitioning might not be advantageous.
If the mapping includes multiple sources, specify the same number of partitions for each source.
Key range
The mapping task distributes rows of data based on a field that you define as a partition key. You select
one field in the source as the partition key, and then you define a range of values for the partition key.
You can use this method for tabular sources such as relational, Google BigQuery, and JDBC V2 sources.
The partition key can be one of the following data types:
• String
• Any type of number data type. However, you cannot use decimals in key range values.
• Date/time type. Use the following format: MM/DD/YYYY HH24:MI:SS
If the mapping includes multiple sources, use the same number of key ranges for each source.
Pass through
The mapping task processes data without redistributing rows among partitions. All rows in a single
partition stay in the partition. Choose pass-through partitioning when you want to create additional
partitions to improve performance, but do not want to change the distribution of data across partitions.
You can use this method for sources such as Amazon S3, Netezza, and Teradata.
Dynamic
The mapping task determines the optimal number of partitions to create at runtime based on the source
size.
When you configure partitions, be sure to save and run the mapping in the Mapping Designer to validate the
partition settings.
Consider the following rules and guidelines when you configure partitioning:
• Consider the types of transformations in the mapping and the order in which transformations appear so that you do not get unexpected results. You can partition a mapping if the mapping task can maintain data consistency when it processes the partitioned data.
• For flat file partitioning, session performance is optimal with large source files. The load may be
unbalanced if the amount of input data is small.
• When a Sequence Generator transformation is in a mapping with partitioning enabled, ensure that you set
up caching in the Sequence Generator transformation. Otherwise, the sequence numbers the task
generates for each partition are not consecutive.
• Sequence numbers generated by Normalizer and Sequence Generator transformations might not be
sequential for a partitioned source, but they are unique.
• When a Sorter transformation is in a mapping with partitioning enabled, the task sorts data in each
partition separately.
• A Sorter transformation must be placed before any Joiner transformation or Aggregator transformation
that is configured to use sorted data.
• You cannot use in-out parameters for key range values.
• If your mapping has more than eight partitions, mapping task performance might degrade. You can
configure the Buffer Block Size and DTM Buffer Size advanced properties in the mapping task to improve
performance.
• On Linux, if a target table name includes a unicode character, you need to set the default locale to UTF-8
to support multibyte data. To set the default locale to UTF-8, see the following examples:
- For bash and related UNIX shells:
export LC_ALL=en_US.UTF-8
- For csh and related UNIX shells:
setenv LC_ALL en_US.UTF-8
Partitioning examples
The following examples show how you can configure partitioning in a mapping.
On the Partitions tab for the Source transformation, you select fixed partitioning and enter the number of
partitions, as shown in the following image:
On the Partitions tab for the Source transformation, you select key range partitioning and choose the
BILLINGPOSTALCODE field as the partition key. You add three key ranges to create three partitions, as shown
in the following image:
Note that for the first partition, you leave the start value blank for the minimum value. In the last partition, you
leave the end value blank for the maximum value.
Using these values, records with a postal code of 0 up to 30000 are processed in partition #1, records with a
postal code of 30001 to 50000 are processed in partition #2, and records with a postal code of 50001 or
higher are processed in partition #3.
After you configure the mapping, you save and run the mapping to validate the partitions.
You can use the hierarchical fields as pass-through fields to convert data from one complex file format to
another. For example, you can read hierarchical data from an Avro source and write the data to a JSON
target. You can also use the hierarchical fields and their child fields in expressions and conditions in
downstream transformations. For information about accessing child fields, see the Function Reference.
You can use hierarchical fields in the following transformations:
• Target
• Aggregator
• Expression
• Filter
• Hierarchy Processor
• Joiner
• Rank
• Router
• Sequence Generator
• Sorter
Consider the following guidelines when you read hierarchical data:
• You must use an Amazon S3 V2 or Azure Data Lake Storage Gen2 connection to read hierarchical data. For more information, see the help for the appropriate connector.
• You cannot use a parameter for the source connection or the source object.
• If hierarchical fields contain child fields with decimal data types, the elastic mapping runs using low
precision.
• The transformation sets the precision and scale based on the values in the first row of data. Note that this
first row is sometimes referred to as row 0.
• To avoid data truncation, increase the precision and scale in the first row of data. Also ensure that the
first row does not include null values.
Source fields
Configuration options vary based on the connection type. For most connection types, you can add and
remove source fields, configure how the fields are displayed, edit field metadata, and restore original fields
from the source object. For some connection types, you can restore original fields from the source object
only. For more information about configuring source fields, see the help for the appropriate connector.
If you are using a file list as the source, and you want to identify the source for each row, add the source
file name to the field list. You can pass this information to the target table.
To add the source file name to each row, enable the Add Currently Processed Filename Field option.
When you enable this option, Data Integration adds the CurrentlyProcessedFileName field to the Fields
table. The Add Currently Processed Filename Field option is visible for file sources.
When you enable or disable this option, Data Integration prompts you to synchronize fields with the
source object. You can synchronize all fields, synchronize new fields only, or skip the synchronization.
If field metadata changes after a mapping is saved, Data Integration uses the updated field metadata
when you run the mapping. Typically, this is the desired behavior. However, if the mapping uses a native
flat file connection and you want to retain the metadata used at design time, enable the Retain existing
fields at runtime option. When you enable this option, Data Integration mapping tasks will use the field
metadata that was used when you created the mapping.
You can add fields to a mapping source. Add a field to retrieve a field from the source object that is not
displayed in the list. To add a field, click Add Field, and then enter the field name, type, precision, and
scale.
You can also remove fields that you do not want to use in the mapping. To remove fields, select the
fields that you want to remove, and then click Delete.
You can display source fields in native order, ascending order, or descending order. To change the sort
order, click Sort, and select the appropriate sort order.
To change the display option for field names, select Options > Use Technical Field Names or Options >
Use Labels.
You can edit the metadata for a field. You might edit metadata to change information that is incorrectly
inferred. When you edit metadata, you can change the name, native type, native precision, and native
scale, if applicable for the data type. For some source types, you can also change the transformation
data type in the Type column.
When you change the metadata for a field, avoid making changes that can cause errors when you run the
task. For example, you can usually increase the native precision or native scale of a field without causing
errors. But if you reduce the precision of a field, you might cause the truncation of data.
To restore the original fields from the source object, enable the Synchronize option. When you
synchronize fields, Data Integration restores deleted source fields and adds fields that are new to the
source. Data Integration removes any added fields that do not have corresponding fields in the source
object.
Data Integration updates the metadata for existing source fields based on whether you synchronize all
fields or synchronize new fields only. When you synchronize all fields, Data Integration replaces any field
metadata that you edited with the field metadata from the source object. When you synchronize new
fields only, Data Integration retains the metadata for any existing source field. Data Integration does not
revert changes that you made to the Name field.
In elastic mappings, hierarchical data types such as array, map, and struct are assigned those native types.
For example, a map field in an Amazon S3 source might have the native data type "map (string_integer)." You
cannot edit the metadata for array, map, or struct fields.
In non-elastic mappings, Data Integration flattens complex hierarchical data types into native string data
types with precision up to 4000 characters. Some native data types come from the connector, and others
come from the parser that Data Integration uses when it reads the source data. Parser data types are
prefixed with the format type. For example, in an Amazon S3 source with the Avro format, a map field that
comes from the parser has the native data type avro_string. You can change the native data type for the
connector and parser fields.
To change the native data type, edit the metadata for the source, and select the appropriate data type in the
Native Type column.
When you change the native data type, you cannot change a non-parser data type to a parser data type. For
example, in an Amazon S3 source, Data Integration sets the native data type for the FileName field to string.
You can change the native data type to nstring but not to avro_string. Similarly, you cannot change a parser
data type to a non-parser data type.
For more information about editing native data types in complex file sources, see the help for the appropriate
connector.
When Data Integration writes data to a target, it converts the transformation data types to the comparable native data types. When you edit source metadata, you can sometimes change the transformation data type for a field.
You can change the transformation data type for connectors in which the native data type has multiple
corresponding transformation data types. For example, in a Kafka source, you can map the native data type
binary to the transformation data type binary or string.
To change the transformation data type, edit the metadata for the source, and select the appropriate
transformation data type in the Type column.
When you edit the transformation data type for a field, Data Integration updates the data type for the field in
the downstream transformations. It also updates the data type for the field in the target if the target is
created at runtime. If the mapping contains an existing target, you might need to edit the field metadata in
the target to ensure that the data types are compatible.
For more information about editing transformation data types for different source types, see the help for the
appropriate connector.
Target transformation
Use the Target transformation to define the target connection and target object for the mapping. You can use
one or more Target transformations in a mapping.
Based on the connection type, you can define advanced target options, specify to use a new or existing target
object, or configure update columns for the target. The target options that appear also depend on the
connection type that you select. For example, when you select a Salesforce connection, you can configure
success and error log details.
You can use file, database, and Informatica Cloud Data Integration connections in the Target transformation.
If you create an elastic mapping, refer to Administrator in the Administrator help for information about
supported connectors.
Target example
You might work with a flat file target in a mapping that reads Salesforce user account data but excludes user
preference data. A Source transformation reads data from the Account object and the related User object.
The Target transformation uses a flat file connection that writes to the following directory:
C:\UserAccountData. The default All Fields rule includes all incoming fields. You create a Named Field rule
to exclude the unnecessary user preferences fields.
When you select Create New Target at Runtime, you enter the following name for the target file:
SF_UserAccount_%d%m%y.csv.
The mapping task creates a target file named SF_UserAccount_291116.csv in the C:\UserAccountData
directory when the task runs on November 29, 2016. The target file includes all fields from the Salesforce
Account and User objects except for the user preferences fields specified in the Named Fields rule.
Target object
Select the target object for the Target transformation on the Target tab of the Properties panel.
The properties that you configure for the target object vary based on the connection type and the mapping
type. Your organization's licenses can also determine the target properties that appear when the Target
transformation is part of a mapplet.
The following image shows the Target tab for a flat file target:
1. Target details where you configure the target connection, target type, target object, and target operation.
2. Select the target object from the mapping inventory.
In the Details area, select the target connection, target type, and target object. You can create a new
connection. For flat file and relational targets, you can also create the target object at run time.
If you use an existing target object, select the target from a list of target objects and link the target to the
upstream transformation. If the target table changes, you must update the Target transformation to
match it. If the physical target and the Target transformation do not match, the mapping fails.
If you use an existing target object for a flat file target, the existing target is overwritten when you run the
mapping task.
If your organization administrator has configured Enterprise Data Catalog integration properties, and you
have added objects to the mapping from the Data Catalog page, you can select the target object from
the Inventory panel. If your organization administrator has not configured Enterprise Data Catalog
integration properties or you have not performed data catalog discovery, the Inventory panel is empty.
For more information about data catalog discovery, see Mappings.
Use a parameter.
You can use input parameters to define the target connection and target object when you run the
mapping task. For more information about parameters, see Mappings.
File targets
File targets include flat files and FTP/SFTP files. When you configure a file target, you specify the connection,
target type, and target object.
For FTP/SFTP targets, you can select an existing target object. For flat file targets, you can select an existing
target object or create a new target at run time.
If you create a flat file target at run time, you can specify a static or dynamic file name.
You can configure the following formatting property on the Target tab:
Formatting Options
Flat file format options. Opens the Formatting Options dialog box to define the format of the file. You can choose either a delimited or fixed-width file type. Default is delimited.
To write to a delimited flat file type, configure the following file format options:
- Delimiter. Delimiter character. Can be a comma, tab character, colon, semicolon, nonprintable control character, or a single-byte or multibyte character that you specify.
- Text Qualifier. Character to qualify text.
- Escape character. Escape character.
- Field labels. Determines if the mapping task generates field labels or imports labels from the source file.
- First data row. The first row of data. The task starts the read at the row number that you enter.
You can use a tab, space, or any printable special character as a delimiter. The delimiter can have a maximum of 10 characters. The delimiter must be different from the escape character and text qualifier.
To write to a fixed-width flat file type, select the fixed-width file format to use. If you do not have a fixed-width file format, click New > Components > Fixed Width File Format to create one.
The following list describes the advanced properties for flat file targets:
Forward Rejected Rows
Causes the mapping task to forward rejected rows to the reject file.
If you do not forward rejected rows, the mapping task drops rejected rows and writes them to the session log.
If you enable row error handling, the mapping task writes the rejected rows and the dropped rows to the row error logs. It does not generate a reject file. If you want to write the dropped rows to the session log in addition to the row error logs, you can enable verbose data tracing.
Thousand Separator
Thousand separator character. Can be none, comma, or period. Cannot be the same as the decimal separator or the delimiter character. The field type must be Number. You might also need to update the field precision and scale. Default is None.
Decimal Separator
Decimal character. Can be a comma or period. Cannot be the same as the thousand separator or delimiter character. The field type must be Number. You might also need to update the field precision and scale. Default is Period.
Append if Exists
Appends the output data to the target files and reject files for each partition. You cannot use this option for FTP/SFTP target files.
If you do not select this option, the mapping task truncates each target file before writing the output data to the target file. If the file does not exist, the mapping task creates it.
Header Options
Creates a header row in the file target. You can choose the following options:
- No Header. Do not create a header row in the flat file target.
- Output Field Names. Create a header row in the file target with the output field names.
- Use header command output. Use the command in the Header Command field to generate a header row. For example, you can use a command to add the date to a header row for the file target.
Default is No Header.
Header Command
Command used to generate the header row in the file target. For example, you can use a command to add the date to a header row for the file target.
Footer Command
Command used to generate the footer row in the file target.
Output Type
Type of target for the task. Select File to write the target data to a file target. Select Command to output data to a command. You cannot select Command for FTP/SFTP target connections.
Output File Name
File name or file name and path of the output file. By default, the mapping task names output files after the target object.
Output File Directory
Name of the output directory for a flat file target. By default, the mapping task writes output files to the target connection directory. You can also use an input parameter to specify the target file directory.
If you use the service process variable directory $PMTargetFileDir, the task writes target files to the configured path for the system variable. To find the configured path of a system variable, see the pmrdtm.cfg file located at the following directory:
<Secure Agent installation directory>\apps\Data_Integration_Server\<Data Integration Server version>\ICS\main\bin\rdtm
You can also find the configured path for the $PMTargetFileDir variable in the Data Integration Server system configuration details in Administrator.
Reject File Directory
Directory path to write the reject file. By default, the mapping task writes all reject files to the following service process variable directory:
$PMBadFileDir/<task federated ID>
If you specify both the directory and file name in the Reject File Name field, clear this field. The mapping task concatenates this field with the Reject File Name field when it runs the task.
Reject File Name
File name, or file name and path of the reject file. By default, the mapping task names the reject file after the target object name: <target name>.bad.
The mapping task concatenates this field with the Reject File Directory field when it runs the task. For example, if you have C:\reject_file\ in the Reject File Directory field, and enter filename.bad in the Reject File Name field, the mapping task writes rejected rows to C:\reject_file\filename.bad.
If you need to edit target object metadata, you can edit it in the Source transformation.
You cannot link the target fields to the upstream transformation. If you want to reduce the number of unused
fields in the target, configure field rules in the Target transformation or in the upstream transformations.
When you create a flat file target at run time, the mapping task creates the physical target the first time the
mapping runs based on the fields from the upstream transformation. In subsequent runs, if the target file
name does not change, the mapping task overwrites the target file. If the file name changes between
mapping runs, the mapping task creates a new target. Data Integration creates the target in the default
connection directory.
You can configure a static or dynamic file name for the target file. A static file name can include a time
stamp. A dynamic file name uses an expression to generate the file name when the mapping task runs.
To specify a static file name, in the Target Object dialog box, enter the file name in the Static File Name field.
To include a time stamp, enable Handle Special Characters and add the time stamp characters to the file
name. For example, the file name MyTarget_%d-%m.csv includes the day and month in which the mapping
ran.
The following image shows the Target Object dialog box:
If you do not include a time stamp, the mapping task creates the target file the first time the task runs and
overwrites the file during subsequent runs.
If you append a time stamp to the target file name, the mapping task writes data to a new file when the time
stamp changes. For example, you enable special character handling, enter static file name MyTarget_%d-
%m.csv, and run the mapping task on January 15 and January 16. The mapping task creates the target files
MyTarget_15-01.csv and MyTarget_16-01.csv.
When you specify the file name for the target file, you include special characters based on Linux STRFTIME
function formats that the mapping task uses to include the time stamp information in the file name. The time
stamp is based on the organization's time zone.
The following list describes some common STRFTIME function formats that you might use:
%d Day of the month as a two-digit number (01-31).
%m Month as a two-digit number (01-12).
%y Year as a two-digit number.
%Y Year as a four-digit number.
%H Hour in 24-hour format (00-23).
%M Minute (00-59).
%S Second (00-59).
%p Either AM or PM.
You can use a dynamic file name to create a new target file every time the mapping task runs. For example,
the following expression creates a file called "OrdersOut_<system_timestamp_with_second_precision>.csv"
each time the mapping task runs:
'OrdersOut_'||To_Char(SYSDATE, 'YYYYMMDDHH24MISS')||'.csv'
You can also use a dynamic file name in a mapping that contains a Transaction Control transformation to
write data to a different target file each time a transaction boundary changes. For example, the following
expression can be used in a target that is downstream of the Transaction Control transformation to commit
data to a different target file every time the DEPT_ID field changes:
'Results_Dept_'||To_Char(DEPT_ID)||'.dat'
To specify a dynamic file name, in the Target Object dialog box, select Use a Dynamic File Name and enter
the file name expression in the expression editor.
The following image shows the Target Object dialog box with the Use a Dynamic File Name option enabled:
You can include incoming field names, constants, operators, built-in functions, and user-defined functions in
the target file name expression.
If you use an incoming field name in the file name expression, you can choose to exclude the field from the
target. When you enable the Exclude Dynamic File Name Field option, Data Integration does not write the
incoming field used in the expression to the target. Include only one incoming field in the expression. If you
include more than one incoming field, the expression is invalid.
To use more than one incoming field, add an Expression transformation directly before the Target
transformation. In the Expression transformation, configure a field to hold the expression that you want to
use as the file name. In the Target transformation, use this field as the expression in the dynamic file name.
For more information about creating expressions, see the Function Reference.
To create a flat file target at run time, perform the following steps:
1. On the Target tab of the Target transformation, select a flat file connection.
2. Set the target type to Single Object.
3. Click Select to select the target object.
4. In the Target Object dialog box, select Create New at Runtime.
Database targets
Database targets include relational databases such as Oracle, MySQL, and Microsoft SQL Server.
When you configure a Target transformation for a database target, you can write data to a single target table.
You can select an existing table or create the table at run time.
Ensure that the table and column names do not exceed 74 characters.
You can configure the following properties for a database target:
Operation
Target operation, either insert, update, upsert, delete, or data driven.
Truncate Target
Truncates the target object before inserting new rows. Applies to insert and data driven operations.
Enable Target Bulk Load
Uses the database bulk API to perform an insert operation. Use the bulk API to write large amounts of data to the database with a minimal number of API calls. Loading in bulk mode can improve performance, but it limits the ability to recover because no database logging occurs. Applies to insert operations.
Update Columns
The fields to use as temporary primary key columns when you update, upsert, or delete target data. When you select more than one update column, the mapping task uses the AND operator with the update columns to identify matching rows. Applies to update, upsert, delete, and data driven operations.
Data Driven Condition
Enables you to define expressions that flag rows for an insert, update, delete, or reject operation. For example, the following IIF statement flags a row for reject if the ID field is null. Otherwise, it flags the row for update:
IIF (ISNULL(ID), DD_REJECT, DD_UPDATE)
Applies to the data driven operation.
Forward Rejected Rows
Causes the mapping task to forward rejected rows to the reject file.
If you do not forward rejected rows, the mapping task drops rejected rows and writes them to the session log.
If you enable row error handling, the mapping task writes the rejected rows and the dropped rows to the row error logs. It does not generate a reject file. If you want to write the dropped rows to the session log in addition to the row error logs, you can enable verbose data tracing.
Pre SQL
SQL command to run against the target before reading data from the source. You can enter a command of up to 5000 characters.
Post SQL
SQL command to run against the target after writing data to the target. You can enter a command of up to 5000 characters.
Update Override
Overrides the default UPDATE statement for the target. Enter the update statement. Alternatively, click Configure to generate the default UPDATE statement, and then modify the default statement.
The UPDATE statement that you enter overrides the default UPDATE statement that Data Integration uses to update targets based on key columns. You can define an override UPDATE statement to update target tables based on non-key columns.
For more information about database target properties, see the help for the appropriate connector.
If you need to edit target object metadata, you can edit it in the Source transformation.
You cannot link the target fields to the upstream transformation. If you want to reduce the number of unused
fields in the target, configure field rules in the Target transformation or in the upstream transformations.
When you create a database target at run time, the mapping task creates the database table the first time the
mapping runs based on the fields from the upstream transformation.
In subsequent runs, the mapping task replaces the data in the target table that was created in the initial run.
Consequently, if you change the mapping after the initial run, in subsequent runs the target will not reflect
changes to the number of target fields and its metadata. To see the changes, you can either delete the
existing target before you run the mapping or change the name of the target.
If you create a relational target at run time, the target operation is always insert. You can choose to truncate
the target.
Data Integration does not convert Bigint data in mappings created after the Spring 2020 September release.
The mapping task uses an update column to update or upsert data in the target. When you select more than
one update column, the mapping task uses the AND operator with the update columns to identify matching
rows.
When you use a parameter for the target connection or target object, you can configure update columns in
the task.
Target update override
By default, Data Integration updates target tables based on key values. However, you can override the default
UPDATE statement for each target in a mapping. You might want to update the target based on non-key
columns.
You can enter a target update override for relational and ODBC connections. For more information, see the
help for the appropriate connector.
Override the UPDATE statement in the Target transformation advanced properties. Enter the target UPDATE
statement in the Update Override field. Alternatively, click Configure to generate the default UPDATE
statement and then modify the statement.
Because the target fields must match the target column names, the update statement includes the
keyword :TU to specify the fields in the target transformation. If you modify the UPDATE portion of the
statement, you must use :TU to specify fields.
When you override the default UPDATE statement, you must enter an SQL statement that is valid for the
database. Data Integration does not validate the syntax.
Example
A mapping passes the total sales for each salesperson to the T_SALES table.
Data Integration generates the following default UPDATE statement for the target T_SALES:
UPDATE
T_SALES
SET
EMP_NAME = :TU.EMP_NAME,
DATE_SHIPPED = :TU.DATE_SHIPPED,
TOTAL_SALES = :TU.TOTAL_SALES
WHERE
EMP_ID = :TU.EMP_ID
You want to override the WHERE clause to update records for employees named Mike Smith only. To do this,
you edit the WHERE clause as follows:
UPDATE
T_SALES
SET
DATE_SHIPPED = :TU.DATE_SHIPPED,
TOTAL_SALES = :TU.TOTAL_SALES
WHERE
:TU.EMP_NAME = EMP_NAME AND EMP_NAME = 'MIKE SMITH'
• If you use target update override, you must manually put all database reserved words in quotes.
• You cannot override the default UPDATE statement if a target column name contains any of the following
characters:
' , ( ) < > = + - * / \ t \ n \ 0 <space>
• If you update an individual row in the target table more than once, the database only has data from the
last update. If the mapping does not define an order for the result data, different runs of the mapping on
identical input data may result in different data in the target table.
• A WHERE clause that does not contain any column references updates all rows in the target table, or no rows in the target table, depending on the WHERE clause and the data from the mapping.
1. On the Target tab of the Target transformation, open the advanced properties.
2. Click Configure next to the Update Override field.
3. In the Update Override SQL Editor, click Generate SQL.
The default UPDATE statement appears.
4. Modify the UPDATE statement.
Tip: Click Format SQL to format the UPDATE statement for easier readability.
You can override the WHERE clause to include non-key columns. Enclose all database reserved words in
quotes.
5. Click OK.
You can map fields that are in a relational structure to request fields in a web service target to generate
hierarchical output.
When you select a web service connection for a Target transformation, you configure the web service operation and then map the incoming fields to the web service request.
When you define the target properties for a web service connection, you select the web service operation.
The available operations are determined by the connection. For example, for a Workday connection,
Update_Job_Posting is an operation.
On the Field Mapping tab for the Target transformation, the fields shown in the Target Fields area are shown
in a hierarchical structure. The target fields are determined by the request message structure of the
operation selected for the Target transformation.
Each source object displays as a group in the Input Fields area. You can select fields in the Input Fields area
to map the fields to the web service request. If the input fields include multiple input groups, map the groups
to the corresponding nodes in the web service request. You need to map all of the fields that are required in
the web service request.
If you map multiple source objects, you must assign primary and foreign keys to at least two groups. Ensure
that the source data is sorted on the primary key for the parent object, and sorted on the foreign key and
primary key for child objects.
To assign keys, click the key icon next to a field that you want to use as a primary or foreign key. In the Mark as Key dialog box, assign the field as a primary or foreign key and select the related group.
You can configure partition key fields and the partitioning method on the Partitions tab. The Partitions tab is
displayed for targets in elastic mappings.
For example, you create an elastic mapping that loads data to an Amazon S3 V2 target that you create at
runtime. The target is a partitioned Hive table that is backed by Avro data files. You want to write the data
files in directories that are partitioned based on the columns YEAR, MONTH, and DAY. Configure the fields
YEAR, MONTH, and DAY as partition keys.
Configure the fields to be used as partition keys in the Partition Fields area on the Partitions tab. You can
add, delete, and change the order of the partition key fields.
For more information about configuring partition key fields for different target types, see the help for the
appropriate connector.
Partitioning methods
If a mapping task loads large data sets, the task can take a long time to load data. When you use multiple
partitions, the mapping task divides data into partitions and loads the data in each partition concurrently,
which can optimize performance. Not all target types support partitioning.
If a target in an elastic mapping supports partitioning, you can select the partitioning method in the Parallel
Processing area on the Partitions tab. The partitioning methods that you can select vary based on the target
type. For more information about partitioning different types of targets, see the help for the appropriate
connector.
You can select one of the following partitioning methods based on the target type:
None
The mapping task loads all data in a single partition. This is the default option.
Fixed
The mapping task distributes rows of data based on the number of partitions that you specify. You can
specify up to 64 partitions.
Consider the number of records to be passed to the target to determine an appropriate number of target
partitions. For a small number of records, partitioning might not be advantageous.
Pass through
The mapping task processes data without redistributing rows among partitions. All rows in a single
partition stay in the partition. Choose pass-through partitioning when you want to create additional
partitions to improve performance, but do not want to change the distribution of data across partitions.
Dynamic
The mapping task determines the optimal number of partitions to create at runtime.
Writing hierarchical data in an elastic mapping
You can use a Target transformation in an elastic mapping to write hierarchical data to complex files, such as Avro, JSON, and Parquet files. Consider the following guidelines:
• You must use an Amazon S3 V2 or Azure Data Lake Storage Gen2 connection to write hierarchical data.
For more information, see the help for the appropriate connector.
• You must create a new target at run time.
• Do not use a parameter for the target connection or the target object.
Target fields
You can configure the target fields that you want to use in the data flow. You can add and remove target
fields, configure how the fields are displayed, edit field metadata, and restore original fields from the target
object.
Configure target fields on the Target Fields tab of the Properties panel.
You can add fields to a mapping target. To add a field, click Add Field, and then enter the field name,
type, precision, and scale.
You can also remove fields that you do not want to use in the mapping. To remove fields, select the
fields that you want to remove, and then click Delete.
You can display target fields in native order, ascending order, or descending order. To change the sort
order, click Sort, and select the appropriate sort order.
To change the display option for field names, select Options > Use Technical Field Names or Options >
Use Labels.
You can edit the metadata for a field. You might edit metadata to change information that is incorrectly
inferred. When you edit metadata, you can change the name, native type, native precision, and native
scale, if applicable for the data type.
To edit the name or metadata for one or more fields, click Options > Edit Metadata. When you edit
metadata, you can also display native names by label or technical field name. To change the display
option for native names, select Options > Show Technical Field Names or Options > Show Labels.
When you change the metadata for a field, avoid making changes that can cause errors when you run the
task. For example, you can usually increase the native precision or native scale of a field without causing
errors. But if you reduce the precision of a field, you might cause the truncation of data.
To restore the original fields from the target object, use the Synchronize option. When you synchronize
fields, Data Integration restores deleted target fields, reverts data type and precision changes, and adds
fields that are new to the target. Data Integration removes any added fields that do not have
corresponding fields in the target object.
For existing target fields, Data Integration replaces any metadata that you edited with the field metadata
from the target object. Data Integration does not revert changes that you made to the Name field.
The Field Mappings tab includes a list of incoming fields and a list of target fields.
Field map options
Method of mapping incoming fields to target fields. Select one of the following options:
• Manual. Manually link incoming fields to target fields. Selecting this option removes links for
automatically mapped fields. To map fields manually, drag a field from the incoming fields list and
position it next to the appropriate field in the target fields list. Or, you can map selected fields, unmap
selected fields, or clear all of the mappings using the Actions menu.
• Automatic. Automatically link fields with the same name. Use when all of the fields that you want to
link share the same name. You cannot manually link fields with this option.
• Completely Parameterized. Use a parameter to represent the field mapping. In the task, you can
configure all field mappings.
• Partially Parameterized. Configure links in the mapping that you want to enforce and use a parameter
to allow other fields to be mapped in the mapping task. Or, use a parameter to configure links in the
mapping, and allow all fields and links to display in the task for configuration.
For more information about field mapping parameters, see Mappings.
Options
You can configure how the fields display and which fields to display. To do so, click Options and select the display options that you want.
Automap
If you want Data Integration to automatically link fields with the same name and you also want to
manually map fields, select the Manual option and open the Automap menu.
• Exact Field Name. Data Integration matches fields of the same name.
• Smart Map. Data Integration matches fields with similar names. For example, if you have an incoming
field Cust_Name and a target field Customer_Name, Data Integration automatically links the Cust_Name
field with the Customer_Name field.
You can use both Exact Field Name and Smart Map in the same field mapping. For example, use Exact
Field Name to match fields with the same name and then use Smart Map to map fields with similar
names.
You can undo all automapped field mappings by clicking Automap > Undo Automap. To unmap a single
field, select the field to unmap and click Actions > Unmap.
Data Integration highlights newly mapped fields. For example, when you use Exact Field Name, Data
Integration highlights the mapped fields. If you then use Smart Map, Data Integration only highlights the
fields mapped using Smart Map.
c. For flat file targets, enter the name of the target file including the extension, for example, Accounts.csv.
If you want the file name to include a time stamp, select Handle Special Characters and add special
characters to the file name, for example, Accounts_%d%m%y%T.csv.
If you want to use a dynamic file name, select Use a Dynamic File Name and configure the file name
expression.
d. For relational targets, enter the table name.
e. Click OK.
8. To configure formatting options for flat file targets, click Formatting Options, and configure the
formatting options such as the delimiter character and text qualifier.
9. For relational targets, select the target operation and related properties such as whether to truncate the
table for Insert operations.
If you create a relational target at run time, the target operation is always Insert.
10. Specify advanced properties for the target, if required.
Advanced properties vary based on the connection type. For information about connector properties, see
the help for the appropriate connector.
11. Configure the target fields on the Target Fields tab.
You can edit field names and metadata, add fields, and delete unnecessary fields.
If you create the target at run time, you cannot configure target fields.
12. Map incoming fields to target fields on the Field Mapping tab.
If you create the target at run time, fields are mapped automatically.
For more information about field mapping, see “Target transformation field mappings” on page 79.
Aggregator transformation
Use the Aggregator transformation to perform aggregate calculations, such as averages and sums, on
groups of data.
When the mapping task performs aggregate calculations, the task stores data in groups in an aggregate
cache.
Group by fields
Use group by fields to define how to group data for aggregate expressions. Configure group by fields on the
Group By tab of the Properties panel.
To define a group for the aggregate expression, select the appropriate input, input/output, output, and
variable fields in the Aggregator transformation. You can select multiple group by fields to create a new
group for each unique combination. Data Integration then performs the defined aggregation for each group.
When you group values, Data Integration produces one row for each group. If you do not group values, Data
Integration returns one row for all input rows. Data Integration typically returns the last row of each group, or
the last row received, with the result of the aggregation. You can specify a particular row to be returned. For
example, if you use the FIRST aggregator function, Data Integration returns the first row.
When you select multiple group by fields in the Aggregator transformation, Data Integration uses field order
to determine the order by which it groups. The group order can affect the results. Order the group by fields to
ensure the appropriate grouping. You can change the field order after you select the fields in the group.
For example, you create aggregate fields called TOTAL_QTY and TOTAL_PRICE to store the total quantity and the total price for each item by store, and you define an aggregate expression for each field.
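A representative pair of expressions, assuming QTY and PRICE are the incoming quantity and unit price fields (the exact expressions depend on your data):
TOTAL_QTY = SUM( QTY )
TOTAL_PRICE = SUM( QTY * PRICE )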
The source rows contain the fields STORE_ID, ITEM, QTY, and PRICE.
Data Integration performs the aggregate calculations on the following unique groups:
STORE_ID ITEM
101 'battery'
101 'AAA'
201 'battery'
301 'battery'
Data Integration returns the store ID, item name, total quantity for each item by store, and the total price for each item by store.
Sorted data
To improve job performance, you can configure an Aggregator transformation to use sorted data. To
configure the Aggregator transformation to process sorted data, on the Advanced tab, select Sorted Input.
When you configure an Aggregator transformation to use sorted data, you must sort data earlier in the data
flow. If the Aggregator transformation processes data from a relational database, you must also ensure that
the sort keys in the source are unique. If the data is not presorted correctly or the sort keys are not unique,
you can receive unexpected results or errors when you run the mapping task.
When the mapping task performs aggregate calculations on sorted data, the task caches sequential rows of
the same group. When the task reads data for a different group, it performs aggregate calculations for the
cached group, and then continues with the next group.
For example, an Aggregator transformation has the STORE_ID and ITEM group by fields, with the sorted input
option selected. When you pass the following data through the Aggregator, the mapping task performs an
aggregation for the three rows in the 101/battery group as soon as it finds the new group, 201/battery:
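The sorted rows might look like the following; the QTY values are illustrative:
STORE_ID ITEM QTY
101 'battery' 2
101 'battery' 1
101 'battery' 3
201 'battery' 4
201 'battery' 2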
When you do not use sorted data, the mapping task performs aggregate calculations after it reads all data.
Aggregate fields
Use an aggregate field to define aggregate calculations.
When you configure an Aggregator transformation, create an aggregate field for the output of each
calculation that you want to use in the data flow. You can use aggregate functions in aggregate fields. You
can also use conditional clauses and nonaggregate functions.
Configure aggregate fields on the Aggregate tab of the Properties panel. When you configure an aggregate
field, you define the field name, data type, precision, scale, and optional description. The description can
contain up to 4000 characters. You also define the calculations that you want to perform.
When you configure aggregate fields, you can use variable fields for calculations that you want to use within
the transformation. You can also include macros in aggregate and variable fields.
In an elastic mapping, the output is NULL if the Group by field returns a single row and the aggregate
expression contains the STDDEV and VARIANCE functions. This is because Data Integration uses Spark 3.2.
To get an output value of 0, set the spark.sql.legacy.statisticalAggregate session property to true in the
mapping task.
Aggregate functions
You can use aggregate functions in expressions in aggregate fields.
For example, the following expression sums sales and returns the highest number:
MAX( SUM( SALES ))
You can include multiple single-level or multiple nested functions in different output fields in an Aggregator
transformation. You cannot include both single-level and nested functions in an Aggregator transformation.
You cannot nest aggregate functions in an elastic mapping.
Conditional clauses
Use conditional clauses in the aggregate expression to reduce the number of rows used in the aggregation.
The conditional clause can be any clause that evaluates to TRUE or FALSE.
For example, use the following expression to calculate the total commissions of employees who exceeded
their quarterly quota:
SUM( COMMISSION, COMMISSION > QUOTA )
Advanced properties
You can configure advanced properties for an Aggregator transformation. The advanced properties control
settings such as the tracing level for session log messages, whether the transformation uses sorted input,
cache settings, and whether the transformation is optional or required.
Note: The properties that appear in the transformation depend on the mapping type.
Property Description
Tracing Level Detail level of error and status messages that Data Integration writes in the session log. You can
choose terse, normal, verbose initialization, or verbose data. Default is normal.
Sorted Input Indicates that input data is presorted by groups. Select this option only if the mapping passes
sorted data to the Aggregator transformation.
Cache Directory Local directory where Data Integration creates the index and data cache files.
By default, Data Integration uses the directory entered in the Secure Agent $PMCacheDir
property for the Data Integration Server. If you enter a new directory, make sure that the
directory exists and contains enough disk space for the aggregate caches.
Data Cache Size Data cache size for the transformation. Select one of the following options:
- Auto. Data Integration sets the cache size automatically. If you select Auto, you can also
configure a maximum amount of memory for Data Integration to allocate to the cache.
- Value. Enter the cache size in bytes.
Default is Auto.
Index Cache Size Index cache size for the transformation. Select one of the following options:
- Auto. Data Integration sets the cache size automatically. If you select Auto, you can also
configure a maximum amount of memory for Data Integration to allocate to the cache.
- Value. Enter the cache size in bytes.
Default is Auto.
Transformation Scope Specifies how Data Integration applies the transformation logic to incoming data. Select one of the following options:
- Transaction. Applies the transformation logic to all rows in a transaction. Choose Transaction when a row of data depends on all rows in the same transaction, but does not depend on rows in other transactions.
- All Input. Applies the transformation logic on all incoming data. When you choose All Input, Data Integration drops incoming transaction boundaries. Choose All Input when a row of data depends on all rows in the source.
Optional Determines whether the transformation is optional. If a transformation is optional and there are
no incoming fields, the mapping task can run and the data can go through another branch in the
data flow. If a transformation is required and there are no incoming fields, the task fails.
For example, you configure a parameter for the source connection. In one branch of the data
flow, you add a transformation with a field rule so that only Date/Time data enters the
transformation, and you specify that the transformation is optional. When you configure the
mapping task, you select a source that does not have Date/Time data. The mapping task ignores
the branch with the optional transformation, and the data flow continues through another branch
of the mapping.
You can use hierarchical fields as pass-through fields. You can also use complex operators to access a
primitive child field and use the child field to perform an aggregate calculation. For example, you can use the
dot operator to access an integer in a struct field and pass the integer as an argument to the SUM function.
For more information about complex operators, see Function Reference.
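For example, the following expression is a sketch of this pattern, where order is assumed to be a struct field that contains an integer element named quantity:
SUM( order.quantity )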
Cleanse transformation
The Cleanse transformation adds a cleanse asset that you created in Data Quality to a mapping. A cleanse
asset is a set of data transformation operations that standardize the form and content of your data.
You add a single cleanse asset to a Cleanse transformation. You can map one or more input fields to a
cleanse asset.
A Cleanse transformation is similar to a Mapplet transformation, as it allows you to add data transformation
logic that you designed elsewhere to a mapping. Like mapplets, cleanse assets are reusable assets.
A Cleanse transformation does not display the logic that the cleanse asset contains or allow you to edit the
cleanse asset. To edit the cleanse asset, open it in Data Quality.
The steps to configure the transformation depend on the number of inputs that the cleanse asset specifies.
On the Cleanse tab, select the cleanse asset that you want to include and add the input fields.
• If you include a cleanse asset that contains multiple inputs, the Add option does not appear on the
Cleanse tab. Use the Field Mapping tab options to connect the transformation input fields to the
asset.
3. On the Incoming Fields tab, verify the incoming fields.
By default, the transformation inherits all incoming fields from any connected upstream object in the
mapping. You can define a field rule to limit or rename the incoming fields.
4. On the Field Mapping tab, connect one or more input fields on the transformation to the asset. If the
cleanse asset specifies a single input, Data Integration automatically links the incoming fields with the
target field. If the cleanse asset specifies multiple inputs, link the incoming fields to the fields manually.
The cleanse asset input names might reflect the names of the transformation input fields. If so, you can
use the Automap options to connect the fields.
You can add multiple cleanse instances to an asset in Data Quality, so that a Cleanse transformation can
apply multiple cleanse operations to different sets of input fields with a single asset.
If you select a cleanse asset that specifies more than one input field, you must understand the asset
configuration and know the types of cleanse operation that the asset will perform on each input. The Cleanse
transformation does not identify the instance on which each asset input originates. If you define multiple
instances on the asset in Data Quality, make a record of the instances to which each input belongs. Use the
record as a guide when you connect the asset inputs to the transformation input fields.
To synchronize the asset versions, open the transformation in the mapping and select the transformation
name in the properties panel. For example, in a Cleanse transformation select Cleanse in the properties
panel. If synchronization is necessary, Data Integration displays a message that prompts you to synchronize
the assets.
When you synchronize the asset versions, Data Integration may prompt you to propagate the field attributes
of the current asset to other assets in the mapping. Data Integration may display the prompt if the current
asset originates in an earlier version of Data Quality.
You select the assets to which to propagate the field attributes. You can select multiple assets and
propagate the attributes in a single operation.
Note: Field propagation occurs by default for assets that you create in the current version of Data Quality.
Field map options
Method of mapping incoming fields to the cleanse asset input fields. Select one of the following options:
• Manual. Manually link an incoming field to an asset input field. Removes links for any automatically mapped field. Manual is the default option when you select an asset that contains multiple inputs.
• Automatic. Automatically link fields with the same name. You cannot manually link fields with this option. Automatic is the default option when you select an asset that contains a single input.
Parameter
Select the parameter to use for the field mapping, or create a new parameter. This option appears when
you select Completely Parameterized or Partially Parameterized as the field map option. The parameter
must be of type field mapping.
Do not use the same field mapping parameter in more than one Cleanse transformation in a single
mapping.
Options
Controls how fields are displayed in the Incoming Fields and Target Fields lists.
• The fields that appear. You can show all fields, unmapped fields, or mapped fields.
• Field names. You can use technical field names or labels.
Automap
Links fields with matching names. Allows you to link matching fields and to manually configure other
field mappings. The Automap options appear when you select the Manual or Partially Parameterized
field map option.
• Exact Field Name. Data Integration matches fields of the same name.
• Smart Map. Data Integration matches fields with similar names. For example, if you have an incoming
field Cust_Name and a target field Customer_Name, Data Integration automatically links the Cust_Name
field with the Customer_Name field.
You can use both Exact Field Name and Smart Map in the same field mapping. For example, use Exact
Field Name to match fields with the same name and then use Smart Map to map fields with similar
names.
You can undo all automapped field mappings by clicking Automap > Undo Automap.
To unmap a single field, select the field to unmap and click Actions > Unmap on the context menu for the
field. To unmap one or more fields that you selected, click Unmap Selected on the Target Fields context
menu.
To clear all field mappings from the transformation, click Clear Mapping on the Target Fields context
menu.
The Output Fields tab displays the name, type, precision, and scale for each output field.
The output field name is the name of the target field appended by the name of the Cleanse
transformation. For example, if you have a target field Person_name and you entered the name for the
Cleanse transformation as Cleanse_TX, the operation returns the output field name as
Person_name_Cleanse_TX.
The transformation creates the cleansed fields and additionally any merged output field that you or
another user configured in the asset.
The transformation adds the suffix _Cleansed to the name of each target field. For example,
if you have a target field FirstName, the operation returns the output field name as FirstName_Cleansed.
Note: If you created the cleanse asset in the current version of Data Quality, the transformation applies
the suffix _Cleansed to all output fields. If you created the asset in an older version of Data Quality, the
transformation may apply a different naming policy to the outputs. See Rules and guidelines for output
field names for more information.
The merged output field names are the names of the merged fields that you configured in the asset.
You cannot edit the output field properties in the Cleanse transformation. To edit the properties, open the
cleanse asset in Data Quality.
When you use a cleanse asset with multiple outputs in a transformation, verify that the field names meet your
mapping requirements. You may need to map the output fields again to ensure that the transformation
defines the field names in the manner that you expect.
Consider the following rules and guidelines when you review the output field names:
• Some older cleanse assets supported a single output field in a single cleanse instance. If you add outputs
to such an asset and you do not update the original instance, the Cleanse transformation applies the
_Cleansed suffix to the newer outputs only. The transformation does not apply any suffix to the output in
the original instance.
The Cleanse transformation applies this policy regardless of when you add the asset to the
transformation. The transformation deletes a suffix from an older output field when you add an output in
another instance and you do not change the original instance.
• If you add an output to the original instance in any cleanse asset, the Cleanse transformation applies the
_Cleansed suffix to all output names.
The Cleanse transformation applies this policy regardless of the age of the asset and regardless of when
you add the asset to the transformation.
Data Masking transformation
Create masked data for software development, testing, training, and data mining. You can maintain data
relationships in the masked data and maintain referential integrity between database tables. The Data
Masking transformation is a passive transformation.
The Data Masking transformation provides masking rules based on the source data type and masking type
you configure for a port. For strings, you can restrict the characters in a string to replace and the characters
to apply in the mask. For numbers and dates, you can provide a range of numbers for the masked data. You
can configure a range that is a fixed or percentage variance from the original number. The Integration Service
replaces characters based on the locale that you configure with the masking rules.
To use the Data Masking transformation, you need the appropriate license.
Masking techniques
The masking technique is the type of data masking to apply to a selected column.
Credit card masking
Applies a credit card mask format to columns of string data type that contain credit card numbers.
Email masking
Applies an email mask format to columns of string data type that contain email addresses.
Masks an email address with a realistic email address from a first name, last name, and a domain name.
You can mask the string data type.
IP Address masking
Applies an IP address mask format to columns of string data type that contain IP addresses.
Key masking
Produces deterministic results for the same source data and seed value. You can apply key masking to
datetime, string, and numeric data types.
Phone masking
Applies a phone number mask format to columns of string data type that contain phone numbers.
Random masking
Produces random results for the same source data and mask format. You can apply random masking to
datetime, string, and numeric data types.
SIN masking
Applies a Social Insurance number mask format to columns of string data type that contain Social
Insurance numbers.
SSN masking
Applies a Social Security number mask format to columns of string data type that contain Social Security
numbers.
Custom substitution masking
Replaces a column of data with similar but unrelated data from a custom dictionary. You can apply
custom substitution masking to columns with string data type.
Dependent masking
Replaces a field value with a value from a custom dictionary based on the values returned from the
dictionary for another input column. You can mask the string data type.
Substitution masking
Replaces a column of data with similar but unrelated data from a default dictionary. You can apply
substitution masking to columns with string data type.
URL masking
Applies a URL mask format to columns of string data type that contain URLs.
The configuration properties that appear depend on the masking technique and the data type. For example,
you cannot blur string data. You cannot select a seed value when you use the Random masking technique.
Repeatable output
Repeatable output is the consistent set of values that the Data Masking transformation returns.
Repeatable output returns deterministic values. For example, you configure repeatable output for a column of
first names. The Data Masking transformation returns the same masked value every time the same name
appears in the workflow.
You can configure repeatable masking when you use the Random masking technique, Substitution masking
technique, or the special mask formats for string data type. Select Repeatable and enter the seed value to
configure repeatable masking.
You cannot configure repeatable output for the Key masking technique.
If you perform substitution masking or custom substitution masking, you can choose to optimize the dictionary usage. The workflow uses some values from the selected dictionary to mask source data. These dictionary values might be used for multiple entries so that all source data is masked in the target. Optimizing dictionary usage reduces the chance that duplicate dictionary values are used. To optimize dictionary usage, you must configure the masking rule for repeatable output.
Seed
The seed value is a starting point to generate masked values.
The Data Masking transformation creates a default seed value that is a random number from 1 through 999.
You can enter a different seed value. Apply the same seed value to a column to return the same masked data values in different source data. For example, if you have the same Cust_ID column in four tables and you want all of them to output the same masked values, set all four columns to the same seed value.
You can enter the seed value as a parameter. Seed value parameter names must begin with $$. You can
include an underscore (_) in the name but you cannot include other special characters. Add the required
parameter and value to the parameter file and specify the parameter file name at run time.
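For example, a parameter file entry for a hypothetical seed value parameter named $$Seed_Cust_ID might look like the following line. The parameter name and value are illustrative:
$$Seed_Cust_ID=250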
Note: If you enter the seed value as a parameter, you must run the mapping in a mapping task. If you run the mapping directly, the mapping uses an incorrect seed value because it cannot read the parameter value.
Unique substitution
Unique substitution masking ensures that each unique source value uses a unique dictionary value.
To mask a source value with a unique dictionary value, you can configure unique substitution masking. If a
source value is masked with a specific dictionary value, then no other source value is masked with this
dictionary value.
For example, the Name column in the source data contains multiple entries of John. If you configure
repeatable masking, every entry of John takes the same dictionary value, such as Xyza. However, other
source values might also be masked with the same dictionary value. A source entry Jack can also use the
dictionary value Xyza. As a result, all entries of John and Jack use the same dictionary value. When you
configure unique substitution masking, if all source values of John use the Xyza dictionary value, then no
other source value uses the same dictionary value.
Unique substitution masking requires a storage connection for the storage tables. Storage tables contain the
source to dictionary value mapping information required for unique substitution masking.
Note: If the source data contains more unique values than the dictionary, the masking fails because there are
not enough unique dictionary values to mask all the source data.
Mask format
When you configure key or random masking for string data type, configure a mask format to limit each
character in the output column to an alphabetic, numeric, or alphanumeric character.
If you do not define a mask format, the Data Masking transformation replaces each source character with any character. If the mask format is longer than the input string, the Data Masking transformation ignores the extra characters in the mask format.
When you configure a mask format, configure the source filter characters or target filter characters that you want to use the mask format with.
The mask format contains uppercase characters. When you enter a lowercase mask character, the Data
Masking transformation converts the character to uppercase.
Character Description
A Alphabetic characters.
D Digits 0 through 9.
+ No masking.
R Remaining characters. R specifies that the remaining characters in the string can be any character type. R must appear as the last character of the mask.
For example, a department name has the following format:
nnn-<department_name>
You can configure a mask to force the first three characters to be numeric, the department name to be alphabetic, and the dash to remain in the output. Configure the following mask format:
DDD+AAAAAAAAAAAAAAAA
The Data Masking transformation replaces the first three characters with numeric characters. It does not
replace the fourth character. The Data Masking transformation replaces the remaining characters with
alphabetic characters.
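For example, a source value such as 123-Finance might be masked as 407-Kwpbmrt. The masked characters are illustrative; the transformation generates the actual replacement characters.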
When you set a character as a source filter character, the character is masked every time it occurs in the
source data. The position of the characters in the source string does not matter, and you can configure any
number of characters. If you do not configure source filter characters, the masking replaces all the source
characters in the column.
The source filter characters are case-sensitive. The Data Masking transformation does not always return
unique data if the number of source string characters is fewer than the number of result string characters.
The Data Masking transformation replaces characters in the target with the target filter characters. For
example, enter the following characters to configure each mask to contain all uppercase alphabetic
characters: ABCDEFGHIJKLMNOPQRSTUVWXYZ.
To avoid generating the same output for different input values, configure a wide range of substitute
characters or mask only a few source characters. The position of each character in the string does not
matter.
Range
Define a range for numeric or datetime data. When you define a range for numeric or date values, the Data
Masking transformation masks the source data with a value between the minimum and maximum values.
Numeric Range
Set the minimum and maximum values for a numeric column. The maximum value must be less than or equal
to the field precision. The default range is from one to the field precision length.
Date Range
Set minimum and maximum values for a datetime value. The minimum and maximum fields contain the
default minimum and maximum dates. The default datetime format is MM/DD/YYYY HH24:MI:SS. The
maximum datetime must be later than the minimum datetime.
Blurring
Blurring creates an output value within a fixed or percent variance from the source data value. Configure
blurring to return a random value that is close to the original value. You can blur numeric and date values.
Select a fixed or percent variance to blur a numeric source value. The low bound value is a variance below the
source value. The high bound value is a variance above the source value. The low and high values must be
greater than or equal to zero. When the Data Masking transformation returns masked data, the numeric data
is within the range that you define.
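For example, if you configure a fixed variance with a low bound of 10 and a high bound of 10, a source value of 66 is masked with a value from 56 through 76. If you configure a percent variance of 10 instead, the masked value falls between 59.4 and 72.6. The numbers are illustrative.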
You can mask a date as a variance of the source date by configuring blurring. Select a unit of the date to
apply the variance to. You can select the year, month, day, hour, minute, or second. Enter the low and high
bounds to define a variance above and below the unit in the source date. The Data Masking transformation
applies the variance and returns a date that is within the variance.
For example, to restrict the masked date to a date within two years of the source date, select year as the unit.
Enter two as the low and high bound. If a source date is February 2, 2006, the Data Masking transformation
returns a date between February 2, 2004, and February 2, 2008.
You can use custom flat file or relational dictionaries. Unique substitution masking techniques also require a
storage connection for source- to dictionary-value mapping.
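A flat file dictionary is a delimited file, typically with a header row that names the dictionary columns. The following sketch shows a hypothetical dictionary with illustrative columns and values:
SERIAL_NO,FIRSTNAME,SURNAME
1,Adaline,Bauer
2,Marcus,Okafor
3,Priya,Natarajan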
If you export a mapping created before the April 2022 release, the Data Masking transformation in the
mapping might not include the dictionary and storage connection information. When you import the mapping,
the fields appear blank. To avoid this issue when you import the mapping into an environment with the April
2022 release or later, open and save the mapping before you export the mapping. The exported mapping
displays the dictionary and storage connection information when imported. The connections also appear on
the Uses tab of the Show Dependencies page.
Note: You might need to make a change to the mapping to enable the Save button.
The Data Masking transformation generates a logically valid credit card number when it masks a valid credit
card number. The length of the source credit card number must be from 13 through 19 digits. The input credit
card number must have a valid checksum based on credit card industry rules.
The first six digits of a credit card number identify the credit card issuer. You can keep the original credit card
issuer or you can select another credit card issuer to appear in the masking results.
Parameter Description
Repeatable Returns the same masked value when you run a task multiple times or when you generate masked
values for a field that is in multiple tables.
Seed Value A starting number to create repeatable output. Enter a number from 1 through 999. Default seed value
is 190. You can enter the seed value as a parameter.
Keep Issuer Returns the same credit card type for the masked credit card. For example, if the source credit card is a
Visa card, generate a masked credit card number that is the Visa format.
Mask Issuer Replaces the source credit card type with another credit card type. When you disable Keep Issuer,
select which type of credit card to replace it with. You can choose credit cards such as AMEX, VISA,
and MASTERCARD. Default is ANY.
When you use the email masking format, you must set the seed value. The seed value is a random number
from 1 through 999 and is a starting point to generate masked values. You can enter a different seed value.
Apply the same seed value to a column to return the same masked data values in different source data. For
example, you have the same Cust_ID column in four tables. You want all of them to output the same masked
values. Set all four columns to the same seed value.
When you configure advanced email masking, you can configure parameters to mask the user name and the domain name in the email address. For example, a source table might contain columns called First_Name and Last_Name. You can configure the email address to contain the first character of First_Name and seven characters of the last name. Define a domain name for the email address. The Masking task creates the email address from the masked name values, the delimiter, and the domain name that you specify.
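For example, with a first name length of 1, a last name length of 7, a dot delimiter, and the domain example.com, masked name values of Karen and Donnelly might produce the address [email protected]. The names and domain are illustrative.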
The following table describes the parameters you can configure for advanced email masking:
Parameter Description
Repeatable Returns the same masked value when you run a task multiple times or when you generate
masked values for a field that is in multiple tables.
Seed Value A starting number to create repeatable output. Enter a number from 1 through 999. Default seed
value is 190. You can enter the seed value as a parameter.
First Name Name of the column to use as the first part of the email name. The email name contains the
masked value of the column you choose.
First Name Length The number of characters in the first name to include in the email address.
Delimiter Delimiter, such as a dot, hyphen, or underscore, to separate the first name and last name in the
email address. If you do not want to separate the first name and last name in the email address,
leave the delimiter blank.
Last Name Name of the masked column to use in the email name. The email name contains the masked
value of the column you choose.
Last Name Length The number of characters in the last name to include in the email address.
Domain Name A string value that represents an Internet Protocol (IP) resource such as gmail.com.
The Data Masking transformation masks a Class A IP address as a Class A IP Address and a 10.x.x.x address
as a 10.x.x.x address. The Data Masking transformation does not mask the class and private network
address. For example, the Data Masking transformation can mask 11.12.23.34 as 75.32.42.52 and 10.23.24.32 as 10.61.74.84.
Note: When you mask many IP addresses, the Data Masking transformation can return nonunique values
because it does not mask the class or private network of the IP addresses.
Key masking
A column configured for key masking returns deterministic masked data each time the source value and seed
value are the same. The Data Masking transformation returns unique values for the column.
When you configure a column for key masking, the Data Masking transformation creates a seed value for the
column. You can change the seed value to produce repeatable data between different Data Masking
transformations. For example, configure key masking to enforce referential integrity. Use the same seed
value to mask a primary key in a table and the foreign key value in another table.
You can configure masking rules that affect the format of data that the Data Masking transformation returns.
You can mask numeric, string, and datetime data types with key masking.
When you configure key masking for datetime values, the Data Masking transformation requires a
random number as a seed. You can change the seed to match the seed value for another column to return
repeatable datetime values between the columns. The Data Masking transformation can mask dates between
1753 and 2400 with key masking. If the source year is in a leap year, the Data Masking transformation returns
a year that is also a leap year. If the source month contains 31 days, the Data Masking transformation returns
a month that has 31 days. If the source month is February, the Data Masking transformation returns
February. The Data Masking transformation always generates valid dates.
Configure key masking for numeric source data to generate deterministic output. When you configure a
column for numeric key masking, you assign a random seed value to the column. When the Data Masking
transformation masks the source data, it applies a masking algorithm that requires the seed.
You can configure key masking for strings to generate repeatable output. Configure a mask format to define
limitations for each character in the output string. To define a mask format, configure the Source Filter
characters and the Target Filter characters. The source filter characters define the source characters to
mask. The target filter characters define the characters to mask the source filter characters with.
Phone number masking
You can mask phone numbers with random numbers.
The Data Masking transformation masks a phone number without changing the format of the original phone
number. For example, the Data Masking transformation can mask the phone number (408) 382-0658 as (607)
256-3106.
The source data can contain numbers, spaces, hyphens, and parentheses. The Data Masking transformation
does not mask alphabetic and special characters.
You can configure repeatable output when you mask phone numbers. You must select Repeatable and enter
a seed value.
Random masking
Random masking generates random nondeterministic masked data.
The Data Masking transformation returns different values when the same source value occurs in different
rows. You can configure masking rules that affect the format of data that the Data Masking transformation
returns.
You can mask datetime, numeric, and string values with random masking.
To mask date values with random masking, either configure a range of output dates or choose a variance.
When you configure a variance, choose a part of the date to blur. Choose the year, month, day, hour, minute,
or second. The Data Masking transformation returns a date that is within the range you configure.
When you mask numeric data, you can configure a range of output values for a column. The Data Masking
transformation returns a value between the minimum and maximum values of the range based on field
precision. To define the range, configure the minimum and maximum ranges or configure a blurring range
based on a variance from the original source value.
Configure random masking to generate random output for string columns. To configure limitations for each
character in the output string, configure a mask format. Configure source and target filter characters to
define which source characters to mask and the characters to mask them with.
If the number contains no delimiters, the masked number contains no delimiters. Otherwise the masked
number has the following format:
xxx-xxx-xxx
To set a start digit, enable Start Digit and enter the digit. The Data Masking transformation creates masked
Social Insurance numbers that start with the number that you enter.
The Data Masking transformation returns unique values for each Social Insurance number.
The Data Masking transformation generates a Social Security number that is not valid based on the latest High Group List from the Social Security Administration. The High Group List contains valid numbers that the Social Security Administration has issued, and the Data Masking transformation generates Social Security numbers that are not on the list. The Social Security Administration updates the High Group List every month. The Data Masking transformation accesses the latest High Group List, which you can download from the following location: http://www.socialsecurity.gov/employer/ssns/highgroup.txt
Substitution masking replaces a column of data with similar but unrelated data. For example, you can create
a dictionary that contains male and female first names. Use the dictionary to perform substitution masking
on a column that contains both male and female first names.
Before you can use a dictionary or storage connection in a masking rule assignment, you must add the
dictionary and storage connection to the transformation. Add the connections to the transformation on the
Masking Rules tab. For flat file dictionaries, create a connection to the flat file dictionary from the Configure |
Connections view and add the connection to the transformation.
When you configure custom substitution masking, select the dictionary type and the dictionary connection.
You can then select the column that you want to use from the dictionary. To support non-English characters,
you can use different code pages from a flat file connection.
The flat file connection code page and the Secure Agent system code page must be compatible for the
masking task to work.
You can substitute data with repeatable or nonrepeatable values. When you choose repeatable values, the
Data Masking transformation produces deterministic results for the same source data and seed value. You
must configure a seed value to substitute data with deterministic results. You can substitute more than one
column of data with masked values from the same dictionary row.
Note: Before you run the mapping, verify that the dictionary file is present in the following location: <Secure
Agent installation directory>\apps\Data_Integration_Server\data
Parameter Description
Flat File Dictionary or Relational Dictionary Choose the type of custom dictionary to use. The transformation must include the required dictionary connection. If you choose flat file, you must create a flat file connection with the directory that points to the dictionary files.
To make a flat file dictionary available to all Secure Agents in a runtime environment, verify that the file is in the following location:
<Secure Agent installation directory>\apps\Data_Integration_Server\data
Dictionary Column The output column from the custom dictionary. For flat file dictionaries, you can select a dictionary column if the flat file contains column headers.
Order By Applicable for relational dictionaries. The dictionary column on which you want to sort entries. Specify a sort column to generate deterministic results even if the order of entries in the dictionary changes. For example, if you move a relational dictionary and the order of entries changes, sort on the serial number column to consistently mask the data.
Note: The column that you choose must contain unique values. Do not use columns that can contain duplicate values to sort the data.
Lookup Input Column Optional. The source input column on which you perform a lookup operation with the dictionary.
Lookup Dictionary Column Required if you enter a Lookup Input Column value. The dictionary column to compare with the input port. The source is replaced with values from the dictionary rows where the Lookup Input and Lookup Dictionary values match.
Lookup Error Constant Optional. A constant value that you can configure when there are no matching values for the lookup condition from the dictionary. Default is an empty string.
Repeatable Returns the same masked value when you run a task multiple times or when you generate masked values for a field that is in multiple tables.
Seed Value A starting number to create repeatable output. Enter a number from 1 through 999. Default seed value is 190. You can enter the seed value as a parameter.
Optimize Dictionary Usage Increases the usage of masked values from the dictionary. Available if you choose the Repeatable option. The property is not applicable if you enable unique substitution.
Is Unique Applicable for repeatable substitution. Replaces the target column with unique dictionary values for every unique source column value. If there are more unique values in the source than in the dictionary file, the data masking operation fails. Default is nonunique substitution.
Dependent masking
Dependent masking replaces a column of data with values from a custom dictionary that you use to mask
data in another column. To use dependent masking, at least one other source column must be masked with a
custom substitution rule.
For example, mask a Name column in the source data with a custom substitution rule. Configure the rule to
mask the values with values from the Name column in a Personal_Information dictionary.
You can configure dependent masking on another column to mask the source with values from a
corresponding column in the same dictionary. For example, apply dependent masking on the Age column.
Choose the Name column as the dependent column. You can then select a corresponding column from the
Personal_Information dictionary as the dependent output column. If you select the Age column from the
dictionary, the masking rule uses the age value that corresponds to the name value.
The following table describes the parameters that you can configure for dependent masking:
Property Description
Dependent Column The input column configured for custom substitution masking that you want to relate to the source column. Choose a column from the list. Columns that you configure with substitution masking appear in the list.
Dependent Output Column The dictionary column to use to mask the source data column. Lists the columns in the dictionary used to mask the dependent column. Choose the required column from the list of dictionary columns.
The following table describes the substitution masking types that are available:
Substitution Name: Substitutes source data with data from a dictionary file of names.
Substitution Female Name: Substitutes source data with data from a dictionary file of female names.
Substitution Male Name: Substitutes source data with data from a dictionary file of male names.
Substitution Last Name: Substitutes source data with data from a dictionary file of last names.
Substitution Position: Substitutes source data with data from a dictionary file of job positions.
Substitution US ZIP Code: Substitutes source data with data from a dictionary file of U.S. ZIP codes.
Substitution Street: Substitutes source data with data from a dictionary file of street names.
Substitution City: Substitutes source data with data from a dictionary file of U.S. city names.
Substitution State: Substitutes source data with data from a dictionary file of U.S. state names.
Substitution Country: Substitutes source data with data from a dictionary file of country names.
The Data Masking transformation performs a lookup on the dictionary file and replaces source data with data
from the dictionary. You download the dictionary files when you download the Secure Agent. The dictionary
files are stored in the following location: <Secure Agent installation directory>\apps\Data_Integration_Server\data
You can substitute data with repeatable or nonrepeatable values. When you choose repeatable values, the
Data Masking transformation produces deterministic results for the same source data and seed value. You
must configure a seed value to substitute data with deterministic results.
You can substitute more than one column of data with masked values from the same dictionary row.
The Data Masking transformation does not mask the protocol of the URL. For example, if the URL is
http://www.yahoo.com, the Data Masking transformation can return http://MgL.aHjCa.VsD/. The Data
Masking transformation can generate a URL that is not valid.
Note: The Data Masking transformation always returns ASCII characters for a URL.
Use a mask rule parameter when you do not have the source connection information. You can also use a
mask rule parameter when you want to assign masking techniques to source fields when you run the
mapping task. For example, if the source transformation object in a mapping uses a parameter, you cannot
assign masking techniques when you create the mapping. After you create a mapping with a source
connection, you might add additional fields to the source that you want to mask. Use a mask rule parameter
when you create the mapping. You can then create multiple tasks to mask different data based on the same
mapping.
Use the Mapping Designer to create a Data Masking transformation. When you create a mapping, a Source
transformation and a Target transformation are already on the canvas for you to configure. Configure the
Source transformation to represent the source data that you want to mask. Configure the Target
transformation to represent the target connection where you want to store the masked data.
1. Drag a Data Masking transformation from the transformations palette onto the mapping canvas.
2. Connect the Data Masking transformation object to the data flow.
3. Select the Data Masking transformation object in the mapping designer.
The properties appear in the properties tab.
4. On the General tab, enter a name and optional description for the transformation object.
5. On the Incoming Fields tab, configure the field rules that define the data that you want to copy to the
target.
By default, the transformation includes all fields.
6. On the Masking Rules tab, you can configure the following properties:
• Parameter. To assign masking techniques at run time, add or create another mask rule parameter.
• Relational Dictionary Connection. To use a relational dictionary for custom substitution masking,
choose the dictionary connection from the list of relational connections.
• Flat File Dictionary Connection. To use a flat file dictionary for custom substitution masking, choose
the dictionary connection from the list of flat file connections.
• Storage Connection. To configure unique substitution masking, choose a storage connection from
the list of connections.
• Add. To configure masking techniques in the mapping, click Add and select the fields that you want
to mask. Click Configure to select and configure the required masking technique. If you assign a
masking technique in the mapping, you cannot edit it at run time.
7. Select the Target transformation and map the incoming fields to fields in the target.
Note: If you assign a masking technique to a column in a mapping task and then change the assignment
of the column in the mapping, the mapping task configuration takes precedence. If you unselect the rule
assignment of the column in the mapping task, then the mapping task uses the masking technique
assigned to the column in the mapping.
You can use the following tools to generate the same masked output from the same source data:
Substitution masking rules use values from dictionaries to create masked output. The default dictionaries on
Informatica Intelligent Cloud Services and on-premise Test Data Management are the same. When you use
the same substitution rule, the workflow uses the same dictionary to substitute source data. The same seed
value therefore ensures that the same substitute value is used for all rows, provided the dictionaries are the same.
On Informatica Intelligent Cloud Services, the dictionary files are available at: <Secure Agent installation
directory>\apps\Data_Integration_Server\data
In on-premise Test Data Management, the dictionary files are available at: <Informatica installation
directory>\server\infa_shared\LkpFiles
The Repeatable option must be set to ON to ensure that the task or workflow repeats dictionary values for
the same source value.
Example
Consider the following example:
The source data contains First Name and Last Name columns that you need to mask so that the full name is masked in the target data.
You can use the following methods to generate the masked output:
1. Use the Substitution Name masking rule to mask the First Name column. Set the Repeatable option to
ON. Enter a seed value.
2. Use the Substitution Last Name masking rule to mask the Last Name column. Set the Repeatable option
to ON. Enter a seed value.
3. Use the default dictionaries available with the setup. Do not make changes to the dictionaries.
When you run the masking task, mapping, or mapping task on Informatica Intelligent Cloud Services, the Test Data Management workflow, or the PowerCenter mapping, you generate the same masked output for the same source data.
The production data includes a table Personnel_Information with the following data:
In the Mapping Designer, add Personnel as a source transformation for the table Personnel_Information. Add
a target transformation Personnel_test.
Add the Data Masking transformation to the mapping canvas and connect it to the data flow.
You need to mask the Surname, DOB, and the State columns to ensure sensitive data is masked. You can use
the Substitution Last Name masking technique to mask the Surname column. This masking technique
replaces data in the column with data from the dictionary file on surnames. You can use the Random Date
masking technique to mask the DOB column. Use the Substitution State masking technique to mask the State
column. This masking technique replaces data in the column with data from the dictionary file on U.S. state
names.
You can now use the masked data in the table Personnel_test in the test environment.
Deduplicate transformation
The Deduplicate transformation adds a deduplicate asset that you created in Data Quality to a mapping.
Use a Deduplicate transformation to analyze the levels of duplication in a data set and optionally to
consolidate sets of duplicate records into a single, preferred record. Deduplicate transformations analyze the
identity information in the records. An identity is a group of data values in a record that identify a person or
an organization.
Deduplication and consolidation are useful operations in the following types of data project:
• Customer Relationship Management. For example, a store designs a mail campaign and must check the
customer database for duplicate customer records.
• Regulatory compliance initiatives. For example, a business operates under government or industry regulations that require all data systems to be free of duplicate records.
• Financial risk management. For example, a bank may want to search for relationships between account
holders.
• Any project that must identify or eliminate records that store duplicate identity information.
The deduplicate asset that you add to the transformation specifies the comparison criteria for the
deduplication operation, including the threshold score that duplicate records must meet.
Consolidation is an optional process that the deduplicate asset can specify for the transformation. During
consolidation, the transformation evaluates the sets of matching records that the deduplication process
identifies. The transformation selects or constructs a preferred version of the records in each set.
A Data Quality user configures the deduplication and consolidation processes in the deduplicate asset. For
more information about the criteria that the asset defines, contact the Data Quality user.
The deduplicate asset that you add to the transformation specifies a type of identity, such as a person
name or an organization name. The asset identifies the identity type as the objective of the deduplication
operations. The type of identity on the asset defines the types of information that the transformation
expects to find in the input fields.
You must map the appropriate input fields on the transformation to the target fields that the
transformation indicates. You can optionally map the optional input fields to other fields.
The Deduplicate transformation calculates a score for each possible pair of records in the input data.
The transformation returns the scores for the records within each set of matching duplicate records. It
does not return the scores for records that do not belong in the same set.
The transformation represents the relationships between the records in a matching set as a link score
and a driver score.
On the Field Mapping tab, the transformation adds a group key field and a sequence ID field to the fields
that the asset specifies. The group key field is mandatory. The sequence ID field is mandatory in an elastic mapping and is otherwise optional.
The group key is a data value that allows the transformation to sort the input records into subsets and to
perform discrete duplicate analyses on each subset. When you select a suitable group key, you reduce
the time that the mapping takes to run without reducing the quality of the mapping results. If you do not
want to divide the input records into groups, add a field to the input data that contains a single or
constant value and select the field as the group key.
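For example, you might add the constant field in an upstream Expression transformation. The field name GroupKeyAll below is a hypothetical illustration; the expression is simply a string literal, so every record receives the same group key value:
'ALL'
You can then map GroupKeyAll to the GroupKey field on the Field Mapping tab. Because every record has the same value, the transformation places all records in a single group and compares every record with every other record.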
The sequence ID values determine the order in which the transformation reads the input records. If your
input records do not contain a sequence ID field, the transformation reads the records in the order in
which they appear in the input data set.
Metadata fields
On the Output Fields tab, the transformation adds fields that display the score values for pairs of
matching records. The fields also identify the set of matching records to which each record belongs. If
the deduplicate asset specifies a consolidation process, the metadata fields specify a preferred record
for each record set. The transformation identifies the preferred record as the survivor record.
Data Quality downloads the population files to the Secure Agent host machine when you install the Secure
Agent. You do not need to take any action to download the files.
For more information on population file properties, consult the Deduplicate Guide in the Data Quality online
help.
The following table shows the number of calculations that a mapping performs for different numbers of data
values on a single field:
10,000 data values: 50 million calculations
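This figure reflects pairwise comparison: the transformation calculates a score for each possible pair of records, which is n × (n − 1) / 2 calculations for n values. The worked arithmetic below is added for illustration:
10,000 × 9,999 / 2 = 49,995,000, or roughly 50 million calculations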
To reduce the time that the mapping takes to run, the transformation can organize the input records into
groups. A group is a set of records that contain identical values on a field that you specify. When you perform
duplicate analysis on grouped data, the transformation analyzes the field data on the records within each
group. The transformation does not compare the records in one group with the records in another group. The
groups reduce the overall number of comparisons that the transformation must perform without any
meaningful loss of accuracy in the mapping analysis.
Consider the following rules and guidelines when you organize data into groups:
• The field on which you group the data is the GroupKey field. Find the field on the Field Mapping tab in the
transformation. A group key field must contain a predictable range of expected duplicate values, such as
a city name or a state name in an address data set.
The presence of duplicate values in the group key field does not mean that the respective input records
must also be duplicates.
• Do not specify a group key field that the asset requires to identify the identity information in the input
data.
• Groups do not reorder the position of the records in the mapping data set.
The deduplicate asset specifies the type of identity that the transformation searches for. The identity type
determines the types of input field that you must connect to the transformation. For more information about
the fields that the Deduplicate transformation expects, consult the Data Quality user who configured the
asset.
3. On the Incoming Fields tab, verify the fields that enter the transformation from upstream objects.
By default, the transformation inherits all incoming fields from any connected upstream object in the
mapping. You can define a field rule to limit or rename the incoming fields.
4. On the Field Mapping tab, connect one or more incoming fields to the deduplicate asset.
The fields that you map must include data that can describe an identity of the type that the asset
specifies. The input fields must also include a field that can act as a group key during duplicate analysis.
The tab lists the fields from upstream objects in the Incoming Fields section and lists the fields that the
asset specifies in the Target Fields section.
The incoming field names might reflect the names of the target fields in the transformation. If so, you
can use the Automap options to connect the fields.
5. Verify the output field properties on the Output Fields tab.
The output fields include several metadata fields that contain important information about the results of
the duplication and consolidation process. For more information about the metadata fields, see
“Metadata fields on the Deduplicate transformation” on page 115.
6. You can optionally rename the Deduplicate transformation and add a description on the General tab. You
can also update the tracing level for the transformation on the Advanced tab. The default tracing level is
Normal.
Note: If you update an asset in Data Quality after you add it to a transformation, you may need to synchronize
the asset version in the transformation with the latest version. For more information about data quality asset
synchronization, see “Synchronizing data quality assets” on page 89.
• Manual. Manually link an incoming field to a transformation input field. Removes links for any
automatically mapped field.
• Automatic. Automatically link fields with the same name. You cannot manually link fields with this
option.
• Completely Parameterized. Use a parameter to represent the field mapping.
Choose the Completely Parameterized option when the deduplicate asset in the transformation is
parameterized or any upstream transformation in the mapping is parameterized.
• Partially Parameterized. Configure links in the mapping that you want to enforce and use a parameter
to allow other fields to be mapped in the mapping task. Or, use a parameter to configure links in the
mapping, and allow all fields and links to display in the task for configuration.
Parameter
Select the parameter to use for the field mapping, or create a new parameter. This option appears when
you select Completely Parameterized or Partially Parameterized as the field map option. The parameter
must be of type field mapping.
Do not use the same field mapping parameter in more than one Deduplicate transformation in a single
mapping.
Options
Controls how fields are displayed in the Incoming Fields and Target Fields lists.
• The fields that appear. You can show all fields, unmapped fields, or mapped fields.
• Field names. You can use technical field names or labels.
Automap
Links fields with matching names. Allows you to link matching fields and to manually configure other
field mappings. The Automap options appear when you select the Manual or Partially Parameterized
field map option.
• Exact Field Name. Data Integration matches fields of the same name.
• Smart Map. Data Integration matches fields with similar names. For example, if you have an incoming
field Cust_Name and a target field Customer_Name, Data Integration automatically links the Cust_Name
field with the Customer_Name field.
You can use both Exact Field Name and Smart Map in the same field mapping. For example, use Exact
Field Name to match fields with the same name and then use Smart Map to map fields with similar
names.
You can undo all automapped field mappings by clicking Automap > Undo Automap.
To unmap a single field, select the field to unmap and click Actions > Unmap on the context menu for the
field. To unmap one or more fields that you selected, click Unmap Selected on the Target Fields context
menu.
To clear all field mappings from the transformation, click Clear Mapping on the Target Fields context
menu.
GroupKey
Contains the data values that the transformation uses to sort input records into groups for duplicate
analysis.
SequenceId
Contains a unique identifier for each record that enters the transformation.
The transformation uses the sequence ID values to identify records in the Out_DriverId and Out_LinkId data. If you do not map the SequenceId field, the transformation uses the values on the Out_RowId field as unique identifiers for the records.
Out_ClusterId
Contains the identifier of the matching record set to which the record belongs.
Note: In the deduplication process, a cluster is a set of records whose data values match each other to a
degree that exceeds the duplicate threshold. Records in the same set are likely to identify the same
identity. A set may contain a single record, as every unique record is a perfect match with itself.
Out_ClusterSize
Contains the number of records in the set to which the current record belongs. When a set contains a
unique record, the cluster size is 1.
Out_DriverId
Contains the identifier of the driver record in each matching record set. The driver record is the record in
the set with the lowest value on the SequenceId input field. If the transformation does not use the
SequenceId field, the driver record is the record in the matching set with the lowest Out_RowId value.
Out_DriverScore
Contains the score that represents the degree of similarity between the current record and the driver
record in the matching record set.
Out_IsSurvivor
Contains an identifier for the preferred record that a consolidation process specifies.
Out_LinkId
Contains the identifier of the record that matched with the current record and linked it to the matching
record set.
Out_LinkScore
Contains the score between two records that results in the addition of a record to a matching record set.
The Out_LinkId field identifies the record with which the current record shares the link score.
Out_RowId
Contains a unique identifier for each record in the mapping source data set.
The transformation uses the Out_RowId values to identify records if you do not map a field of unique
identifiers to the SequenceId field.
The Out_DriverScore value provides a benchmark for all records in a matching record set. The Out_DriverScore value is the score between the current record and the record in the set with the lowest sequence ID or row ID value. The record with the lowest ID is also the first record that the deduplication process added to the set.
The link score is the score between two records that identifies them as members of the same matching set.
The transformation creates a link between a given record and the first record that it matches with a score
above the threshold value.
The LinkId field identifies the records to which a link score applies. The link score and link ID values do not
imply that a pair of records are the best match in the input data. The purpose of the link score and link ID
values is to determine the composition of the matching record set.
The driver score is the score between the first record added to a matching record set and another record in
the same set. The transformation uses the sequence ID or row ID values to identify the first record in the set.
Driver scores provide a means to assess all records in the set against a single record.
Note: Duplicate analysis generates a single set of scores for the input records. The driver scores and link
scores represent the different relationships between the records and do not indicate different types of
duplicate analysis. The driver score and link score assignments can depend on the order in which the records
enter the transformation. A driver score for a given pair of records might be lower than the threshold value.
The following table shows the results that the transformation might return:
The results provide the following information about the surname data:
• SMITT and SMITS do not match any other record with a score that meets the threshold. The
transformation determines that the records are unique in the data set.
SMITT and SMITS each have a ClusterSize value of 1, which indicates that they are the only record in their
respective sets. To find unique records in the output, search for matching record sets that contain a single
record.
• SMITH and SMITH have a link score of 1. The transformation determines that the records are identical.
The transformation adds the records to a single matching record set. The ClusterId value indicates that
the records belong to the same set.
• SMYTH and SMYTHE have a link score of 0.83333. The score exceeds the duplicate threshold. Therefore,
the transformation adds the records to a single matching record set.
The tab displays the name, type, precision, and scale for the output fields.
You cannot edit the output field properties in the Deduplicate transformation. To edit the properties, open the
deduplicate asset in Data Quality.
For more information about the metadata fields that the transformation creates, see “Metadata fields on the
Deduplicate transformation” on page 115.
Expression transformation
The Expression transformation calculates values within a single row. Use the Expression transformation to
perform non-aggregate calculations.
For example, you might use an Expression transformation to adjust bonus percentages or to concatenate
first and last names.
When you configure an Expression transformation, create an expression field for the output of each
calculation that you want to use in the data flow. Create a variable field for calculations that you want to use
within the transformation.
Expression fields
An expression field defines the calculations to perform on an incoming field and acts as the output field for
the results. You can use as many expression fields as necessary to perform calculations on incoming fields.
When you configure an expression field, you define the field name, data type, precision, scale, and optional
description. The description can contain up to 4000 characters. You also define the calculations that you
want to perform.
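For example, to concatenate first and last names into a full name, you might create an expression field named FullName and define the following expression. The field and incoming column names are illustrative:
CONCAT( CONCAT( First_Name, ' ' ), Last_Name )
You can write the same calculation with the string concatenation operator, for example First_Name || ' ' || Last_Name.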
Expression editor
Use the expression editor to configure the expression field. The expression can contain constants, variables,
built-in functions, and user-defined functions. You can also create a complex expression by nesting functions
within functions.
You can add source fields, functions, and variables to the expression by clicking Add next to the object that
you want to use. You can also type in the expression manually.
Alternatively, press the Ctrl + Space keys to see a list of recommended arguments and functions in-line. Data
Integration provides recommendations based on the type of function arguments and keystrokes. In-line
recommendations are not available for hierarchical source data.
To validate the expression, click Validate. Data Integration validates the expression.
Window functions
With elastic mappings, you can use a window function to concisely express stateful computations. A window
function takes a small subset of a larger data set for processing and analysis.
Window functions operate on a group of rows and calculate a return value for every input row.
Before you define a window function, configure the following window properties on the Window tab:
Frame
Defines the rows that are included in the frame for the current input row, based on physical offsets from
the position of the current input row.
You configure a frame if you use an aggregate function as a window function. The window functions
LEAD and LAG reference individual rows and ignore the frame.
Partition Keys
The fields that define the partition boundaries so that the calculation runs within each partition rather than across all rows. If you do not define partition keys, all rows belong to a single partition.
Order Keys
The fields you choose determine the position of a row within a partition. The order key can be ascending
or descending. If you do not define order keys, the rows have no particular order.
You cannot parameterize an expression that contains a window function. If the expression is parameterized,
you cannot specify a window function in the mapping task.
Frame
The frame determines which rows are included in the calculation for the current input row based on the rows'
relative position to the current row. Configure a frame if you use an aggregate function as a window function.
The start offset and end offset describe the number of rows that appear before and after the current input
row. An offset of "0" represents the current input row. For example, a start offset of -3 and an end offset of 0
describe a frame including the current input row and the three rows before the current row.
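When you use an aggregate function as a window function over such a frame, the expression contains only the aggregate; the offsets are configured on the Window tab. For example, a moving total of the current row and the three previous rows over a hypothetical Sales field might use the following expression with a start offset of -3 and an end offset of 0:
SUM( Sales )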
The following image shows a frame with a start offset of -1 and an end offset of 1:
You can also specify a frame that does not include the current input row. For example, a start offset of 10
and an end offset of 15 describe a frame that includes six total rows, from the tenth to the fifteenth row after
the current row.
Offsets of All Preceding Rows and All Following Rows represent the first row of the partition and the last row
of the partition. For example, if the start offset is All Preceding Rows and the end offset is -1, the frame
includes one row before the current row and all rows before that.
The following image shows a frame with a start offset of 0 and an end offset of All Following Rows:
If the frame offsets are outside the partition, the aggregate function ignores the frame. If the offsets of a
frame are not within the partition or table, the aggregate function processes only the rows within the
partition. The function does not return NULL or a default value.
For example, you partition a table by seller ID and you order by quantity. You set the start offset to -3 and the
end offset to 4.
The following image shows the partition and frame for the current input row:
Consider the following rules and guidelines when you define a frame:
• LEAD and LAG use the frame that you specify in the function arguments and ignore the frame that you
configure on the Window tab.
• The start offset must be less than or equal to the end offset.
Use the following keys to group and order the rows in a window:
Partition keys
Configure partition keys to define partition boundaries rather than performing the calculation across all
rows.
If you do not specify partition keys, all the data is included in the same partition.
Order keys
Use order keys to determine how rows in a partition are ordered. Order keys define the position of a
particular row in a partition.
You must also arrange the data in ascending or descending order. If you do not specify order keys, the
rows in a partition are arranged randomly.
Consider the following rules and guidelines when you define window properties for partition and order keys:
The following table lists the products, the corresponding product categories, and the revenue from each
product:
You partition the data by category and order the data by descending revenue.
The following table shows the data grouped into two partitions according to category. Within each partition,
the revenue is organized in descending order:
You can run the MAX function within each partition to determine that the two best-selling coffees are
espresso and Americano, and the two best-selling teas are white and black.
For each customer, you want to know the expiration date for the current plan based on the activation date of
the next plan. The previous plan ends when a new plan starts, so the end date for the previous plan is the
start date of the next plan minus one day.
The following table lists the customer codes, the associated plan codes, and the start date of each plan:
C1 00001 2014-10-01
C2 00002 2014-10-01
C2 00002 2014-11-01
C1 00004 2014-10-25
C1 00001 2014-09-01
C1 00003 2014-10-10
Frame: Not specified. The LEAD function will access rows based on the offset argument and ignore the frame.
Partition key: CustomerCode. Groups the rows according to customer code so that calculations are based on individual customers.
Order key: StartDate, ascending. Arranges the data chronologically by ascending start date.
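Window function
The expiration date described above can be calculated by reading the start date of the next plan with LEAD and subtracting one day with ADD_TO_DATE. The following expression is a sketch consistent with this example; the default-date literal and format strings are assumptions:
ADD_TO_DATE( LEAD( StartDate, 1, TO_DATE('01-Jan-2100', 'DD-MON-YYYY') ), 'DD', -1 )
When the next row falls outside the partition, LEAD returns the default date 01-Jan-2100, and ADD_TO_DATE subtracts one day to return 2099-12-31.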
The following table lists the data grouped by customer code and ordered by start date:
C1 00001 2014-09-01
C1 00002 2014-10-01
C1 00003 2014-10-10
C1 00004 2014-10-25
C2 00001 2014-10-01
C2 00002 2014-11-01
*The LEAD function returned the default value because these plans have not yet ended. The rows were
outside the partition, so the ADD_TO_DATE function subtracted one day from 01-Jan-2100, returning
2099-12-31.
You order the events chronologically and partition the events by trip. You define a window function that accesses the event time from the previous row, and you use a DATE_DIFF function to calculate the time difference between the two events.
Frame: Not specified. Window functions access rows based on the offset argument and ignore the frame.
Partition key: trip_id. Groups the rows according to trip ID so that calculations are based on events from the same trip.
Order key: _event_id, ascending. Arranges the data chronologically by ascending event ID.
Window function
You define the following LAG function to get the event time from the previous row:
LAG ( _event_time, 1, NULL )
For more information about the LAG function, see Function Reference.
You define the following DATE_DIFF function to calculate the length of time between the two dates:
DATE_DIFF ( _event_time, LAG ( _event_time, 1, NULL ), 'ss' )
You flag the row as skipped if the DATE_DIFF is less than 60 seconds, or if the _event_time is NULL:
IIF ( DATE_DIFF < 60 or ISNULL ( _event_time ), 'Skip', 'Valid' )
Output
The transformation produces the following outputs:
*The rows preceding these rows are outside the bounds of the partition, so the LAG function produces NULL
values.
The following table lists the department names, the employee identification number, and the employee
salary:
Development 11 5200
Development 7 4200
Development 9 4500
Development 8 6000
Development 10 5200
Personnel 5 3500
Personnel 2 3900
Sales 3 4800
Sales 1 5000
Sales 4 4800
You set an unbounded frame to include all employees in the calculation, and you define an aggregate
function to calculate the difference between the salary of each employee and the average salary in the
department.
Window Properties
You define the following window properties on the Window tab:
Start offset: All Preceding Rows. Describes the number of rows that appear before the current input row.
End offset: All Following Rows. Describes the number of rows that appear after the current input row.
When you select All Preceding Rows and All Following Rows, the function includes all partition rows. For
example, suppose the current row is the third row. The third row is in the "Development" partition, so the
frame includes the third row in addition to all rows before and after the third row in the "Development"
partition.
You define the following aggregate function in an expression field named Salary_Diff to calculate the difference between the salary of each employee and the average salary in the corresponding department:
Salary - AVG ( Salary )
Output
The transformation produces the following salary differences:
You can identify which employees are making less or more than the average salary within the same
department. Based on this information, you can add other transformations to learn more about the data. For
example, you might add a Rank transformation to produce a numerical rank for each employee within the
same department.
Advanced properties
You can configure advanced properties for an Expression transformation. The advanced properties control
settings such as the tracing level for session log messages and whether the transformation is optional or
required.
Note: The properties that appear in the transformation depend on the mapping type.
Tracing Level: Detail level of error and status messages that Data Integration writes in the session log. You can choose terse, normal, verbose initialization, or verbose data. Default is normal.
Optional: Determines whether the transformation is optional. If a transformation is optional and there are no incoming fields, the mapping task can run and the data can go through another branch in the data flow. If a transformation is required and there are no incoming fields, the task fails.
For example, you configure a parameter for the source connection. In one branch of the data flow, you add a transformation with a field rule so that only Date/Time data enters the transformation, and you specify that the transformation is optional. When you configure the mapping task, you select a source that does not have Date/Time data. The mapping task ignores the branch with the optional transformation, and the data flow continues through another branch of the mapping.
You can use hierarchical fields as pass-through fields. You can also use hierarchical fields in an expression
or use a hierarchical field with a complex operator to access primitive child fields in the expression. For more
information about complex operators, see Function Reference.
Consider the following guidelines when you use hierarchical fields in an expression:
Filter transformation
The Filter transformation filters data out of the data flow based on a specified filter condition. To improve job
performance, place the Filter transformation close to mapping sources to remove unnecessary data from the
data flow.
A filter condition is an expression that returns TRUE or FALSE. When the filter condition returns TRUE for a
row, the Filter transformation passes the row to the rest of the data flow. When the filter condition returns
FALSE, the Filter transformation drops the row.
You can filter data based on one or more conditions. For example, to work with data within a date range, you can create conditions to remove data before and after specified dates.
Link a single transformation to the Filter transformation. You cannot merge transformations into the Filter
transformation.
Filter conditions
The filter condition is an expression that returns TRUE or FALSE.
You can create one or more simple filter conditions. A simple filter condition includes a field name, operator, and value. For example, Sales > 0 retains rows where the sales value is greater than zero.
Filter conditions are case sensitive. You can use the following operators in a simple filter:
= (equals)
< (less than)
> (greater than)
<= (less than or equal to)
>= (greater than or equal to)
!= (not equals)
When you define more than one simple filter condition, the mapping task evaluates the conditions in the
order that you specify. The task evaluates the filter conditions using the AND logical operator to join the
conditions. The task returns rows that match all of the filter conditions.
You can use an advanced filter condition to define a complex filter condition. When you configure an
advanced filter condition, you can incorporate multiple conditions using the AND or OR logical operators. You
can use a constant to represent condition results: 0 is the equivalent of FALSE, and any non-zero value is the
equivalent of TRUE.
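For example, the following advanced filter condition keeps rows that have positive sales within a 2022 ship date range, or rows that are flagged as priority orders. The field names and dates are illustrative assumptions:
(Sales > 0 AND Ship_Date >= TO_DATE('01/01/2022', 'MM/DD/YYYY') AND Ship_Date <= TO_DATE('12/31/2022', 'MM/DD/YYYY')) OR Priority_Flag = 1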
When you change the filter condition type from simple to advanced, the Mapping Designer includes
configured simple filter conditions in the advanced filter condition. You can use or delete the simple filter
conditions. The conversion does not include parameters.
To filter rows that contain null values, use the ISNULL function to test the value of the field. To filter rows that
contain spaces, use IS_SPACES.
For example, if you want to filter out rows that contain a null value in the First_Name field, use the following
condition: IIF(ISNULL(First_Name),FALSE,TRUE). The condition states that if the First_Name field is NULL, the
return value is FALSE. The mapping task discards the row. Otherwise, the row passes through to the next
transformation.
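Similarly, to filter out rows in which the First_Name field contains only spaces, you might use the following condition, a sketch that reuses the same field name as the previous example:
IIF( IS_SPACES( First_Name ), FALSE, TRUE )
The condition returns FALSE for rows that contain only spaces, so the mapping task discards those rows.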
Advanced properties
You can configure advanced properties for a Filter transformation. The advanced properties control settings
such as the tracing level for session log messages and whether the transformation is optional or required.
Note: The properties that appear in the transformation depend on the mapping type.
Tracing Level: Detail level of error and status messages that Data Integration writes in the session log. You can choose terse, normal, verbose initialization, or verbose data. Default is normal.
Optional: Determines whether the transformation is optional. If a transformation is optional and there are no incoming fields, the mapping task can run and the data can go through another branch in the data flow. If a transformation is required and there are no incoming fields, the task fails.
For example, you configure a parameter for the source connection. In one branch of the data flow, you add a transformation with a field rule so that only Date/Time data enters the transformation, and you specify that the transformation is optional. When you configure the mapping task, you select a source that does not have Date/Time data. The mapping task ignores the branch with the optional transformation, and the data flow continues through another branch of the mapping.
You can use hierarchical fields as pass-through fields. You can also use hierarchical fields in an advanced
filter condition or use a hierarchical field with a complex operator to access primitive child fields in the filter
condition. For more information about complex operators, see Function Reference.
Consider the following guidelines when you use hierarchical fields in a filter condition:
The transformation processes relational input from the upstream transformation and provides one of the
following output types to the downstream transformation:
• JSON
• XML
• Avro
• Parquet
• ORC
The Hierarchy Builder transformation produces hierarchical output based on the following options:
The hierarchical schema or intelligent structure model that you associate with the transformation.
To associate a schema with the transformation, you can select an existing schema or create a new
schema. To associate an intelligent structure model, you generate a new model from a sample file that
you provide.
To convert the input to JSON or XML, associate a hierarchical schema with the transformation. The
schema defines the expected hierarchy of the output data. You can use an existing hierarchical schema,
or create a new schema at design time. You can create a schema for XML output from an XML sample
file or an XML schema file. To create a schema for JSON output, use a JSON sample file. Don't use a
JSON schema file.
To convert the input to Avro, Parquet, or ORC, create an intelligent structure model. The model defines
the expected hierarchy of the output data from a sample file at design time.
You can select to write the data to a flat file, for example, when the transformation processes a large
amount of data and the output field size exceeds 100 MB.
Configure field mapping to define which relational elements link to schema elements to provide the
hierarchical output.
To use the Hierarchy Builder transformation, you need the appropriate license.
Using a Hierarchy Builder transformation
To use the Hierarchy Builder transformation in a mapping, perform the following steps:
Sample or schema
When you create a hierarchical schema, you import a JSON sample file or an XSD schema as the basis of the
hierarchical schema.
It is recommended to use a schema, if available. The schema must not contain recursive elements.
If you use a sample, ensure that it is representative of the data that you expect to process with the
transformation and that it is comprehensive. Ensure that the sample contains all the possible fields that the
transformation might process, including permutations regarding values and types. The lengths of fields must
be representative.
Hierarchical schemas
A hierarchical schema is based on a schema file or sample file that you import to Data Integration. If you
import a sample file, Data Integration generates a schema based on the structure of the sample file. The
schema defines the expected hierarchy of the input data.
You can create a hierarchical schema in two ways. You can create a standalone hierarchical schema that can
be associated with any transformation that you choose. Alternatively, you can create a hierarchical schema
from within a specific transformation.
When you create a standalone hierarchical schema, you import a JSON sample file or XSD file as the basis of
the schema.
You can associate the hierarchical schema with any transformation, whether you create it as a standalone
hierarchical schema or as part of a specific transformation.
You can create, edit, or delete a hierarchical schema. You can edit a hierarchical schema and change the root or change the schema definition. However, if you have used the hierarchical schema in a transformation, you cannot edit or delete it. Before you delete a hierarchical schema, verify that you did not use it in a transformation.
When you create the model, you select a sample file that represents the expected hierarchy of the output
data. Data Integration creates the model based on the file.
Output settings
Associate a hierarchical schema or intelligent structure model with the Hierarchy Builder transformation,
define the precision of the output, and select the output format on the Output Settings tab of the Properties
panel.
To associate a hierarchical schema with the transformation, you can select an existing hierarchical schema or create a hierarchical schema from an XSD file or a JSON sample file.
To define a buffer size for the output, enter the value in the Precision field.
The default output format of the transformation is string. To change the output format to binary, from the
Output Format list, select Binary.
To write the data to a flat file, select Write to file and enter the file path in the File path field. The path can't
be parameterized.
Tip: Write data to a flat file when the transformation processes a large amount of data and the output field
size exceeds 100 MB.
1. In the Properties panel of the Hierarchy Builder transformation, click the Output Settings tab.
2. Next to the Schema field, click Select.
The Select Schema dialog box appears.
3. Select a hierarchical schema from the list.
4. To search for a hierarchical schema, select the search criteria, enter the characters to search for in the
Search field, and click Search.
You can search for a hierarchical schema by name or description. You can sort the hierarchical schema
by name, description, or date last modified.
5. Select the hierarchical schema to include in the Hierarchy Builder transformation and click OK.
The selected hierarchical schema appears in the Properties panel.
1. In the Properties panel of the Hierarchy Builder transformation, click the Output Settings tab.
2. To create a schema, click New > Create New Schema.
The New Hierarchical Schema page appears.
3. Enter a name for the schema. Optionally, enter a description of the schema.
4. Browse to select a project location.
5. To select a schema or sample file, click Upload. Click Choose File and browse for an XSD file or select a
sample JSON file, and then click OK.
When you add a JSON sample file, Data Integration generates a schema from the sample.
6. If you selected an XSD file with multiple possible root elements, select a root from the drop-down menu.
7. If you selected a schema that refers to another schema, you must also upload the referenced schema.
To upload the referenced schema, click Upload, browse for the referenced schema file and click OK.
8. To save the hierarchical schema, click OK.
1. In the Properties panel of the Hierarchy Builder transformation, click the Output Settings tab.
2. To create a schema, click New > Auto-generate from sample file.
The Create Intelligent Structure Model from Sample File page appears.
3. Enter a name for the model. Optionally, enter a description of the model.
4. Browse to select a project location.
5. Browse to select a sample file and then click Create.
Data Integration creates a model and associates it with the transformation.
Field mapping
Configure the field mapping in a Hierarchy Builder transformation to define which relational elements link to
schema elements to provide the hierarchical output. Configure the field mapping on the Field Mapping tab of
the Properties panel. If you have more than one group, use the field mapping editor to define primary and
foreign keys. You also use the field mapping editor to link relational fields to schema elements.
To the left, the field mapping editor shows the relational fields. To the right, the editor shows the schema
elements. In this example, the transformation has two relational groups so you must designate primary and
foreign keys. In this example, the NSId field is a primary key in the NewSource group, and the NSLeadId field
is the foreign key that links to the NewSource1 group. The NS1Id field is a primary key in the NewSource1
group.
A primary key is signified by the primary key icon in the Key column. A foreign key is signified by the foreign
key icon.
To link a relational field to a schema element, drag the relational field to the schema element. The Mapped Field column shows the relational field to which the schema element is mapped.
If the input relational fields constitute just one group, the data will be treated as denormalized input and there
is no need to define primary or foreign keys.
1. In the Properties panel of the Hierarchy Builder transformation, click the Field Mapping tab.
The field mapping editor displays the transformation input elements and output fields. To the left, the
editor shows the relational fields. To the right, the editor shows the schema elements.
2. To search for a relational field, enter the characters to search for in the Search field, and click Search.
3. To designate a relational field as a primary or foreign key, click the Key column of the field.
A key selector window appears.
4. Select whether the key is a primary key or foreign key.
5. If the key is a foreign key, select the group to which the field relates as a foreign key.
6. To remove a primary or foreign key, click the Key column of the field and select Not a key.
7. To clear all the keys, click the action menu at the top of the Hierarchy Fields panel, and then select Clear
Keys.
1. In the Properties panel of the Hierarchy Builder transformation, click the Field Mapping tab.
The field mapping editor displays the transformation input elements and output fields. To the right, the
editor shows the schema elements. To the left, the editor shows the relational input fields.
2. To search for a relational field, enter the characters to search for in the Search field, and click Search.
3. To view child schema elements, expand a field.
4. To search for an element, enter the characters to search for in the Search field, and click Search.
5. To link a relational input field to a schema element, drag the relational field to the schema element.
6. To automatically link a relational input field to a schema element, select a schema element, click the
action menu at the top of the Hierarchy Fields panel, and then select Map Selected.
7. To unlink a relational input field from a schema element, select one of the following options:
• Select a schema element. In the Action column, click the delete icon.
• Select a schema element. Click the action menu at the top of the Hierarchy Fields panel and select
Unmap Selected.
8. To clear all the links, click the action menu at the top of the Hierarchy Fields panel, and then select Clear
Mapping.
Tracing Level: Detail level of error and status messages that Data Integration writes in the session log. You can choose terse, normal, verbose initialization, or verbose data. Default is normal.
Transformation Scope: Specifies how Data Integration applies the transformation logic to incoming data. Select one of the following options:
- Transaction. Applies the transformation logic to all rows in a transaction. Choose Transaction when a row of data depends on all rows in the same transaction, but does not depend on rows in other transactions.
- All Input. Applies the transformation logic on all incoming data. When you choose All Input, Data Integration drops incoming transaction boundaries. Choose All Input when a row of data depends on all rows in the source.
You need to configure a hierarchical schema that uses a schema file to define the hierarchy of the output
data.
The following example shows the schema hierarchy that you want to use:
<?xml version="1.0" encoding="UTF-8"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns="http://www.itemfield.com"
targetNamespace="http://www.itemfield.com" elementFormDefault="qualified">
<xs:element name="Employees">
<xs:complexType>
<xs:sequence>
<xs:element name="Name" type="xs:string" minOccurs="0"/>
<xs:element name="Address" type="xs:string" minOccurs="0" maxOccurs="unbounded"/>
<xs:element name="Employee" minOccurs="1" maxOccurs="unbounded">
<xs:complexType>
<xs:sequence>
<xs:element name="EmployeeID" type="xs:string" minOccurs="0"/>
<xs:element name="Department" type="xs:string" minOccurs="0"/>
To produce the hierarchical output, use a Hierarchy Builder transformation in a mapping to transform the relational input data.
In the Mapping Designer, you add two source objects that are flat files that contain the paths to the data files
that you want to parse. The following image shows one of the Source transformations:
You add a Hierarchy Builder transformation and use the name NewHierarchyBuilder. Configure it to use the hierarchical schema that you created.
To map the relational fields to the hierarchical output, in the Field Mapping tab, select primary and foreign
keys. Then select which relational fields are linked to schema elements for the hierarchical output.
Run the mapping to write the data in a hierarchical format to the Target transformation.
You can configure a hierarchical schema that defines the expected hierarchy of the output data from a
sample file or schema file. The Hierarchy Parser transformation converts hierarchical input based on the
hierarchical schema that you associate with the transformation. You can use an existing hierarchical schema,
or configure one.
Note: To create a JSON-based schema object, use a JSON sample, not a JSON schema.
To parse complex hierarchical structures, consider using the Structure Parser transformation for a more
comprehensive handling of hierarchical file inputs. For more information, see Chapter 31, “Structure Parser
transformation” on page 352.
To use the Hierarchy Parser transformation, you need the appropriate license.
Handling of Boolean data types
The Hierarchy Parser transformation always returns 0 for Boolean data type input.
• xsd:any
• xsd:type
• mixed
• xsi:type
• default values
• fixed values
• no type, for example <xs:element name="A" maxOccurs="unbounded"/>
To parse large or complex XSD files that contain more than 10,000 elements and attributes, recursive elements, or complex deep hierarchies, use the Structure Parser transformation. For more information, see Chapter 31, “Structure Parser transformation” on page 352.
Sample or schema
When you create a hierarchical schema, you import a JSON sample file or an XSD schema as the basis of the
hierarchical schema.
It is recommended to use a schema, if available. The schema must not contain recursive elements.
If you use a sample, ensure that it is representative of the data that you expect to process with the
transformation and that it is comprehensive. Ensure that the sample contains all the possible fields that the
transformation might process, including permutations regarding values and types. The lengths of fields must
be representative.
Hierarchical schemas
A hierarchical schema is based on a schema file or sample file that you import to Data Integration. If you
import a sample file, Data Integration generates a schema based on the structure of the sample file. The
schema defines the expected hierarchy of the input data.
You can create a hierarchical schema in two ways. You can create a standalone hierarchical schema that can
be associated with any transformation that you choose. Alternatively, you can create a hierarchical schema
from within a specific transformation.
When you create a standalone hierarchical schema, you import a JSON sample file or XSD file as the basis of
the schema.
You can associate the hierarchical schema with any transformation, whether you create it as a standalone
hierarchical schema or as part of a specific transformation.
Input settings
The Hierarchy Parser transformation converts hierarchical input to relational data based on the hierarchical
schema that you associate with the transformation.
You can select the input type on the Input Settings tab of the Properties panel. The input type can be a buffer
or a file.
Select the buffer input type if the source transformation passes the data itself to the Hierarchy Parser transformation. Select the file input type if the source transformation passes a path to an input file instead of the data.
On the Input Settings tab you can choose to select an existing hierarchical schema and add it to the
transformation, or create a hierarchical schema from an XSD file or a JSON sample file.
1. In the Properties panel of the Hierarchy Parser transformation, click the Input Settings tab.
2. Click Select.
The Select Schema dialog box appears.
3. Select a hierarchical schema from the list.
4. To search for a hierarchical schema, select the search criteria, enter the characters to search for in the
Search field, and click Search.
You can search for a hierarchical schema by name or description. You can sort the hierarchical schema
by name, description, or date last modified.
5. Select the hierarchical schema to include in the Hierarchy Parser transformation and click OK.
The selected hierarchical schema appears in the Properties panel.
Automap
• Exact Field Name. Data Integration matches fields of the same name.
• Smart Map. Data Integration matches fields with similar names. For example, if you have an incoming field Employee_Name and a hierarchical schema input field Emp_Name, Data Integration automatically links the Employee_Name field with the Emp_Name field.
You can use both Exact Field Name and Smart Map in the same field mapping. For example, use Exact
Field Name to match fields with the same name and then use Smart Map to map fields with similar
names.
You can undo all automapped field mappings by clicking Automap > Undo Automap. To unmap a single
field, select the field to unmap and click Actions > Unmap.
Data Integration highlights newly mapped fields. For example, when you use Exact Field Name, Data
Integration highlights the mapped fields. If you then use Smart Map, Data Integration only highlights the
fields mapped using Smart Map.
Options
Controls the fields that appear in the Incoming Fields list. Show all fields, unmapped fields, or mapped
fields. Determine how field names appear in the input field list. Use technical field names or labels.
Field mapping
Configure the field mapping in a Hierarchy Parser transformation to define which schema elements provide
the relational output. Configure the field mapping on the Field Mapping tab of the Properties panel. Use the
field mapping editor to select or exclude schema elements. When you exclude schema elements, the editor
removes the corresponding relational fields.
To the left, the field mapping editor shows the schema elements. To the right, the editor shows the relational
fields. The transformation creates a separate output group for each multiple-occurring input element. The
transformation also creates primary and foreign keys and assigns them to the groups. A primary key is
signified by the prefix PK_ in the name of the field. A foreign key is signified by the prefix FK_.
For each relational field, the Relational Fields panel shows the XPath expression of the hierarchy element
from which the relational field was mapped.
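For example, with a schema in which a repeating Emp_Details element contains a repeating Dependents
element, the transformation might produce an Emp_Details output group with a PK_Emp_Details field and a
Dependents output group with a matching FK_Emp_Details field. The group and key names shown here are
illustrative; the actual names depend on the schema that you associate with the transformation.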
When you select to include schema elements, the editor displays corresponding relational fields to the right.
When you exclude schema elements, the editor removes the corresponding relational fields.
You can select to include or exclude all child elements nested under an element. Alternatively, you can select
to include or exclude immediate child elements that are one hierarchy level down.
You can select to denormalize relational output. When you denormalize relational output, all the hierarchy
elements you select to map are mapped to relational fields in one group.
1. In the Properties panel of the Hierarchy Parser transformation, click the Field Mapping tab.
The field mapping editor displays the transformation input elements and output fields. To the left, the
editor shows the schema elements. To the right, the editor shows the relational output fields.
2. To view child elements, expand an element.
3. To search for an element, enter the characters to search for in the Search field, and click Search.
4. To denormalize the relational output, in the Format field select Denormalized, then select elements to
include in the relational output. By default, the relational output is not denormalized, and the setting for
the Format field is Relational.
5. To include an element without child elements in the relational output, select the element.
6. To exclude an element without child elements from the relational output, clear the element.
7. For an element with child elements, you can choose from the following options:
• To select to include all child elements in the relational output, right-click the element and select Map
all descendants.
• To select to include immediate child elements in the relational output, right-click the element and
select Map immediate children.
• To select to exclude all child elements in the relational output, right-click the element and select
Unmap all descendants.
• To select to exclude immediate child elements in the relational output, right-click the element and
select Unmap immediate children.
8. To search for a relational field, enter the characters to search for in the Search field, and click Search.
9. To exclude a relational output field from the output, click the field.
Output fields
When you select schema elements to use in the Hierarchy Parser transformation, the schema output fields
appear on the Output Fields tab of the Properties panel.
The Mapping Designer displays the name, type, precision, scale, and origin for each output field in each
output group.
To edit the precision of the output fields, click the precision and enter the precision you require.
You can edit the transformation output fields, except for primary key and foreign key fields.
You need to configure a hierarchical schema that uses a schema file to define the hierarchy of the input data.
The following example shows the schema hierarchy that you want to use:
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns="http://www.itemfield.com"
    targetNamespace="http://www.itemfield.com" elementFormDefault="qualified">
    <xs:element name="root">
        <xs:complexType>
            <xs:sequence>
                <xs:element name="Emp_Details" minOccurs="0" maxOccurs="unbounded">
                    <xs:complexType>
                        <xs:sequence>
                            <xs:element name="employee">
                                <xs:complexType>
                                    <xs:sequence>
                                        <xs:element name="Employeeid" type="xs:short"/>
                                        <xs:element name="Name">
                                            <xs:complexType>
                                                <xs:sequence>
                                                    <xs:element name="Firstname" type="xs:string"/>
                                                    <xs:element name="Lastname" type="xs:string"/>
                                                </xs:sequence>
                                            </xs:complexType>
                                        </xs:element>
                                        <xs:element name="Dependents" minOccurs="1" maxOccurs="unbounded"/>
                                    </xs:sequence>
                                </xs:complexType>
                            </xs:element>
                        </xs:sequence>
                    </xs:complexType>
                </xs:element>
            </xs:sequence>
        </xs:complexType>
    </xs:element>
</xs:schema>
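For reference, a minimal XML input file that conforms to this schema might look like the following. The
values are illustrative only:
<root xmlns="http://www.itemfield.com">
    <Emp_Details>
        <employee>
            <Employeeid>1001</Employeeid>
            <Name>
                <Firstname>Asha</Firstname>
                <Lastname>Rao</Lastname>
            </Name>
            <Dependents>Spouse</Dependents>
        </employee>
    </Emp_Details>
</root>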
To parse the input data, use a Hierarchy Parser transformation in a mapping to transform the data from the
hierarchical input. In the Mapping Designer, you add a source object that is a flat file containing the path to
the data that you want to parse.
You add a Hierarchy Parser transformation and name it NewHierarchyParser. Configure it to use the
hierarchical schema that you created.
You connect the source object to the NewHierarchyParser transformation. To map the incoming data to the
fields of the transformation, select the NewHierarchyParser transformation. In the Input Field Selection tab,
map the selected incoming field from the source transformation to the NewHierarchyParser hierarchical
schema input field.
To map the data to relational fields, in the Field Mapping tab, select which schema elements are reflected as
relational fields to the output.
Run the mapping to write the data in a relational format to the Target transformation.
Hierarchy Processor transformation
In an elastic mapping, you can use the Hierarchy Processor transformation to process data from complex
data sources. The transformation can read hierarchical or relational input and convert it to relational,
hierarchical, or flattened denormalized output.
The Hierarchy Processor transformation is an active transformation that processes hierarchical fields that
represent a struct or an array.
The Hierarchy Processor transformation includes the following data processing strategies:
• Hierarchical to relational. Converts one hierarchical input group to multiple output groups, which can
include delimited flat files or relational files.
• Relational to hierarchical. Converts up to five relational input groups to one hierarchical output group.
• Hierarchical to hierarchical. Converts one or more hierarchical input groups to one hierarchical output
group with a different schema.
• Hierarchical to flattened. Converts one hierarchical input group to one flattened denormalized output
group.
The data that you pass to or from the Hierarchy Processor transformation must be through a Microsoft Azure
Data Lake Store V2 or an Amazon S3 V2 connection.
To use the Hierarchy Processor transformation, your organization must have the appropriate licenses.
Hierarchical to relational data processing
In a mapping that converts hierarchical data to relational output, you can process one hierarchical input
group and write the data to multiple relational output groups. The output data can be written as normalized
relational data or to delimited flat files.
In this mapping, the data source is a complex file containing customer and order data. The data flows into
two relational files: a file with customer data and a file with order data.
For more information, see “Defining relational output with the Hierarchy Processor transformation” on page
155.
In this mapping, the source input includes three relational files: customer address data, purchase orders, and
purchase order details. The data flows into one complex file that combines data from the three source files.
For more information, see “Defining hierarchical output with the Hierarchy Processor transformation” on page
157.
You can convert hierarchical input from one schema to a different schema. You can read data from primitive
fields, structs, and arrays and arrange the data in a different structure.
You can also transform the data that you are converting. You can join data sources, configure group by and
order by fields, filter for specific information, and aggregate incoming and output data.
The following image shows an example of an elastic mapping that uses a Hierarchy Processor
transformation to convert hierarchical data to hierarchical data of a different structure:
In this mapping, the data source is a JSON file that contains orders and items data. The data flows into a
different JSON file that contains order information. The Hierarchy Processor transformation is selected, and
the Hierarchy Processor tab shows the structure of the incoming and output data.
For more information, see “Defining hierarchical output with the Hierarchy Processor transformation” on page
157.
In a mapping that converts hierarchical data to flattened data, you can read from one hierarchical input group
and write to one flattened output group. You can read data from primitive fields, structs, and arrays and
quickly create a fully denormalized output file. You can also choose to flatten and denormalize only a portion
of the incoming fields.
The following image shows an example of an elastic mapping that uses a Hierarchy Processor
transformation to convert hierarchical data to flattened data:
In this mapping, the data source is a JSON file that contains personal and vehicle data. The data flows into a
flattened file that contains vehicle information. The Hierarchy Processor transformation is selected, and the
Hierarchy Processor tab shows the structure of the incoming and output data.
For more information, see “Defining flattened output with the Hierarchy Processor transformation” on page
158.
The following image shows the Hierarchy Processor tab with hierarchical input and relational output:
1. Output format. Select Relational to convert incoming hierarchical data into one or more relational output groups.
2. Input groups, incoming fields. Use these fields to map to the output fields.
3. Output group, output fields. Use these fields to create the complex output file.
4. Generate keys. Optionally generate keys for the input group to define relationships between output groups.
5. Add incoming field to output group. Use to add fields to the output group.
6. Output field names. Click on a field name to modify the field name or data type.
7. Data Configuration icons. Use to configure the output groups and fields.
8. Expression. Click on an expression to view or customize the output field expression.
9. Add or delete output field. Use to create or delete output fields.
Tip: Use the maximize icon and resize the Incoming Fields panel or Output Fields panel to see the information
you need.
To define a Hierarchy Processor transformation with relational output, perform the following tasks:
The following image shows the Hierarchy Processor tab with the option to generate unique keys:
When you generate unique keys, you generate a primary key for the input group and a key for every array
element within the input group. Each key is a combination of a global unique ID and a value that increases for
each additional field. The generated keys can be mapped to the output groups just like any other incoming
field.
Mapping key fields from an input parent element to the output group of the child data set gives the output
groups a primary key and foreign key relationship. The primary key and foreign key relationship is generated
on the output side based on how the generated keys are mapped.
The following image shows the Hierarchy Processor tab with relational input and hierarchical output:
1. Output format. Select Hierarchical to construct a hierarchical schema for incoming fields.
2. Input groups, incoming fields. Use these fields to map to the output fields.
3. Output fields. Use these fields to create the complex output file.
4. Add incoming field to output group. Use to add fields to the output group.
5. Output field names. Click on a field name to modify the field name or data type.
6. Data Configuration icons. Use to configure the output groups and fields.
7. Expression. Click on an expression to view or customize the output field expression.
8. Add or delete output field. Use to create or delete output fields.
Tip: Use the maximize icon and resize the Incoming Fields panel or Output Fields panel to see the information
you need.
To define a Hierarchy Processor transformation with hierarchical output, perform the following tasks:
The following image shows the Hierarchy Processor tab with hierarchical input and flattened output:
1. Output format. Select Flattened to convert hierarchical input into denormalized output.
2. Incoming fields. View the incoming data schema and field types.
3. Output fields. View the output fields as you create the output file.
4. Configure the output. Select input fields to build the flattened output file schema.
5. Output field names. Click on a field name to modify the field name.
6. Expression. View the field expressions to determine the input to output field mappings.
7. Delete. Use to delete output fields.
Tip: Use the maximize icon and resize the Incoming Fields panel or Output Fields panel to see the information
you need.
To define a Hierarchy Processor transformation with flattened output, perform the following tasks:
To add output fields, click the Add link on the Hierarchy Processor tab next to the incoming field or input
group that you want to add.
You can add fields in the following ways based on the input field data type and the output type:
Adds the selected incoming field to the selected output group or field.
Adds all incoming fields in the group to the selected output group or field.
You can select this option when you add incoming fields with primitive data types to the output.
Adds all children under the field to the output, including all arrays and structs. If the incoming field
contains arrays, Data Integration creates a separate output group for each array.
You can select this option when you select an incoming struct or array field, and the output is relational.
Adds all single occurring children under the field to the output, including single occurring child fields that
are nested under a struct. Does not add single occurring children that are nested under arrays.
You can select this option when you select an incoming struct or array field, and the output is relational.
Adds all primitive single occurring children under the field to the output, including the primitive, single
occurring child fields that are nested under a struct. Does not add single occurring children that are
nested under arrays.
You can select this option when you select an incoming struct or array field, and the output is
hierarchical.
Preserves the hierarchical structure of the selected field in the output. For example, if you add an array
of structs to the output group, then the output group contains an array of structs with the same
structure.
You can select this option when you select an incoming struct or array field, and the output is
hierarchical.
Flattens an array of primitives into a primitive field. Creates one output record for each element in the
array.
You can select this option when you select an incoming struct or array field, and the output is
hierarchical.
Flattens an array of structs into a struct field. Creates one output record for each element in the array.
You can select this option when you select an incoming struct or array field, and the output is
hierarchical.
After you add the incoming fields to the output group, you can rename or delete output fields as needed.
Hierarchical data with arrays cannot be mapped to the same output group when you select this option. If you
add an incoming field that contains hierarchical data with arrays to the output group, Data Integration creates
a separate output group for each array.
Example
You want to add all fields in the Input field to relational output groups. The incoming fields contain the arrays
Customer and Order.
Add the Input field to the output fields by selecting Add All Descendants.
Data Integration maps the fields in the Customer array to the first output group. It maps all fields in the Order
array to the second output group.
When you select this option, the incoming field must have child objects.
If the field you select contains an array, the array and its children are not added because an array can contain
multiple elements.
Example
You want to extract customer information from the Customer array. Each output record should contain
information about one customer.
Add the Customer array to the output group and choose Add Single Occurring Children.
All single occurring children under Customer, including the children under the struct FullAddress, map to the
output group.
If you choose Add Single Occurring Children on the Customers struct, an error message displays because
there are no single occurring children under that top-level struct.
When you select this option, the incoming field must have child objects.
If the field you select contains an array, the array and its children are not added because an array can contain
multiple elements.
Example
You want to extract the make, model, company, and policy number from an array of vehicle records. Each
output record should contain information about one vehicle.
Add the vehicle array to the output group and choose Add Primitive Single Occurring Children.
Note that the date field is not added to the output group because it is under a child array of the selected field
and is not single occurring.
When you add a nested array and select this option, the records that get created vary based on how you
configure the data source for the output group.
Example
You want to extract the description information from a nested array of maintenance records. The description
information is in an array of strings. You want the output data to also be in an array of strings.
Add the description array to the output group and choose Preserve Incoming Field.
When Data Integration creates the output array, it sets the data source for the description array to
Input.vehicle.vehicle.maintenance.maintenance.desc.elem, indicating that the information for the
output array comes from the elements in the desc array in the Input group. The data source for the Output
group determines the structure of the output records.
This option creates one record for each element in the array.
Example
You want to extract the description information from a nested array of maintenance records. The description
information is in an array of strings. You want to flatten the output into a string field.
Add the description array to the output group and choose Flatten Selected Array.
The output contains one record for each occurrence of description in the incoming data.
This option creates one record for each element in the selected array.
Example
You want to flatten an array of vehicle records into a struct without flattening the child array of maintenance
records.
Add the vehicle array to the output group and choose Add Selected Array as Struct.
The output contains one record for each element in the vehicle array.
For example, the incoming data contains the following record which contains data about two vehicles:
[
{
"vehicle": [
{
"make": "Toyota",
"model": "Corolla",
"insurance": {
"company": "Allstate",
"policy_num": "AS12876"
},
"maintenance": [
{
"date": "01/01/2020",
"description": ["oil filter1", "oil filter2"]
},
{
"date": "01/08/2020",
"description": ["tire rotation1", "tire rotation2"]
}
]
},
{
"make": "Toyota",
"model": "RAV4",
"insurance": {
"company": "Allstate",
"policy_num": "AS2033"
},
"maintenance": [
{
"date": "01/02/2020",
"description": ["air filter replacement1", "air filter replacement2"]
},
{
"date": "01/08/2020",
"description": ["battery replacement1", "battery replacement2"]
}
]
}
]
}
]
After you add incoming fields to an output group, you can modify the fields by clicking on the output field
name on the Hierarchy Processor tab. You can also add and define output fields manually.
Note: The output field properties that appear depend on the data type.
Property Description
Child Of
The parent field or group that this field belongs to. The name structure describes the group, parent fields,
and struct name. For example: OUTGROUP.Grandparent.Parent.struct_name
Type
The data type of the current field. For relational output, you can choose a primitive data type. For
hierarchical output, you can choose either a primitive or complex data type.
Array Element Precision
The precision for the current array element. Used when creating the target data.
Array Element Scale
The scale for the current array element. Used when creating the target data.
Struct Name
The struct name for the current struct field.
Element Struct Name
The element struct name for the current array of structs field.
The following table describes the properties for aggregating hierarchical output data:
Property Description
Aggregate Options: Optionally, use this field to aggregate values in an output field array
Indicates that you want to aggregate output data into the current field.
Output Field
Specify the sibling array with fields to aggregate. The array must be a child of the current output field or
group.
For example, the Output group contains an array with orders information. The Orders array contains the
OrderPrice field, which stores the price for each order. You want to find the total order price for each
company.
To find the total order price, add a field called TotalOrderPrice to the output group.
Edit the TotalOrderPrice field. Select Use this field to aggregate values in an output field array, and select
the Orders array as the output field for which you want to aggregate values. Configure the following
expression for the TotalOrderPrice field:
SUM(:fld.{Output.Orders.Orders.OrderPrice})
When the output data format is relational, you can configure filter conditions.
When the output data format is hierarchical, you can configure data source join and filter conditions as well
as group by and order by fields. You can aggregate on both the input and output data.
Order of operations
When the output data format is hierarchical, the order of operations is as follows:
When you modify the input group, the following configurations are also updated, if they refer to the input
group:
• Expressions
• Join conditions
• Filter conditions
Note: When the output data format is relational, you cannot change the input group name.
About aggregation
When the output data format is hierarchical, you can aggregate both the input and output data.
For information about aggregating input data, see “Configure group by fields” on page 176.
For information about aggregating output data, see “Configuring output groups and fields” on page 166.
Note: When the output data format is relational, you cannot aggregate data.
If you select a field as a data source, you have access to the following objects:
You can configure filters to exclude certain records. You can also specify group by fields for aggregating the
data and order by fields for sorting records.
If the output is relational, the data source for the output groups is always the input group or an incoming
field. For example, you add an array to the output group. If you add single occurring children, the data source
for the output group is the input group. If you add all descendants, the data source for each output group is
an incoming field array.
If the output is hierarchical, the data sources for the output group and fields can vary based on the output
data structure.
When you use the incoming data, the incoming data is used to populate the children of the array or struct.
When you choose to inherit the parent's data sources, the data that is transformed into the parent output field
is used to populate the children of the array or struct. Data transformations, such as joins and filters, that are
applied to the parent field are preserved. You can apply filters to the field to further filter the data, but you
cannot configure data sources, joins, group by fields, or order by fields.
For example, you are reading data from a relational table of customer records in which the customer ID is
unique. The incoming data contains the following records:
CustID,Name,Street,City,State,ZIP
00234,Ravindra Singh,123 6th St. Apt. 5A,Boston,MA,02134
14416,Melissa Clark,11 Winding Way,Watch Hill,RI,02891
You want to write the customer address fields to a struct.
In the Output Fields panel, set the data source for the Output group to Input and the data source for the
Address struct to Inherit parent's data sources (Output). When you run the mapping, Data Integration
creates one record for each occurrence of CustID in the input data and populates the struct with the address
data that corresponds to the customer ID in the output:
{
"CustID":"00234",
"Name":"Ravindra Singh",
"Address":{
"Street":"123 6th St. Apt. 5A",
"City":"Boston",
"State":"MA",
"ZIP":"02134"
}
}
{
"CustID":"14416",
"Name":"Melissa Clark",
"Address":{
"Street":"11 Winding Way",
"City":"Watch Hill",
"State":"RI",
"ZIP":"02891"
}
}
If you set the data source for the Address struct to Input, then you must also configure the following filter
condition on the struct to get the same output: :fld.{Input.CustID} = :fld.{Output.CustID} AND :fld.
{Input.Name} = :fld.{Output.Name}. For more information about configuring filter conditions, see
“Configure filter conditions” on page 174.
When the output field is an array that inherits its parent's data, Data Integration creates an array with one
element.
For example, you want to extract the description information from a nested array of maintenance records in a
JSON file. The description information is in an array of strings. You want the output data to also be in an
array of strings.
When Data Integration creates the output array, it sets the data source for the description array to
Input.vehicle.vehicle.maintenance.maintenance.desc.elem, indicating that the information for the
output array comes from the elements in the desc array in the Input group. By default, Data Integration sets
the data source for the Output group to Input.
The data source configuration for the Output group determines how Data Integration creates the output
records.
To collate all descriptions into one output record, keep the data source for the Output group as Input. This
produces the following output:
{"description":["battery replacement2","battery replacement1","air filter
replacement2","air filter replacement1","tire rotation2","tire rotation1","oil
filter2","oil filter1"]}
To create one output record for each occurrence of vehicle in the incoming data, set the data source to
Input.vehicle.vehicle. In this case, the output data contains one record for each vehicle:
{"description":["tire rotation2","tire rotation1","oil filter2","oil filter1"]}
{"description":["battery replacement2","battery replacement1","air filter
replacement2","air filter replacement1"]}
To create one output record for each occurrence of maintenance in the incoming data, set the data source to
Input.vehicle.vehicle.maintenance.maintenance. In this case, the output contains one record for each
maintenance record:
{"description":["oil filter2","oil filter1"]}
{"description":["tire rotation2","tire rotation1"]}
{"description":["air filter replacement2","air filter replacement1"]}
{"description":["battery replacement2","battery replacement1"]}
If you have a data source conflict, the transformation remains invalid until you resolve the conflict.
Additionally, you cannot configure joins, filters, order by fields, or group by fields until you resolve the
conflict.
If you select both Array1 and Array2 as data sources for the Output group, there is no way to determine which
data source provides the data for Field1 and Field2. In this case, Data Integration displays a conflicting data
sources error.
To resolve the conflict, remove one of the data sources from the Output group.
1. Click the Data Sources icon for the output group, array field, or struct field.
2. Add the data sources.
If you configure data sources for an output group, add the data sources. If you configure data sources
for an array or struct field, you can add data sources or inherit the parent's data sources.
3. Validate the configuration.
4. Click Save.
Configure the join conditions for the output groups on the Hierarchy Processor tab.
The transformation uses a combination of input and output fields to filter the input data to match the data
constructed for the parent output element for a particular output row. The matching of input data with the
parent element output data is accomplished by using any of the ancestor's primitive child fields.
Configure filter conditions for output groups, array fields, or struct fields on the Hierarchy Processor tab.
1. Click the Filter Condition icon for the output group or field.
2. Click Configure Filter Condition.
3. Select fields and built-in functions to create the expression.
4. Validate the expression.
5. Click Save.
For example, you are converting relational data to a JSON file. The incoming data is in a relational table that
contains orders information. The orders table contains multiple rows for each order because each order can
contain several products.
The following image shows the structure of the incoming and output fields:
In the Output Fields panel, set the data source for the Output group to Input, and configure the group by field
as Input.OrderNumber to remove duplicate records from the output. Set the data source for the
ProductDetails array to Input.
To ensure that the details in the ProductDetails array correspond to the order number in the output, configure
the following filter condition for the array:
:fld.{Input.OrderNumber}= :fld.{Output.OrderNumber}
To further refine the records, use an AND condition in the filter. For example, to exclude records in which the
product type is "Candy," configure the following filter condition:
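The combined condition might look like the following sketch, which assumes that the incoming product type
field is named Input.ProductType; substitute the actual field name from your source:
:fld.{Input.OrderNumber} = :fld.{Output.OrderNumber} AND :fld.{Input.ProductType} != 'Candy'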
When you do this, the output contains one record for each order, and incoming records with the product type
"Candy" are excluded.
1. Click the Group By fields icon for the output group or the array or struct field.
2. Add the input fields for aggregation.
3. Validate the configuration.
4. Click Save.
Note: The following conditions must be true for the sort operation to take effect:
1. Click the Order By fields icon for the output group or the array or struct field.
2. Add the input fields to order by and sort the data in ascending or descending order.
3. Rearrange the fields to adjust the sort order.
4. Validate the configuration.
5. Click Save.
When an input file is in JSON format, the schema often spans across multiple lines. For example:
{
"Name": "Tom",
"Surname": "Day",
"City": "Redwood City",
"State": "CA",
"Country": "USA",
"Zip": "94063"
}
The following example shows the same JSON-formatted schema in a single line:
{"Name":"Tom","Surname":"Day","City":"Redwood City","State":"CA","Country":"USA","Zip":"94063"}
To read JSON-formatted input that spans across multiple lines, set the following advanced session
property in the mapping task:
advanced.custom.property infaspark.json.parser.multiLine=True
Note: If this property is not set, a multiple-line JSON input file will result in an output file with null values
in every column.
When the output is a JSON-formatted file, the Spark engine writes each output record to a separate file
by default.
To write the output records to one JSON-formatted file, set the following Spark session property in the
mapping task:
spark.sql.shuffle.partitions 1
The more joins, child fields, nested fields, and flattened arrays that the Hierarchy Processor transformation
contains, the more likely the mapping will exceed the field limit.
If the mapping exceeds the limit, the following message appears in the mapping compilation log:
[LDTM_0502] The mapping [<mapping name>] failed because the number of fields in the
compiled mapping exceeds the threshold: [7,000,000]. Number of fields: [<actual
number>]. Create multiple mappings to process the data incrementally.
To resolve the error, reduce the size of the mapping by creating multiple mappings that process the complex
data incrementally.
• Process hierarchical data and write the data to target files in a relational or delimited format.
• Process relational files and write the data to target files in hierarchical format.
• Read data from a hierarchical file and write it to a hierarchical file that uses a different schema.
• Flatten hierarchical data and write denormalized output data.
Read this section for examples of these use cases.
A customer order file contains the current customer contact information and the recent orders for those
customers. The order file is in hierarchical JSON format and is generated by your company's cloud
application.
Using the order file data, you want to create a relational customers table to use for an update on the
customer information in the master database. Separately, you want to analyze the orders that have been
increasing. You can use the order file to create a separate delimited orders file for the analysis.
Use a Hierarchy Processor transformation in a mapping to transform the data from the hierarchical input to
relational and delimited output.
1. Ensure that you have access to an Amazon S3 V2 Connector for the S3 source and target objects.
2. Add a Source transformation that reads hierarchical data from the source JSON file.
Property Value
Connection Amazon S3 V2
5. In the Hierarchy Processor transformation, create the OutputCustomers output group to create the
relational customers data file.
The following image shows how to add incoming fields, which will create the output group:
6. Create the OutputOrders output group to create the delimited orders data file.
7. In the Hierarchy Processor tab, map Incoming Fields to Output Fields.
You can add fields individually or use the following options for struct and array fields:
• Add All Descendants. Adds all children under the field, including all arrays and structs. If the incoming
field contains arrays, Data Integration creates a separate output group for each array.
• Add Single Occurring Children. Adds all single occurring children under the field, even if the single
occurring child is nested under a struct.
8. Add a Target transformation to write the customers data output.
Property Value
Connection Amazon S3 V2
Property Value
Connection Amazon S3 V2
12. Link the OutputCustomers output group to the TargetCustomers Target transformation.
13. Link the OutputOrders output group to the TargetOrders Target transformation.
14. Run the mapping.
Use the Hierarchy Processor transformation to create purchase orders in hierarchical format.
You will perform the following high-level tasks to create and configure the target file:
• Filter by order number and ship-to address to build the shipping address struct.
• Order by item number and group by part number to build the items array of structs.
• Aggregate the item price and quantity.
• Aggregate the total price for each purchase order.
• Join the data sources to build output data for the hierarchical purchase orders.
• Filter same-day shipping information to build the same-day items array of structs.
The POHeader table contains basic information about the orders placed by customers:
The PODetail table contains details about the customer purchase orders:
To use the Hierarchy Processor transformation to create purchase orders in hierarchical format, perform the
following tasks:
1. Add a Hierarchy Processor transformation and change the output data format to Hierarchical.
2. Add the POHeader, PODetail, and Address tables as source objects.
3. Connect the source objects to the Hierarchy Processor transformation in the data flow.
4. In the Hierarchy Processor transformation, add the PurchaseOrder output group and connect the
target object in the data flow.
Use the following steps to create the output group with the basic purchase order data and add the ship-
to address.
1. Add all the incoming fields from POHeader to the PurchaseOrder output group.
2. Add a new output field with the following properties:
Property Value
Child Of PurchaseOrder
Name shipToAddress
Type struct
3. Add all the incoming fields from Address to the shipToAddress struct in the output group:
4. Delete the following fields that you do not need in the output group:
• PurchaseOrder.shipToAddress.OrderNumber
• PurchaseOrder.shipToAddress.AddressType
5. Add a filter condition for the PurchaseOrder.shipToAddress struct: :fld.
{Address.OrderNumber}=:fld.{PurchaseOrder.OrderNumber} AND :fld.
{Address.AddressType}='ShipTo'.
Use the following steps to add the purchase order details in the items array of structs. Configure the
data processing strategies to sort by item number, group by part number, and aggregate the incoming
quantity and price.
Property Value
Child Of PurchaseOrder
Name Items_arr
Type array
2. Add all the incoming fields from PODetail to the Items_arr array in the output group.
3. Delete the following field that you do not need in the output group:
PurchaseOrder.Items_arr.OrderNumber.
4. Add a filter condition for the PurchaseOrder.Items_arr array: :fld.{PODetail.OrderNumber}=:fld.
{PurchaseOrder.OrderNumber}.
5. Configure a group by field for the PurchaseOrder.Items_arr array: PODetail.PartNum.
6. Configure an order by field in ascending order for the PurchaseOrder.Items_arr array:
PODetail.ItemNum.
7. Update the field expression for PODetail.Quantity in the PurchaseOrder.Items_arr array: SUM(:fld.
{PODetail.Quantity}) to aggregate quantity.
8. Update the field expression for PODetail.Price in the PurchaseOrder.Items_arr array: SUM(:fld.
{PODetail.Price}) to aggregate price.
The following image shows the data configuration icons and expressions for the Items_arr array in the
output group.
Use the following steps to aggregate all the items in a particular purchase order, providing the total
price.
Property Value
Child Of PurchaseOrder
Name TotalPrice
Type bigint
Aggregate Options: This field will aggregate values in an output field array Enabled
The following image shows the aggregate options for the TotalPrice output field:
Use the following steps to add and configure the same-day items array of structs. Using a filter, a join,
and a field expression, you output only the items that were ordered and shipped on the same date.
Property Value
Child Of PurchaseOrder
Name SameDayItems
Type array
2. Add all the incoming fields from PODetail to the SameDayItems array in the output group.
3. Delete the following field that you do not need in the output group:
PurchaseOrder.SameDayItems.OrderNumber.
4. Add POHeader as a data source for PurchaseOrder.SameDayItems array.
5. Add a join condition for the PurchaseOrder.SameDayItems array with the following properties:
Property Value
The following JSON shows the PurchaseOrder target output after you run the mapping:
{
"OrderNumber": "1",
"Comment": "AppD for POD4",
"OrderDate": "2018-10-01 00:00:00.0",
"ConfirmDate": "2018-10-02 00:00:00.0",
"address_struct": {
"Name": "Tom",
"Street": "2100 Seaport blvd",
"City": "Redwood City",
"State": "CA",
"Country": "USA",
"Zip": "94063"
},
"Items_arr": [{
"itemNum": "1",
"ProductName": "AppD Agent for JVM",
"Quantity": 60,
"price": 500,
"comment": "JVM agents",
"shipDate": "2018-10-15 00:00:00.0",
"PartNum": "1"
}, {
"itemNum": "2",
"ProductName": "MySQL agents",
"Quantity": 6,
"price": 360,
"comment": "MySQL agents",
"shipDate": "2018-10-15 00:00:00.0",
The existing customer order file, CompanyOrders, contains the names of companies who have placed orders
and information about each order, including the price, date, shipping address, and ID numbers of ordered
items.
You want to restructure the shipping address into a struct and add a field to calculate the total price of all
orders for each company.
You will perform the following tasks to create and configure the target file:
1. Add all the incoming fields from the input to the output group. Set Add to Preserve incoming field.
The following image shows the Add Field dialog:
2. Verify that the data source for the output group is set to Input.
Perform the following steps to create a field that calculates the total price of all orders for each
company:
Property Value
Child Of Output
Name TotalOrdersPrice
Type double
The following image shows the aggregate options for the TotalOrdersPrice output field:
2. Configure the following field expression for Output.TotalOrdersPrice to aggregate the total price of
all orders for a company:
SUM(:fld.{Output.Orders.Orders.OrderPrice})
Perform the following steps to structure the order address in the output:
Property Value
Child Of Output.Orders.Orders
Name OrderAddress
Type struct
2. Add all the incoming fields from Orders to OrderAddress. Set Add to Add primitive single occurring
children.
3. Set the data source for OrderAddress to Use Output.
4. Delete the following fields from the OrderAddress struct that you do not need:
• Output.Orders.Orders.OrderAddress.OrderPrice
• Output.Orders.Orders.OrderAddress.OrderDate
5. Delete the following fields from the Orders output group that you do not need:
• Output.Orders.Orders.Street
• Output.Orders.Orders.City
• Output.Orders.Orders.State
• Output.Orders.Orders.Country
• Output.Orders.Orders.ZipCode
A shop maintenance file contains the customer and vehicle information for customers. The file is in
hierarchical JSON format and is generated by your company's shop application.
The following JSON shows the shop maintenance source input before you run the mapping:
{
"people": [{
"personal": {
"age": 20,
"gender": "M",
"name": {
"first": "John",
"last": "Doe"
}
},
"vehicles": [{
"type": "car",
"model": "Honda Civic",
"insurance": {
"policy_num": "HA12345"
},
"maintenance": [{
"desc": "oil change",
"cost": "111.50",
Perform the following high-level tasks to create and configure the output:
1. Add a Source transformation to the mapping with the shop maintenance file as a source object.
2. Add a Hierarchy Processor transformation to the mapping and connect shop maintenance as an
input source.
3. In the Hierarchy Processor transformation, select Flattened for the output format.
4. Add a Target transformation to the mapping, and connect the Hierarchy Processor transformation
output to this target object.
Use the following steps to create the output group with the vehicle shop maintenance data.
1. Select the top-level input group, which automatically includes all input fields in the hierarchy.
All the output fields are automatically created from the input you select.
2. Clear the personal struct, which automatically clears all the elements within the struct.
All the personal output fields are automatically deleted.
The following image shows the selected input fields and the resulting output fields:
3. Click on the output field for summary line1 and rename it to "Summary_line1."
4. Repeat the rename process for summary line2.
The following table shows the partially denormalized target output after you run the mapping:
car   Honda Civic   HA12345   oil change   111.5   2.0L 4-cyl   4.4 quarts   internet
Input transformation
The Input transformation is a passive transformation that you use to configure the data that you want to pass
into a mapplet. Use an Input transformation when you want a mapplet to receive data from an upstream
transformation in a mapping or mapplet.
Add input fields to define the data fields that you want to pass into the mapplet from the upstream
transformation.
You can add multiple Input transformations to a mapplet. Each Input transformation in a mapplet becomes
an input group when you use the mapplet in a Mapplet transformation. You must connect at least one input
group to an upstream transformation.
Input fields
Add input fields to an Input transformation to define the fields that you want to pass into the mapplet. You
must add at least one input field to each Input transformation.
Add input fields on the Input Fields tab. To add a field, click Add Fields, and then enter the field name, data
type, precision, scale, and optional description. The description can contain up to 4000 characters.
When you use the mapplet in a Mapplet transformation, map at least one input field to the upstream
transformation.
Java transformation
Extend Data Integration functionality with the Java transformation. The Java transformation provides a
simple, native programming interface to define transformation functionality with the Java programming
language.
You can use the Java transformation to quickly define simple or moderately complex transformation
functionality without advanced knowledge of the Java programming language. The Java transformation can
be an active or passive transformation.
The Secure Agent requires a Java Development Kit (JDK) to compile the Java code and generate byte code
for the transformation. Azul OpenJDK is installed with the Secure Agent, so you do not need to install a
separate JDK. Azul OpenJDK includes the Java Runtime Environment (JRE).
The Secure Agent uses the JRE to execute generated byte code at run time. When you run a mapping or
mapping task that includes a Java transformation, the Secure Agent uses the JRE to execute the byte code,
process input rows, and generate output rows.
To create a Java transformation, you write Java code snippets that define the transformation logic. Define
transformation behavior for a Java transformation based on the following events:
You cannot use the Java transformation with a Graviton-enabled cluster. For more information on a Graviton-
enabled cluster, see Data Integration Elastic Configuration.
Note: When you create a Java transformation, ensure that you review the Java code to verify that it is free
from potentially unsafe active content such as queries, remote scripts, or data connections before you run
the code in a mapping task.
Defining a Java transformation
To define a Java transformation, configure the transformation fields and properties, enter Java code snippets
on the Java tab, and compile the code.
Note: If you use third-party or custom Java packages, before you create a Java transformation, configure the
classpaths to use when you compile the code and when you run the mapping that contains the
transformation.
1. In the Mapping Designer, drag a Java transformation from the transformations palette onto the canvas
and connect it to the upstream and downstream transformations.
2. Configure incoming field rules and output fields for the transformation.
3. Configure the transformation properties.
4. Write the Java code snippets that define the transformation functionality.
5. Compile the code.
6. Locate and fix compilation errors.
Classpath configuration
If you use third-party or custom Java packages in the Java transformation, you must configure the classpath.
The Secure Agent includes the classes and resource files within the classpath when it compiles the Java
code. You do not need to configure the classpath if you use only built-in Java packages.
For example, you import the Java package converter in the Java transformation and define the package in
converter.jar. You must add converter.jar to the classpath before you compile the Java code. However, if you
import the package java.io, you do not need to set the classpath because java.io is a built-in Java package.
Set the classpath to each JAR file or class file directory associated with the Java package. Separate multiple
classpath entries with a semicolon (;) on Windows or a colon (:) on UNIX. The JAR or class files must be
accessible by the Secure Agent.
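For example, a classpath that includes a JAR file and a class file directory might look like
C:\JavaPackages\converter.jar;C:\JavaPackages\classes on Windows or
/opt/javapackages/converter.jar:/opt/javapackages/classes on UNIX. The directory and file names here are
hypothetical; use the locations of your own packages.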
If you use the Java transformation in an elastic mapping, the files must be stored in the following directory
on the Secure Agent machine:
<Secure Agent installation directory>/ext/ctjars
Additionally, consider the following guidelines for elastic mappings:
• If the Secure Agent machine stops unexpectedly and the agent restarts on a different machine, you must
add the JAR or class files to the same directory on the new machine.
• If you update the JAR or class files on the Secure Agent machine, the files take effect the next time you
run a job on the elastic cluster.
• To prevent long-running jobs from failing, do not update the files on the Secure Agent machine more than
four times while you have jobs running.
If you use a serverless runtime environment, the files must be stored in the supplementary file location. For
more information about the supplementary file location, see the Administrator help.
You can set the classpath in the following ways:
JVMClassPath property for the Secure Agent
The Secure Agent uses this classpath when you design and validate the Java transformation, run the
mapping from the Mapping Designer, or run the mapping task. This classpath applies to all mappings
and mapping tasks that run on the agent.
Set this property or set the CLASSPATH environment variable on the Secure Agent machine. You do not
need to set both classpath values.
CLASSPATH environment variable on the Secure Agent machine
The Secure Agent uses this classpath when you design and validate the Java transformation, run the
mapping from the Mapping Designer, or run the mapping task. This classpath applies to all mappings
and mapping tasks that run on the agent.
Set the CLASSPATH environment variable or set the JVMClassPath property for the Secure Agent. You
do not need to set both classpath values.
Design time classpath
The Secure Agent uses this classpath when you design and validate the Java transformation and when
you run the mapping from the Mapping Designer. This classpath is not used when you run the mapping
through a mapping task.
Set the design-time classpath when you want to test the transformation and neither the JVMClassPath
property nor the CLASSPATH environment variable contain the required packages. If you configured the
JVMClassPath property or the CLASSPATH environment variable to include the required packages, then
you do not need to configure the design time classpath.
You configure the design-time classpath in the Java transformation advanced properties.
Java Classpath session property
The Secure Agent uses this classpath when you run the mapping task. This classpath applies only to the
mapping task in which the property is set.
Set the Java Classpath session property when you want the classpath to apply to one mapping task but
not others. If you configured the JVMClassPath property or the CLASSPATH environment variable to
include the required packages, then you do not need to configure the Java Classpath session property.
Set the Java Classpath session property in the advanced session properties of the mapping task.
If you set multiple classpath values, the Secure Agent uses all of the classpaths that apply. For example, you
set the JVMClassPath property for the Secure Agent, the CLASSPATH environment variable, and the design
time classpath in the Java transformation. When you compile the Java code in the Java transformation or run
the mapping through the Mapping Designer, the Secure Agent uses all three classpaths. When you run the
mapping through a mapping task, the Secure Agent uses the JVMClassPath and the CLASSPATH
environment variable only.
Warning: If you set multiple classpaths, ensure that they do not create multiple copies of a class or resource
which can cause runtime errors.
Option Value
Type DTM
You configure the CLASSPATH environment variable differently on Windows and UNIX.
1. Open the Advanced System Properties from the Windows Control Panel.
2. Click Environment Variables.
3. Under System variables, click New.
4. Set the variable name to CLASSPATH and the variable value to the classpath.
5. Click OK.
Restart the Secure Agent after you configure the environment variable.
Incoming fields appear on the Incoming Fields tab. By default, the Java transformation inherits all incoming
fields from the upstream transformation. If you do not need to use all of the incoming fields, you can define
field rules to include or exclude certain fields. For more information about field rules, see “Field rules” on
page 21.
Add output fields on the Output Fields tab. Add output fields for the output data that you want to pass to the
downstream transformation. To add a field, click Add Field, and then enter the field name, data type,
precision, scale, and optional description. The description can contain up to 4000 characters. You can also
create output fields on the Outputs tab of the Java editor by clicking Create New Field.
The transformation initializes field variables based on the data type:
• Primitive data types. The transformation initializes the value of the field variable to 0.
• Complex data types. The transformation initializes the field variable to null.
When a Java transformation reads input rows, it converts input field data types to Java data types. When a
Java transformation writes output rows, it converts Java data types to output field data types.
For example, the following processing occurs for an input field with the integer data type in a Java
transformation:
1. The Java transformation converts the integer data type of the input field to the Java primitive int data
type.
2. In the transformation, the transformation treats the value of the input field as the Java primitive int data
type.
3. When the transformation generates the output row, it converts the Java primitive int data type to the
integer data type.
The following table shows how the Java transformation maps the Data Integration transformation data types
to Java primitive and complex data types:
Data Integration data type   Java data type
bigint      long
binary      byte[]
date/time   BigDecimal or long (number of milliseconds since January 1, 1970 00:00:00.000 GMT)
decimal     double or BigDecimal
double      double
integer     int
string      String
text        String
In Java, the String, byte[], and BigDecimal data types are complex data types, and the double, int, and long
data types are primitive data types.
The Java transformation sets null values in primitive data types to zero. You can use the isNull and the
setNull API methods in the On Input Row section of the Java editor to set null values in the input field to null
values in the output field. For an example, see “setNull” on page 226.
Note: The decimal data type maps to BigDecimal when high precision is enabled. BigDecimal cannot be used
with some operators, such as the + operator. If the Java code contains an expression that uses a decimal
field and the field is used with one of the operators, the Java code fails to compile.
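The following On Input Row sketch illustrates both points. It assumes a string input field in_code, a decimal
input field in_price, and corresponding output fields out_code and out_price; the field names are
hypothetical, and the snippet only sketches the conventions described above.
// Propagate nulls explicitly. Primitive field variables default to 0, so
// check the input field with isNull and set the output field with setNull.
if (isNull("in_code")) {
    setNull("out_code");
} else {
    out_code = in_code;
}
// With high precision enabled, decimal fields are BigDecimal objects.
// Use BigDecimal methods instead of arithmetic operators such as +.
if (isNull("in_price")) {
    setNull("out_price");
} else {
    out_price = in_price.add(new java.math.BigDecimal("5.00")); // add a flat fee
}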
The sort fields are one or more fields that you want to use as the sort criteria. Configure the sort order to sort
data in ascending or descending order.
If you configure a sort condition for data that is grouped into partitions, the mapping task sorts the data in
each partition.
When you specify multiple sort conditions, the mapping task sorts each condition sequentially. The mapping
task treats each successive sort condition as a secondary sort of the previous sort condition. You can
configure the order of sort conditions.
If you use a parameter for the sort condition, define the sort fields and the sort order when you run the
mapping or when you configure the mapping task.
Group by fields
In an elastic mapping, you can use group by fields to define how to group data into partitions before the Java
code runs.
When you configure a group by field, the mapping task groups rows with the same data into a partition. Then,
the Java code runs for each partition in the transformation. For example, the input row behavior is processed
for each partition and each row in the partition, and the end of data behavior is processed for each partition
after processing all rows in the partition.
When you select more than one group by field, the task creates a partition for each unique combination of
data in the group by fields.
If you do not configure a group by field, the Java code runs based on the data's default partitioning scheme.
If you use a parameter for the group by fields, define the group by fields when you run the mapping or when
you configure the mapping task.
Note: The properties that appear in the transformation depend on the mapping type.
Property Description
Behavior
Transformation behavior, either active or passive. If active, the transformation can generate more than
one output row for each input row. If passive, the transformation generates one output row for each
input row.
Default is Active.
Tracing Level
Detail level of error and status messages that Data Integration writes in the session log. You can choose
terse, normal, verbose initialization, or verbose data. Default is normal.
Transformation Scope
The method in which the Secure Agent applies the transformation logic to incoming data. Use the
following options:
- Row. Applies the transformation logic to one row of data at a time. Choose Row when the results of the
transformation depend on a single row of data.
- Transaction. Applies the transformation logic to all rows in a transaction. Choose Transaction when the
results of the transformation depend on all rows in the same transaction, but not on rows in other
transactions. For example, you might choose Transaction when the Java code performs aggregate
calculations on the data in a single transaction.
- All Input. Applies the transformation logic to all incoming data. When you choose All Input, the Secure
Agent drops transaction boundaries. Choose All Input when the results of the transformation depend on
all rows of data in the source. For example, you might choose All Input when the Java code for the
transformation sorts all incoming data.
For active transformations, default is All Input. For passive transformations, this property is always set
to Row.
Defines Update Strategy
Specifies whether the transformation defines the update strategy for output rows. When enabled, the
Java code determines the update strategy for output rows. When disabled, the update strategy is
determined by the operation set in the Target transformation.
You can configure this property for active Java transformations.
Default is disabled.
Enable High Precision
Enables high precision to process decimal fields with the Java class BigDecimal. Enable this option to
process decimal data types with a precision greater than 15 and less than 28.
Default is disabled.
In an elastic mapping, the Java transformation always uses high precision.
Use Nanoseconds in Date/Time
Specifies whether the generated Java code converts the transformation date/time data type to the Java
BigDecimal data type, which has nanosecond precision.
When enabled, the generated Java code converts the transformation date/time data type to the Java
BigDecimal data type. When disabled, the code converts the date/time data type to the Java long data
type, which has millisecond precision.
Default is disabled.
Optional
Determines whether the transformation is optional. If a transformation is optional and there are no
incoming fields, the mapping task can run and the data can go through another branch in the data flow.
If a transformation is required and there are no incoming fields, the task fails.
For example, you configure a parameter for the source connection. In one branch of the data flow, you
add a transformation with a field rule so that only Date/Time data enters the transformation, and you
specify that the transformation is optional. When you configure the mapping task, you select a source
that does not have Date/Time data. The mapping task ignores the branch with the optional
transformation, and the data flow continues through another branch of the mapping.
Default is enabled.
Design Time Classpath
Classpath that the Secure Agent uses for custom or third-party packages when you design and validate
the transformation and when you run the mapping from the Mapping Designer.
This classpath is not used when you run the mapping through a mapping task.
Set the design-time classpath when you want to test the transformation and neither the JVMClassPath
property for the Secure Agent nor the CLASSPATH environment variable on the Secure Agent machine
contain the required packages. If you configured the JVMClassPath property or the CLASSPATH
environment variable to include the required packages, then you do not need to configure this property.
A Java transformation runs the Java code that you define in the On Input Row section of the Java editor one
time for each row of input data.
• An active Java transformation generates multiple output rows for each input row in the transformation. Use the generateRow method to generate each output row. For example, the transformation contains two input fields that represent a start date and an end date. You can use the generateRow method to generate an output row for each date between the start date and the end date, as illustrated in the sketch below.
• A passive Java transformation generates one output row for each input row in the transformation after
processing each input row.
You can change the transformation behavior. However, when you change the behavior, you must recompile
the Java code.
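The following snippet is a minimal sketch of the start date and end date example for the On Input Row section. The input fields START_DATE and END_DATE and the output field OUT_DATE are hypothetical date/time fields, and the sketch assumes that the Use Nanoseconds in Date/Time option is disabled so that date/time values are Java long values that hold milliseconds:
// Emit one output row per day between START_DATE and END_DATE, inclusive.
// START_DATE, END_DATE, and OUT_DATE are hypothetical date/time fields (long milliseconds).
long oneDayMillis = 24L * 60L * 60L * 1000L;
for (long d = START_DATE; d <= END_DATE; d += oneDayMillis) {
    OUT_DATE = d;
    // Each call to generateRow produces one output row with the current output field values.
    generateRow();
}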
You can write the Java code to set the update strategy for output rows. The Java code can flag rows for
insert, update, delete, or reject. To define the update strategy in the Java code, enable the Defines
Update Strategy option and use the setOutRowType method in the On Input Row section of the Java
editor to flag rows. For more information about setting the update strategy, see “setOutRowType” on
page 227.
You can configure the Target transformation to set the update strategy. To configure the Target
transformation to set the update strategy, disable the Defines Update Strategy option and configure the
target operation in the target properties.
By default, the Java transformation converts fields of type decimal to double data types with a precision of
15. If you want to process a decimal data type with a precision greater than 15, enable high precision to
process decimal fields with the Java class BigDecimal.
When you enable high precision, you can process decimal fields with precision less than 28 as BigDecimal.
The Java transformation converts decimal data with a precision greater than 28 to the double data type.
For example, a Java transformation has an input field of type decimal that receives a value of
40012030304957666903. If you enable high precision, the value of the field is treated as it appears. If you do
not enable high precision, the value of the field is 4.00120303049577 x 10^19.
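As a minimal sketch of working with high precision, the following On Input Row snippet assumes hypothetical decimal fields AMOUNT and TAX and a decimal output field TOTAL. With high precision enabled, these fields are Java BigDecimal values, so the code uses the add method instead of the + operator:
// AMOUNT, TAX, and TOTAL are hypothetical decimal fields that map to BigDecimal
// when high precision is enabled. BigDecimal does not work with the + operator.
if (!isNull("AMOUNT") && !isNull("TAX")) {
    TOTAL = AMOUNT.add(TAX);
} else {
    setNull("TOTAL");
}
generateRow();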
If the mapping has hierarchical fields that contain child fields with decimal data types, the mapping runs
using low precision. If the same mapping contains a Java transformation with a decimal field, the mapping
task fails.
Processing subseconds
You can configure how the Java transformation processes subseconds for date/time data types. Define
subsecond handling on the Advanced tab.
By default, the generated Java code converts the transformation date/time data type to the Java long data
type, which has precision to the millisecond.
You can process subsecond data up to nanoseconds in the Java code. To process nanoseconds, enable the
Use Nanoseconds in Date/Time option. When you enable this option, the generated Java code converts the
transformation date/time data type to the Java BigDecimal data type, which has precision to the
nanosecond.
An elastic mapping supports precision to the microsecond. If a date/time value contains nanoseconds, the
trailing digits are truncated.
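For illustration, the following On Input Row sketch assumes a hypothetical date/time input field ORDER_TS and output field OUT_TS with the Use Nanoseconds in Date/Time option disabled, so both fields are Java long values that hold milliseconds since January 1, 1970 00:00:00.000 GMT:
// Shift the timestamp forward by one second. With nanosecond processing enabled,
// ORDER_TS and OUT_TS would instead be BigDecimal values and require BigDecimal methods.
OUT_TS = ORDER_TS + 1000L;
generateRow();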
Enter Java code snippets in the following sections of the Java editor:
Import Packages
Import third-party Java packages, built-in Java packages, or custom Java packages.
Helper Code
Define variables and methods available to all sections except Import Packages.
On Input Row
Define the transformation behavior when it receives an input row.
At End of Data
Define the transformation behavior when it has processed all input data.
On Receiving Transaction
Define the transformation behavior when it receives a transaction notification. Use with active Java
transformations.
Access input data and set output data in the On Input Row section. For active transformations, you can also
set output data in the At End of Data and On Receiving Transaction sections.
The following image shows the Java tab with the Java editor expanded:
1. Inputs, Outputs, and APIs tabs. Use these tabs to add input and output fields as variables and to call API methods
in the Java code snippets. The fields and methods displayed on these tabs vary based on which section of the code
entry area is selected.
2. Go to list. Use to switch among the sections in the code entry area.
3. Minimize, Open Both, and Maximize icons. Use the Minimize and Maximize buttons to minimize and maximize the
transformation properties. Use the Open Both icon to open the Mapping Designer canvas and the transformation
properties at the same time.
4. Code entry area. Enter Java code snippets in the Import Packages, Helper Code, On Input Row, At End of Data, and
On Receiving Transaction sections.
5. Compilation results. Expand the compilation results to see detailed compilation results, compilation errors, and
view the full code.
Tip: To expand the transformation properties so that you can see the code entry area more fully, click
Maximize.
1. In the Go to list, select the section in which you want to enter a code snippet.
2. To access an input or output field in the snippet, select the field on the Inputs or Outputs tab, and click
Add.
You can also create output fields on the Outputs tab by clicking Create New Field.
3. To call a Java transformation API method in the snippet, select the method on the APIs tab, and click
Add.
The methods displayed on the APIs tab change based on which section is selected. For example, you can use the getInRowType method only in the On Input Row section. Therefore, this method is listed on the APIs tab only when the On Input Row section is selected.
4. If necessary, configure the method input values.
5. Write appropriate Java code based on the section.
After you finish creating the Java code snippets, compile the code to validate the transformation.
For example, to import the Java I/O package, enter the following code in the Import Packages section:
import java.io.*;
You can import built-in, third-party, or custom Java packages. If you import third-party or custom Java
packages, you must add the packages to the classpath. For more information about configuring the
classpath, see “Classpath configuration” on page 200.
After you import Java packages, you can use the imported packages in the other sections.
You cannot declare or use static variables, instance variables, or user methods in the Import Packages
section.
Note: When you export a mapping or mapping task that contains a Java transformation, the jar or class files
that contain the third-party or custom packages required by the Java transformation are not included in the
export XML file. Therefore, when you import the mapping or task, you must copy the jar or class files that
contain the required third-party or custom packages to the Secure Agent machine.
After you declare variables and methods in the Helper Code area, you can use them in any code entry area
except the Import Packages area.
You can declare the following types of code, variables, and methods:
Static code and static variables
Within a static block, you can declare static variables and static code. Static code runs before any other code in a Java transformation.
For example, the following code declares a static variable to store the error threshold for a Java
transformation:
static int errorThreshold;
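As a small illustration, a static initializer block in the Helper Code section might assign a value to the static variable before any other code runs. The threshold value of 10 is an arbitrary example:
// Static code runs before any other code in the Java transformation.
static
{
    errorThreshold = 10;
}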
User-defined static or instance methods
These methods extend the functionality of the Java transformation. Java methods that you declare in
the Helper Code section can use or modify output variables. You cannot access input variables from
Java methods in the Helper Code section.
For example, use the following code in the Helper Code section to declare a function that adds two
integers:
private int myTXAdd (int num1,int num2)
{
return num1+num2;
}
Access and use the following input and output field data, variables, and methods in the On Input Row section:
Input and output fields
Access input and output field data as a variable by using the name of the field as the name of the variable. For example, if "in_int" is an integer input field, you can access the data for this field by referring to the variable in_int, which has the Java primitive data type int. You do not need to declare input and output fields as variables.
Do not assign a value to an input field variable. If you assign a value to an input variable in the On Input
Row section, you cannot get the input data for the corresponding field in the current row.
Static variables and user-defined methods
Use any static variable or user-defined method that you declared in the Helper Code section.
For example, an active Java transformation has two input fields, BASE_SALARY and BONUSES, with an
integer data type, and a single output field, TOTAL_COMP, with an integer data type. You create a user-
defined method in the Helper Code section, myTXAdd, that adds two integers and returns the result.
Use the following Java code in the On Input Row section to assign the total values for the input fields to
the output field and generate an output row:
TOTAL_COMP = myTXAdd (BASE_SALARY,BONUSES);
generateRow();
When the Java transformation receives an input row, it adds the values of the BASE_SALARY and
BONUSES input fields, assigns the value to the TOTAL_COMP output field, and generates an output row.
To generate output rows in the At End of Data section, set the transformation scope for the transformation to
Transaction or to All Input on the Advanced tab. You cannot access or set the value of input field variables in
this section.
Access and use the following variables and methods in the At End of Data section:
Output field variables
Use the names of output fields as variables to access or set output data for active Java transformations.
User-defined methods
Use any user-defined method that you declared in the Helper Code section.
Java transformation API methods
Call API methods provided by the Java transformation. For example, use the following Java code to write
information to the session log when the end of data is reached:
logInfo("Number of null rows for partition is: " + partCountNullRows);
• You cannot write output to standard output, but you can write output to standard error which appears in
the log files.
• You cannot pass binary null characters to an output field.
To avoid a mapping failure, you can add code to the Java transformation that replaces the binary null
characters with an alternative character before writing the data to the output field.
The code snippet in the On Receiving Transaction section is executed only if the transformation scope for the transformation is set to Transaction. You cannot access or set the value of input field variables in this section.
Access and use the following output data, variables, and methods from the On Receiving Transaction section:
Output field variables
Use the names of output fields as variables to access or set output data.
User-defined methods
Use any user-defined method that you declared in the Helper Code section.
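As a minimal sketch, the following On Receiving Transaction snippet for an active Java transformation with a transformation scope of Transaction writes one summary row per transaction. The output field TXN_ROW_COUNT and the helper variable rowsInTransaction, which would be declared in the Helper Code section and incremented in the On Input Row section, are hypothetical:
// Emit one row per transaction with the number of rows received in that transaction.
TXN_ROW_COUNT = rowsInTransaction;
generateRow();
// Reset the hypothetical counter for the next transaction.
rowsInTransaction = 0;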
For example, you want to read the first two columns of data from a delimited flat file. Create a mapping that
reads data from a delimited flat file and passes data to one or more output fields.
Source transformation
The source is a delimited flat file. Configure the source to pass each row as a single string to the Java
transformation. The source file contains the following data:
1a,2a,3a,4a,5a,6a,7a,8a,9a,10a
1b,2b,3b,4b,5b,6b,7b,8b,9b
1c,2c,3c,4c,5c,6c,7c
1d,2d,3d,4d,5d,6d,7d,8d,9d,10d
Java transformation
Use the On Input Row section of the Java editor to read each input row and pass the first two fields to
the output field, outputRow. Enter the following code in the On Input Row section:
// Collect the first two fields of the row and output them into outputRow.
String[] rowsSplit = row.split(",", 3);
if (rowsSplit.length >= 2) {
outputRow = rowsSplit[0] + "," + rowsSplit[1];
}
generateRow();
Configure the target to receive the output field, outputRow, from the Java transformation. After you run
the mapping, the target file has the following data:
1a,2a
1b,2b
1c,2c
1d,2d
When you create a Java transformation, it contains a Java class that defines the base functionality for the
transformation. When you compile the transformation, the Secure Agent adds the code that you enter in the
Java editor to the template class for the transformation. This generates the full class code for the
transformation.
The Secure Agent calls the JDK to compile the full class code. The JDK compiles the transformation and
generates the byte code for the transformation.
Note: In a mapplet, the Java transformation compiles based on the data types and APIs that you can use in
non-elastic mappings. If the code contains data types or APIs that you can use only in elastic mappings, such
as the invokeJExpression API method, the code fails to compile.
Before you can compile the code, create the Java code snippets in the Java editor and, if you use third-party or custom Java packages, configure the classpath.
The compilation results show the status of the compilation. Use the compilation results to identify and locate Java code errors.
To view the full class code, click View Full Code in the compilation results. Data Integration displays the full
class code in the Full Code dialog box.
To download the full code, click Download Full Code in the Full Code dialog box.
To troubleshoot a compilation error, complete the following tasks:
• Find the source of the error in the Java code snippets or in the full class code for the transformation.
• Identify the type of error using the compilation results and the location of the error.
After you identify the source and type of error, fix the Java code on the Java tab and compile the
transformation again.
Java editor
If the error is located in a code snippet in the Java editor, Data Integration lists the section and the line that
contains the error. Data Integration also highlights the source of the error in the Java editor.
Full code
If the error is located in the full code, Data Integration lists the error location as "Full Code" and lists the
line in the Full Code dialog box that contains the error.
You can locate errors in the Full Code dialog box, but you cannot edit the Java code. To fix errors that
you find in the Full Code dialog box, edit the code in the appropriate section. You might need to use the
Full Code dialog box to view errors caused by adding user code to the full class code for the
transformation.
Errors can occur in the user code in different sections of the Java editor. User code errors include
standard Java syntax and language errors. User code errors might also occur when Data Integration
adds the user code to the full class code.
For example, a Java transformation has an input field with a name of int1 and an integer data type. The
full code for the class declares the input field variable with the following code:
int int1;
However, if you use the same variable name in the On Input Row section, the Java compiler issues an
error for a redeclaration of a variable. To fix the error, rename the variable in the On Input Row section.
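The following sketch shows the idea. The local variable doubledInt1 is hypothetical; the point is to avoid reusing the generated field variable name int1 in a declaration:
// The full class code already declares the input field variable:
//     int int1;
// Declaring int1 again in the On Input Row section causes a redeclaration error.
// Use a differently named local variable instead.
int doubledInt1 = int1 * 2;
logInfo("Doubled value of int1: " + doubledInt1);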
User code in sections of the Java editor can cause errors in non-user code.
Use a Java transformation to process employee data for a fictional company. The Java transformation reads
input rows from a flat file source and writes output rows to a flat file target. The source file contains
employee data, including the employee identification number, name, job title, and the manager identification
number.
The transformation finds the manager name for a given employee based on the manager identification
number and generates output rows that contain employee data. The output data includes the employee
identification number, name, job title, and the name of the employee’s manager. If the employee has no
manager in the source data, the transformation assumes the employee is at the top of the hierarchy in the
company organizational chart.
Note: The transformation logic assumes that the employee job titles are arranged in descending order in the
source file.
To create and run the mapping in this example, perform the following steps:
Source transformation
In the Source transformation, update the source field metadata as shown in the following table:
Java transformation
The Java transformation includes all incoming fields from the Source transformation.
EMP_ID_OUT integer 10 0
Target transformation
EMP_ID EMP_ID_OUT
EMP_NAME EMP_NAME_OUT
EMP_DESC EMP_DESC_OUT
EMP_PARENT_EMPNAME EMP_PARENT_EMPNAME
Enter the Java code snippets in the following sections of the Java editor:
Import Packages
Helper Code
Create a Map object, lock object, and boolean variables to track the state of data in the Java
transformation.
On Input Row
Enter code that defines the behavior of the Java transformation when it receives an input row.
Helper code
Declare user-defined variables and methods for the Java transformation in the Helper Code section.
The Helper Code section defines the following variables that are used by the Java code in the On Input Row
section:
Variable Description
empMap Map object that stores the identification number and employee name from the source.
lock Lock object used to synchronize the access to empMap across partitions.
generateRow Boolean variable used to determine whether an output row should be generated for the current input
row.
isRoot Boolean variable used to determine whether an employee is at the top of the company organizational
chart (root).
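Based on the variable descriptions above, the declarations in the Helper Code section might look similar to the following sketch, which assumes that java.util.Map and java.util.HashMap are imported in the Import Packages section:
// Map that stores the employee identification number and employee name from the source.
// Declared static so that all partitions share the same map.
private static Map<Integer, String> empMap = new HashMap<Integer, String>();
// Lock object used to synchronize access to empMap across partitions.
private static Object lock = new Object();
// Determines whether an output row is generated for the current input row.
private boolean generateRow;
// Determines whether the employee is at the top of the company organizational chart.
private boolean isRoot;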
On Input Row
The Java transformation executes the Java code in the On Input Row section when the transformation
receives an input row. In this example, the transformation might or might not generate an output row, based
on the values of the input row.
// Initialize the row-level variables.
generateRow = true;
isRoot = false;
// If the input employee ID and/or name is null, don't generate an output row for this
// input row.
if (isNull("EMP_ID") || isNull("EMP_NAME"))
{
    incrementErrorCount(1);
    generateRow = false;
} else {
    // Set the output values for the employee.
    EMP_ID_OUT = EMP_ID;
    EMP_NAME_OUT = EMP_NAME;
}
if (isNull ("EMP_DESC"))
{
    setNull("EMP_DESC_OUT");
} else {
    EMP_DESC_OUT = EMP_DESC;
}
boolean isParentEmpIdNull = isNull("EMP_PARENT_EMPID");
if(isParentEmpIdNull)
{
    // This employee is the root for the hierarchy.
    isRoot = true;
    logInfo("This is the root for this hierarchy.");
    setNull("EMP_PARENT_EMPNAME");
}
synchronized(lock)
{
    // If the employee is not the root for this hierarchy, get the corresponding
    // parent name.
    if(!isParentEmpIdNull)
        EMP_PARENT_EMPNAME = (String) (empMap.get(new Integer (EMP_PARENT_EMPID)));
    // Store the employee ID and name so that subsequent rows can look up this manager.
    empMap.put(new Integer(EMP_ID), EMP_NAME);
}
// Generate an output row if the input row was valid.
if (generateRow)
    generateRow();
The compilation results display the status of the compilation. If the Java code does not compile
successfully, correct the errors in the Java editor and recompile the Java code. After you successfully
compile the transformation, save and run the mapping.
To add an API method to a code snippet, click the APIs tab in the Java editor, select the method that you
want to add, and click Add. Alternatively, you can manually enter the API method in the Java code snippet.
You can add the following API methods to the Java code snippets in a Java transformation:
failSession
generateRow
getInRowType
incrementErrorCount
invokeJExpression
Invokes an expression and returns the value for the expression. Use only in an elastic mapping.
isNull
logError
logInfo
setNull
Sets the value of an output column in an active or passive Java transformation to null.
setOutRowType
Sets the update strategy for output rows. Can flag rows for insert, update, or delete.
failSession
Throws an exception with an error message and fails the session. Use failSession to terminate the session.
Use failSession in any section of the Java editor except Import Packages. Do not use failSession in a try/
catch block in the Java editor.
Use the following Java code to test the input field input1 for a null value and fail the session if input1 is
NULL:
if(isNull("input1")) {
    failSession("Cannot process a null value for field input1.");
}
generateRow
Generates an output row for active Java transformations.
When you call generateRow, the Java transformation generates an output row using the current value of the
output field variables. If you want to generate multiple rows corresponding to an input row, you can call
generateRow more than once for each input row. If you do not use generateRow in an active Java
transformation, the transformation does not generate output rows.
Use generateRow in any section of the Java editor except Import Packages. You can use generateRow with
active transformations only. If you use generateRow in a passive transformation, the session generates an
error.
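As another illustration, an active Java transformation can behave like a filter by calling generateRow only for rows that meet a condition. The input field STATUS and output field OUT_STATUS in this sketch are hypothetical:
// Generate an output row only when the hypothetical STATUS field equals "ACTIVE".
if (!isNull("STATUS") && STATUS.equals("ACTIVE")) {
    OUT_STATUS = STATUS;
    generateRow();
}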
getInRowType
Returns the input type of the current row in the transformation.
You can only use getInRowType in the On Input Row section of the Java editor. You can only use the
getInRowType method in active transformations that are configured to set the update strategy. If you use this
method in an active transformation that is not configured to set the update strategy, the session generates
an error.
The method returns the row type as a String. The value can be INSERT, UPDATE, DELETE, or REJECT.
Use the following Java code to propagate the input type of the current row if the row type is UPDATE or
INSERT and the value of the input field input1 is less than 100 or set the output type as DELETE if the value of
input1 is greater than 100:
// Get the row type of the current input row and propagate it to the output row.
String rowType = getInRowType();
setOutRowType(rowType);
// Set the value of the output field.
output1 = input1;
// Set row type to DELETE if the output field value is > 100.
if(input1 > 100)
    setOutRowType(DELETE);
incrementErrorCount
Increases the error count for the session. If the error count reaches the error threshold for the session, the
session fails.
Use incrementErrorCount in any section of the Java editor except Import Packages.
The method uses the parameter nErrors, an Integer input value that specifies the number by which to increment the error count for the session.
Use the following Java code to increment the error count if an input field for a transformation has a null
value:
// Check whether input employee ID or name is null.
if (isNull ("EMP_ID_INP") || isNull ("EMP_NAME_INP"))
{
    incrementErrorCount(1);
    // If input employee ID or name is null, don't generate an output row for this input row.
    generateRow = false;
}
invokeJExpression
Invokes an expression and returns the value for the expression. Use only in an elastic mapping.
Use invokeJExpression in any section of the Java editor except Import Packages and Helper Code.
dataType (output). Data type that you want to cast the return value to. By default, the return data type is an object. You can cast the return value to an integer, double, string, or byte[] data type.
Use the following Java code to invoke the concat() method to concatenate the strings John and Smith:
(String)invokeJExpression("concat(x1,x2)", new Object [] { "John ", "Smith" });
The code returns the following string:
John Smith
Consider the following rules and guidelines for the invokeJExpression method:
• By default, the update strategy for return values is INSERT. To use a different update strategy, you must
define the update strategy in the Java code.
• If an argument, parameter, or return value is NULL, the value is treated as a null indicator.
For example, if the return value of the invoked expression is NULL and the return data type is a string, the
invokeJExpression method returns a string with a value of NULL.
• If an input parameter to the invoked expression is a date/time data type, you must pass the parameter as
a string and use the TO_DATE function to convert the string to a date/time data type.
For example, use the following argument to pass a date/time value to the invoked expression:
new Object [] { "TO_DATE('01/22/98', 'MM/DD/YY')" }
isNull
Checks the value of an input column for a null value. Use isNull to check if data of an input column is NULL
before using the column as a value.
Use the following Java code to check the value of the SALARY input column before adding it to the output
field TOTAL_SALARIES:
// If value of SALARY is not null
if (!isNull("SALARY")) {
// Add to TOTAL_SALARIES.
TOTAL_SALARIES += SALARY;
}
or
// If value of SALARY is not null
String strColName = "SALARY";
if (!isNull(strColName)) {
// Add to TOTAL_SALARIES.
TOTAL_SALARIES += SALARY;
}
logError
Writes an error message to the session log.
Use logError in any section of the Java editor except Import Packages.
Use the following Java code to log an error if the input field is null:
// Log an error if BASE_SALARY is null.
if (isNull("BASE_SALARY")) {
logError("Cannot process a null salary field.");
}
The following message appears in the session log:
[JTX_1013] [ERROR] Cannot process a null salary field.
logInfo
Writes an informational message to the session log.
Use logInfo in any section of the Java editor except Import Packages.
Use the following Java code to write a message to the session log after the Java transformation processes a
message threshold of 1000 rows:
if (numRowsProcessed == messageThreshold) {
logInfo("Processed " + messageThreshold + " rows.");
}
The following message appears in the session log:
[JTX_1012] [INFO] Processed 1000 rows.
setNull
Sets the value of an output column in an active or passive Java transformation to NULL. Once you set an
output column to NULL, you cannot modify the value until you generate an output row.
Use setNull in any section of the Java editor except Import Packages.
Use the following Java code to check the value of an input column and set the corresponding value of an
output column to null:
// Check the value of Q3RESULTS input column.
if(isNull("Q3RESULTS")) {
    // Set the value of the output column Q3AVERAGE to null.
    setNull("Q3AVERAGE");
}
setOutRowType
Sets the update strategy for output rows. The setOutRowType method can flag rows for insert, update, or
delete.
Use setOutRowType in the On Input Row section of the Java editor. You can use setOutRowType in active
transformations that are configured to set the update strategy. If you use setOutRowType in an active
transformation that is not configured to set the update strategy, the session generates an error and the
session fails.
The method uses the parameter rowType, a String input value that specifies the update strategy type. The value can be INSERT, UPDATE, or DELETE.
Use the following Java code to propagate the input type of the current row if the row type is UPDATE or
INSERT and the value of the input field input1 is less than 100, or set the output type as DELETE if the value
of input1 is greater than 100:
// Set the value of the output field.
output1 = input1;
// Set row type to DELETE if the output field value is > 100.
if(input1 > 100)
setOutRowType(DELETE);
Chapter 16
Joiner transformation
The Joiner transformation can join data from two related heterogeneous sources. For example, you can use
the Joiner transformation to join account information from flat files with data from the Salesforce Account
object.
The Joiner transformation joins data based on the join conditions and the join type. A join condition matches
fields between the two sources. You can create multiple join conditions. A join type defines the set of data
that is included in the results.
When you link a transformation to the Joiner transformation, you connect it to the Master or Detail group. To
improve job performance, connect the transformation that represents the smaller data set to the Master
group.
To join more than two sources in a mapping, you can use multiple Joiner transformations. You can join the
output from the Joiner transformation with another source pipeline. You can add Joiner transformations to
the mapping until you join all source pipelines.
Field name conflicts can occur when you join sources with matching field names. To resolve the conflict, you can create a field name conflict resolution or rename one set of fields in a transformation upstream from the Joiner transformation.
Join condition
The join condition defines when incoming rows are joined. It includes fields from both sources that must
match to join source rows.
You define one or more conditions based on equality between the master and detail data. For example, if two
sets of employee data contain employee ID numbers, the following condition matches rows with the same
employee IDs in both sets of data:
EMP_ID1 = EMP_ID2
Use one or more join conditions. Additional join conditions increase the time necessary to join the data.
When you use multiple join conditions, the mapping task evaluates the conditions in the order that you
specify.
Both fields in a condition must have the same data type. If you need to use two fields with non-matching data
types, convert the data types so they match.
For example, when you try to join Char and Varchar data, any spaces that pad Char values are included as
part of the string. Both fields might include the value "Shoes," but because the Char(40) field includes 35
trailing spaces, the values do not match. To ensure that the values match, change the data type of one field
to match the other.
When you use a Joiner transformation in an elastic mapping, the mapping becomes invalid if the join
condition contains a binary data type.
Note: The Joiner transformation does not match null values. To join rows with null values, you can replace
null values with default values, and then join on the default values.
Join type
The join type determines the result set that passes to the rest of the mapping.
Normal Join
Includes rows with matching join conditions. Discards rows that do not match the join conditions.
Master Outer
Includes all rows from the detail pipeline and the matching rows from the master pipeline. It discards the
unmatched rows from the master pipeline.
Detail Outer
Includes all rows from the master pipeline and the matching rows from the detail pipeline. It discards the
unmatched rows from the detail pipeline.
Full Outer
Includes rows with matching join conditions and all incoming data from the master pipeline and detail
pipeline.
Advanced properties
You can configure advanced properties for a Joiner transformation. The advanced properties control settings
such as the tracing level for session log messages, cache settings, null ordering, and whether the
transformation is optional or required.
Note: The properties that appear in the transformation depend on the mapping type.
Tracing Level
Detail level of error and status messages that Data Integration writes in the session log. You can choose terse, normal, verbose initialization, or verbose data. Default is normal.
Cache Directory
Specifies the directory used to cache master or detail rows and the index to these rows.
By default, Data Integration uses the directory entered in the Secure Agent $PMCacheDir property for the Data Integration Server. If you enter a new directory, make sure that the directory exists and contains enough disk space for the cache files. The directory can be on a mapped or mounted drive.
Null Ordering in Master
Null ordering in the master pipeline. Select Null is Highest Value or Null is Lowest Value.
Null Ordering in Detail
Null ordering in the detail pipeline. Select Null is Highest Value or Null is Lowest Value.
Data Cache Size
Data cache size for the transformation. Select one of the following options:
- Auto. Data Integration sets the cache size automatically. If you select Auto, you can also configure a maximum amount of memory for Data Integration to allocate to the cache.
- Value. Enter the cache size in bytes.
Default is Auto.
Index Cache Size
Index cache size for the transformation. Select one of the following options:
- Auto. Data Integration sets the cache size automatically. If you select Auto, you can also configure a maximum amount of memory for Data Integration to allocate to the cache.
- Value. Enter the cache size in bytes.
Default is Auto.
Sorted Input
Specifies that data is sorted. Select this option to join sorted data, which can improve performance.
Master Sort Order
Specifies the sort order of the master source data. Select Ascending if the master source data is in ascending order. If you select Ascending, enable sorted input. Default is Auto.
Transformation Scope
Specifies how Data Integration applies the transformation logic to incoming data:
- Transaction. Applies the transformation logic to all rows in a transaction. Choose Transaction when a row of data depends on all rows in the same transaction, but does not depend on rows in other transactions.
- All Input. Applies the transformation logic on all incoming data. When you choose All Input, Data Integration drops incoming transaction boundaries. Choose All Input when a row of data depends on all rows in the source.
- Row. Applies the transformation logic to one row of data at a time. Choose Row when a row of data does not depend on any other row.
Optional
Determines whether the transformation is optional. If a transformation is optional and there are no incoming fields, the mapping task can run and the data can go through another branch in the data flow. If a transformation is required and there are no incoming fields, the task fails.
For example, you configure a parameter for the source connection. In one branch of the data flow, you add a transformation with a field rule so that only Date/Time data enters the transformation, and you specify that the transformation is optional. When you configure the mapping task, you select a source that does not have Date/Time data. The mapping task ignores the branch with the optional transformation, and the data flow continues through another branch of the mapping.
Before you create a Joiner transformation, add Source transformations to the mapping to represent source
data. Include any other upstream transformations that you want to use.
If the data in the two pipelines include matching field names, rename one set of fields in a transformation
upstream from the Joiner transformation.
1. In the Transformation palette, drag a Joiner transformation onto the mapping canvas.
2. Connect an upstream transformation that represents one data set to the Master group of the Joiner
transformation.
To improve job performance, use the transformation that represents the smaller data set.
3. Connect an upstream transformation that represents the other data set to the Detail group of the Joiner
transformation.
4. On the General tab, enter a name and optional description for the transformation.
5. On the Incoming Fields tab, configure the field rules that define the data that enters the transformation.
6. On the Field Mappings tab, select the Join Type.
7. To create a join condition, in the Join Condition area, select Simple.
8. Click Add New Join Condition, and then select the master and detail fields to use.
You can create multiple join conditions.
You can add downstream transformations to the mapping and configure them. When the mapping is
complete, you can validate and save the mapping.
Labeler transformation
The Labeler transformation adds a labeler asset that you created in Data Quality to a mapping.
A labeler asset defines a set of operations that evaluates the types of information in an input field and
assigns a label to each type of data that it finds.
When you configure the transformation, you map an input field to a target field that links to the labeler asset.
When the mapping runs, the transformation searches the input field for values that match the labeling criteria
that the asset defines. The transformation writes the results of the labeling operation to two output fields.
For more information about the output fields, see “Labeler transformation output fields” on page 235.
A Labeler transformation is similar to a Mapplet transformation, as it allows you to add data transformation
logic that you designed elsewhere to a mapping. Like mapplets, labeler assets are reusable assets.
A Labeler transformation shows incoming and outgoing fields. It does not display the logic that the labeler
asset contains or allow you to edit the labeler asset. To edit the labeler asset, open it in Data Quality.
The following image shows the options that you use to select the labeler asset:
If you update an asset in Data Quality after you add it to a transformation, you may need to synchronize the
asset version in the transformation with the latest version.
To synchronize the asset versions, open the transformation in the mapping and select the transformation
name in the properties panel. For example, in a Cleanse transformation select Cleanse in the properties
panel. If synchronization is necessary, Data Integration displays a message that prompts you to synchronize
the assets.
Field map options
Method of mapping incoming fields to the transformation input fields. Select one of the following options:
• Manual. Manually link an incoming field to a transformation input field. Removes links for any
automatically mapped field.
• Automatic. Automatically link fields with the same name. Use when all of the fields that you want to
link share the same name. You cannot manually link fields with this option.
• Completely Parameterized. Use a parameter to represent the field mapping.
Choose the Completely Parameterized option when the labeler asset in the transformation is
parameterized or any upstream transformation in the mapping is parameterized.
• Partially Parameterized. Configure links in the mapping that you want to enforce and use a parameter
to allow other fields to be mapped in the mapping task. Or, use a parameter to configure links in the
mapping, and allow all fields and links to display in the task for configuration.
Parameter
Select the parameter to use for the field mapping, or create a new parameter. This option appears when
you select Completely Parameterized or Partially Parameterized as the field map option. The parameter
must be of type field mapping.
Do not use the same field mapping parameter in more than one Labeler transformation in a single
mapping.
Options
Controls how fields are displayed in the Incoming Fields and Target Fields lists.
• The fields that appear. You can show all fields, unmapped fields, or mapped fields.
• Field names. You can use technical field names or labels.
Automap
Links fields with matching names. Allows you to link matching fields and to manually configure other
field mappings. The Automap options appear when you select the Manual or Partially Parameterized
field map option.
• Exact Field Name. Data Integration matches fields of the same name.
• Smart Map. Data Integration matches fields with similar names. For example, if you have an incoming
field Cust_Name and a target field Customer_Name, Data Integration automatically links the Cust_Name
field with the Customer_Name field.
You can use both Exact Field Name and Smart Map in the same field mapping. For example, use Exact
Field Name to match fields with the same name and then use Smart Map to map fields with similar
names.
You can undo all automapped field mappings by clicking Automap > Undo Automap.
To unmap a single field, select the field to unmap and click Actions > Unmap on the context menu for the
field. To unmap one or more fields that you selected, click Unmap Selected on the Target Fields context
menu.
To clear all field mappings from the transformation, click Clear Mapping on the Target Fields context
menu.
LabeledOutput
A copy of the input field data in which any value that matches the labeling criteria is replaced with the
label that the asset specifies.
TokenizedData
A copy of the input field data. If the labeler asset reads a dictionary, the transformation may replace any
input value that matches a dictionary value with an alternative value from the dictionary. A Data Quality
user can configure the labeler asset to replace the input values with the alternative values.
The transformation copies any value in the input field that the labeling operation does not identify to each
output field.
LabeledOutput
A copy of the input field data in which any character that matches the labeling criteria is replaced with
the label that the asset specifies.
The field can contain character labels and characters from the input data to which a label does not
apply.
Lookup transformation
Use a Lookup transformation to retrieve data based on a specified lookup condition. For example, you can
use a Lookup transformation to retrieve values from a database table for codes used in source data.
When a mapping task includes a Lookup transformation, the task queries the lookup source based on the
lookup fields and a lookup condition. The Lookup transformation returns the result of the lookup to the target
or another transformation. You can configure the Lookup transformation to return a single row or multiple
rows. When you configure the Lookup transformation to return a single row, the Lookup transformation is a
passive transformation. When you configure the Lookup transformation to return multiple rows, the Lookup
transformation is an active transformation.
• Get a related value. Retrieve a value from the lookup table based on a value in the source. For example,
the source has an employee ID. Retrieve the employee name from the lookup table.
• Get multiple values. Retrieve multiple rows from a lookup table. For example, return all employees in a
department.
• Update slowly changing dimension tables. Determine whether rows exist in a target.
A connected Lookup transformation receives source data, performs a lookup, and returns data.
Cache the lookup source to optimize performance. If you cache the lookup source, you can use a static
or dynamic cache. You can also use a persistent or non-persistent cache.
By default, the lookup cache remains static and does not change as the mapping task runs. With a
dynamic cache, the task inserts or updates rows in the cache as the target table changes. When you
cache the target table as the lookup source, you can look up values in the cache to determine if the
values exist in the target.
By default, the lookup cache is also non-persistent. Therefore, Data Integration deletes the cache files
after the mapping task completes. If the lookup table does not change between mapping runs, you can
use a persistent cache to increase performance.
Lookup object
The Lookup object is the source object that Data Integration queries when it performs the lookup. The lookup
object is also called the lookup source.
Select the lookup source on the Lookup Object tab of the Properties panel. The properties that you configure
for the lookup source vary based on the connection type.
The following image shows the Lookup Object tab for a relational lookup:
1. Lookup object details where you configure the connection, source type, lookup object, and multiple match behavior.
2. Select the lookup source from the mapping inventory.
In the Lookup Object Details area, select the connection, source type, and lookup object. You can also
create a new connection.
If your organization administrator has configured Enterprise Data Catalog integration properties, and you
have added objects to the mapping from the Data Catalog page, you can select the lookup source from
the Inventory panel. If your organization administrator has not configured Enterprise Data Catalog
integration properties or you have not performed data catalog discovery, the Inventory panel is empty.
For more information about data catalog discovery, see Mappings.
Use a parameter.
You can use an input parameter to define the connection or lookup object when you run the mapping
task. For more information about parameters, see Mappings.
You can use a custom query to reduce the number of columns to query. You might want to use a custom
query when the source object is large.
You must also specify the transformation behavior when the lookup returns multiple matches.
Source Type
Source type. For database lookups, the source type can be single object, parameter, or query. For flat file lookups, the source type can be single object, file list, command, or parameter.
Lookup Object
If the source type is a single object, this property specifies the lookup file, table, or object.
If the source type is a file list, this property specifies the text file that contains the file list.
If the source type is a command, this property specifies the sample file from which Data Integration imports the return fields.
Parameter
If the source type is a parameter, this property specifies the parameter.
Define Query
If the source type is a query, displays the Edit Custom Query dialog box. Enter a valid custom query and click OK.
Multiple Matches
Behavior when the lookup condition returns multiple matches. You can return all rows, any row, the first row, the last row, or an error.
If you choose all rows and there are multiple matches, the Lookup transformation is an active transformation. If you choose any row, the first row, or the last row and there are multiple matches, the Lookup transformation is a passive transformation.
Formatting Options
File formatting options which are applicable if the lookup object is a flat file.
Opens the Formatting Options dialog box to define the format of the file. Configure the following file format options:
- Delimiter. Delimiter character.
- Text Qualifier. Character to qualify text.
- Escape character. Escape character.
- Field labels. Determines if the task generates field labels or imports labels from the source file.
- First data row. The first row of data. The task starts the read at the row number that you enter.
Command
If the source type is a command, this property specifies the command that generates the file list.
For more information about file lists and commands, see “File lists” on page 38. For more information about
parameters and file formats, see Mappings.
Uncached lookups
Some connector types do not support the multiple match policies Return first row and Return last row in
uncached lookups. If you select either of these policies, and the connector does not support the policy in
uncached lookups, Data Integration enables the Lookup Caching Enabled advanced property, and you
cannot edit it.
If the Lookup transformation uses a dynamic cache, you must configure the multiple match policy to
return an error. Other multiple match policies are not supported.
Salesforce lookups
When you perform a lookup against a Salesforce object, you can return any row or return an error.
When you use the Lookup transformation in an elastic mapping, you can return all rows, return any row,
or return an error. The multiple match policies Return first row and Return last row are not supported.
When you define the behavior for multiple matches in an elastic mapping to return an error, the Lookup
transformation drops duplicate rows and does not include the duplicate rows in the log files.
For more information about the multiple match policies supported by different connectors, see the help for
the appropriate connector.
Custom queries
You can create a custom query for database lookups. You might create a custom query to reduce the number
of columns to query.
To use a custom query as a lookup source, select Query as the source type, and then define the query. When
you define the query, enter an SQL SELECT statement to select the source columns that you want to use.
Data Integration uses the SQL statement to retrieve source column information.
When you use a custom query in a lookup transformation, use the following format for the SQL statement:
• For a relational database connection, use an alias for each column in the SQL statement, for example:
SELECT COL1 AS COL1, COL2 AS COL2, COL3 AS COL3 from TABLE_NAME
• For other types of database connections, use SQL that is valid for the source database. You can use
database-specific functions in the query.
To use a custom query as a lookup source, you must enable lookup caching.
Tip: Test the SQL statement you want to use on the source database before you create a custom query. Data
Integration does not display specific error messages for invalid SQL statements.
Lookup condition
The lookup condition defines when the lookup returns values from the lookup object. When you configure the
lookup condition, you compare the value of one or more fields from the data flow with values in the lookup
object.
A lookup condition includes an incoming field from the data flow, a field from the lookup object, and an
operator. For flat file and database connections, you can use the following operators in a lookup condition:
= (Equal to)
< (Less than)
> (Greater than)
<= (Less than or equal to)
>= (Greater than or equal to)
For other connections and for Lookup transformations that use a dynamic cache, you can use the = (Equal to)
operator in a lookup condition.
• When you enter multiple conditions, the mapping task evaluates the lookup conditions using the AND
logical operator to join the conditions. It returns rows that match all of the lookup conditions.
• When you include multiple conditions, to optimize performance enter the conditions in the following order:
1. = (Equal to)
2. < (Less than), <= (Less than or equal to), > (Greater than), >= (Greater than or equal to)
3. != (Not equal to)
• The lookup condition matches null values. When an input field is NULL, the mapping task evaluates the NULL as equal to null values in the lookup.
• An elastic mapping becomes invalid if the lookup condition contains a binary data type.
The Return Fields tab displays all fields from the selected lookup object. By default, the mapping includes all
fields in the list in the data flow. Remove fields that you do not want to use.
For Lookup transformations that use a dynamic cache, the task returns the NewLookupRow return field. You
cannot remove this field. For more information about the NewLookupRow return field, see “Dynamic cache
updates” on page 248.
You can edit the name of a field. You can also edit the metadata for a field. When you edit field metadata, you
can change the name, native data type, native precision, and native scale. When you change field metadata,
you cannot automatically revert your changes. Avoid making changes that can cause errors when you run the
task.
You can add a field to the field list if the field exists in the lookup object. To add a field, you need exact field
details, including the field name, data type, precision, and scale.
To restore the original fields from the lookup object, use the Synchronize icon. Synchronization restores
deleted fields, adds new fields, and retains added fields with corresponding fields in the lookup object.
Synchronization removes any added fields that do not have corresponding fields in the lookup object.
Synchronization does not revert any local changes to the field metadata.
The following table describes the options that you can use on the Return Fields tab:
Add Field icon
Adds a field from the selected lookup object. Use to retrieve a field from the object that does not display in the list.
Opens the New Field dialog box. Enter the exact field name, data type, precision, and scale, and click OK.
Delete icon
Deletes the field from the list, removing the field from the data flow.
Sort icon
Sorts fields in native order, ascending order, or descending order.
Find field
Enter a search string to find the fields with names that contain the string.
Synchronize icon
Synchronizes the list of fields with the lookup object.
Note: If you select this option, you lose any changes you make to the metadata for return fields.
Ignore in Comparison
When the transformation uses a dynamic cache, by default, Data Integration compares the values in all lookup fields with the values in the associated incoming fields to determine whether to update the row in the lookup cache.
Enable this property if you want Data Integration to ignore the field when it compares the values before updating a row. You must configure the transformation to compare at least one field.
This property is displayed for each field when the Lookup transformation uses a dynamic cache.
Retain existing fields at runtime
If field metadata changes after a mapping is saved, Data Integration uses the updated field metadata when you run the mapping. Typically, this is the desired behavior. However, if the mapping uses a native flat file connection and you want to retain the metadata used at design time, enable the Retain existing fields at runtime option. When you enable this option, Data Integration mapping tasks will use the field metadata that was used when you created the mapping.
Property Description
Tracing Level Detail level of error and status messages that Data Integration writes in the session log. You can
choose terse, normal, verbose initialization, or verbose data. Default is normal.
Lookup Source Name of the directory for a flat file lookup source. By default, Data Integration reads files from
File Directory the lookup source connection directory.
You can also use an input parameter to specify the source file directory.
If you use the service process variable directory $PMLookupFileDir, the task writes target files to
the configured path for the system variable. To find the configured path of a system variable, see
the pmrdtm.cfg file located at the following directory:
<Secure Agent installation directory>\apps\Data_Integration_Server\<Data
Integration Server version>\ICS\main\bin\rdtm
You can also find the configured path for the $PMLookupFileDir variable in the Data Integration
Server system configuration details in Administrator.
Lookup Source File name, or file name and path of the lookup source file.
File Name
Lookup SQL Overrides the default SQL statement to query the lookup table. Specifies the SQL statement you
Override want to use for querying lookup values.
Use with lookup cache enabled.
Lookup Source Restricts the lookups based on the value of data in any field in the Lookup transformation.
Filter Use with lookup cache enabled.
Lookup Caching Determines whether to cache lookup data during the runtime session. When you enable caching,
Enabled Data Integration queries the lookup source once and caches the values for use during the session,
which can improve performance. When you disable caching, a SELECT statement gets the lookup
values each time a row passes into the transformation.
Caching is enabled and is not editable in the following circumstances:
- When the lookup source type does not support uncached lookups.
- When you select a multiple match policy, but the lookup source type does not support the
policy in uncached lookups. For example, you cannot disable caching when you select Return
first row or Return last row as the multiple match policy for a lookup against an Amazon
Redshift V2 source.
Default is enabled.
This property is not displayed for flat file lookups because flat file lookups are always cached.
Lookup Cache Specifies the directory to store cached lookup data when you select Lookup Caching Enabled.
Directory Name The directory name can be an environment variable.
Lookup Cache Specifies whether to save the lookup cache file to reuse it the next time Data Integration
Persistent processes a Lookup transformation configured to use the cache.
Cache File Name Use with persistent lookup cache. Specifies the file name prefix to use with persistent lookup
Prefix cache files. Data Integration uses the file name prefix as the file name for the persistent cache
files that it saves to disk.
If the named persistent cache files exist, Data Integration builds the memory cache from the files.
If the named persistent cache files do not exist, Data Integration rebuilds the persistent cache
files.
Enter the prefix. Do not include a file extension such as .idx or .dat.
Re-cache from Use with persistent lookup cache. When selected, Data Integration rebuilds the persistent lookup
Lookup Source cache from the lookup source when it first calls the Lookup transformation instance.
Data Cache Size Data cache size for the transformation. Select one of the following options:
- Auto. Data Integration sets the cache size automatically. If you select Auto, you can also
configure a maximum amount of memory for Data Integration to allocate to the cache.
- Value. Enter the cache size in bytes.
Default is Auto.
Index Cache Size Index cache size for the transformation. Select one of the following options:
- Auto. Data Integration sets the cache size automatically. If you select Auto, you can also
configure a maximum amount of memory for Data Integration to allocate to the cache.
- Value. Enter the cache size in bytes.
Default is Auto.
Dynamic Lookup Determines whether to use a dynamic cache instead of a static cache. When you enable dynamic
Cache caching, the task updates the cache as it inserts or updates rows in the target so that the cache
and target remain in sync.
Use when lookup cache is enabled.
Output Old Value When you enable this property, when the task updates a row in the cache, it outputs the value that
On Update existed in the lookup cache before it updated the row. When it inserts a row, it returns a null
value.
Use when dynamic lookup cache is enabled.
Synchronize When you enable this property, the task retrieves the latest values from the lookup source and
dynamic cache updates the dynamic cache. This is helpful when multiple tasks that use the same lookup source
are running simultaneously.
Use when dynamic lookup cache is enabled.
Cache synchronization is not available for some connection types. For more information, see the
help for the appropriate connector.
Insert Else Applies to rows entering the Lookup transformation with the row type of insert. When enabled, the
Update mapping task inserts rows in the cache and updates existing rows. When disabled, the mapping
task does not update existing rows.
Use when dynamic lookup cache is enabled.
Lookup Source is When you enable this property, the lookup source does not change when the task runs.
Static
Datetime Format Sets the datetime format and field width. Milliseconds, microseconds, or nanoseconds formats
have a field width of 29. If you do not specify a datetime format here, you can enter any datetime
format for fields. Default is YYYY-MM-DD HH24:MI:SS. The format does not change the size of the
field.
Thousand Specifies the thousand separator. Enter a comma (,), a period (.), or None.
Separator Default is None.
Decimal Specifies the decimal separator. Enter a comma (,) or a period (.).
Separator Default is period.
Case Sensitive Determines whether to enable case-sensitive string comparisons when you perform lookups on
String string columns in flat files. For relational uncached lookups, the column types that support case-
Comparison sensitive comparison depend on the database.
Case-sensitivity is automatically enabled for lookups in elastic mappings.
Null Ordering Determines how the null values are ordered. You can choose to sort null values high or low. By
default, null values are sorted high. This overrides configuration to treat nulls in comparison
operators as high, low, or null. For relational lookups, null ordering depends on the database
default value.
Sorted Input Indicates whether or not the lookup file data is in sorted order. This increases lookup
performance for file lookups. If you enable sorted input and the condition columns are not
grouped, the session fails. If the condition columns are grouped but not sorted, the lookup is
processed as if you did not configure sorted input.
Pre-build Lookup Specifies to build the lookup cache before the Lookup transformation receives data. Multiple
Cache lookup cache files can be built at the same time to improve performance.
Subsecond Sets the subsecond precision for datetime fields. For relational lookups, you can change the
Precision precision for databases that have an editable scale for datetime data. You can change the
subsecond precision for Oracle Timestamp, Informix Datetime, and Teradata Timestamp data
types.
Enter a positive integer value from 0 to 9. Default is 6 microseconds.
If you enable pushdown optimization in a task, the database returns the complete datetime value,
regardless of the subsecond precision setting.
Optional Determines whether the transformation is optional. If a transformation is optional and there are
no incoming fields, the mapping task can run and the data can go through another branch in the
data flow. If a transformation is required and there are no incoming fields, the task fails.
For example, you configure a parameter for the source connection. In one branch of the data flow,
you add a transformation with a field rule so that only Date/Time data enters the transformation,
and you specify that the transformation is optional. When you configure the mapping task, you
select a source that does not have Date/Time data. The mapping task ignores the branch with the
optional transformation, and the data flow continues through another branch of the mapping.
Lookup SQL overrides
The default query contains a SELECT statement that includes all lookup fields in the mapping. The SELECT
statement also contains an ORDER BY clause that orders all columns in the same order in which they appear
in the Lookup transformation.
If you want to change the ORDER BY clause, add a WHERE clause, or transform the lookup data before it is
cached, you can override the default query. For example, you might use database functions to adjust the data
types or formats in the lookup table to match the data types and formats of fields that are used in the
mapping. Or, you might override the default query to query multiple tables.
Override the default query on the Advanced tab of the Lookup transformation. Enter the entire SELECT
statement in the Lookup SQL Override field. Use an alias for each column in the query. If you want to change
the ORDER BY clause, you must also append "--" to the end of the query to suppress the ORDER BY clause
that the mapping task generates.
Example
A Lookup transformation returns fields from the Microsoft SQL Server table ALC_ORDER_DETAILS and uses
the following lookup condition:
ORDERID=in_ORDERID
When you run the mapping task, the generated lookup query, including the ORDER BY clause that the mapping
task adds, appears in the log file.
To override the ORDER BY clause and sort by PRODUCTID instead, enter the override query in the Lookup SQL
Override field on the Advanced tab and append "--" to suppress the generated ORDER BY clause.
When you run the mapping task again, the overridden query appears in the log file.
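An override of this kind might look like the following sketch, where the column list is illustrative and the trailing comment characters suppress the ORDER BY clause that the mapping task generates:
SELECT ALC_ORDER_DETAILS.ORDERID AS ORDERID, ALC_ORDER_DETAILS.PRODUCTID AS PRODUCTID, ALC_ORDER_DETAILS.QUANTITY AS QUANTITY FROM ALC_ORDER_DETAILS ORDER BY PRODUCTID --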
Consider the following guidelines when you override the lookup SQL query:
• You can override the lookup SQL query for relational lookups.
• If you override the lookup query, you must also enable lookup caching for the transformation.
• Enter the entire SELECT statement using the syntax that is required by the database.
• Enclose all database reserved words in quotes.
• Include all lookup and return fields in the SELECT statement.
If you add or subtract fields in the SELECT statement, the mapping task fails.
• Use an alias for each column in the query.
If you do not use column aliases, the mapping task fails with the following error:
Failed to initialize transformation [<Lookup Transformation Name>]
• To override the ORDER BY clause, append "--" at the end of the query.
The mapping task generates an ORDER BY clause, even when you enter one in the override. Therefore, you
must enter two dashes (--) at the end of the query to suppress the generated ORDER BY clause.
• If the ORDER BY clause contains multiple columns, enter the columns in the same order as the fields in
the lookup condition.
• If the mapping task uses pushdown optimization, you cannot override the ORDER BY clause or suppress
the generated ORDER BY clause with comment notation.
• If multiple Lookup transformations share a lookup cache, use the same lookup SQL override for each
Lookup transformation.
• When you configure a Lookup transformation that returns all rows, the mapping task builds the lookup
cache with sorted keys. When the transformation retrieves all rows in a lookup, the mapping task builds
the data cache with the keys in sorted order. The mapping task cannot retrieve all the rows from the
cache if the rows are not sorted. If the data is not sorted on the keys, you might get unexpected results.
• You cannot include parameters in the lookup SQL override.
• If you configure a lookup SQL override and a lookup source filter in the same transformation, the mapping
task ignores the filter.
Lookup source filter
To configure a lookup source filter, open the Advanced tab of the Lookup transformation, and enter the filter
in the Lookup Source Filter field. Do not include the WHERE keyword in the filter condition.
For example, you might need to retrieve the last name of every employee whose ID is greater than 510.
You configure the following lookup source filter on the EmployeeID field in the Lookup transformation:
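For that example, the filter condition would be similar to the following, entered without the WHERE keyword:
EmployeeID > 510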
When you add a lookup source filter to the Lookup query for a mapping task that uses pushdown
optimization, the mapping task creates a view to represent the SQL override. The mapping task runs an SQL
query against this view to push the transformation logic to the database.
Note: If you configure a lookup source filter and a lookup SQL override in the same transformation, the
mapping task ignores the filter.
When you enable lookup caching, a mapping task builds the lookup cache when it processes the first lookup
request. The cache can be static or dynamic. If the cache is static, the data in the lookup cache doesn't
change as the mapping task runs. If the task uses the cache multiple times, the task uses the same data. If
the cache is dynamic, the task updates the cache based on the actions in the task, so if the task uses the
lookup multiple times, downstream transformations can use updated data.
You can use a dynamic cache with most types of lookup sources. You cannot use a dynamic cache with flat
file or Salesforce lookups. For more information about using a dynamic cache with a specific type of lookup
source, see the help for the appropriate connector.
Based on the results of the lookup query, the row type, and the Lookup transformation properties, the
mapping task performs one of the following actions on the dynamic lookup cache when it reads a row from
the source:
The mapping task inserts the row when the row is not in the cache. The mapping task flags the row as
insert.
The mapping task updates the row when the row exists in the cache. The mapping task updates the row
in the cache based on the input fields. The mapping task flags the row as an update row.
The mapping task makes no change when the row is in the cache and nothing changes. The mapping
task flags the row as unchanged.
The dynamic Lookup transformation includes the return field, NewLookupRow, which describes the changes
the task makes to each row in the cache. Based on the value of the NewLookupRow, you can also configure a
Router or Filter transformation with the dynamic Lookup transformation to route insert or update rows to the
target table. You can route unchanged rows to another target table or flat file, or you can drop them.
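For example, a Router transformation placed after the dynamic Lookup transformation might use group filter conditions similar to the following sketch, where the group names are illustrative:
InsertedRows: NewLookupRow = 1
UpdatedRows: NewLookupRow = 2
Rows with a NewLookupRow value of 0 fall through to the default group, where you can drop them or route them to another target.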
You cannot use a parameterized source, target, or lookup with a Lookup transformation that uses a dynamic
cache.
Data Integration processes lookup conditions differently based on whether you configure the Lookup
transformation to use a static or dynamic cache.
The following table compares a Lookup transformation that uses a static cache to a Lookup transformation
that uses a dynamic cache:
- Cache contents: With a static cache, the cache does not change during the task run. With a dynamic cache, the task inserts or updates rows in the cache as it passes rows to the target.
- Connection types: With a static cache, you can use a flat file, relational database, and other connection types such as Salesforce for lookup. With a dynamic cache, you cannot use a flat file or Salesforce connection type.
- Lookup condition is true: With a static cache, the task returns a value from the lookup table or cache. With a dynamic cache, the task either updates the row in the cache and target or leaves the cache unchanged, which indicates that the row is in the cache and target table.
- Lookup condition is not true: With a static cache, the task returns the default value. With a dynamic cache, the task either inserts the row in the cache and target or leaves the cache unchanged based on the row type, which indicates that the row is not in the cache or target table.
The NewLookupRow field can contain one of the following values:
0 - The mapping task does not update or insert the row in the cache.
1 - The mapping task inserts the row into the cache.
2 - The mapping task updates the row in the cache.
Note: The Insert Else Update property only applies to rows entering the Lookup transformation where the row type is insert.
When a row of any other row type, such as update, enters the Lookup transformation, the Insert Else Update
property has no effect on how the mapping task handles the row.
If you do not enable Insert Else Update and the row type entering the Lookup transformation is insert, the
mapping task inserts the row into the cache if it is new, and makes no change to the cache if the row exists.
The following table describes how the mapping task changes the lookup cache when the row type of the
rows entering the Lookup transformation is insert:
- Insert Else Update disabled, row not found in the cache: the task inserts the row into the cache. NewLookupRow value is 1.
- Insert Else Update disabled, row found in the cache: the task makes no change to the cache. NewLookupRow value is 0.
- Insert Else Update enabled, row not found in the cache: the task inserts the row into the cache. NewLookupRow value is 1.
- Insert Else Update enabled, row found in the cache and the data cache is different: the task updates the row in the cache. NewLookupRow value is 2.
- Insert Else Update enabled, row found in the cache and the data cache is not different: the task makes no change to the cache. NewLookupRow value is 0.
To synchronize the cache with the lookup source, enable the Synchronize Dynamic Cache property for the
Lookup transformation.
When you configure a Lookup transformation to synchronize the cache with the Lookup source, the Lookup
transformation performs a lookup on the lookup source. If the data does not exist in the Lookup source, the
Lookup transformation inserts the row into the lookup source before it updates the dynamic lookup cache.
The data might exist in the lookup source if another task inserted the row. To synchronize the lookup cache
to the lookup source, the task retrieves the latest values from the lookup source. The Lookup transformation
inserts the values from the Lookup source in the dynamic lookup cache.
For example, you have multiple tasks running simultaneously. Each task generates product numbers for new
product names. When a task generates a product number, the other tasks must use the same product
number to identify the product. The product number is generated once and inserted in the lookup source. If
another task processes a row containing the product, it must use the product number that is in the lookup
source. Each task performs a lookup on the lookup source to determine which product numbers have already
been generated.
When you configure the Lookup transformation to synchronize the cache with the lookup source, the task
performs a lookup on the dynamic lookup cache for insert rows. If data does not exist in the dynamic lookup
cache, the task performs a lookup on the lookup source. It then completes one of the following tasks:
• If data exists in the lookup source, the task inserts a row in the dynamic lookup cache with the columns
from the lookup source. It does not update the cache with the source row.
• If data does not exist in the lookup source, the task inserts the data into the lookup source and inserts the
row into the cache.
When you use a dynamic lookup cache, the mapping task writes to the lookup cache before it writes to the
target table. The lookup cache and target table can become unsynchronized if the task does not write the
data to the target. For example, the target database might reject the data.
Consider the following guidelines to keep the lookup cache synchronized with the lookup table:
• Use the Router transformation to pass rows to the cached target when the NewLookupRow value equals
one or two.
• Use the Router transformation to drop rows when the NewLookupRow value equals zero. Or, output the
rows to a different target.
Field mapping
When you use a dynamic lookup cache, map incoming fields with lookup cache fields on the Field Mapping
tab. The Field Mapping tab is only available when you configure the Lookup transformation to use a dynamic
cache.
You must map all of the incoming fields when you use a dynamic cache so that the cache can update as the
task runs. Optionally, you can map the Sequence-ID field instead of an incoming field if you want to create a
generated key for a field in the target object.
To create a generated key for a field in the target object, map the Sequence-ID field to a lookup cache field on
the Field Mapping tab. You can map the Sequence-ID field instead of an incoming field to lookup cache fields
with the integer or Bigint data type. For integer lookup fields, the generated key maximum value is
2,147,483,647. For Bigint lookup fields, the generated key maximum value is 9,223,372,036,854,775,807.
When you map the Sequence-ID field, Data Integration generates a key when it inserts a row into the lookup
cache.
1. When Data Integration creates the dynamic lookup cache, it tracks the range of values for each field that
has a sequence ID in the dynamic lookup cache.
2. When Data Integration inserts a row of data into the cache, it generates a key for a field by incrementing
the greatest sequence ID value by one.
3. When Data Integration reaches the maximum number for a generated sequence ID, it starts over at one.
Data Integration increments each sequence ID by one until it reaches the smallest existing value minus
one. If Data Integration runs out of unique sequence ID numbers, the mapping task fails.
Data Integration generates a sequence ID for each row it inserts into the cache.
When you run a mapping that uses a dynamic lookup cache, by default, Data Integration compares the values
in all lookup fields with the values in the associated incoming fields. Data Integration compares the values to
determine whether to update the row in the lookup cache. When a value in an incoming field differs from the
value in the lookup field, Data Integration updates the row in the cache.
If you do not want to compare all fields, you can choose the fields that you want Data Integration to ignore
when it compares fields. For example, the source data includes a column that indicates whether the row
contains data that you need to update. Enable the Ignore in Comparison property for all lookup fields except
the field that indicates whether to update the row in the cache and target table.
Configure the fields to be ignored on the Return Fields tab of the Lookup transformation. To ignore a field,
enable the Ignore in Comparison property for the field.
For example, you configure a Lookup transformation to perform a dynamic lookup on the employee table,
EMP, matching rows by EMP_ID. You define the following lookup SQL override:
SELECT EMP_ID, EMP_STATUS FROM EMP WHERE EMP_STATUS = 4 ORDER BY EMP_ID, EMP_STATUS
When you first run the mapping, the mapping task builds the lookup cache from the target table based on the
lookup SQL override. All rows in the cache match the condition in the WHERE clause, EMP_STATUS = 4.
The mapping task reads a source row that meets the lookup condition you specify, but the value of
EMP_STATUS is 2. Although the target might have the row where EMP_STATUS is 2, the mapping task does
not find the row in the cache because of the SQL override. The mapping task inserts the row into the cache
and passes the row to the target table. When the mapping task inserts this row in the target table, you might
get inconsistent results when the row already exists. In addition, not all rows in the cache match the
condition in the WHERE clause in the SQL override.
To verify that you only insert rows into the cache that match the WHERE clause, you add a Filter
transformation before the Lookup transformation and define the filter condition as the condition in the
WHERE clause in the lookup SQL override.
You enter the following filter condition in the Filter transformation and the WHERE clause in the SQL override:
EMP_STATUS = 4
By default, Data Integration uses a non-persistent cache when you enable caching in a Lookup
transformation. When you use a non-persistent cache, Data Integration deletes the cache files at the end of
the mapping run. The next time you run the mapping, Data Integration builds the memory cache from the
database.
If the lookup table does not change between mapping runs, you can use a persistent cache. A persistent
cache can improve mapping performance because it eliminates the time required to read the lookup table.
The first time that Data Integration runs a mapping using a persistent lookup cache, it saves the cache files
to disk. The next time that Data Integration runs the mapping, it builds the memory cache from the cache
files.
Configure the Lookup transformation to use a persistent lookup cache in the transformation advanced
properties. To use a persistent cache, enable the Lookup Cache Persistent property.
You can configure the following options when you use a persistent cache:
When you use a persistent lookup cache, you can specify a name for the cache files.
To specify a name, enter the file name prefix in the Cache File Name Prefix field on the Advanced tab of
the Lookup transformation. Do not enter a suffix such as .idx or .dat.
If the lookup table changes occasionally, you can configure the Lookup transformation to rebuild the
lookup cache. When you do this, Data Integration rebuilds the lookup cache from the lookup source
when it first calls the Lookup transformation instance.
To configure the transformation to rebuild the cache, enable the Re-cache from Lookup Source property
on the Advanced tab of the Lookup transformation.
When you rebuild a cache, Data Integration creates new cache files, overwriting existing persistent cache
files. Data Integration writes a message to the session log when it rebuilds the cache.
If Data Integration cannot reuse the cache, it rebuilds the cache or fails the mapping task. The behavior can
differ based on whether the cache is named or unnamed.
The following table summarizes how Data Integration handles named and unnamed persistent caches when
the mapping changes between runs:
- Data Integration cannot locate the cache files, for example, because a file no longer exists. Named cache: rebuilds the cache. Unnamed cache: rebuilds the cache.
- Enable or disable the Enable High Precision option in the mapping task advanced session properties. Named cache: fails the mapping task. Unnamed cache: rebuilds the cache.
- Edit the transformation in the Mapping Designer or Mapplet Designer, excluding editing the transformation description. Named cache: fails the mapping task. Unnamed cache: rebuilds the cache.
- Edit the mapping, excluding the Lookup transformation. Named cache: reuses the cache. Unnamed cache: rebuilds the cache.
- Change the number of partitions in the pipeline that contains the Lookup transformation. Named cache: fails the mapping task. Unnamed cache: rebuilds the cache.
- Change the database connection or the file location used to access the lookup table. Named cache: fails the mapping task. Unnamed cache: rebuilds the cache.
Unconnected lookups
An unconnected Lookup transformation is a Lookup transformation that is not connected to other
transformations in a mapping. A transformation in the mapping pipeline calls the Lookup transformation with
a :LKP expression. The unconnected Lookup transformation returns one column to the calling
transformation.
You can use an unconnected Lookup transformation to perform a lookup against the following types of data
objects:
• Flat file
• Relational database
• Amazon Redshift V2
• Amazon S3 V2
• Google BigQuery V2
• Microsoft Azure Synapse SQL
• Snowflake Data Cloud
The following table lists the differences between connected and unconnected Lookup transformations:
- Input values: A connected Lookup transformation receives input values directly from the mapping pipeline. An unconnected Lookup transformation receives input values from the result of a :LKP expression in another transformation.
- Cache: For a connected lookup, the cache includes all lookup columns used in the mapping, including columns in the lookup condition and columns linked as output fields to other transformations, and you can use a static or dynamic cache. For an unconnected lookup, the cache includes all lookup/output fields in the lookup condition and the lookup/return field, and you cannot use a dynamic cache.
- Return values: A connected lookup returns multiple values from the same row. An unconnected lookup returns the specified field for each row.
- Lookup conditions: For a connected lookup, if there is no match for a lookup condition, Data Integration returns the default value for all output fields; if there is a match, Data Integration returns the results of the lookup condition for all lookup/output fields. For an unconnected lookup, if there is no match for the lookup condition, Data Integration returns NULL; if there is a match, Data Integration returns the result of the lookup condition to the return field.
- Output values: A connected lookup passes multiple output values to another transformation and links lookup/output fields to another transformation. An unconnected lookup passes one output value to another transformation; the lookup/output/return field passes the value to the transformation that contains the :LKP expression.
1. On the General tab of the Lookup transformation, enable the Unconnected Lookup option.
2. Create the incoming fields.
On the Incoming Fields tab of the Lookup transformation, create an incoming field for each argument in
the :LKP expression. For each lookup condition you plan to create, you need to add an incoming field to
the Lookup transformation. You can create a different field for each condition, or use the same incoming
field in more than one condition.
3. Designate a return value.
You can pass multiple input values into a Lookup transformation and return one column of data. Data
Integration can return one value from the lookup query. Use the return field to specify the return value.
4. Configure a lookup expression in another transformation.
Supply input values for an unconnected Lookup transformation from a :LKP expression in a
transformation that uses expressions such as an Expression, Aggregator, Filter, or Router
transformation. The arguments are local input fields that match the Lookup transformation input fields
used in the lookup condition.
For example, the following expression passes the ITEM_ID and PRICE fields to an unconnected Lookup
transformation named lkp_ItemPrices:
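Using the :LKP syntax shown elsewhere in this chapter, such a call would look similar to the following sketch:
:LKP.lkp_ItemPrices(ITEM_ID, PRICE)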
• The order in which you list each argument must match the order of the lookup conditions in the Lookup
transformation.
• The datatypes for the fields in the expression must match the datatypes for the input fields in the Lookup
transformation.
• The argument fields in the expression must be in the same order as the input fields in the lookup
condition.
• If you call a connected Lookup transformation in a :LKP expression, Data Integration marks the mapping
invalid.
First, you configure a Lookup Condition, which is an expression that identifies what rows to return from the
lookup table. For example, create a Simple Lookup Condition to find all the records where the CUSTOMER_ID
Lookup Field is equal to the Incoming Field, CUSTOMER_IN.
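Written as a condition, that example looks similar to the following, where CUSTOMER_ID is the lookup field and CUSTOMER_IN is the incoming field:
CUSTOMER_ID = CUSTOMER_IN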
Based on this condition, the Lookup finds all the rows where the customer ID is equal to the customer
number that is passed to the Lookup transformation.
You can also add multiple conditions. For example, if you add this condition, the Lookup returns only the
orders that are greater than $100.00.
The Lookup returns data only when all conditions are true.
A dynamic cache is helpful when the source table contains a large amount of data or it contains duplicate
primary keys.
The following example illustrates the advantage of using a dynamic cache rather than a static cache when
the source table includes duplicate primary keys.
You want to update your payroll table with data from your Human Resources department. The payroll table
includes the following data, where ID is the primary key:
ID Name Location
1 Abhi USA
2 Alice UK
In the mapping, you specify the Human Resources department's table to be the source. The source table
includes the following data:
ID Name Location
1 Abhi India
2 Alice UK
3 James Japan
3 James USA
You create a mapping task to update the payroll table. When the mapping task begins, it creates the cache
file that contains the rows in the target table. As the task processes the rows, it flags the first row as an
update and it updates the cache. It flags the third row as an insert and inserts the row in the cache. It flags
the fourth row as an update because the row exists in the cache.
If you follow the same scenario using a static cache, the task flags the fourth row as an insert. The cache
does not contain the row for James because it does not update as the task processes the rows. The target
database produces an error because it cannot handle two rows with the same primary key.
For example, you need to load some sales order data from SAP transactional tables to a relational table in
your data warehouse. The SAP tables contain numeric IDs for values such as the sales group and sales
office. In the data warehouse table, you want to replace the numeric IDs with the corresponding names in
your local language. The name that is associated with each ID is stored in a reference table. Use an
unconnected Lookup transformation to retrieve the names from the reference table.
Source transformation
Use a Source transformation to specify the tables from which to extract data.
On the Source tab, configure the source connection and select the tables from which you want to extract
data.
Optionally, use an Expression transformation to rename fields and replace null values.
On the Incoming Fields tab, use the Named Fields field selection criteria to select the fields that you
want to load to the target table. If required, rename the selected fields to give them more meaningful
names.
On the Expression tab, create output fields to replace the null values. For example, to replace null values
for the sales group code and sales office code with spaces, you might create the following output fields:
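A sketch of such output fields, assuming the incoming code fields are named sales_group_code and sales_office_code (the field names are illustrative):
out_sales_group_code = IIF(ISNULL(sales_group_code), ' ', sales_group_code)
out_sales_office_code = IIF(ISNULL(sales_office_code), ' ', sales_office_code)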
Use an unconnected Lookup transformation to retrieve the descriptions from the reference table.
On the Incoming Fields tab, create an incoming field for each value that you need to pass to the Lookup
transformation to retrieve the data that you want. For example, to pass the domain name, language, and
code value to the Lookup transformation, create the in_domain_name, in_language, and in_lookup_code
fields.
On the Lookup Object tab, configure the lookup connection and select the reference table that you want
to use as the lookup table.
On the Lookup Condition tab, specify the lookup condition for each incoming field. For example:
domain_name = in_domain_name
language_code = in_language
lookup_code = in_lookup_code
On the Return Fields tab, select the field in the reference table that you want to return. For example, to
return a description, you might select lookup_description as the return field.
Use an Expression transformation to call the unconnected Lookup transformation and retrieve the name
that is associated with each ID value.
On the Incoming Fields tab, include all fields from the upstream transformation.
On the Expression tab, create an output field to retrieve each description from the Lookup
transformation. Call the Lookup transformation with a :LKP expression. For example, to retrieve the sales
group and sales office descriptions, create the following output fields:
sales_group: :LKP.lkp_Descriptions('sales_group','en',in_sales_group)
sales_office: :LKP.lkp_Descriptions('sales_office','en',in_sales_office)
Target transformation
On the Target tab, configure the target connection and select the relational table to which you want to
load data.
On the Field Mapping tab, map the output fields from the upstream transformation to the appropriate
target fields. For example, to map the sales_group and sales_office output fields from the second
Expression transformation to the SALES_GROUP and SALES_OFFICE target fields, configure the
following field mapping:
SALES_GROUP sales_group
SALES_OFFICE sales_office
Machine Learning transformation
The following video shows a use case for how you can use the Machine Learning transformation to
incorporate a machine learning model into data integration jobs in your organization:
Before you use the Machine Learning transformation, verify that the following requirements are met:
• The machine learning model is deployed on a machine learning platform, such as Amazon SageMaker or
Azure Machine Learning, and a REST endpoint is available to get predictions from the model.
• An API collection has a POST request to access the REST endpoint using a REST V3 connection.
For information about API collections, see Components.
To use the Machine Learning transformation, your organization must have the appropriate licenses.
Deploy the model as a REST endpoint according to your machine learning platform:
Amazon SageMaker
In Amazon SageMaker, use Amazon API Gateway and AWS Lambda to deploy the model as an endpoint.
For more information, refer to the instructions in the following AWS Machine Learning blog post:
https://aws.amazon.com/blogs/machine-learning/call-an-amazon-sagemaker-model-endpoint-using-
amazon-api-gateway-and-aws-lambda/
Azure Machine Learning
In Azure Machine Learning, deploy the model as a real-time endpoint. For more information about real-time endpoints, refer to the Microsoft Azure documentation.
After you deploy the model as a REST endpoint, create an API collection and configure a REST API request to
access the endpoint.
Accessing the machine learning model
Define how the Machine Learning transformation accesses the machine learning model on the Model tab.
Select a REST API request from an API collection. You can use the same REST V3 connection as the API
collection, or you can select a different REST V3 connection.
You cannot use the following authorization methods in the REST V3 connection because they are not
available in the Machine Learning transformation:
The API collection must have a POST request to access the machine learning model. If you change the
request type and synchronize the API collection, the Machine Learning transformation clears the Model,
Request Mapping, and Response Fields tabs. You must select a different POST request and reconfigure the
transformation.
The request schema fields come from the request schema in the REST API request from the API collection.
The working field name is the key that the Machine Learning transformation passes in the REST API request.
If the REST API can't process a special character in the field name, the working field name replaces the
special character. For example, underscores are replaced with a period character (.).
The names of incoming fields and request schema fields can differ, but the data types and hierarchies must
match. All request schema fields must have a mapped field, even if the fields are not mandatory in the REST
API.
Note that only parent fields appear in the request mapping. To review child fields, refer to the incoming fields
in the Machine Learning transformation and the request schema fields in the API collection.
For example, the following image shows a request schema in an API collection:
The following image shows incoming fields in a Machine Learning transformation with a hierarchy that
matches the request schema:
If the hierarchies don't match, use a Hierarchy Processor transformation before the Machine Learning
transformation to restructure the incoming data according to the request schema fields. For more
information, see Chapter 12, “Hierarchy Processor transformation” on page 151.
Method to map fields to the target. Use one of the following options:
• Manual. Manually link incoming fields to target fields. Selecting this option removes links for
automatically mapped fields. To map fields manually, drag a field from the incoming fields list and
position it next to the appropriate field in the target fields list. Or, you can map selected fields, unmap
selected fields, or clear all of the mappings using the Actions menu.
• Automatic. Automatically link fields with the same name. Use when all of the fields that you want to
link share the same name. You cannot manually link fields with this option.
• Completely Parameterized. Use a parameter to represent the field mapping. In the task, you can
configure all field mappings.
• Partially Parameterized. Configure links in the mapping that you want to enforce and use a parameter
to allow other fields to be mapped in the mapping task. Or, use a parameter to configure links in the
mapping, and allow all fields and links to display in the task for configuration.
For more information about field mapping parameters, see Mappings.
Parameter
New Parameter
Show Fields
Controls the fields that appear in the Incoming Fields list. Show all fields, unmapped fields, or mapped
fields.
Automap
Links fields with matching names. Allows you to link matching fields, and then manually configure other
field mappings.
• Exact Field Name. Data Integration matches fields of the same name.
• Smart Map. Data Integration matches fields with similar names. For example, if you have an incoming
field Cust_Name and a target field Customer_Name, Data Integration automatically links the Cust_Name
field with the Customer_Name field.
You can use both Exact Field Name and Smart Map in the same field mapping. For example, use Exact
Field Name to match fields with the same name and then use Smart Map to map fields with similar
names.
You can undo all automapped field mappings by clicking Automap > Undo Automap. To unmap a single
field, select the field to unmap and click Actions > Unmap.
Data Integration highlights newly mapped fields. For example, when you use Exact Field Name, Data
Integration highlights the mapped fields. If you then use Smart Map, Data Integration only highlights the
fields mapped using Smart Map.
Action menu
• Map Selected. Links the selected incoming field with the selected target field.
• Unmap Selected. Clears the link for the selected field.
• Clear Mapping. Clears all field mappings.
Show
Determines how field names appear in the target fields list. Use technical field names or labels.
The response schema fields come from the response schema in the REST API request from the API
collection. The working field name is the key that the Machine Learning transformation reads from the REST
API response. The working field name might replace special characters that the REST API can't process. The
field name is the corresponding output field that the Machine Learning transformation uses to represent the
response data in the mapping.
Additionally, the output fields that are passed to downstream transformations can't contain special characters
except for the following characters: @ # ( ). The Machine Learning transformation replaces the special
characters with an underscore (_).
To create a bulk request, the Machine Learning transformation selects the highest-level array field in the
request schema. In the JSON request body for the bulk request, the transformation combines request rows
as elements of the selected array field so that one JSON request body contains data for multiple requests.
You can configure bulk request options to determine how much data each bulk request contains.
For example, the request schema might have the following structure:
The Machine Learning transformation selects the data array as the highest-level array field to combine
requests. If you configure each bulk request to send 2 MB of data to the machine learning model, the
Machine Learning transformation configures the data array in the JSON request body to include data for 2
MB of request rows.
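As an illustration only, with hypothetical feature fields, a bulk request body that combines two request rows in the data array might look like the following:
{
  "data": [
    { "feature_1": 0.5, "feature_2": "A" },
    { "feature_1": 1.2, "feature_2": "B" }
  ]
}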
The highest-level array cannot have sibling array fields. If the highest-level array has a sibling field of a
primitive data type, the data in the sibling field will not be combined. Instead, one random record in the
sibling field will be sent to the machine learning model.
To create a bulk response, the machine learning endpoint must combine response rows as elements of the
highest-level array in the response schema. The Machine Learning transformation parses the array into
output rows. Review the bulk request options to verify which field the Machine Learning transformation will
parse.
To use bulk requests in the Machine Learning transformation, the machine learning endpoint must be
configured to accept bulk requests and send bulk responses.
Option Description
Combine Requests The transformation combines request rows as elements of this array field and sends one
on Field request to the machine learning model. The request schema determines this field
automatically.
Request Size Maximum size of each request. To choose a size, consult the best practices on your machine
learning platform.
Parse Requests on The machine learning endpoint combines response rows as elements of this array field. Then,
Field the transformation parses the array into output rows. The response schema determines this
field automatically.
The following table describes the types of proxies that are available:
None Bypasses the proxy server configured at the agent, Spark, or connection level.
Note: A platform proxy is not available and you must configure a Spark proxy instead. A platform proxy
considers the proxy configured at the agent level, but elastic mappings that use the Machine Learning
transformation refer to the Spark engine for proxy details.
API proxies do not apply if the Machine Learning transformation runs in a serverless runtime environment.
Troubleshooting
Use Monitor to access log files to troubleshoot REST API requests and responses in the Machine Learning
transformation. Set the tracing level to verbose data to view the details.
The following table lists the details that are available in each log file:
Spark executor log If the mapping fails, the following details become available for each failed row:
- Response code*
- Request details, such as the URL, headers, and request body
- Response details, such as the headers and response body
The following details are available for each successful row:
- Response code
- Time to receive the response
For more information about log files for elastic mappings, see the Monitor help.
• Connection issues when there is a connection timeout, the route cannot be found, or there is an unknown
host.
• Syntax issues in the URL.
• SSL handshake issues such as certificate issues during SSL handshakes with the server.
The following table describes the action that occurs for each response code:
Mapplet transformation
The Mapplet transformation inserts a mapplet that you created in Data Integration, imported from
PowerCenter, or generated from an SAP asset into a mapping. Each Mapplet transformation can contain one
mapplet. You can add multiple Mapplet transformations to a mapping or mapplet.
The Mapplet transformation can be active or passive based on the transformation logic within the mapplet.
An active mapplet includes at least one active transformation. An active mapplet can return a number of
rows that is different from the number of source rows passed to the mapplet. A passive mapplet includes
only passive transformations. A passive mapplet returns the same number of rows that are passed from the
source.
For example, you want to create a mapping that passes customer records to a target if the customers
pass a credit check. You create a Web Services transformation to run a credit check on each customer.
You include the Web Services transformation in a mapplet and use the mapplet in a mapping to perform
the credit check.
For example, you have different fact tables that require a series of dimension keys. You create a mapplet
that contains a series of Lookup transformations to find each dimension key. You include the mapplet in
different fact table mappings instead of re-creating the lookup logic in each mapping.
The Mapplet transformation shows the mapplet incoming and outgoing fields. It does not display the
transformations that the mapplet contains.
1. Select the mapplet that you want to use in the transformation.
2. If the mapplet includes one or more input groups, configure the incoming fields.
By default, the transformation inherits all incoming fields from the upstream transformation. You can
define a field rule to limit or rename the incoming fields. If the mapplet contains multiple input groups,
configure incoming fields for each input group.
For information about configuring incoming fields for a transformation, see “Incoming fields” on page
20.
3. If the mapplet includes one or more input groups, configure field mappings to define how data moves
from the upstream transformation to the Mapplet transformation.
If the mapplet contains multiple input groups, configure the field mappings for each input group.
4. If the mapplet contains one or more output groups, verify the mapplet output fields on the Output Fields
tab. Connect at least one output group to a downstream transformation.
Selecting a mapplet
Select the mapplet that you want to use in the Mapplet transformation on the Mapplet tab of the Properties
panel. You can select a mapplet that you created or imported into Data Integration, or you can select a
mapplet that is included in an installed bundle.
1. In the Properties panel for the Mapplet transformation, click the Mapplet tab.
2. Click Select.
3. Open the project and folder that contains the mapplet and click Select.
Mapplets in installed bundles are in the Add-On Bundles project.
The selected mapplet appears in the Properties panel.
If the mapplet that you select does not include a source, configure the incoming fields and field mappings
after you select the mapplet.
If the mapplet that you select does not contain a target, configure the output fields and field mappings after
you select the mapplet.
The input group for which you want to configure field mappings. This option appears when the Mapplet
transformation has multiple input groups.
Method of mapping fields to the Mapplet transformation. Select one of the following options:
• Manual. Manually link incoming fields to Mapplet transformation input fields. Removes links for
automatically mapped fields.
• Automatic. Automatically link fields with the same name. Use when all of the fields that you want to
link share the same name. You cannot manually link fields with this option.
Options
Controls how fields are displayed in the Incoming Fields and Mapplet Input Fields lists. Configure the
following options:
• The fields that appear. You can show all fields, unmapped fields, or mapped fields.
• Field names. You can use technical field names or labels.
Automap
Links fields with matching names. Allows you to link matching fields, and then manually configure other
field mappings.
• Exact Field Name. Data Integration matches fields of the same name.
• Smart Map. Data Integration matches fields with similar names. For example, if you have an incoming
field Cust_Name and a target field Customer_Name, Data Integration automatically links the Cust_Name
field with the Customer_Name field.
You can use both Exact Field Name and Smart Map in the same field mapping. For example, use Exact
Field Name to match fields with the same name and then use Smart Map to map fields with similar
names.
You can undo all automapped field mappings by clicking Automap > Undo Automap. To unmap a single
field, select the field to unmap and click Actions > Unmap.
Data Integration highlights newly mapped fields. For example, when you use Exact Field Name, Data
Integration highlights the mapped fields. If you then use Smart Map, Data Integration only highlights the
fields mapped using Smart Map.
Action menu
• Map Selected. Links the selected incoming field with the selected mapplet input field.
• Unmap Selected. Clears the link for the selected field.
• Clear Mapping. Clears all field mappings.
Mapplet parameters
When you select a mapplet that contains parameters, the parameters are renamed in the Mapplet
transformation. You can view the corresponding parameter names on the Parameter tab of the Properties
panel.
In the Mapplet transformation, mapplet parameter names are prefixed with the Mapplet transformation
name.
You can edit the properties of the mapplet parameters, but you cannot change the parameter type or delete
the parameters. To delete a parameter, open the mapplet and remove the parameter.
To edit the parameter properties, click the new parameter name on the Parameters tab or on the Parameters
panel. When you change the parameter properties in a Mapplet transformation, the changes do not affect the
mapplet.
The Mapping Designer displays the name, type, precision, scale, and origin for each output field in each
output group. You cannot edit the transformation output fields. If you want to exclude output fields from the
data flow or rename output fields before you pass them to a downstream transformation, configure the field
rules in the downstream transformation.
The mapplet that you use in the Mapplet transformation might contain transformations with names that
conflict with the names of transformations in the mapping. To avoid name conflicts with transformations in
the mapping, Data Integration prefixes the names of transformations in the mapplet with the Mapplet
transformation name at run time.
For example, a mapplet contains an Expression transformation named Expression_1. You create a mapping
and use the mapplet in the Mapplet transformation Mapplet_Tx_1. When you run the mapping, the Expression
transformation is renamed to Mapplet_Tx_1_Expression_1.
Data Integration truncates transformation names that contain more than 80 characters.
Note: Data Integration only renames transformations in mapplets when the mapplet is used in a mapping
created after the April 2022 release.
Synchronizing a mapplet
If the interface of a mapplet changes after it has been added to a Mapplet transformation, you must
synchronize the mapplet to get the changes. Synchronize a mapplet on the Mapplet tab.
Mappings and mapplets that use the mapplet are invalid until the mapplet is synchronized. If you run a
mapping task that includes a changed mapplet, the task fails.
When you synchronize a mapplet, the updates might cause validation errors in other transformations in the
mapping or mapplet.
You cannot synchronize a mapplet that you imported into Data Integration from PowerCenter or SAP.
Normalizer transformation
The Normalizer transformation is an active transformation that transforms one incoming row into multiple
output rows. When the Normalizer transformation receives a row that contains multiple-occurring data, it
returns a row for each instance of the multiple-occurring data.
For example, a relational source includes four fields with quarterly sales data. You can configure a
Normalizer transformation to generate a separate output row for each quarter.
When the Normalizer transformation returns multiple rows from an incoming row, it returns duplicate data for
single-occurring incoming columns.
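As a minimal illustration with hypothetical values, suppose a source row contains a single-occurring StoreNo field and quarterly sales data that occurs four times. The Normalizer transformation returns four output rows, repeats the single-occurring StoreNo value in each row, and assigns the same generated key to all four rows (the generated key and generated column ID fields are described later in this chapter):
Input row: StoreNo=1001, Q1=100, Q2=150, Q3=200, Q4=250
Output rows:
StoreNo=1001, QuarterlySales=100, generated column ID=1
StoreNo=1001, QuarterlySales=150, generated column ID=2
StoreNo=1001, QuarterlySales=200, generated column ID=3
StoreNo=1001, QuarterlySales=250, generated column ID=4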
When you configure a Normalizer transformation, you define Normalizer properties on the following tabs of
the Properties panel:
• Normalized Fields tab. Define the multiple-occurring fields and specify additional fields that you want to
use in the mapping.
• Field Mapping tab. Connect the incoming fields to the normalized fields.
To use the Normalizer transformation, you need the appropriate license.
Normalized fields
Define the fields to be normalized on the Normalized Fields tab. You can also include other incoming fields
that you want to use in the mapping.
When you define normalized fields, you can create fields manually or select fields from a list of incoming
fields. When you create a normalized field, you can set the data type to String or Number, and then define the
precision and scale.
Note: In an elastic mapping, you can use any primitive data type.
When incoming fields include multiple-occurring fields without a corresponding category field, you can create
the category field to define the occurs for the data. For example, to represent three fields with different types
of income, you can create an Income category field and set the occurs value to 3.
Occurs configuration
Configure the occurs value for a normalized field to define the number of instances the field occurs in
incoming data.
To define a multiple occurring field, set the occurs value for the field to an integer greater than one. When you
set an occurs value to greater than one, the Normalizer transformation creates a generated column ID field
for the field. The Normalizer transformation also creates a generated key field for all normalized data.
The Normalizer transformation also uses the occurs value to create a corresponding set of output fields. The
output fields display on the Field Mapping tab of the Normalizer transformation. The naming convention for
the output fields is <occurs field name>_<occurs number>.
To define a single-occurring field, set the occurs value for the field to one. Define a single-occurring field to
include incoming fields that do not need to be normalized in the normalized fields list.
Use one of the following methods to process groups of multiple-occurring fields with different occurs values.
You can use multiple-occurring fields with different occurs values when you write the normalized data to
different targets.
For example, the source data includes an Expenses field with four occurs and an Income field with three
occurs. You can configure the mapping to write the normalized expense data to one target and to write
the normalized income data to a different target.
You can configure the multiple-occurring fields to use the same number of occurs, and then use the
generated fields that you need. When you use the same number of occurs for multiple-occurring fields,
you can write the normalized data to the same target.
For example, when the source data includes an Expenses field with four occurs and an Income field with
three occurs, you can configure both fields to have four occurs.
When you configure the Normalizer field mappings, you can connect the four expense fields and the
three income fields, leaving the unnecessary income output field unused. Then, you can configure the
mapping to write all normalized data to the same target.
Generated keys
The Normalizer transformation generates key values for normalized data.
Generated key fields appear on the Normalized Fields tab when you configure the field to have more than
one occurrence.
The mapping task generates the following fields for normalized data.
Generated Key
A key value that the task generates each time it processes an incoming row. When a task runs, it starts
the generated key with one and increments by one for each processed row.
The Normalizer transformation uses one generated key field for all data to be normalized.
Generated Column ID
A column ID value that represents the instance of the multiple-occurring data. For example, if an
Expenses field includes four occurs, the task uses values 1 through 4 to represent each type of
occurring data.
When you configure the Normalizer field mappings, complete the following steps:
1. Map the multiple-occurring incoming fields that you want to normalize to the corresponding output fields
that the Normalizer transformation created.
Note: Map at least one output field for each set of normalized fields.
2. Map incoming fields to all normalized fields with a single occurrence.
Show Fields
Controls the fields that appear in the Incoming Fields list. Show all fields, unmapped fields, or mapped
fields.
Automap
Links fields with matching names. Allows you to link matching fields, and then manually configure other
field mappings.
• Exact Field Name. Data Integration matches fields of the same name.
• Smart Map. Data Integration matches fields with similar names. For example, if you have an incoming
field Cust_Name and a target field Customer_Name, Data Integration automatically links the Cust_Name
field with the Customer_Name field.
You can use both Exact Field Name and Smart Map in the same field mapping. For example, use Exact
Field Name to match fields with the same name and then use Smart Map to map fields with similar
names.
You can undo all automapped field mappings by clicking Automap > Undo Automap. To unmap a single
field, select the field to unmap and click Actions > Unmap.
Data Integration highlights newly mapped fields. For example, when you use Exact Field Name, Data
Integration highlights the mapped fields. If you then use Smart Map, Data Integration only highlights the
fields mapped using Smart Map.
Action menu
• Map Selected. Links the selected incoming field with the selected target field.
• Unmap Selected. Clears the link for the selected field.
• Clear Mapping. Clears all field mappings.
Tracing Level
Detail level of error and status messages that Data Integration writes in the session log. You can choose terse, normal, verbose initialization, or verbose data. Default is normal.
Optional
Determines whether the transformation is optional. If a transformation is optional and there are no incoming fields, the mapping task can run and the data can go through another branch in the data flow. If a transformation is required and there are no incoming fields, the task fails.
For example, you configure a parameter for the source connection. In one branch of the data flow, you add a transformation with a field rule so that only Date/Time data enters the transformation, and you specify that the transformation is optional. When you configure the mapping task, you select a source that does not have Date/Time data. The mapping task ignores the branch with the optional transformation, and the data flow continues through another branch of the mapping.
• To write normalized data to the target, map the multiple-occurring field to a target field in the Target transformation field mapping.
• To include generated keys or generated column IDs in target data, create additional target fields as
required, and then map the fields in the Target transformation field mapping.
When you configure a Normalizer transformation, you define the field mappings between the incoming fields
and the normalized fields. When you use a parameter for a source object, field names do not appear in the
list of incoming fields until you use a Named Field rule to add fields.
To add fields, on the Incoming Fields tab of the Normalizer transformation, create a Named Fields rule. In the
Include Named Fields dialog box, click Add and enter the name of an incoming field.
Create fields to represent all of the source fields that you want to use in the Normalizer transformation.
To include the store number data in the mapping, from the menu, select Generate From Incoming Fields and
select StoreNo. Use the default occurs value of one because this field does not include multiple-occurring
data.
The following image shows the Normalized Fields tab after adding both fields:
Notice that when you set the QuarterlySales occurs value to four, the Normalizer creates the generated
column ID field and the generated key field.
In the Normalized Fields list, the Normalizer replaces the multiple-occurring QuarterlySales field with
corresponding fields to hold the normalized data: QuarterlySales_1, QuarterlySales_2, QuarterlySales_3, and
QuarterlySales_4. The list also includes the StoreNo field.
Connect the incoming fields to the StoreNo and QuarterlySales normalized fields as follows:
In the Aggregator transformation, use the default All Fields rule to pass all fields from the Normalizer to the
Aggregator.
To group data by store number, add a group by field on the Group By tab, and then select the StoreNo field.
The following image shows the Group By tab with the StoreNo group by field:
On the Aggregate tab, create a Decimal output field named AnnualSales_byStore. To configure the output
field, use the QuarterlySales field in the following aggregate expression: SUM(QuarterlySales). The
QuarterlySales field represents all of the normalized quarterly data.
Use the default All Fields rule to pass all fields from the Aggregator to the Target transformation.
On the Target tab, select the target connection and the target object.
On the Field Mapping tab, the incoming fields list includes the AnnualSales_byStore field created in the
Aggregator transformation, and the StoreNo field that passed through the mapping from the source.
The incoming fields list also includes the QuarterlySales and generated key columns created by the
Normalizer. These fields do not need to be written to the target.
Task results
When you run the task, the mapping task normalizes the source data, creating one row for each quarter. The
task groups the normalized data by store, and then aggregates the quarterly unit sales for each store.
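As a rough illustration only, the following Python sketch mirrors the logic that the task applies; the store numbers and sales figures are made up, and this is not how the engine executes the mapping. It pivots the quarterly values into one row per quarter and then sums them by store:
from collections import defaultdict

source = [{"StoreNo": 1001, "QuarterlySales": [250.0, 300.0, 280.0, 320.0]},
          {"StoreNo": 1002, "QuarterlySales": [150.0, 175.0, 160.0, 180.0]}]

# Normalize: one output row per quarterly value.
normalized = [{"StoreNo": row["StoreNo"], "QuarterlySales": quarter_value}
              for row in source for quarter_value in row["QuarterlySales"]]

# Aggregate: group by StoreNo and sum the normalized quarterly values.
annual_sales_by_store = defaultdict(float)
for row in normalized:
    annual_sales_by_store[row["StoreNo"]] += row["QuarterlySales"]

print(dict(annual_sales_by_store))   # {1001: 1150.0, 1002: 665.0}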
Output transformation
The Output transformation is a passive transformation that you use to pass data from a mapplet to a
downstream transformation.
Add output fields to the Output transformation to define the data fields you want to pass from the mapplet.
You must add at least one output field to each Output transformation. You can add multiple Output transformations to a mapplet. Each Output transformation becomes an output group when you use the
mapplet in a Mapplet transformation. You must connect at least one output group to a downstream
transformation. You can connect an output group to multiple downstream transformations.
Output fields
Add output fields to an Output transformation to define the data fields you want to pass from the Mapplet to
the downstream transformation. You must add at least one output field to each Output transformation.
Add output fields on the Output Fields tab of the properties panel. To add a field, click Add Field, and then
enter the field name, data type, precision, and scale.
When you use the mapplet in a Mapplet transformation, map at least one output field to the downstream
transformation.
Field mapping
Map fields to configure how data moves from the upstream transformation to the Output transformation.
The Field Mapping tab includes a list of incoming fields and a list of target fields.
Field Map Options
Method of mapping fields to the Mapplet transformation. Select one of the following options:
• Manual. Manually link incoming fields to Mapplet transformation input fields. Removes links for
automatically mapped fields.
• Automatic. Automatically link fields with the same name. Use when all of the fields that you want to
link share the same name. You cannot manually link fields with this option.
Options
Controls how fields are displayed in the Incoming Fields and Output Fields lists. Configure the following
options:
• The fields that appear. You can show all fields, unmapped fields, or mapped fields.
• Field names. You can use technical field names or labels.
Automap
Links fields with matching names. Allows you to link matching fields, and then manually configure other
field mappings.
• Exact Field Name. Data Integration matches fields of the same name.
• Smart Map. Data Integration matches fields with similar names. For example, if you have an incoming
field Cust_Name and a target field Customer_Name, Data Integration automatically links the Cust_Name
field with the Customer_Name field.
You can use both Exact Field Name and Smart Map in the same field mapping. For example, use Exact
Field Name to match fields with the same name and then use Smart Map to map fields with similar
names.
You can undo all automapped field mappings by clicking Automap > Undo Automap. To unmap a single
field, select the field to unmap and click Actions > Unmap.
Data Integration highlights newly mapped fields. For example, when you use Exact Field Name, Data
Integration highlights the mapped fields. If you then use Smart Map, Data Integration only highlights the
fields mapped using Smart Map.
Action menu
• Map Selected. Links the selected incoming field with the selected mapplet input field.
• Unmap Selected. Clears the link for the selected field.
• Clear Mapping. Clears all field mappings.
Parse transformation
The Parse transformation adds a parse asset that you created in Data Quality to a mapping.
A parse asset defines a set of operations that identify tokens in an input field based on the content or
structure of the token. In a parsing operation, a token is a discrete word or string.
The Parse transformation parses the tokens to output fields that the asset specifies. When you configure the
transformation, you map an input field to the appropriate target field in the transformation. When the
mapping runs, the transformation searches the input field for tokens that meet the parsing criteria and writes
the tokens to the associated output fields. If the transformation can identify an input data value but a defined
output field is not available, the transformation may write the value to a predefined field for overflow data. If
the transformation cannot identify a value in the input data, it may write the value to a predefined field for
unparsed data. The asset that you add to the transformation determines the number of overflow and
unparsed data fields that the transformation creates.
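To make the parsed, overflow, and unparsed categories concrete, here is a standalone Python sketch of a hypothetical parsing rule. The regular expression, the two-output-field limit, and the sample input are assumptions for illustration only and do not come from any particular parse asset:
import re

def parse_state_codes(value, output_slots=2):
    # Tokens that match the rule go to output fields, extra matches overflow,
    # and everything else is treated as unparsed data.
    parsed, overflow, unparsed = [], [], []
    for token in value.split():
        if re.fullmatch(r"[A-Z]{2}", token):
            (parsed if len(parsed) < output_slots else overflow).append(token)
        else:
            unparsed.append(token)
    return parsed, overflow, unparsed

print(parse_state_codes("CA NY TX Boston"))   # (['CA', 'NY'], ['TX'], ['Boston'])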
A Parse transformation is similar to a Mapplet transformation, as it allows you to add data transformation
logic that you designed elsewhere to a mapping. Like mapplets, parse assets are reusable assets.
A Parse transformation shows incoming and outgoing fields. It does not display the logic that the parse asset contains or allow you to edit the parse asset. To edit the parse asset, open it in Data Quality.
The following image shows the options that you use to select the parse asset:
Note: If you update an asset in Data Quality after you add it to a transformation, you may need to synchronize
the asset version in the transformation with the latest version. For more information about data quality asset
synchronization, see “Synchronizing data quality assets” on page 89.
• Manual. Manually link an incoming field to a transformation input field. Removes links for any
automatically mapped field.
• Automatic. Automatically link fields with the same name. Use when all of the fields that you want to
link share the same name. You cannot manually link fields with this option.
• Completely Parameterized. Use a parameter to represent the field mapping.
Choose the Completely Parameterized option when the parse asset in the transformation is
parameterized or any upstream transformation in the mapping is parameterized.
• Partially Parameterized. Configure links in the mapping that you want to enforce and use a parameter
to allow other fields to be mapped in the mapping task. Or, use a parameter to configure links in the
mapping, and allow all fields and links to display in the task for configuration.
Parameter
Select the parameter to use for the field mapping, or create a new parameter. This option appears when
you select Completely Parameterized or Partially Parameterized as the field map option. The parameter
must be of type field mapping.
Do not use the same field mapping parameter in more than one Parse transformation in a single
mapping.
Options
Controls how fields are displayed in the Incoming Fields and Target Fields lists.
• The fields that appear. You can show all fields, unmapped fields, or mapped fields.
• Field names. You can use technical field names or labels.
Automap
Links fields with matching names. Allows you to link matching fields and to manually configure other
field mappings. The Automap options appear when you select the Manual or Partially Parameterized
field map option.
• Exact Field Name. Data Integration matches fields of the same name.
• Smart Map. Data Integration matches fields with similar names. For example, if you have an incoming
field Cust_Name and a target field Customer_Name, Data Integration automatically links the Cust_Name
field with the Customer_Name field.
You can undo all automapped field mappings by clicking Automap > Undo Automap.
To unmap a single field, select the field to unmap and click Actions > Unmap on the context menu for the
field. To unmap one or more fields that you selected, click Unmap Selected on the Target Fields context
menu.
To clear all field mappings from the transformation, click Clear Mapping on the Target Fields context
menu.
The Parse transformation can create some or all of the following types of output field:
Parsed fields
Contain data values that meet the parsing criteria that the asset defines.
Overflow fields
Contain data values that meet the parsing criteria but for which a corresponding output field is not
available. The Parse transformation writes a value to an overflow field when all appropriate output fields
for the value are already populated.
Unparsed field
A field that contains any value that does not meet the parsing criteria that the asset defines.
• The type and number of output fields depend on the parsing operations that the user configures in the parse asset.
• When the asset specifies a regular expression or a dictionary, the transformation creates one or more
output fields for the data that each regular expression or dictionary parses successfully.
The user who configures the asset determines the number of output fields for each regular expression or
dictionary operation. Each regular expression or dictionary operation is called a step in the asset
configuration.
• When the asset specifies pattern-based parsing, the transformation creates a range of output fields that
represent the types of information that the pattern logic might find.
A pattern-based parsing operation can generate output fields for the following types of information:
- Person names, such as first names, family names, name prefixes, and name suffixes.
- Label values that represent the pattern that the parsing operation identified in the input data row. The
Data Quality user can use the labels to enhance the pattern logic in the asset.
The asset logic determines the number of output fields for parsed data.
• When the asset specifies a regular expression or a dictionary, the transformation may create a single
overflow field for all overflow data. Or, the transformation may create an overflow field for each regular
expression or dictionary that the asset defines. The user can update the asset properties to determine the
policy for overflow fields.
When the asset specifies a pattern parsing operation, the transformation may or may not create a single
overflow field. The presence or absence of the overflow field depends on the locale that the asset
specifies for the input data. For example, the Parse transformation creates an overflow field for pattern
parsing operations when the asset specifies the locale as Portugal or Brazil. The Data Quality user sets
the locale.
Python transformation
In an elastic mapping, you can use the Python transformation to define transformation functionality using the
Python programming language. The Python transformation can be an active or passive transformation.
You can use the Python transformation to define simple or complex transformation functionality. You can
also use the Python transformation to implement machine learning. For example, you can load a pre-trained
model through a resource file and use the model to classify input data or to create predictions.
To create a Python transformation, you write the following types of Python code snippets:
• Pre-partition Python code that runs one time before it processes any input rows.
• Main Python code that runs when the transformation receives an input row.
• Post-partition Python code that runs after the transformation processes all input rows.
To use the Python transformation, your organization must have the appropriate licenses.
You cannot use the Python transformation with a Graviton-enabled cluster. For more information on a
Graviton-enabled cluster, see Data Integration Elastic Configuration.
Note: When you create a Python transformation, ensure that you review the Python code to verify that it is
free from potentially unsafe active content such as queries, remote scripts, or data connections before you
run the code in a mapping task.
If you have a custom Python installation, you cannot perform a test run of the mapping. You must create a
mapping task based on the elastic mapping and provide advanced.custom.property for the advanced
session property.
Install Python and add resource files based on the type of runtime environment:
Runtime environment
Add the Python installation in the following directory on the Secure Agent machine:
<Secure Agent installation directory>/ext/python/
If you reference resource files in the Python code, add the resource files to the same directory. To
maintain consistency, you can store the resource files in a dedicated folder named python_resources.
Consider the following guidelines:
• If the Secure Agent machine stops unexpectedly and the agent restarts on a different machine, you
must add the Python installation and resource files to the same directory on the new machine.
• If you update the Python installation or resource files on the Secure Agent machine, the files take
effect the next time that you run a job on the elastic cluster.
• To prevent long-running jobs from failing, do not update the files on the Secure Agent machine more
than four times while you have jobs running.
Serverless runtime environment
Install Python and add resource files in the supplementary file location.
If you update the Python installation or resource files, you must redeploy the serverless runtime
environment for the changes to take effect.
For more information about the supplementary file location, see Administrator in the Administrator help.
Add output fields for the output data that you want to pass to the downstream transformation. To add a field,
click Add Field, and then enter the field name, data type, precision, and scale. You can also create output
fields on the Outputs tab of the Python editor by clicking Create New Field.
After you add fields to the transformation, you can use the field names as variables in the Python code.
You can configure one or more input fields as partition keys. Data Integration uses the partition keys to repartition the data before the code runs. If you do not add an incoming field as a partition key, the data is processed using its default partitioning scheme.
When a Python transformation reads input rows, it converts incoming field data types to Python data types.
When a Python transformation writes output rows, it converts Python data types to output field data types.
1. The Python transformation converts the double data type in the incoming field to the Python float data
type.
2. The Python transformation uses the value in the incoming field as the value for the Python float data
type.
3. When the transformation generates the output row, it converts the Python float data type to the double
data type.
The following table shows how the Python transformation maps transformation data types to Python data
types:
Transformation data type - Python data type
Integer - Int
Decimal - Float
Double - Float
Timestamp - Datetime
String - Str
text - Text
For example, you create an incoming field with the integer data type and an output field with the string data
type. You define the Python code to process the data in the incoming field and write the data to the output
field. In the Python code, you can use the function str() to convert the integer data type in the incoming field
and write the output as a string data type in the output field.
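A minimal Main Python Code line for that example might look like the following; the field names count_in and count_out are hypothetical:
# count_in is an integer incoming field; count_out is a string output field.
count_out = str(count_in)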
Partition keys
In an elastic mapping, you can use partition keys to define how to group data into partitions before the
Python code runs.
You can configure one or more input fields as partition keys. Data Integration Elastic uses the partition keys
to repartition the data before the code runs. If you do not add an incoming field as a partition key, the data is
processed using its default partitioning scheme.
• An active transformation can change the number of rows that pass through it.
To define the number of rows in the output, call the generateRow() method in the code to generate each
output row. You might choose to generate multiple output rows from a single input row or generate a
single output row from multiple input rows. For example, if the transformation contains two incoming fields that represent a start date and an end date, you can call the generateRow() method to generate an output row for each date between the start date and the end date, as shown in the sketch after this list.
• A passive transformation cannot change the number of rows that pass through the transformation. The
transformation calls the generateRow() method to generate an output row after processing each input
row.
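The following is only a sketch of what the Main Python Code for that start date and end date example could look like. The field names start_date, end_date, and report_date_out are assumptions, and the incoming values are assumed to arrive as Python datetime objects:
from datetime import timedelta   # the import could also go in the Pre-Partition Python Code section

current = start_date
while current <= end_date:
    report_date_out = current    # assign a value to the output field for this row
    generateRow()                # emit one output row per date in the range
    current += timedelta(days=1)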
Resource files
The Python transformation uses resource files and the Python code to define the transformation
functionality. If you use a pre-trained model, you specify the pre-trained model as a resource file in the Python
transformation.
Resource file
A file that contains the resources that you access in the Python code.
The file can be a pre-trained model that has been trained on a larger data set outside Data Integration.
You can use the pre-trained model to classify data or make predictions based on the data that you pass
to the Python transformation. You can access the pre-trained model in the Python code.
Python code
The Python code that the Python transformation uses to process data that you pass to the
transformation. When you write Python code, you might reconstruct input variables, load a pre-trained
model, or define output variables.
Enter Python code snippets in the following sections of the Python editor:
Pre-Partition Python Code
Defines code that can be interpreted one time and shared among all rows of data.
Main Python Code
Defines how the Python transformation behaves when it receives an input row while processing a partition. The Python transformation processes the code in the Main Python Code section for each partition and each row.
Post-Partition Python Code
Defines how the Python transformation behaves after it processes all input data in a partition. You can call the generateRow() method to generate output rows.
• Define variables before you use them. For example, you cannot reference a variable in the Pre-Partition
Python Code section if the variable is defined in the Main Python Code section.
• Call the incoming field name to access incoming fields.
• The Python code must assign a value to each output field.
• To define how the transformation writes data from the incoming fields to output fields, set the output field
to the value of the incoming field.
For example, write output_field = incoming_field to write the data from the incoming field
incoming_field to the output field output_field.
• To access resource files, use the variable resourceFilesArray. Specify the resource file using an index
such as resourceFilesArray[0].
• The Mapping Designer does not validate Python code.
The following image shows the Python tab with the Python editor expanded:
1. Inputs and Outputs tabs. Use these tabs to add incoming fields and output fields as variables in the
Python code snippets. The fields and methods displayed on these tabs vary based on which section of
the code entry area is selected.
2. Go to list. Use to switch among the sections in the code entry area.
3. Minimize, Open Both, and Maximize icons. Use the Minimize and Maximize buttons to minimize and
maximize the transformation properties. Use the Open Both icon to open the Mapping Designer canvas
and the transformation properties at the same time.
4. Code entry area. Enter Python code snippets in the Pre-Partition Python Code, Main Python Code, and
Post-Partition Python Code sections.
Tip: To expand the transformation properties so that you can see the code entry area more fully, click
Maximize.
1. Select the section in which you want to enter a code snippet in the Go to list.
2. To access an incoming field or output field in the snippet, select the field on the Inputs or Outputs tab,
and click Add.
You can also create output fields on the Outputs tab by clicking Create New Field.
3. Write appropriate Python code based on the section.
In the Python transformation, enter the relative path of the resource file. For example, if the resource file is
stored in <Secure Agent installation directory>/ext/python/folder1/myfile, then the relative path
would be /folder1/myfile.
For example, when you specify several resource files, you reference the first resource file in the Python code using resourceFilesArray[0]. You reference the second resource file using resourceFilesArray[1].
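For illustration only, a Pre-Partition Python Code snippet that reads two resource files could look like the following; the variable names and the file contents are assumptions:
with open(resourceFilesArray[0]) as first_file:    # first resource file
    lookup_rows = first_file.read().splitlines()
with open(resourceFilesArray[1]) as second_file:   # second resource file
    stop_words = set(second_file.read().split())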
The source data contains a SensorLocation column and a LastReadingTime column.
To add an ID column and assign ID values to each sensor, perform the following tasks:
Create a Python transformation. On the Advanced tab, set the behavior to Passive.
Pass data from upstream transformations in the mapping to the Python transformation.
After you pass the data to the Python transformation, it contains the following incoming fields:
• SensorLocation
• LastReadingTime
Use the Output Fields tab in the Python transformation to create the output field SensorID_out to
represent the ID column.
Additionally, create the following output fields to pass incoming data to downstream transformations:
• SensorLocation_out
• LastReadingTime_out
In the Main Python Code section, set the ID value for each row that is processed and write the data to
the output fields using the following code:
SensorID="".join(str(x) for x in map(ord, SensorLocation))
SensorID_out = SensorID
SensorLocation_out = SensorLocation
LastReadingTime_out = LastReadingTime
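As a standalone illustration of what the first line computes (the location value "AB" is made up), the expression concatenates the ordinal value of each character in the location string:
location = "AB"                                         # hypothetical SensorLocation value
sensor_id = "".join(str(x) for x in map(ord, location))
print(sensor_id)                                        # prints 6566 because ord('A') is 65 and ord('B') is 66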
If the output fields in the Python transformation are linked directly to a Target transformation, the target contains the following data after you run the mapping:
You can use the Python transformation to determine which employee earns the highest salary in their
department.
The following table shows the data that your organization might collect:
To use the Python transformation to determine which employee earns the highest salary in their department,
perform the following tasks:
Step 1. Create a Python transformation. On the Advanced tab, set the behavior to Active.
Step 2. Pass the following fields from upstream transformations in the mapping to the Python transformation:
• DepartmentName
• DepartmentID
• EmployeeName
• SalaryIndex
• EmployeeSince
Step 3. Partition the data by department to track the highest salary within each department. To partition the data by department, add the incoming field DepartmentID as a partition key on the Partition Keys tab.
Step 4. Create the following output fields on the Output Fields tab to pass data to downstream transformations:
• DepartmentName_out
• DepartmentID_out
• EmployeeName_out
• SalaryIndex_out
• EmployeeSince_out
Step 5. Declare a map variable outputmap to associate each department ID with the employee in the department who has the highest salary.
Step 6. For each input row that passes through the Python transformation, define code that checks if the salary of the employee is higher than the maximum salary of the previous rows that have been processed. If the salary of the employee is higher, update the employee who has the maximum salary in the department.
outputmap.setdefault(DepartmentID, None)
updateMax = False
# Update the maximum if this is the first row for the department or if the
# employee's salary is higher than the highest salary processed so far.
if outputmap[DepartmentID] is None or SalaryIndex > outputmap[DepartmentID]['SalaryIndex']:
    updateMax = True
if updateMax == True:
    employee_data = {'SalaryIndex':SalaryIndex,'EmployeeName':EmployeeName,
                     'EmployeeSince':EmployeeSince,'DepartmentName':DepartmentName}
    outputmap[DepartmentID] = employee_data
Step 7. Write the data to the output fields.
In the Post-Partition Python Code section of the Python tab, use the data in the map variable outputmap
to generate a row for the employee that has the highest salary in each department.
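A possible sketch of that Post-Partition Python Code, assuming the output fields created in Step 4, is shown below. It is illustrative rather than the only way to write it:
for DepartmentID_key, employee_data in outputmap.items():
    if employee_data is None:
        continue
    DepartmentID_out = DepartmentID_key
    DepartmentName_out = employee_data['DepartmentName']
    EmployeeName_out = employee_data['EmployeeName']
    SalaryIndex_out = employee_data['SalaryIndex']
    EmployeeSince_out = employee_data['EmployeeSince']
    generateRow()   # emit one row per department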
If the output fields in the Python transformation are linked directly to a Target transformation, the target
contains the following data after you run the mapping:
To perform your research, you must classify data on the length and width of the flower sepals and petals by
flower species. To classify the data, you developed a pre-trained model outside of Data Integration.
To operationalize the pre-trained model in an elastic mapping, complete the following tasks:
1. Create a mapping that contains a passive Python transformation and list the pre-trained model as a
resource file.
2. Write a Python script that accesses the pre-trained model.
3. Pass the data on flower sepals and petals to the Python transformation to classify the data by foxglove
species.
The data that you pass to the Python transformation contains the following fields:
Field name - Type - Precision
sepal_length - decimal - 10
sepal_width - decimal - 10
petal_length - decimal - 10
petal_width - decimal - 10
true_class - string - 50
Resource File
For example, you might use a pre-trained model that is stored in the file foxgloveDataMLmodel.pkl in
the following path:
Specify the Python code in the Pre-Partition Python Code and Main Python Code sections.
Use the Pre-Partition Python Code section to import libraries, load the resource file, and initialize
variables.
For example, you might enter the following code in the Pre-Partition Python Code section:
from sklearn import svm
from sklearn.externals import joblib
import numpy as np
clf = joblib.load(resourceFilesArray[0])
classes = ['common', 'woolly']
Use the Main Python Code section to define how the Python transformation uses the pre-trained model
to evaluate each row of data.
For example, you might enter the following code in the Main Python Code section:
input = [sepal_length, sepal_width, petal_length, petal_width]
input = np.array(input).reshape(1,-1)
pred = clf.predict(input)
predicted_class = classes[pred[0]]
sepal_length_out = sepal_length
sepal_width_out = sepal_width
petal_length_out = petal_length
petal_width_out = petal_width
true_class_out = true_class
Rank transformation
The Rank transformation selects the top or bottom range of data. Use the Rank transformation to return the
largest or smallest numeric values in a group. You can also use the Rank transformation to return strings at
the top or bottom of the mapping sort order.
For example, you can use a Rank transformation to select the top 10 customers by region. Or, you might
identify the three departments with the lowest expenses in salaries and overhead.
The Rank transformation differs from the transformation functions MAX and MIN because the Rank
transformation returns a group of values, not just one value. While the SQL language provides many functions
designed to handle groups of data, identifying top or bottom strata within a set of rows is not possible using
standard SQL functions.
The Rank transformation is an active transformation because it can change the number of rows that pass
through it. For example, you configure the transformation to select the top 10 rows from a source that
contains 100 rows. In this case, 100 rows pass into the transformation but only 10 rows pass from the Rank
transformation to the downstream transformation or target.
When you run a mapping that contains a Rank transformation, Data Integration caches input data until it can
perform the rank calculations.
You set the Session Sort Order property in the advanced session properties for the mapping task. You can
select binary or a specific language such as Danish or Spanish. If you select binary, Data Integration
calculates the binary value of each string and sorts the strings using the binary values. If you select a
language, Data Integration sorts the strings alphabetically using the sort order for the language.
The following image shows the Session Sort Order property in the advanced session properties for a
mapping task:
Rank caches
When you run a mapping that contains a Rank transformation, Data Integration creates data and index cache
files to run the transformation. By default, Data Integration stores the cache files in the directory entered in
the Secure Agent $PMCacheDir property for the Data Integration Server.
You can change the cache directory and cache sizes on the Advanced tab of the Rank transformation.
Data Integration creates the following caches for the Rank transformation:
• Data cache that stores row data based on the group by fields.
• Index cache that stores group values as configured in the group by fields.
When you run a mapping that contains a Rank transformation, Data Integration compares an input row with
rows in the data cache. If the input row out-ranks a cached row, Data Integration replaces the cached row
with the input row. If you configure the Rank transformation to rank across multiple groups, Data Integration
ranks incrementally for each group that it finds.
If you create multiple partitions in the Source transformation, Data Integration creates separate caches for
each partition.
1. In the Mapping Designer, drag a Rank transformation from the transformation palette onto the canvas
and connect it to the upstream and downstream transformations.
2. Configure the transformation fields.
By default, the transformation inherits all incoming fields from the upstream transformation. If you do
not need to use all of the incoming fields, you can configure field rules to include or exclude certain
fields.
3. Configure the rank properties.
Select the field that you want to rank by, the rank order, and the number of rows to rank.
4. Optionally, configure rank groups.
You can configure the Rank transformation to create groups for ranked rows.
5. Optionally, configure the transformation advanced properties.
You can update the cache properties, tracing level for log messages, transformation scope, case-
sensitivity for string comparisons, and whether the transformation is optional.
Incoming fields
Incoming fields appear on the Incoming Fields tab. By default, the Rank transformation inherits all
incoming fields from the upstream transformation. If you do not need to use all of the incoming fields,
you can define field rules to include or exclude certain fields. For more information about field rules, see
“Field rules” on page 21.
RANKINDEX
After the Rank transformation identifies all rows that belong to a top or bottom rank, it assigns rank
index values. Data Integration creates the RANKINDEX field to store the rank index value for each row in
a group.
For example, you create a Rank transformation to identify the five retail stores in the company with the
highest monthly gross sales. The store with the highest sales receives a rank index of 1. The store with
the next highest sales receives a rank index of 2, and so on. If two stores have the same gross sales,
they receive the same rank index, and the transformation skips the next rank index.
For example, in the following data set, the Long Beach and Anaheim stores have the same gross sales,
so they are assigned the same rank index:
RANKINDEX - Store - Gross Sales
1 - Long Beach - 100000
1 - Anaheim - 100000
3 - Riverside - 90000
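The tie-handling rule can also be expressed as a short standalone Python sketch; it is purely illustrative, using the store values from the table above:
sales = [("Long Beach", 100000), ("Anaheim", 100000), ("Riverside", 90000)]
ranked, prev_value, prev_rank = [], None, 0
for position, (store, amount) in enumerate(sorted(sales, key=lambda s: -s[1]), start=1):
    rank = prev_rank if amount == prev_value else position   # ties keep the same rank index
    ranked.append((rank, store, amount))
    prev_value, prev_rank = amount, rank
print(ranked)   # [(1, 'Long Beach', 100000), (1, 'Anaheim', 100000), (3, 'Riverside', 90000)]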
When measuring a bottom rank, such as the 10 lowest priced products in the inventory, the Rank
transformation assigns a rank index from lowest to highest. Therefore, the least expensive item receives
a rank index of 1.
The RANKINDEX is an output field. It appears on the Incoming Fields tab of the downstream
transformation.
Rank By
Specify the field that you want to use for ranking in the Rank By field.
For example, you create a Rank transformation to rank the top 10 employees in each department based
on salary. The EMP_SALARY field contains the salary for each employee. Select EMP_SALARY as the
Rank By field.
Rank Order
Specify the rank order in the Rank Order field. Select Top or Bottom.
Number of Rows
Specify the number of rows to include in each rank group in the Number of Rows field. For example, to
rank the top 10 employees in each department based on salary, enter 10 in the Number of Rows field.
To define rank groups, select one or more incoming fields as Group By Fields. For each unique value in a rank
group, the transformation creates a group of rows that fall within the rank definition (top or bottom, and
number in each rank).
Note: Define rank groups to improve performance in an elastic mapping that processes a large volume of
data. When you define rank groups, processing is distributed across multiple worker nodes. If you do not
define rank groups, the data is processed on one worker node. Depending on the volume of data,
performance is impacted and the mapping might fail due to a lack of storage space on the EBS volume that is
attached to the worker node.
For example, you create a Rank transformation that ranks the top five salespersons grouped by quarter. The
rank index numbers the salespeople from 1 to 5 for each quarter as follows:
RANKINDEX - Salesperson - Sales - Quarter
1 - Alexandra B. - 10000 - 1
2 - Boris M. - 9000 - 1
3 - Chanchal R. - 8000 - 1
4 - Dong T. - 7000 - 1
5 - Elias M. - 6000 - 1
1 - Elias M. - 11000 - 2
2 - Boris M. - 10000 - 2
3 - Alexandra B. - 9050 - 2
4 - Dong T. - 7500 - 2
5 - Frances Z. - 6900 - 2
If you define multiple rank groups, the Rank transformation groups the ranked rows in the order in which the
fields are selected in the Group By Fields list.
Note: The properties that appear in the transformation depend on the mapping type.
Cache Directory
Directory where Data Integration creates the data cache and index cache files. By default, Data Integration stores the cache files in the directory entered in the Secure Agent $PMCacheDir property for the Data Integration Server.
If you change the cache directory, verify that the directory exists and contains enough disk space for the cache files.
To increase performance during cache partitioning, enter multiple directories separated by semicolons. Cache partitioning creates a separate cache for each partition that processes the transformation.
Default is $PMCacheDir.
Rank Data Cache Size
Data cache size for the transformation. Select one of the following options:
- Auto. Data Integration sets the cache size automatically. If you select Auto, you can also configure a maximum amount of memory for Data Integration to allocate to the cache.
- Value. Enter the cache size in bytes.
Default is Auto.
Rank Index Cache Size
Index cache size for the transformation. Select one of the following options:
- Auto. Data Integration sets the cache size automatically. If you select Auto, you can also configure a maximum amount of memory for Data Integration to allocate to the cache.
- Value. Enter the cache size in bytes.
Default is Auto.
Tracing Level
Detail level of error and status messages that Data Integration writes in the session log. You can choose terse, normal, verbose initialization, or verbose data. Default is normal.
Transformation Scope
The method in which Data Integration applies the transformation logic to incoming data. Select one of the following values:
- Transaction. Applies the transformation logic to all rows in a transaction. Choose Transaction when the results of the transformation depend on all rows in the same transaction, but not on rows in other transactions.
- All Input. Applies the transformation logic to all incoming data. When you choose All Input, Data Integration drops transaction boundaries. Select All Input when the results of the transformation depend on all rows of data in the source.
Default is All Input.
Case Sensitive String Comparison
Specifies whether Data Integration uses case-sensitive string comparisons when it ranks strings. To ignore case in strings, disable this option. Default is enabled.
Optional
Determines whether the transformation is optional. If a transformation is optional and there are no incoming fields, the mapping task can run and the data can go through another branch in the data flow. If a transformation is required and there are no incoming fields, the task fails.
For example, you configure a parameter for the source connection. In one branch of the data flow, you add a transformation with a field rule so that only Date/Time data enters the transformation, and you specify that the transformation is optional. When you configure the mapping task, you select a source that does not have Date/Time data. The mapping task ignores the branch with the optional transformation, and the data flow continues through another branch of the mapping.
Default is enabled.
Consider the following guidelines when you pass hierarchical fields to the Rank transformation:
Source Data
The following table shows the source data:
Mapping Configuration
Configure the mapping as shown in the following image:
Rank tab
Field Value
Rank By ORDER_AMT
Number of Rows 3
Group By tab
Router transformation
The Router transformation is an active transformation that you can use to apply a condition to incoming data.
In a Router transformation, Data Integration uses a filter condition to evaluate each row of incoming data. It
tests the conditions of each user-defined group before processing the default group. If a row meets more
than one group filter condition, Data Integration passes the row multiple times. You can either drop rows that
do not meet any of the conditions or route those rows to a default output group.
If you need to test the same input data based on multiple conditions, use a Router transformation in a
mapping instead of creating multiple Filter transformations to perform the same task.
The following table compares the Router transformation to the Filter transformation:
Conditions
- Router transformation: Test for multiple conditions in a single Router transformation.
- Filter transformation: Test for one condition per Filter transformation.
Handling rows that do not meet the condition
- Router transformation: Route rows to the default output group or drop rows that do not meet the condition.
- Filter transformation: Drop rows that do not meet the condition.
Incoming data
- Router transformation: Process once with a single Router transformation.
- Filter transformation: Process in each Filter transformation.
The following figure shows a mapping with a Router transformation that filters data based on region and
routes it to a different target, either NA, EMEA, or APAC. The transformation routes data for other regions to
the default target:
Working with groups
You use groups in a Router transformation to filter the incoming data.
Data Integration uses the filter conditions to evaluate each row of incoming data. It tests the conditions of
each user-defined group before processing the default group. If a row meets more than one group filter
condition, Data Integration passes the row to multiple groups.
The Router transformation has the following types of groups:
Input
Data Integration copies properties from the input group fields to create the fields for each output group.
Output
• User-defined groups. Create a user-defined group to test a condition based on incoming data. A user-
defined group consists of output ports and a group filter condition. Create one user-defined group for
each condition that you want to specify. Data Integration processes user-defined groups that are
connected to a transformation or a target.
• Default group. The default group captures rows that do not satisfy any group condition. You cannot
edit, delete, or define a group filter condition for the default group. If all of the conditions evaluate to
FALSE, Data Integration passes the row to the default group. If you want to drop rows that do not
satisfy any group condition, do not connect the default group to a transformation or a target.
You can modify a user-defined output group name. Click the row to open the Edit Output Group dialog
box.
• You can connect each output group to one or more transformations or targets.
• You cannot connect more than one output group to the same target or to a transformation that has a single input group.
• You can connect more than one output group to a downstream transformation if you connect each output
group to a different input group.
• If you want Data Integration to drop all rows in the default group, do not connect it to a transformation or
a target in a mapping.
A group filter condition returns TRUE or FALSE for each row that passes through the transformation,
depending on whether a row satisfies the specified condition. Zero (0) is the equivalent of FALSE, and any
non-zero value is the equivalent of TRUE.
• Passes the rows of data that evaluate to TRUE to each transformation or target that is associated with
each user-defined group.
The Router transformation can pass data through multiple output groups. For example, if the data meets
three output group conditions, the Router transformation passes the data through three output groups.
• Passes the row to the default group if all of the conditions evaluate to FALSE.
You cannot configure a group filter condition for the default group. However, you can add an Expression
transformation to perform a calculation and handle the rows in the default group.
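Purely as an illustration of this routing behavior, and not of the engine code, the following Python sketch evaluates the NA, EMEA, and APAC group conditions for a row and falls back to the default group when no condition is true:
groups = {"NA":   lambda row: row["region"] == "NA",
          "EMEA": lambda row: row["region"] == "EMEA",
          "APAC": lambda row: row["region"] == "APAC"}

def route(row):
    # A row goes to every group whose condition evaluates to TRUE,
    # or to the default group when all conditions evaluate to FALSE.
    matched = [name for name, condition in groups.items() if condition(row)]
    return matched if matched else ["Default"]

print(route({"region": "EMEA"}))   # ['EMEA']
print(route({"region": "LATAM"}))  # ['Default']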
6. Click the + sign to add a row for each condition that you want to apply to this group.
7. Choose a Field Name, Operator, and Value for each condition.
8. Click OK to save the conditions.
Note: The properties that appear in the transformation depend on the mapping type.
Tracing Level
Detail level of error and status messages that Data Integration writes in the session log. You can choose terse, normal, verbose initialization, or verbose data. Default is normal.
Optional
Determines whether the transformation is optional. If a transformation is optional and there are no incoming fields, the mapping task can run and the data can go through another branch in the data flow. If a transformation is required and there are no incoming fields, the task fails.
For example, you configure a parameter for the source connection. In one branch of the data flow, you add a transformation with a field rule so that only Date/Time data enters the transformation, and you specify that the transformation is optional. When you configure the mapping task, you select a source that does not have Date/Time data. The mapping task ignores the branch with the optional transformation, and the data flow continues through another branch of the mapping.
You can use the hierarchical fields as pass-through fields. You can also use hierarchical fields in an
advanced filter condition or use a hierarchical field with a complex operator to access primitive child fields in
the filter condition. For more information about complex operators, see Function Reference.
Consider the following guidelines when you use hierarchical fields in a filter condition:
• Group data by different country attributes and route each group to different target tables based on
conditions that test for the region.
• Group inventory items by different price categories and route each group to different target tables based
on conditions that test for low, medium, and high prices.
The following figure shows a mapping with a Router transformation that filters data based on these
conditions:
Create three output groups and specify the group filter conditions on the Output Groups tab as shown in the
following table:
Group Name - Group Filter Condition
NA - region = 'NA'
EMEA - region = 'EMEA'
APAC - region = 'APAC'
The default group includes data for all customers that are not in the NA, EMEA, or APAC region.
When the Router transformation processes an input row with item_price=510, it routes the row to both output
groups.
If you want to pass the data through a single output group, define the filter conditions so that they do not
overlap. For example, you might change the filter condition for PriceGroup1 to item_price <= 500.
Rule Specification transformation
A rule specification is a set of one or more logical operations that analyze data according to business criteria
that you define. The rule specification generates an output that indicates whether the data satisfies the
business criteria. The rule specification can also update the data that it analyzes. You define the logical
operations as IF/THEN/ELSE statements in Data Quality.
Each Rule Specification transformation can contain a single rule specification. You can add multiple Rule
Specification transformations to a mapping.
To use the Rule Specification transformation, you need the appropriate license.
1. Connect the Rule Specification transformation to a Source transformation or other upstream object.
2. On the Rule Specification tab, select the rule specification that you want to include in the
transformation.
The following image shows the options that you use to select the rule specification:
Note: Consider the following rules and guidelines when you use a parameter to identify the rule specification
that the transformation uses:
• If you use a parameter to identify the rule specification, you must use a parameter to define the field
mappings on the Field Mapping tab. The parameter must be of type string.
• If you use a parameter to identify the rule specification, any downstream object in the mapping that
connects to the Rule Specification transformation must be completely parameterized.
Note: If you update an asset in Data Quality after you add it to a transformation, you may need to synchronize
the asset version in the transformation with the latest version. For more information about data quality asset
synchronization, see “Synchronizing data quality assets” on page 89.
• Manual. Manually link incoming fields to transformation input fields. Removes links for automatically
mapped fields.
• Automatic. Automatically link fields with the same name. Use when all of the fields that you want to
link share the same name. You cannot manually link fields with this option.
• Completely Parameterized. Use a parameter to represent the field mapping. In the task, you can
configure all field mappings.
Choose the Completely Parameterized option when the rule specification in the transformation is parameterized or any upstream transformation in the mapping is parameterized.
• Partially Parameterized. Configure links in the mapping that you want to enforce and use a parameter
to allow other fields to be mapped in the mapping task. Or, use a parameter to configure links in the
mapping, and allow all fields and links to display in the task for configuration.
Parameter
Select the parameter to use for the field mapping, or create a new parameter. This option appears when
you select Completely Parameterized or Partially Parameterized as the field map option. The parameter
must be of type field mapping.
Do not use the same field mapping parameter in more than one Rule Specification transformation in a
single mapping.
Options
Controls how fields are displayed in the Incoming Fields and Rule Specification Input Fields lists.
• The fields that appear. You can show all fields, unmapped fields, or mapped fields.
• Field names. You can use technical field names or labels.
Automap
Links fields with matching names. Allows you to link matching fields and to manually configure other
field mappings. The Automap options appear when you select the Manual or Partially Parameterized
field map option.
• Exact Field Name. Data Integration matches fields of the same name.
• Smart Map. Data Integration matches fields with similar names. For example, if you have an incoming
field Cust_Name and a target field Customer_Name, Data Integration automatically links the Cust_Name
field with the Customer_Name field.
You can use both Exact Field Name and Smart Map in the same field mapping. For example, use Exact
Field Name to match fields with the same name and then use Smart Map to map fields with similar
names.
You can undo all automapped field mappings by clicking Automap > Undo Automap.
To unmap a single field, select the field to unmap and click Actions > Unmap on the context menu for the
field. To unmap one or more fields that you selected, click Unmap Selected on the Rule Specification
Input Fields context menu.
To clear all field mappings from the transformation, click Clear Mapping on the Rule Specification Input
Fields context menu.
The Output Fields tab displays the name, type, precision, and scale for each output field. The output field names are the names of the rule sets in the rule specification.
You cannot edit the output field properties in the Rule Specification transformation. To edit the properties,
open the rule specification in Data Quality.
Sequence Generator transformation
The Sequence Generator transformation is a passive and connected transformation that generates numeric
values. Use the Sequence Generator to create unique primary key values, replace missing primary keys, or
cycle through a sequential range of numbers.
The Sequence Generator transformation contains pass-through fields and two output fields, NEXTVAL and
CURRVAL. In an elastic mapping, the transformation contains one output field, NEXTVAL. You can connect
the output fields to one or more downstream transformations.
The mapping task generates a numeric sequence of values each time the mapped fields enter a connected
transformation. You set the range of numbers in the Mapping Designer. You can change the initial number in
the sequence when you run the task.
After the task completes, you can see the current value and the initial value for a Sequence Generator
transformation in the mapping task details.
To use the Sequence Generator transformation, you need the appropriate license.
Note: If you use the Sequence Generator transformation in an elastic mapping that runs in an AWS
environment, make sure that the Spark driver can communicate with the Secure Agent. For more information,
see Administrator in the Administrator help.
The sequence begins with the Initial Value that you specify.
You can establish a range of values for the Sequence Generator transformation. If you use the cycle
option, the Sequence Generator transformation repeats the range when it reaches the end value. For
example, if you set the sequence range to start at 10 and end at 50, and you set an increment value of
10, the Sequence Generator transformation generates the following values: 10, 20, 30, 40, 50. The
sequence starts over again at 10.
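The cycling behavior described above can be illustrated with a small standalone Python generator. This is only an illustration of the value pattern, not of how Data Integration implements the transformation; the parameter names mirror the Initial Value, End Value, Increment By, Cycle, and Cycle Start Value properties:
def sequence(initial=10, end=50, increment=10, cycle=True, cycle_start=10):
    value = initial
    while True:
        yield value
        value += increment
        if value > end:                  # reached the end of the configured range
            if not cycle:
                return
            value = cycle_start          # cycle back to the start value

numbers = sequence()
print([next(numbers) for _ in range(7)])   # [10, 20, 30, 40, 50, 10, 20]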
Continue an existing sequence of numbers.
Each time you run the mapping task, the task updates the value to reflect the last-generated value plus
the Increment By value. If you want the numbering to start over each time you run the task, you can
enable the Reset configuration property.
Generate a sequence of unique numbers for multiple partitions using the same Sequence Generator.
You can specify the number of sequential values the mapping task caches at a time so that the task
does not generate the same numbers for each partition. You cannot generate a cyclic sequence of
numbers when you use the same Sequence Generator for multiple partitions.
You can connect a Sequence Generator transformation to any transformation. If the transformation contains both output fields, you do not need to map both of them. If you do not map one of the output fields, the mapping task ignores the unmapped field.
NEXTVAL field
Use the NEXTVAL field to generate a sequence of numbers based on the Initial Value and Increment By
properties.
Map the NEXTVAL field to an input field in a Target transformation or other downstream transformation to
generate a sequence of numbers. If you do not configure the Sequence Generator to cycle through the
sequence, the NEXTVAL field generates sequence numbers up to the configured End Value.
If you map the NEXTVAL field to multiple transformations, the mapping task generates the same sequence or
a unique sequence of numbers for each downstream transformation based on the mapping type and whether
incoming fields are disabled.
The following table lists the situations where the Sequence Generator transformation generates the same
sequence or a unique sequence of numbers:
* To generate the same sequence of numbers when incoming fields are disabled, you can place an Expression
transformation between the Sequence Generator and the transformations to stage the sequence of numbers.
CURRVAL field
The CURRVAL field value is the NEXTVAL value plus the Increment By value. For example, with an Initial Value of 1 and an Increment By value of 1, NEXTVAL and CURRVAL contain the following values:
NEXTVAL - CURRVAL
1 - 2
2 - 3
3 - 4
4 - 5
5 - 6
Typically, you map the CURRVAL field when the NEXTVAL field is already mapped to a downstream
transformation in the map. If you map the CURRVAL field without mapping the NEXTVAL field, the mapping
task generates the same number for each row.
Note: The properties that appear in the transformation depend on the mapping type.
Use Shared Sequence
Enable to generate sequence values using a shared sequence. When enabled, the sequence starts with the Current Value of the shared sequence.
For information about shared sequences, see Components.
Default is disabled.
Increment By
The difference between two consecutive values in a generated sequence. For example, if Increment By is 2 and the existing value is 4, then the next value generated in the sequence will be 6.
Default is 1.
Maximum value is 2,147,483,647.
End Value
Maximum value that the mapping task generates. If the sequence reaches this value during the task run and the sequence is not configured to cycle, the run fails.
Maximum value is 9,223,372,036,854,775,807.
If you connect the NEXTVAL field to a downstream integer field, set the End Value to a value no larger than the integer maximum value. If the NEXTVAL exceeds the data type maximum value for the downstream field, the mapping run fails.
In an elastic mapping, set the end value to at least the maximum number of rows that you process.
Initial Value
The value you want the mapping task to use as the first value in the sequence. If you want to cycle through a series of values, the value must be greater than or equal to the Start Value and less than the End Value.
Default is 1.
Cycle
If enabled, the mapping task cycles through the sequence range. If disabled, the task stops the sequence at the configured End Value. The session fails if the task reaches the End Value and still has rows to process.
Default is disabled.
Cycle Start Value
Start value of the generated sequence that you want the mapping task to use if you use the Cycle option. When the sequence values reach the End Value, they cycle back to this value.
Default is 0.
Maximum value is 9,223,372,036,854,775,806.
Number of Cached Values
Number of sequential values the mapping task caches for each run. Each subsequent run uses a new batch of values. The task discards unused sequences for the batch. The mapping task updates the repository as it caches each value. When set to 0, the task does not cache values.
Use this option when multiple partitions use the same Sequence Generator at the same time to ensure each partition receives unique values.
Default is 0.
This option is not available when the Cycle property is enabled.
In an elastic mapping, you cannot set the number of cached values. However, your organization administrator can optimize how values are cached. For more information, contact Informatica Global Customer Support. (Ref 619019)
Reset
If enabled, the mapping task generates values based on the original Initial Value for each run.
Default is disabled.
You can configure the following advanced properties:
Tracing Level
Detail level of error and status messages that Data Integration writes in the session log. You can choose terse, normal, verbose initialization, or verbose data. Default is normal.
Optional
Determines whether the transformation is optional. If a transformation is optional and there are no incoming fields, the mapping task can run and the data can go through another branch in the data flow. If a transformation is required and there are no incoming fields, the task fails.
For example, you configure a parameter for the source connection. In one branch of the data flow, you add a transformation with a field rule so that only Date/Time data enters the transformation, and you specify that the transformation is optional. When you configure the mapping task, you select a source that does not have Date/Time data. The mapping task ignores the branch with the optional transformation, and the data flow continues through another branch of the mapping.
Disable incoming fields
Disable incoming fields to connect only the generated sequence to a downstream transformation. If you disable incoming fields, you must connect at least one field from another transformation to the downstream transformation.
When you disable incoming fields, at least one field from another transformation must be connected to the
downstream transformation along with the Sequence Generator fields. For example, if the mapping contains
a Sequence Generator transformation and a Source transformation and the Sequence Generator
transformation is connected to a Target transformation, you must connect at least one field from the Source
transformation to the Target transformation.
The following image shows a mapping where incoming fields are disabled in the Sequence Generator
transformation:
• When you map the NEXTVAL or CURRVAL output fields, ensure that the data type of the mapped field is
appropriate.
• When you run the mapping in the Mapping Designer, the current value is not saved so each time you run
the mapping, it begins with the initial value.
• When you run the task in the mapping task wizard, you can edit the current value to start the sequence
with a specified value.
• You cannot use a Sequence Generator transformation in a mapplet.
You are gathering customer data and need to assign customer IDs to each customer. The CustomerData.csv
flat file contains your source customer data. You create a mapping that includes the Sequence Generator
transformation to create customer IDs, using the following process:
1. You create a copy of the CustomerData.csv file to use as the target and then add the cust_id field to the
file to hold the generated customer ID values. You name the file CustomerData_IDs.csv.
2. You create a connection that has access to the CustomerData.csv and CustomerData_IDs.csv files.
3. You create a mapping in the Mapping Designer and add a Source transformation to the mapping. You
configure the transformation to use the CustomerData.csv file.
4. You add a Sequence Generator transformation to the mapping.
On the transformations palette, the Sequence Generator transformation is labeled "Sequence."
5. You want a simple sequence starting with 1, so on the Sequence tab, you set the Initial Value to 1 and
the Increment By value to 1. This setting starts the sequence at 1 and increments the value by 1, for
example, 1, 2, 3.
6. You add a Target transformation to the mapping and configure the transformation to use the
CustomerData_IDs.csv file that you created.
7. You connect the Source transformation to the Sequence Generator transformation and the Sequence
Generator transformation to the Target transformation:
8. In the Target transformation, you map the NEXTVAL output field to the cust_id field.
9. You save the mapping and create a mapping task in the mapping task wizard. The Current Value is 1
because you have not run the mapping yet and the Initial Value is 1.
Sorter transformation
Use a Sorter transformation to sort data in ascending or descending order, according to a specified sort
condition. You can configure the Sorter transformation for case-sensitive sorting and for distinct output. The
Sorter transformation is a passive transformation.
You can use the Sorter transformation to increase performance with other transformations. For example, you
can sort data that passes through a Lookup or an Aggregator transformation configured to use sorted
incoming fields.
When you create a Sorter transformation, specify fields as sort conditions and configure each sort field to
sort in ascending or descending order. You can use a parameter for the sort condition and define the value of
the parameter when you configure the mapping task.
Note: In an elastic mapping, make sure that the following conditions are true for the Sorter transformation to
take effect:
Sort conditions
Configure the sort condition to specify the sort fields and the sort order. The mapping task uses the sort
condition to sort the data.
The sort fields are one or more fields that you want to use as the sort criteria. Configure the sort order to sort
data in ascending or descending order. If the mapping is not an elastic mapping, you can also override the
sort order using the advanced session properties when you schedule the mapping task.
When you specify multiple sort conditions, the mapping task sorts each condition sequentially. The mapping
task treats each successive sort condition as a secondary sort of the previous sort condition. You can
configure the order of sort conditions.
If you use a parameter for the sort condition, define the sort fields and the sort order when you run the
mapping or when you configure the mapping task.
Sorter caches
The mapping task passes all incoming data into the Sorter transformation before it performs the sort
operation. The mapping task uses cache memory to process Sorter transformations. If the mapping task
cannot allocate enough memory, the mapping fails.
By default, the mapping task determines the cache size at run time. Before starting the sort operation, the
mapping task allocates the amount of memory configured for the Sorter cache size.
Configure the Sorter cache size with a value less than the amount of available physical RAM on the machine
that hosts the Secure Agent. Allocate at least 16 MB (16,777,216 bytes) of physical memory to sort data with
a Sorter transformation. When you allocate memory to the Sorter cache, consider other transformations in
the mapping and the volume of data in the mapping task.
If the amount of incoming data is greater than the Sorter cache size, the mapping task temporarily stores
data in the work directory. When storing data in the Sorter transformation work directory, the mapping task
requires disk space of at least twice the amount of incoming data.
When you configure the tracing level to Normal, the mapping task writes the memory amount that the Sorter
transformation uses to the session log.
Advanced properties
You can specify additional sort criteria in the Sorter transformation advanced properties. The mapping task
applies the properties to all sort fields. The Sorter transformation properties also determine the system
resources that the mapping task allocates when it sorts data.
Note: The properties that appear in the transformation depend on the mapping type.
You can configure the following advanced properties for a Sorter transformation:
Tracing Level
Detail level of error and status messages that Data Integration writes in the session log. You can choose terse, normal, verbose initialization, or verbose data. Default is normal.
Sorter Cache Size
The maximum amount of memory required to perform the sort operation. The mapping task passes all incoming data into the Sorter transformation before it performs the sort operation. If the mapping task cannot allocate enough memory, the mapping fails.
You can configure a numeric value for the sorter cache. Allocate at least 16 MB of physical memory. Default is Auto.
Case Sensitive
Determines whether the mapping task considers case when sorting data. When you enable a case-sensitive sort, the mapping task sorts uppercase characters higher than lowercase characters. Default is Case Sensitive.
Work Directory
The mapping task uses the work directory to create temporary files while it sorts data. After the mapping task sorts data, it deletes the temporary files.
You can specify any directory on the Secure Agent machine to use as a work directory. Allocate at least 16 MB (16,777,216 bytes) of physical memory for the work directory. You can configure a system parameter or a user-defined parameter in this field. Default is the TempDir system parameter.
Distinct
Treats output rows as distinct. If you configure the Sorter transformation for distinct output rows, the mapping task configures all fields as part of the sort condition. The mapping task discards duplicate rows compared during the sort operation.
Null Treated Low
Treats a null value as lower than any other value. For example, if you configure a descending sort condition, rows with a null value in the sort field appear after all other rows.
Transformation Scope
Specifies how the mapping task applies the transformation logic to incoming data. The transaction is determined by the commit or rollback point. Select one of the following options:
- Transaction. Applies the transformation logic to all rows in a transaction. Choose Transaction when the results of the transformation depend on all rows in the same transaction, but not on rows in other transactions.
- All Input. Applies the transformation logic to all incoming data. When you choose All Input, the mapping task drops incoming transaction boundaries. Choose All Input when the results of the transformation depend on all rows of data in the source.
Optional
Determines whether the transformation is optional. If a transformation is optional and there are no incoming fields, the mapping task can run and the data can go through another branch in the data flow. If a transformation is required and there are no incoming fields, the task fails.
For example, you configure a parameter for the source connection. In one branch of the data flow, you add a transformation with a field rule so that only Date/Time data enters the transformation, and you specify that the transformation is optional. When you configure the mapping task, you select a source that does not have Date/Time data. The mapping task ignores the branch with the optional transformation, and the data flow continues through another branch of the mapping.
The Product_Orders table contains information about all the orders placed by customers.
Add a Sorter transformation to the mapping canvas, and connect it to the data flow. Sort the product orders
by order ID and item ID.
The following image shows a sort condition with the order ID and item ID fields configured to sort in
descending order:
Enable a null treated low sort so that the mapping task considers null values to be lower than other values.
After the mapping task sorts the data, it passes the following rows out of the Sorter transformation:
You need to find out the total amount and the item quantity for each order. You can use the result of the
Sorter transformation as an input to an Aggregator transformation to increase performance. Add the
Aggregator transformation to the mapping, and connect the transformation to the data flow. Group the fields
in the Aggregator transformation by the Order ID, and add an expression to sum the orders by price.
When you pass the data from the Sorter transformation, the Aggregator transformation groups the order ID to
calculate the total amount for each order.
OrderID Sum
45 54.06
43 217.17
41 36.2
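The aggregate output field that produces these sums might use an expression like the following. This is a minimal sketch that assumes the order amount field is named Total_Price; the field name is an assumption, not part of the documented example:
SUM(Total_Price)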
Write the order summaries to an object in the data warehouse that stores all order totals.
SQL transformation
Use the SQL transformation to call a stored procedure or function in a relational database or to processes
SQL queries midstream in a pipeline. The transformation can call a stored procedure or function, process a
saved query, or process a query that you create in the transformation SQL editor.
The SQL transformation can process the following types of SQL statements:
A stored procedure is a precompiled collection of database procedural statements and optional flow
control statements, similar to an executable script. Stored procedures reside in the database and run
within the database. A stored function is similar to a stored procedure, except that a function returns a
single value.
When the SQL transformation processes a stored procedure or function, it passes input parameters to
the stored procedure or function. The stored procedure or function passes the return value or values to
the output fields of the transformation.
You can configure the SQL transformation to process a saved query that you create in Data Integration
or you can enter a query in the SQL editor. The SQL transformation processes the query and returns rows
and database errors.
You can pass strings or parameters to the query to define dynamic queries or change the selection
parameters. You can output multiple rows when the query has a SELECT statement.
You can call a stored procedure or function with the following types of SQL transformations:
Connected SQL transformation
The transformation is connected to the mapping pipeline. The stored procedure or function runs on a
row by row basis and can return a single output parameter or multiple output parameters.
You can map the incoming fields of the SQL transformation to the input fields of a stored procedure. The
output fields in the SQL transformation consist of the stored procedure output parameters or return
values.
A return value is a code or text string that you define in the stored procedure. For example, a stored
procedure can return a value that indicates the date the stored procedure was run. When a stored
procedure has a return value, the SQL transformation has a return value field.
Unconnected SQL transformation
The SQL transformation is not connected to the mapping pipeline. An Expression transformation calls the SQL transformation with a stored procedure expression, or the stored procedure runs before or after the mapping.
You can configure the expression to return the stored procedure output to expression output fields and
variables. You can call the stored procedure from multiple expressions and nest stored procedures.
You might use a stored procedure to perform the following tasks:
• Check the status of a target database before loading data into it.
• Determine if enough space exists in a database.
• Perform a specialized calculation.
• Retrieve data by a value.
• Drop and re-create indexes.
• Remove temporary tables.
• Verify that a table exists in a database.
You can use a stored procedure to perform a calculation that you would otherwise make part of a mapping.
For example, if you have a stored procedure to calculate sales tax, perform that calculation in an SQL
transformation instead of re-creating the calculation in an Expression transformation.
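For instance, a sales tax calculation might live in a stored function like the following sketch. This is a hypothetical Oracle-style example with an assumed flat tax rate; it is not part of the documented mapping:
CREATE OR REPLACE FUNCTION CALC_SALES_TAX (p_amount IN NUMBER)
RETURN NUMBER
AS
BEGIN
  -- Hypothetical flat 8.5 percent tax rate.
  RETURN p_amount * 0.085;
END;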
When you run a mapping, the SQL transformation passes input parameters to the stored procedure. The
stored procedure passes the return value or values to the output fields of the transformation.
You have a stored procedure that matches user IDs with user names in the database. You add an SQL
transformation to your mapping, select the stored procedure, and map the userId incoming field with the
userId input field in the stored procedure. You check the Output Fields tab for the SQL transformation to
confirm that it includes the username field. When you run the mapping, the username value is returned with
the user ID.
You have a stored procedure that calculates employee salary increases. The stored procedure returns the
new salary and the percentage of increase. You add an unconnected SQL transformation and select the
stored procedure.
You then add an Expression transformation to the mapping pipeline. In the Expression transformation, you
add a variable field to capture the new salary. You add an output field and use the stored procedure function
to configure the expression. You configure the arguments so that the output field returns the increase
percentage and you create a second output field to return the new salary. You then map the new output fields
to the downstream transformation.
You process a stored procedure with a connected SQL transformation when you need data from an input field
sent as an input parameter to the stored procedure, or you need the results of a stored procedure sent as an
output parameter to another transformation.
You process a stored procedure with an unconnected SQL transformation when you need the stored
procedure to run before or after a mapping, run nested stored procedures, or call the stored procedure
multiple times.
The following list describes when you would use a connected or unconnected SQL transformation to process a stored procedure:
• Run a stored procedure every time a row passes through the SQL transformation: connected or unconnected.
• Run a stored procedure based on data that passes through the mapping, such as when a specific field does not contain a null value: unconnected.
• Pass parameters to the stored procedure and receive a single output parameter: connected or unconnected.
• Pass parameters to the stored procedure and receive multiple output parameters: connected or unconnected. Note: To get multiple output parameters from an unconnected SQL transformation, you must create variables for each output parameter.
You use an Expression transformation to call the unconnected SQL transformation with an :SP expression.
Or, you configure the SQL transformation to invoke a stored procedure before or after a mapping run. For
example, you might use an unconnected SQL transformation to remove temporary source tables after the
mapping receives data from the source.
You might also use an unconnected SQL transformation when you want to call a stored procedure multiple
times in a mapping.
When you call a stored procedure from an expression, you configure the expression to return the stored procedure output values to fields in the expression. Use one of the following methods to return the output values:
• Assign the output value to the system variable PROC_RESULT.
• Assign the output value to a variable field in the Expression transformation.
When you use the PROC_RESULT variable, Data Integration assigns the value of the return parameter directly
to the output field, which you can write to a target. You can also assign one output parameter to
PROC_RESULT and the other parameter to a variable.
Use expression variables to access OUT or INOUT parameters in the stored procedure. If the stored
procedure returns multiple output parameters, you must create variables for each output parameter.
For example, the following expression calls a stored procedure called GET_NAME_FROM_ID:
:SP.GET_NAME_FROM_ID(inID, PROC_RESULT)
inID can be either an input field in the stored procedure or a variable in the Expression transformation. When
you run the mapping, Data Integration applies the value of PROC_RESULT to the output field for the
expression.
If the stored procedure returns multiple output parameters, you must create expression variables for each
output parameter. For example, if the stored procedure also returns a title, create a variable field called
varTitle1 in the Expression transformation and use the field as the expression for an output field called Title.
You write the following expression:
:SP.GET_NAME_FROM_ID(inID, varTitle1, PROC_RESULT)
The following image shows how you configure the Expression transformation:
Data Integration returns output parameters in the order they are declared in the stored procedure. In this
example, Data Integration applies the value of the first output field in the stored procedure to varTitle1 and
passes it to the Title field in the Expression transformation. It applies the value of the second stored
procedure output field to the output field for the expression.
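For reference, a stored procedure with this calling signature might look like the following sketch. This is a hypothetical Oracle-style example; the CONTACTS table and its columns are assumptions used only to illustrate the parameter order:
CREATE OR REPLACE PROCEDURE GET_NAME_FROM_ID (
  p_id    IN  NUMBER,
  p_title OUT VARCHAR2,  -- first output parameter, applied to varTitle1
  p_name  OUT VARCHAR2   -- second output parameter, applied to PROC_RESULT
) AS
BEGIN
  SELECT title, full_name
  INTO   p_title, p_name
  FROM   contacts
  WHERE  contact_id = p_id;
END;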
You can configure an unconnected SQL transformation to run the stored procedure at one of the following times:
• Source Pre-load. The stored procedure runs before the mapping retrieves data from the source.
• Source Post-load. The stored procedure runs after the mapping retrieves data from the source.
• Target Pre-load. The stored procedure runs before the mapping sends data to the target.
• Target Post-load. The stored procedure runs after the mapping sends data to the target.
On the Advanced tab, configure the stored procedure type and enter the call text for the stored procedure.
The call text is the name of the stored procedure followed by any applicable input parameters in parentheses.
If there are no input parameters, you must include an empty pair of parentheses. Do not include the SQL
statement EXEC or use the :SP keyword.
For example, to call the stored procedure Drop_Table, enter the following call text:
Drop_Table()
To pass a string input parameter, enter it without quotes. If the string has spaces in it, enclose the parameter
in double quotes. For example, if the stored procedure Drop_Table requires a table name as an input
parameter, enter the following call text:
Drop_Table(Customer_list)
The following example uses unconnected SQL transformations to return employee salary data. Configure the mapping transformations in the following ways:
Source transformation
Configure the Source transformation to load the source data that you want to use.
SQL_Add_Salary transformation
Configure the first SQL transformation to call the ADD_SALARY stored procedure.
On the SQL tab, select the connection that contains the ADD_SALARY stored procedure and then select
the stored procedure.
SQL_Increase transformation
Configure the second SQL transformation to call the SALARY_INCREASE stored procedure.
On the SQL tab, select the connection that contains the SALARY_INCREASE stored procedure and then
select the stored procedure.
The stored procedure has one input field for the current salary and returns the new salary in the output field. Hypothetical sketches of both stored procedures appear after the sample output at the end of this example.
Expression_Add_Salary transformation
Configure the first Expression transformation to call the SQL_Add_Salary transformation. Create a
variable field for the input parameter and an output field to capture the output of the stored procedure.
On the Expression tab, add a variable field named ID and configure its value as the EMP_ID field in the
source. Create an output field called salary to capture the return value of the ADD_SALARY stored
procedure in the first SQL transformation. Configure the salary field to call the ADD_SALARY stored
procedure with the following expression:
:SP.SQL_Add_Salary(ID, PROC_RESULT)
The expression takes the variable field ID as the input parameter of the stored procedure and returns the
salary value to the SALARY output field.
Expression_Increase transformation
Configure the second Expression transformation to call the SQL_increase transformation. Add a variable
field called CurrentSalary and configure its value as the incoming salary field. Add an output field called
newSalary to capture the return value of the SALARY_INCREASE stored procedure. Configure the
newSalary field to call the SALARY_INCREASE stored procedure with the following expression:
:SP.SQL_increase(CurrentSalary, PROC_RESULT)
The expression takes the variable field CurrentSalary as the input parameter of the stored procedure and
returns the new salary to the newSalary output field.
The following image shows how you configure the Expression transformation:
Target transformation
Configure the Target transformation to create a target file at run time.
When you run the mapping, you get the following results:
EMP_ID, EMP_NAME, salary, newSalary
1001, John, 400, 480
1002, Alice, 500, 600
1003, Mary, 400, 480
1004, Mark, 700, 840
1005, Stephan, 600, 720
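The sample output assumes stored procedures along the following lines. These are hypothetical Oracle-style sketches; the EMP_SALARY table, its columns, and the 20 percent increase are illustrative assumptions, not part of the documented example:
CREATE OR REPLACE PROCEDURE ADD_SALARY (
  p_emp_id IN  NUMBER,
  p_salary OUT NUMBER
) AS
BEGIN
  -- Look up the current salary for the employee.
  SELECT base_salary INTO p_salary
  FROM   emp_salary
  WHERE  emp_id = p_emp_id;
END;

CREATE OR REPLACE PROCEDURE SALARY_INCREASE (
  p_salary     IN  NUMBER,
  p_new_salary OUT NUMBER
) AS
BEGIN
  -- Apply a 20 percent increase to the current salary.
  p_new_salary := p_salary * 1.2;
END;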
When you enter a query, you can format the SQL and validate the syntax. Alternatively, you can create a string parameter to define the query in the mapping task.
You can create the following types of SQL queries:
Static SQL query
The query statement does not change, but you can use query parameters to change the data. Data Integration prepares the SQL query once and runs the query for all input rows.
Dynamic SQL query
You can change the query statements and the data. Data Integration prepares an SQL query for each input row.
To change the data in the query, you configure query parameters and bind them to input fields in the
transformation. When you bind a parameter to an input field, you identify the field by name in the query.
Enclose the field name in question marks (?). The query data changes based on the value of the data in the
input field.
Example
The following static SQL query uses query parameters that bind to the Employee_ID and Dept input fields of
an SQL transformation:
SELECT Name, Address FROM Employees WHERE Employee_Num = ?Employee_ID? and Dept = ?Dept?
The source has the following rows:
Employee_ID Dept
100 Products
123 HR
130 Accounting
Data Integration generates the following query statements from the rows:
SELECT Name, Address FROM Employees WHERE Employee_Num = '100' and Dept = 'Products'
SELECT Name, Address FROM Employees WHERE Employee_Num = '123' and Dept = 'HR'
SELECT Name, Address FROM Employees WHERE Employee_Num = '130' and Dept = 'Accounting'
When you configure output fields for database columns, you must configure the data type of each database
column that you select. Select a native data type from the list. When you select the native data type, Data
Integration configures the transformation data type for you.
The native data type in the transformation must match the database column data type. Data Integration
matches the column data type in the database with the native database type in the transformation at run
time. If the data types do not match, Data Integration generates a row error.
To change a query statement, configure a string variable in the query for the portion of the query that you
want to change. To configure the string variable, identify an input field by name in the query and enclose the
name in tilde characters (~). The query changes based on the value of the data in the field.
The transformation input field that contains the query variable must be a string data type. You can use string
substitution to change the query statement and the query data.
When you create a dynamic SQL query, Data Integration prepares a query for each input row. You can pass
the following types of dynamic queries in an input field:
Full query
You can substitute the entire SQL query with query statements from source data.
Partial query
You can substitute a portion of the query statement, such as the table name.
To pass the full query, configure the source to pass the full query in an output field. Then, configure the SQL
transformation to receive the query in the Query_Field input field.
Data Integration replaces the ~Query_Field~ variable in the dynamic query with the SQL statements from the
source. It prepares the query and sends it to the database to process. The database executes the query. The
SQL transformation returns database errors to the SQLError output field.
When you pass the full query, you can pass more than one query statement for each input row. For example,
the source might contain the following rows:
DELETE FROM Person WHERE LastName = 'Jones'; INSERT INTO Person (LastName, Address)
VALUES ('Smith', '38 Summit Drive')
DELETE FROM Person WHERE LastName = 'Jones'; INSERT INTO Person (LastName, Address)
VALUES ('Smith', '38 Summit Drive')
DELETE FROM Person WHERE LastName = 'Russell';
You can pass any type of query in the source data. When you configure SELECT statements in the query, you
must configure output fields for the database columns that you retrieve from the database. When you mix
SELECT statements and other types of queries, the output fields that represent database columns contain
null values when no database columns are retrieved.
For example, the following dynamic query contains a string variable, ~Table_Field~:
SELECT Emp_ID, Address from ~Table_Field~ where Dept = 'HR'
The source might pass the following values to the Table_Field column:
Table_Field
Employees_USA
Employees_England
Employees_Australia
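Data Integration substitutes each value for the ~Table_Field~ string variable and prepares a query for the corresponding input row, along the lines of the following statements:
SELECT Emp_ID, Address from Employees_USA where Dept = 'HR'
SELECT Emp_ID, Address from Employees_England where Dept = 'HR'
SELECT Emp_ID, Address from Employees_Australia where Dept = 'HR'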
If you configure the SQL transformation to process a query, you can configure passive mode when you create
the transformation. Configure passive mode in the transformation advanced properties.
When you configure the transformation as a passive transformation and a SELECT query returns more than
one row, Data Integration returns the first row and an error to the SQLError field. The error states that the SQL
transformation generated multiple rows.
If the SQL query has multiple SQL statements, Data Integration executes all statements but returns data for
the first SQL statement only. The SQL transformation returns one row. The SQLError field contains the errors
from all SQL statements. When multiple errors occur, they are separated by semicolons (;) in the SQLError
field.
The following list describes statements that you can use in an SQL query in the SQL transformation:
• Data Manipulation - EXPLAIN PLAN. Writes the access plan for a statement into the database Explain tables.
• Data Manipulation - LOCK TABLE. Prevents concurrent application processes from using or changing a table.
• Data Control Language - REVOKE. Removes access privileges for a database user.
• Transaction Control - COMMIT. Saves a unit of work and performs the database changes for that unit of work.
• Transaction Control - ROLLBACK. Reverses changes to the database since the last COMMIT.
• The number and the order of the output fields must match the number and order of the fields in the query
SELECT clause.
• The native data type of an output field in the transformation must match the data type of the
corresponding column in the database. Data Integration generates a row error when the data types do not
match.
• When the SQL query contains an INSERT, UPDATE, or DELETE clause, the transformation returns data to
the SQLError field, the pass-through fields, and the NumRowsAffected field when it is enabled. If you add
output fields, the fields receive NULL data values.
• When the SQL query contains a SELECT statement and the transformation has a pass-through field, the
transformation returns data to the pass-through field whether or not the query returns database data. The
SQL transformation returns a row with NULL data in the output fields.
• When the number of output fields is more than the number of columns in the SELECT clause, the extra
fields receive a NULL value.
• When the number of output fields is less than the number of columns in the SELECT clause, Data
Integration generates a row error.
• You can use string substitution instead of parameter binding in a query. However, the input fields must be
string data types.
The following image shows the Properties panel of an SQL transformation that is configured to process a
stored procedure:
Configure the transformation using the following tabs on the Properties panel:
General
Define general properties such as the transformation name and description.
Incoming Fields
Define field rules that determine the data to include in the transformation.
SQL
Define the database connection and the type of SQL that the transformation processes: either a stored
procedure, stored function, or query.
If you configure the transformation to process a stored procedure, stored function, or saved query, you
select the stored procedure, stored function, or saved query on this tab.
If you configure the transformation to process a stored procedure, you can choose to run the
transformation as an unconnected transformation.
If you configure the transformation to process a user-entered query, the SQL editor appears on this tab.
Enter the query in the SQL editor.
Input Fields
For transformations that process stored procedures, displays the stored procedure input fields.
Field Mapping
For stored procedure and stored functions, specify how to map incoming fields to the input fields of the
selected stored procedure or function.
You do not configure the field mapping for queries or unconnected SQL transformations.
Output Fields
For stored procedures, stored functions, and saved queries, displays a preview of the SQL transformation output fields. For user-entered queries, configure output fields for the columns retrieved from the database.
For queries, the output fields also include the SQLError field, the optional NumRowsAffected field, and optional pass-through fields.
Advanced
Define advanced properties for the transformation. Advanced properties differ based on the type of SQL
that the transformation processes.
Note: Field name conflicts must be resolved in an upstream transformation. You cannot use field name
conflict resolution rules in an SQL transformation.
The steps for configuring the transformation vary based on the type of SQL that the transformation
processes.
1. In the Properties panel of the SQL transformation, click the SQL tab.
2. Select the connection to the database.
You can select the connection or use a parameter.
Note: If you want to parameterize the connection, create the parameter after you select the stored
procedure or function.
3. Set the SQL type to Stored Procedure or Stored Function.
4. Click Select to select the stored procedure or function from the database, or enter the exact name of the
stored procedure or function to call.
The stored procedure or function name is case-sensitive.
Note: If you add a new stored procedure to the database while you have the mapping open, the new
stored procedure does not appear in the list of available stored procedures. To refresh the list, close and
reopen the mapping.
5. If the transformation processes a stored procedure and you want to run the transformation in
unconnected mode, select Unconnected Stored Procedure.
6. If you want to parameterize the connection, click New Parameter and enter the details for the connection
parameter.
Selecting a saved query
You can configure the SQL transformation to process a saved query that you create in Data Integration.
1. In the Properties panel of the SQL transformation, click the SQL tab.
2. Select the connection to the database or use a parameter.
3. Set the SQL type to SQL Query.
4. Set the query type to Saved Query.
5. Select the saved query that you want the transformation to process.
Entering a query
You can configure the SQL transformation to process a user-entered query on the SQL tab of the SQL
transformation. Optionally, you can parameterize the query. When you parameterize the query, you enter the
full query in the mapping task.
1. In the Properties panel of the SQL transformation, click the SQL tab.
2. Select the connection to the database or use a parameter.
3. Set the SQL type to SQL Query.
4. Set the query type to Entered Query.
5. If you do not want to parameterize the query, enter the query in the query editor.
Incoming fields are listed on the Fields tab. To add a field to the query, select the field and click Add.
You can format the SQL and validate the syntax.
Note: The syntax validation performs a general SQL syntax check but does not verify the SQL against the
database. The validation can return a syntax error even though the SQL is valid for the database. In this
case, you can still save and run the mapping.
If you update the incoming fields after you configure the query, open the SQL tab to refresh the changes.
6. If you want to parameterize the query, perform the following steps:
a. Open the Parameters tab and create a new string parameter.
b. Select the parameter, and then click Add to add the parameter to the query editor.
When you add the parameter to the query editor, Data Integration encloses it in dollar sign
characters ($).
Do not format the SQL or validate the query.
Configure field mapping on the Field Mapping tab of the Properties panel.
You can configure the following field mapping options:
Field map options
Method of mapping fields to the SQL transformation. Select one of the following options:
• Manual. Manually link incoming fields to the stored procedure or function's input fields. Removes links for automatically mapped fields.
• Automatic. Automatically link fields with the same name. Use when all of the fields that you want to link share the same name. You cannot manually link fields with this option.
Show Fields
Controls the fields that appear in the Incoming Fields list. Show all fields, unmapped fields, or mapped
fields.
Automap
Links fields with matching names. Allows you to link matching fields, and then manually configure other
field mappings.
• Exact Field Name. Data Integration matches fields of the same name.
• Smart Map. Data Integration matches fields with similar names. For example, if you have an incoming
field Cust_Name and a target field Customer_Name, Data Integration automatically links the Cust_Name
field with the Customer_Name field.
You can use both Exact Field Name and Smart Map in the same field mapping. For example, use Exact
Field Name to match fields with the same name and then use Smart Map to map fields with similar
names.
You can undo all automapped field mappings by clicking Automap > Undo Automap. To unmap a single
field, select the field to unmap and click Actions > Unmap.
Data Integration highlights newly mapped fields. For example, when you use Exact Field Name, Data
Integration highlights the mapped fields. If you then use Smart Map, Data Integration only highlights the
fields mapped using Smart Map.
Action menu
• Map selected. Links the selected incoming field with the selected stored procedure or function input
field.
• Unmap selected. Clears the link for the selected field.
• Clear all. Clears all field mappings.
Show
Determines how field names appear in the Stored Procedure Input Fields list. Use technical field names
or labels.
Information on the Output Fields tab varies based on the SQL type.
When the SQL query contains a SELECT statement, the transformation returns one row for each
database row that it retrieves.
For user-entered queries, you must configure an output field for each column in the SELECT statement.
The output fields must be in the same order as the columns in the SELECT statement.
SQLError field
Data Integration returns row errors to the SQLError field when it encounters a connection or syntax error.
It returns NULL to the SQLError field when no SQL errors occur.
For example, the following SQL query generates a row error from an Oracle database when the
Employees table does not contain Product_ID:
SELECT Product_ID FROM Employees
Data Integration generates one row. The SQLError field contains the following error text in one line:
ORA-0094: “Product_ID”: invalid identifier Database driver error... Function Name:
Execute SQL Stmt: SELECT Product_ID from Employees Oracle Fatal Error
When a query contains multiple statements, and you configure the SQL transformation to continue on
SQL error, the SQL transformation might return rows from the database for one query statement, but
return database errors for another query statement. The SQL transformation returns any database error
in a separate row.
NumRowsAffected field
You can enable the NumRowsAffected output field to return the number of rows affected by the INSERT,
UPDATE, or DELETE query statements in each input row. Data Integration returns the NumRowsAffected
for each statement in the query. NumRowsAffected is disabled by default.
When you enable NumRowsAffected and the SQL query does not contain an INSERT, UPDATE, or DELETE
statement, NumRowsAffected is zero in each output row.
The following list describes the output rows that the SQL transformation generates when you enable NumRowsAffected:
• UPDATE, INSERT, or DELETE statements only: one row for each statement, with the NumRowsAffected for the statement.
• DDL queries such as CREATE, DROP, or TRUNCATE: one row with zero NumRowsAffected.
When a query contains multiple statements, Data Integration returns the NumRowsAffected for each
statement. NumRowsAffected contains the sum of the rows affected by each INSERT, UPDATE, and
DELETE statement in an input row.
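For example, an input row might pass a query like the following to the transformation. This is a hypothetical sketch in which the DELETE statement removes one row and the SELECT statement retrieves one row:
-- One DELETE, one single-row SELECT, and one INSERT in the same input row
DELETE FROM Employees WHERE Employee_Num = '101';
SELECT Name, Address FROM Employees WHERE Employee_Num = '102';
INSERT INTO Employees (Employee_Num, Name) VALUES ('205', 'Smith')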
In this case, Data Integration returns one row from the DELETE statement with NumRowsAffected equal to one. It returns one row from the SELECT statement with NumRowsAffected equal to zero, and one row from the INSERT statement with NumRowsAffected equal to one.
Pass-through fields
Define incoming fields as pass-through fields to pass data through the SQL transformation. The SQL
transformation returns data from pass-through fields whether or not the SQL query returns rows.
When the source row contains a SELECT statement, the SQL transformation returns the data in the pass-through field in each row it returns from the database. If the query result contains multiple rows, the SQL transformation repeats the pass-through field data in each row.
When a query returns no rows, the SQL transformation returns the pass-through column data and null
values in the output fields. For example, queries that contain INSERT, UPDATE, and DELETE statements
return no rows. If the query has errors, the SQL transformation returns the pass-through column data, the
SQLError message, and null values in the output fields.
To define a pass-through field, click Add in the Pass-Through Fields area, and then select the field you
want to pass through the SQL transformation. When you configure an incoming field as a pass-through
field, Data Integration adds the field with the suffix "_output" in the Pass-Through Fields area.
If you configure a field as a pass-through field and then change the field name in the source, Data
Integration does not update the pass-through field name and no data is passed through the field. In the
SQL transformation, delete the old pass-through field and configure the updated incoming field as a
pass-through field.
Advanced properties
Configure advanced properties for the SQL transformation on the Advanced tab. The advanced properties
vary based on whether the transformation processes a stored procedure or function or a query.
The following properties apply when the transformation processes a stored procedure or function:
Tracing Level
Detail level of error and status messages that Data Integration writes in the session log. You can choose terse, normal, verbose initialization, or verbose data. Default is normal.
Subsecond Precision
Subsecond precision for datetime fields. You can change the precision for databases that have an editable scale for datetime data. If you enable pushdown optimization, the database returns the complete datetime value, regardless of the subsecond precision setting. Enter a positive integer value from 0 to 9. Default is 6 microseconds.
Stored Procedure Type
For unconnected transformations, determines when the stored procedure runs. Select one of the following options:
- Target Pre Load. Runs before the target is loaded.
- Target Post Load. Runs after the target is loaded.
- Normal. Runs on a row-by-row basis.
- Source Pre Load. Runs before the mapping receives data from the source.
- Source Post Load. Runs after the mapping receives data from the source.
Call Text
For unconnected transformations with stored procedure type Target Pre/Post Load or Source Pre/Post Load, enter the call text for the stored procedure. The call text is the stored procedure name followed by the input parameters in parentheses. If there are no input parameters, you must include an empty pair of parentheses. Do not include the SQL statement EXEC or use the :SP keyword. Does not apply to Normal stored procedure types.
The following properties apply when the transformation processes a query:
Continue on SQL Error within Row
Continues processing the remaining SQL statements in a query after an SQL error occurs. Enable this option to ignore SQL errors in a statement. Data Integration continues to run the rest of the SQL statements for the row. The SQL transformation does not generate a row error, but the SQLError field contains the failed SQL statement and error messages.
Tip: Disable this option to debug database errors. Otherwise, you might not be able to associate errors with the query statements that caused them.
Default is disabled.
Max Output Row Count
The maximum number of rows that the SQL transformation can output from a SELECT query. To configure unlimited rows, set this property to zero. Default is 600.
Tracing Level
Detail level of error and status messages that Data Integration writes in the session log. You can choose terse, normal, verbose initialization, or verbose data. Default is normal.
Transformation Scope
The method in which Data Integration applies the transformation logic to incoming data. Select one of the following options:
- Row. Applies the transformation logic to one row of data at a time. Choose Row when the results of the transformation depend on a single row of data.
- Transaction. Applies the transformation logic to all rows in a transaction. Choose Transaction when a row of data depends on all rows in the same transaction, but does not depend on rows in other transactions.
- All Input. Applies the transformation logic on all incoming data. When you choose All Input, Data Integration drops incoming transaction boundaries. Choose All Input when a row of data depends on all rows in the source.
Default is Row.
Structure Parser transformation
The Structure Parser transformation transforms input data into a user-defined structured format based on an intelligent structure model.
To create an intelligent structure model, use Intelligent Structure Discovery. Intelligent Structure Discovery determines the underlying structure of a sample data file and creates a model of the structure.
Intelligent Structure Discovery creates the intelligent structure model based on a sample of your input data.
You can create models from the following input types:
• Text files, including delimited files such as CSV files and complex files that contain textual hierarchies
• Machine generated files such as weblogs and clickstreams
• JSON files
• XML files
• ORC files
• Avro files
• Parquet files
• Microsoft Excel files
• Data within PDF form fields
• Data within Microsoft Word tables
• XSD files
• Cobol copybooks
Preview Notice: Creating intelligent structure models based on Cobol copybooks is available for preview.
Preview functionality is supported for evaluation purposes but is unwarranted and is not supported in
production environments or any environment that you plan to push to production. Informatica intends to
include the preview functionality in an upcoming release for production use, but might choose not to in
accordance with changing market or technical circumstances. For more information, contact Informatica
Global Customer Support.
You can refine the intelligent structure model and customize the structure of the output data. You can edit
the nodes in the model to combine, exclude, flatten, or collapse them.
The Structure Parser transformation can process input from source transformations efficiently and
seamlessly based on the intelligent structure model that you select. When you add a Structure Parser
transformation to a mapping, you associate it with the intelligent structure model.
When you use a Structure Parser transformation in a mapping, you can select a Source transformation based on a flat file to process local input files. Or, you can select a Source transformation based on a Hadoop Files V2 connection to stream input files in HDFS using a Hortonworks Data Platform or Cloudera connection, or to process input from local file systems.
To use the Structure Parser transformation, you need the appropriate license.
You can also select a sample file when you configure the Structure Parser transformation, and Intelligent
Structure Discovery creates the model based on the file.
When you provide a sample file, Intelligent Structure Discovery determines the underlying structure of the
information and creates a model of the structure. You associate the Intelligent structure model with a
Structure Parser transformation in a mapping.
The Structure Parser contains the following Field Mapping fields in the Structure Parser Input Fields panel:
Data
Used to map the data field in the source transformation to the data field in the Structure Parser
transformation. Map this field for a Hadoop Files source and for a Structure Parser transformation that
is used midstream.
File Path
Used to map the file path or reference file in the source transformation to the file path field in the
Structure Parser transformation.
Show
Determines how field names appear in the Structure Parser transformation input fields list. Use technical field names or labels.
Field map options
Method of mapping fields to the Structure Parser transformation. Select one of the following options:
• Manual. Manually link incoming fields to target fields. Removes links for automatically mapped fields.
• Automatic. Automatically link fields with the same name. Use when all of the fields that you want to link share the same name. You can't manually link fields with this option.
Note: You can't use parameterized field names with the Structure Parser transformation. Do not select the options Completely Parameterized or Partially Parameterized.
Automap
Links fields with matching or similar names. Allows you to link matching fields, and then manually configure other field mappings.
Show Fields
Controls the fields that appear in the Incoming Fields list. Show all fields, unmapped fields, or mapped fields.
Output fields
When you select an intelligent structure to use in a Structure Parser transformation, the intelligent structure
model output fields appear on the Output Fields tab of the Properties panel.
The output fields on the Output Fields tab are grouped into the following groups:
• Unidentified group. This group contains the data that was not identified by the intelligent structure. You
might want to pass this data to a target file for further analysis.
• Output groups. One or more groups that contain the data that the intelligent structure model identifies.
You can select a group or groups to transfer pass-through fields to. Pass-through fields are fields that you
don't map in the transformation field mapping and that the transformation transfers to the selected group or
groups as is. Use this option if you want to use the pass-through fields later in the mapping, for example, to
pass a timestamp field to the next downstream transformation. The Structure Parser transformation passes
the pass-through fields to each selected group.
The Output Fields tab displays the name, type, precision, scale, and origin of each output field in the output
groups. The origin of a field shows the path of the respective node in the intelligent structure model. If a
name of a node in the intelligent structure model contains special characters, the Secure Agent replaces
them with an underscore (_) character, and the Output Fields tab displays the revised name as the output
field name. The Output Fields tab doesn't display the origin of fields with revised names.
To edit the precision of an output field, click the precision value and enter the precision you require.
Note: Intelligent Structure Discovery doesn't enforce precision and scale on decimal fields.
You can't edit the transformation output fields. If you want to exclude output fields from the data flow or
rename output fields before you pass them to a downstream transformation or target, configure the field
rules for the downstream transformation.
Advanced properties
You can configure advanced properties for a Structure Parser transformation. The advanced properties
control the transformation scope.
Transformation Scope
Specifies how Data Integration applies the transformation logic to incoming data. Select one of the following options:
- Transaction. Applies the transformation logic to all rows in a transaction. Choose Transaction when a row of data depends on all rows in the same transaction, but does not depend on rows in other transactions.
- All Input. Applies the transformation logic on all incoming data. When you choose All Input, Data Integration drops incoming transaction boundaries. Choose All Input when a row of data depends on all rows in the source.
- Row. Applies the transformation logic to one row of data at a time. Choose Row when the results of the transformation depend on a single row of data.
• When you select a mapping that contains a Structure Parser transformation in the Mapping Designer, and
the associated intelligent structure model changed after it was associated with the transformation, a
message appears in the Mapping Designer. Click the link that is provided in the message to update the
model so that it complies with the mapping.
• If you update an intelligent structure model that was created with an older version of Intelligent Structure
Discovery, where all output fields were assigned a string data type, the Structure Parser transformation
might change field data types during the update. If, for any of the affected fields, the downstream
• Before you use the Parquet output type, set the HADOOP_HOME and hadoop.home.dir environment
variables.
• When you use an AVRO output type, the transformation doesn't generate output for input fields with
identical names.
• The transformation creates ORC output based on the local date and time. Processing the same input in
different locations or environments might result in output with different times and time formats.
• We recommend that you use a binary output type to create large XML and JSON files.
To associate an intelligent structure model with the transformation, you can select an existing model or
create a new model. If you create a new model, you can select one of the following actions:
• Design new. You create the model in the Intelligent Structure Model page. For more information about
creating an intelligent structure model, see Components.
• Auto-generate from a sample. You select a sample file and Intelligent Structure Discovery creates the
model and saves it in the location that you select.
You can also select and view a model on the Intelligent Structure Model page before you use it.
When you configure a Structure Parser transformation, you select the data output type. The transformation
can generate the following output types:
• Relational
• JSON
1. Add a Structure Parser transformation to the mapping and configure general settings.
2. On the Properties panel, click Structure Parser and choose one of the following options to associate an
intelligent structure model with the transformation:
- New > Design New. Create a new model in the Intelligent Structure Model page.
- New > Auto-generate from sample file. Select a sample file and a location for the model and click Create. Intelligent Structure Discovery creates the model and saves it in the selected location.
3. Select the output type for the Structure Parser transformation from the Output As list.
4. If you select the JSON output type, you can select Include empty tags to add all the model tags to the
output at run time, including tags that don't exist in the input. The transformation adds tags that don't
exist in the input as empty tags with a NULL value.
5. On the Properties panel, click Incoming Fields and configure the incoming fields.
6. On the Properties panel, click Field Mappings and configure field mapping.
7. Optionally, on the Properties panel, click Advanced and configure the transformation scope.
8. Link the previous transformation in the mapping to the Structure Parser transformation.
9. Link the Structure Parser transformation to the downstream transformation and select an output group.
1. In the Properties panel of the Structure Parser transformation, click the Structure Parser tab.
2. Click Select.
The Select Intelligent Structure dialog box appears.
3. In the Explore list, select an intelligent structure model.
4. To search for an intelligent structure model, select the search criteria and enter the characters to search for in the Find field.
You can search for an intelligent structure model by name or description. You can sort the intelligent
structure models by name, type, description, tags, status, or date last modified.
You can create a Hadoop Files source with a Hortonworks Data Platform or Cloudera connection to provide
input to a Structure Parser transformation. For more information about the Hadoop Files sources, see the
Hadoop Files V2 Connector help.
Hadoop Files V2 Connector can process a single file or an entire directory. When the connector processes a
directory, it processes the files recursively.
To ensure that the Structure Parser transformation can process output from the Hadoop Files source, define
the Field Mapping settings correctly in the Structure Parser transformation. Map the Data and File Path fields
in the Incoming Field panel to the Data and File Path fields in the Structure Parser Input Fields panel. You
can map the fields automatically or manually.
The Source transformation uses a reference file that contains a file path or a list of file paths to one or more
files that you want the Structure Parser transformation to process. Ensure that you use a reference file when
you configure the Source transformation. On the Source tab, select the reference file in the Object field.
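For example, a reference file might contain a single line with the path to the file that you want to parse, similar to the following line (the path shown is illustrative):
C:\InputFiles\weblog_sample.log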
1. On the Source transformation Source tab, select Formatting Options near the Object field.
2. Select None for the Text Qualifier.
3. Select Auto-generate for the Field Labels.
4. Click OK.
u On the Structure Parser transformation Field Mapping tab, map the field in the Incoming Field panel to
the File Path field in the Structure Parser Input Fields panel.
You need to configure an intelligent structure model that analyzes unstructured data and discovers its
structure.
The following image shows the log file that you want to parse:
To parse the log file, use a Structure Parser transformation in a mapping to transform the data in the log file.
In the Mapping Designer, you add a source object that is a flat file that contains the path to the log file you
want to parse.
You connect the source object to the Structure Parser transformation. To map the incoming data to the fields
of the transformation, select the Structure Parser transformation. On the Field Mapping tab, map the incoming Path field to the File Path field in the Structure Parser Input Fields panel.
Add a text file target transformation named TargetFile for the parsed output group that you want to process. Add a separate text file target transformation named Unidentified for data that was not identified.
Run the mapping to write the data in a structured format to the TargetFile transformation. The mapping
sends any data that was not identified by the intelligent structure to the Unidentified transformation.
The following image shows the parsed data output file from the TargetFile transformation:
If you need to further parse the data, you can include additional Structure Parser transformations midstream
that will parse the output from the preceding parser.
Transaction Control transformation
The Transaction Control transformation is an active transformation that commits or rolls back sets of rows
during a mapping run. Use the Transaction Control transformation to commit or roll back transactions from
transactional targets such as relational, XML, Amazon Redshift, and REST V2 targets. You can also use the
transformation in a mapping to write data to a different flat file each time that Data Integration starts a new
transaction.
You might want to use a Transaction Control transformation when you process large amounts of data. You
can use the Transaction Control transformation to commit the data at certain intervals to prevent data loss.
For example, you run a mapping that processes thousands of records in a table that is sorted by order type.
You might want to commit the data each time that the mapping processes a different order type.
In a Transaction Control transformation, a transaction is the row or set of rows bound by commit or roll back
rows. A transaction can be based on a group of rows that are ordered on a common key, such as employee ID
or order entry date. The number of rows in each transaction can vary.
You define a transaction by specifying the transaction control condition in the transformation. Based on
whether the condition is met, you can choose to commit rows, roll back rows, or continue processing data
without changing the transaction boundaries.
When you run the mapping task, Data Integration evaluates the transaction control condition for each row
that enters the transformation. When it evaluates a commit row, it commits all rows in the transaction to the
targets. When Data Integration evaluates a roll back row, it rolls back all rows in the transaction from the
targets.
If the mapping has a flat file target that is created at run time, you can generate an output file each time Data
Integration starts a new transaction. You can dynamically name each target flat file.
To use the Transaction Control transformation, you need the appropriate license.
Example
You want to use transaction control to write order information based on the order type. You want to ensure
that all orders of a specific type are written to a different target file. To accomplish this, you create the
following mapping:
Source transformation
Sorter transformation
Create a transaction control condition to commit data when Data Integration encounters a new order type.
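For example, a condition along the following lines commits a transaction each time the order type changes (the property layout is illustrative):
If: ORDER_TYPE value changes
Then: Commit Before
Else: Continue Transaction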
Target transformation
Create a new file target at run time and specify a dynamic file name. Use the following expression for the
target name to create a different target file for each order type:
'Orders_'||ORDER_TYPE||'.csv'
Note: Before you specify the transaction control condition, verify that the incoming data is sorted. Incoming
data must be sorted by the fields that you use in the transaction control condition.
You can use one of the following types of conditions to test the row data:
If Field Value Changes
Use an If Field Value Changes condition when you want to test whether a field value changes. For example, if ORDER_DATE changes, then commit the transaction before writing the current row to the target. Otherwise, continue processing the data.
Advanced
Use an advanced condition when you want to use an expression to test the row data. For example, if
NEW_FILE_FLAG=’Y’ AND DEPT_ID>1000, then commit the transaction but include the current row in the
next transaction. Otherwise, continue processing the data.
Configure the expression in the If part of the condition. The expression can use fields, parameters, built-
in functions, and user-defined functions.
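A sketch of how the advanced condition described above might be configured, using the field names from that example:
If: NEW_FILE_FLAG = 'Y' AND DEPT_ID > 1000
Then: Commit Before
Else: Continue Transaction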
Parameterized
You can use an expression parameter to represent the condition. Enter the parameter value when you
run the mapping task or enter the value in a parameter file.
You can specify the following actions for the Then and Else parts of the condition based on the test results:
Action Description
Continue Transaction: Data Integration does not perform any transaction change for this row.
Commit Before: Data Integration commits the transaction, begins a new transaction, and writes the current row to the target. The current row is in the new transaction.
Commit After: Data Integration writes the current row to the target, commits the transaction, and begins a new transaction. The current row is in the committed transaction.
Rollback Before: Data Integration rolls back the current transaction, begins a new transaction, and writes the current row to the target. The current row is in the new transaction.
Rollback After: Data Integration writes the current row to the target, rolls back the transaction, and begins a new transaction. The current row is in the rolled back transaction.
The Then and Else parts of the condition must contain different actions.
Transaction Control transformations can be effective or ineffective for the downstream transformations and targets in the mapping. A Transaction Control transformation becomes ineffective for a downstream transformation or target when you place a transformation that drops incoming transaction boundaries, such as an Aggregator transformation with the All Input transformation scope, between the Transaction Control transformation and that transformation or target.
The following image shows a valid mapping with a Transaction Control transformation that is effective for a
Sorter transformation, but ineffective for the target:
In this example, TransactionControl_1 is ineffective for the target, but effective for the Sorter transformation.
The transformation scope for the Sorter transformation is Transaction. It uses the transaction boundaries
defined by TransactionControl_1. The transformation scope for the Aggregator transformation is All Input. It
drops transaction boundaries defined by TransactionControl_1. Transaction control transformation
TransactionControl_2 is an effective Transaction Control transformation for the target.
The following image shows a valid mapping with both an ineffective and an effective Transaction Control
transformation:
Data Integration processes TransactionControl_1, evaluates the transaction control expression, and creates
transaction boundaries. The mapping does not include a transformation that drops transaction boundaries
between TransactionControl_1 and Target_1, making TransactionControl_1 effective for Target_1. Data
Integration uses the transaction boundaries defined by TransactionControl_1 for Target_1.
However, the mapping includes a transformation that drops transaction boundaries between
TransactionControl_1 and Target_2, which makes TransactionControl_1 ineffective for Target_2. When Data
Integration processes the Aggregator transformation, with transformation scope set to All Input, it drops the
transaction boundaries defined by TransactionControl_1 and outputs all rows in an open transaction. Data Integration then evaluates TransactionControl_2, creates new transaction boundaries, and uses them for Target_2, making TransactionControl_2 the effective Transaction Control transformation for Target_2.
If a roll back occurs in TransactionControl_1, Data Integration rolls back only rows from Target_1. It does not
roll back any rows from Target_2.
The following image shows an invalid mapping with both an ineffective and an effective Transaction Control
transformation:
The mapping is invalid because Target_1 is not connected to an effective Transaction Control
transformation.
• Incoming data must be sorted by the fields that you use in the transaction condition. Place a Sorter
transformation upstream of the Transaction Control transformation or use a sorted data source.
• Configuring the transaction control condition to perform frequent commits can affect performance.
• If the mapping includes an XML target, and you choose to append or create a new document on commit,
the input groups must receive data from the same transaction control point.
• Transaction Control transformations that are connected to any target that does not support batch or
transaction real-time processing are ineffective for those targets.
• You must connect each target instance to a Transaction Control transformation.
• You can connect multiple targets to the same Transaction Control transformation.
• You can connect only one effective Transaction Control transformation to a target.
• You cannot place a Transaction Control transformation in a pipeline branch that starts with a Sequence
Generator transformation.
• If you use a dynamic Lookup transformation and a Transaction Control transformation in the same
mapping, a rolled-back transaction might result in unsynchronized target data.
• Either all targets or no targets in the mapping should be connected to an effective Transaction Control
transformation.
Property Description
Tracing Level: Detail level of error and status messages that Data Integration writes in the session log. You can choose terse, normal, verbose initialization, or verbose data. Default is normal.
Union transformation
The Union transformation is an active transformation that you use to merge data from multiple pipelines into
a single pipeline.
For data integration patterns, it is common to combine two or more data sources into a single stream that
includes the union of all rows. The data sources often do not have the same structure, so you cannot freely
join the data streams. The Union transformation enables you to make the metadata of the streams alike so
that you can combine the data sources in a single target.
The Union transformation merges data from multiple sources similar to the UNION ALL SQL statement. For
example, you might use the Union transformation to merge employee information from ADP with data from a
Workday employee object.
You can add, change, or remove specific fields when you merge data sources with a Union transformation.
At run time, the mapping task processes input groups in parallel. It concurrently reads the sources connected
to the Union transformation and pushes blocks of data into the input groups of the transformation. As the
mapping runs, it merges data into a single output group based on the field mappings.
The following comparison identifies some key differences between the Union transformation and the Joiner transformation, which also merges data from multiple sources. Factor these differences into your mapping design:
Remove duplicate rows
- Union transformation: No. You can use a Router or Filter transformation downstream from the Union transformation to remove duplicates.
- Joiner transformation: Yes.
Combine records based on a join condition
- Union transformation: No. The Union transformation is equivalent to a UNION ALL statement in SQL, which combines data vertically from multiple sources.
- Joiner transformation: Yes. The Joiner transformation supports Normal, Right Outer, Left Outer, and Full Outer JOINs.
Include multiple input groups
- Union transformation: Yes. You can define multiple input groups and one output group.
- Joiner transformation: Yes. You can define two input groups, Master and Detail.
Merge different data types
- Union transformation: All of the source columns must have similar data types. The number of columns in each source must be the same.
- Joiner transformation: At least one column in the sources to be joined must have the same data type.
• Before you add a Union transformation to a mapping, add all Source transformations and include the other
upstream transformations that you want to use.
• You can use a Sequence Generator transformation upstream from a Union transformation if you connect
both the Sequence Generator and a Source transformation to one input group of the Union
transformation.
Input groups
By default, a Union transformation has two input groups. If you want to merge data from more than two
sources, add an input group for each additional source. Each group can have different field rules for each
upstream transformation.
• The Union transformation initializes its output fields based on fields in the first source that you connect to
an input group.
• Each input group can use a different field mapping mode or parameter.
• You can parameterize the field mappings or define the field mapping for each input group.
To add an input group, in the Mapping Designer, connect an upstream transformation to the "New Group"
group of the Union transformation. You can also add input groups on the Incoming Fields tab of the Union
transformation.
You can rename input groups. You can also delete input groups as long as there are at least two remaining
input groups. Rename and delete input groups on the Incoming Fields tab.
Output fields
• After you initialize the output fields, you cannot change the output fields by connecting or disconnecting
the input group.
• You can manually add output fields if you add them before you connect one of the Union transformation
input groups.
• When you add an output field, you define the field name, data type, precision, scale, and optional
description. The description can contain up to 4000 characters.
• If you connect the Union transformation to an upstream transformation that does not pass in any fields,
the output fields are not initialized.
• At run time, the mapping passes null values to output fields that are not in a field mapping.
Field mappings
The Union transformation can merge data from multiple source pipelines. The sources can have the same set
of fields, have some matching fields, or use parameterized field mappings.
When you work with field mappings in a Union transformation, note the following:
• You must use input groups where the fields have the identical name, type, precision, and scale.
• You can edit, remove, or manually add some of the output fields.
• As part of the field mapping, you choose an input group and specify the parameter from the input group.
• You can use parameters for fields in all input groups.
• You can parameterize the field mapping or map by field name for each input group. At run time, the task
adds an exact copy of the fields from the input group as output fields.
If you want Data Integration to automatically link fields with the same name and you also want to manually
map fields, select the Manual option and click Automap.
• Exact Field Name. Data Integration matches fields of the same name.
• Smart Map. Data Integration matches fields with similar names. For example, if you have an incoming
field Cust_Name and a target field Customer_Name, Data Integration automatically links the Cust_Name field
with the Customer_Name field.
You can use both Exact Field Name and Smart Map in the same field mapping. For example, use Exact Field
Name to match fields with the same name and then use Smart Map to map fields with similar names.
You can undo all automapped field mappings by clicking Automap > Undo Automap. To unmap a single field,
select the field to unmap and click Actions > Unmap.
Data Integration highlights newly mapped fields. For example, when you use Exact Field Name, Data
Integration highlights the mapped fields. If you then use Smart Map, Data Integration only highlights the
fields mapped using Smart Map.
Note: The properties that appear in the transformation depend on the mapping type.
Property Description
Tracing Level: Detail level of error and status messages that Data Integration writes in the session log. You can choose terse, normal, verbose initialization, or verbose data. Default is normal.
Optional: Determines whether the transformation is optional. If a transformation is optional and there are no incoming fields, the mapping task can run and the data can go through another branch in the data flow. If a transformation is required and there are no incoming fields, the task fails.
For example, you configure a parameter for the source connection. In one branch of the data flow, you add a transformation with a field rule so that only Date/Time data enters the transformation, and you specify that the transformation is optional. When you configure the mapping task, you select a source that does not have Date/Time data. The mapping task ignores the branch with the optional transformation, and the data flow continues through another branch of the mapping.
• id
• last
• first
• email
• phone
Note: Remember that the data to be merged with a Union transformation must have the same data type,
precision, and scale.
1. Ensure that the source files reside in a location accessible to your Secure Agent.
2. Define a connection to access the .csv files.
3. Create a mapping in the Mapping Designer.
4. Add two Source transformations to the mapping to connect to data in the .csv files.
5. Add a Union transformation and connect the Source transformations to it.
6. In the Union transformation Properties, perform the following steps for each input group:
a. In the Field Rules section, click the group you want to configure.
b. (Optional) For the incoming fields, select the fields you want to merge in the output.
The following image shows the selected fields in the first input group:
If you do not specify a rule to exclude fields, at run time, the task ignores any fields that you do not
map to the output fields.
c. Edit the Output field names in the Union transformation, to correspond to the field names that you
want in the target:
Note: You can also select fields, change metadata, add other fields, or convert the field types, for
example, from integer to number.
Velocity transformation
Use the Velocity transformation in a mapping to convert hierarchical input from one format to another without
flattening the data. The transformation can convert JSON or XML data to JSON, XML, or text output such as
plain text, email, or HTML.
You might want to use the Velocity transformation to convert hierarchical data to comply with a downstream
API call or for use in a downstream application such as a campaign management system or machine learning
algorithm. You can also use the Velocity transformation to process row data such as a JSON BLOB in a
database.
To convert the data, the Velocity transformation uses an Apache Velocity script that you provide. The script
can contain Velocity Template Language (VTL) statements, Data Integration built-in functions, and Data
Integration unconnected lookup expressions.
You can pass data to the Velocity transformation in one of the following ways:
Pass the data directly to the transformation.
You can pass data directly to the Velocity transformation through the input field. The data must be a JSON BLOB or string or an XML BLOB or string.
Pass the name and location of the file that contains the data you want to convert.
If you want the Velocity transformation to convert data stored in a flat file such as an XML or JSON file,
pass the file name and location to the transformation through the input field. The data must be a string
that contains the file path and file name. The file must be a delimited flat file that is accessible from the
Secure Agent machine.
When you configure the Velocity transformation, you select the input field and input type, the format type of the data, and the variable name that refers to the data in the template. If the data to be converted is binary data or is in a file, you also select the code page.
The following table describes the input format properties:
Property Description
Input Field: Field in the upstream transformation that contains the input for the Velocity transformation. The field that you select must contain either the data to be converted or the file path and name of the flat file that contains the data.
Since the input to the transformation must be character or binary data, this field displays only string, text, and binary fields from the upstream transformation.
Input Type: Type of data in the selected input field. Select one of the following input types:
- Buffer. Select this option when the input field contains the data that you want to convert, such as a JSON BLOB or XML string.
- File. Select this option when the input field contains the path to a flat file that contains the data to be converted.
Format Type: Format type of the data that you want to process, either JSON or XML.
Variable Name in Template: The variable that you define in the template to refer to the data to be processed. Do not include a leading dollar sign or other special character.
For example, in the following template, the variable name "root" is used to refer to the data to be processed:
<xml>
#foreach ($child in $root.getRootElement().getChildren() )
<$child.getChild("name").getText()>
<id>$child.getChild("id").getText()</id>
<size>$child.getChild("size").getText()</size>
</$child.getChild("name").getText()>
#end
</xml>
The following table describes the file input format advanced property:
Property Description
Code Page Code page of the data that you want to convert. This field is enabled if the input type is File or if the
input field contains binary data.
Select a code page if the code page of the data that you want to convert differs from the code page of
the Secure Agent machine. Otherwise, select Default.
For example, the following text shows the contents of a source file that reads data from an XML file named
Products.xml:
C:\IICS_XML_SourceFiles\Products.xml
1. On the Source tab, select the file that contains the file path as the source object, and then click
Formatting Options.
2. In the Formatting Options dialog box, ensure that the flat file type is Delimited, and configure the
following properties:
Property Value
3. Click OK.
Velocity template
You create the Velocity template on the Velocity tab. To create the template, enter or paste the template in
the template editor. Then click Validate to validate the syntax.
The template can contain VTL statements, Data Integration built-in functions, and Data Integration
unconnected lookup expressions. For more information about the Velocity Template Language, see the
Apache Velocity documentation.
To call a Data Integration built-in function in the template, use the following syntax:
$function.call('<function name>',<argument>,<argument>,...)
For example, the following template code returns the system time stamp in an XML comment:
<!--$function.call('Systimestamp')-->
The following function call passes the property $vendor.name as an argument to the INITCAP function to
capitalize the first letter of each word in the vendor name:
$function.call('InitCap',$vendor.name)
For example, the following expression passes the properties $item.id and $item.category as arguments to an
unconnected lookup Transformation named lkp_ItemPrices:
$function.call(':LKP.lkp_ItemPrices',$item.id,$item.category)
To validate the syntax, click Validate. Data Integration checks the template and displays any syntax errors. If
the template syntax is invalid, you can save the mapping but the mapping is invalid.
Validation does not test the template output or find runtime errors. You can test the template output on the
Test tab.
To test the template you created, select a runtime environment, enter or paste a portion of the incoming data
into the Sample Input field, and then click Test. You can select any runtime environment in which the Data
Integration Server is enabled except the Hosted Agent.
Sample input can contain up to 1000000 characters. For best results, ensure that the sample input is a
representative sample of the data that you want to convert.
Sample output appears in the Sample Output field. When you test a template, Data Integration does not
process built-in function or lookup expression calls. In the sample output, these expressions appear exactly
as they were entered in the template. If the template does not generate valid output, an error message
appears in the Sample Output field.
The precision is the maximum number of characters that the returned character string can contain. Data
Integration truncates the returned character string at the precision value. By default, the output field precision
is 1000000 characters. To change the output field precision, enter a different value in the Precision field.
To specify no header in the output file, open the Target tab of the Target transformation. In the Advanced
properties, set the header options to No Header.
To specify no text qualifier, on the Target tab, click Formatting Options next to the Object field. In the
Formatting Options dialog box, set the text qualifier to None.
The Velocity transformation uses the following parsers based on the type of data you want to process:
JSON data
To parse JSON data, the Velocity transformation uses the Java package org.json. The transformation
passes the data that you want to process to the JSONObject(java.lang.String source) constructor to
construct a JSONObject object within Java. For more information about the JSONObject constructor, see
javadoc.io.
XML data
To parse XML data, the Velocity transformation uses the Java package org.jdom2.input. The
transformation uses the SAX parser that this package provides and creates a SAXBuilder object in Java.
For more information about the SAX parser, see the JDOM v2.0.6 API Specification.
Examples
The following examples show how to use the Velocity transformation to convert XML data from one format
to another and to convert and augment JSON data.
XML conversion example
The XML file that you want to convert, products.xml, contains data in the following format:
<?xml version="1.0" encoding="UTF-8"?>
<document>
<product>
<id>1</id>
<name>milk</name>
<size>16 oz</size>
</product>
<product>
<id>2</id>
<name>water</name>
<size> 8 oz</size>
</product>
</document>
You want to convert the XML so that each product is in its own element, for example, <milk>...</milk>. You
also want to append a timestamp in a comment inside the XML file.
To convert the file, first create a source file that contains the path to the XML file that you want to convert so
that the mapping can read data from the XML file. Then, create the mapping.
Create a text file called "filepath.txt" that contains the following line:
C:\XMLSources\products.xml
After you create the source file, create the mapping. The following image shows the mapping:
Source transformation
Configure the Source transformation to read data from the file that contains the file path, and configure
the source formatting options.
On the Source tab, select a connection that can access the source file, set the source type to Single
Object, and select filepath.txt as the source object.
Property Value
Velocity transformation
Property Value
On the Velocity Template tab, enter the following template in the template editor and validate the syntax:
<?xml version="1.0" encoding="UTF-8"?>
<!--Generation date: $function.call('Systimestamp')-->
## Convert each product to an element:
<products>
#foreach ($child in $root.getRootElement().getChildren() )
<$child.getChild("name").getText()>
<id>$child.getChild("id").getText()</id>
<size>$child.getChild("size").getText()</size>
</$child.getChild("name").getText()>
#end
</products>
Target transformation
Configure the Target transformation to write data to a flat file target created at run time. To ensure that
the target file contains no header and no text qualifier character, configure the formatting options.
On the Target tab, select the connection and set the target type to Single Object. Click Select next to the
Object field. Select Create New at Runtime, and enter the file name "products_converted.xml."
When you run the mapping, the target file, products_converted.xml, contains the following XML:
<?xml version="1.0" encoding="UTF-8"?>
<!--Generation date: 06/17/2020 17:45:41.341526-->
<products>
<milk>
<id>1</id>
<size>16 oz</size>
</milk>
<water>
<id>2</id>
<size> 8 oz</size>
</water>
</products>
JSON conversion example
You have vendor data from an online review service that is stored in a JSON BLOB in a database. You want to
filter the records, convert the data to a different format, and write it to a JSON file. You also want to augment
the data with additional attributes.
The database contains a JSON BLOB that contains data in the following format:
{"items":[
{"business_id":"1SWheh84yJXfytovILXOAQ","name":"Rancho Golf Club","address":"1200 E
Camino Acequia
Drive","postal_code":"85016","stars":3.0,"review_count":5,"is_open":0,"attributes":
{"GoodForKids":"False"},"categories":"Golf, Active Life","hours":null},
{"business_id":"QXAEGFB4oINsVuTFxEYKFQ","name":"Garden Blessing Chinese
Restaurant","address":"25 Eglinton Avenue W","postal_code":"L5R
3E7","stars":2.5,"review_count":128,"is_open":1,"attributes":
{"RestaurantsReservations":"True","GoodForMeal":"{'dessert': False, 'latenight': False,
'lunch': True, 'dinner': True, 'brunch': False, 'breakfast':
False}","BusinessParking":"{'garage': False, 'street': False, 'validated': False, 'lot':
True, 'valet':
False}","Caters":"True","NoiseLevel":"u'loud'","RestaurantsTableService":"True","Restaura
ntsTakeOut":"True","RestaurantsPriceRange2":"2","OutdoorSeating":"False","BikeParking":"F
alse","Ambience":"{'romantic': False, 'intimate': False, 'classy': False, 'hipster':
False, 'divey': False, 'touristy': False, 'trendy': False, 'upscale': False, 'casual':
True}","HasTV":"False","WiFi":"u'no'","GoodForKids":"True","Alcohol":"u'full_bar'","Resta
urantsAttire":"u'casual'","RestaurantsGoodForGroups":"True","RestaurantsDelivery":"False"
},"categories":"Specialty Food, Restaurants, Dim Sum, Imported Food, Food, Chinese,
Ethnic Food, Seafood","hours":
{"Monday":"9:0-0:0","Tuesday":"9:0-0:0","Wednesday":"9:0-0:0","Thursday":"9:0-0:0","Frida
y":"9:0-1:0","Saturday":"9:0-1:0","Sunday":"9:0-0:0"}},
{"business_id":"gnKjwL_1w79qoiV3IC_xQQ","name":"Fujiyama Japanese
Cuisine","address":"111 Johnston Rd, Ste
15","postal_code":"28210","stars":4.0,"review_count":170,"is_open":1,"attributes":
{"GoodForKids":"True","NoiseLevel":"u'average'","RestaurantsDelivery":"False","GoodForMea
l":"{'dessert': False, 'latenight': False, 'lunch': True, 'dinner': True, 'brunch':
False, 'breakfast':
False}","Alcohol":"u'beer_and_wine'","Caters":"False","WiFi":"u'no'","RestaurantsTakeOut"
:"True","BusinessAcceptsCreditCards":"True","Ambience":"{'romantic': False, 'intimate':
False, 'touristy': False, 'hipster': False, 'divey': False, 'classy': False, 'trendy':
False, 'upscale': False, 'casual': True}","BusinessParking":"{'garage': False, 'street':
False, 'validated': False, 'lot': True, 'valet':
False}","RestaurantsTableService":"True","RestaurantsGoodForGroups":"True","OutdoorSeatin
g":"False","HasTV":"True","BikeParking":"True","RestaurantsReservations":"True","Restaura
ntsPriceRange2":"2","RestaurantsAttire":"'casual'"},"categories":"Sushi Bars,
Restaurants, Japanese","hours":
{"Monday":"17:30-21:30","Wednesday":"17:30-21:30","Thursday":"17:30-21:30","Friday":"17:3
0-22:0","Saturday":"17:30-22:0","Sunday":"17:30-21:0"}}
...
You want to filter the records so that they only include restaurants with three or more stars. You also want to
add the city to each record based on the postal code.
The postal codes and corresponding cities exist in a flat file with the following format:
postal_code|city
15090|Wexford, PA
15102|Bethel Park, PA
15206|Pittsburgh, PA
15317|Canonsburg, PA
28012|Belmont, NC
28027|Concord, NC
...
You want the target file to contain JSON data in the following format:
{ "Date": "<date>",
"vendors":[
{
"name": "<restaurant name>",
"location": "<city, state/province>",
Source transformation
Configure the Source transformation to read data from the database table that contains the JSON BLOB.
Velocity transformation
Property Value
Input Field Incoming string field from the Source transformation that contains the JSON BLOB.
On the Velocity Template tab, enter the following template in the template editor and validate the syntax:
#set($comma ="")
{ "Date": "$function.call('Systimestamp')",
"vendors":[
#foreach($vend in $inputRoot.items)
#if($vend.categories.toString().contains("Restaurants") && ($vend.stars > 3))
$comma
{
"name": "$vend.name",
"location": "$function.call(':lkp.lkp_CityLookup', $vend.postal_code)",
"desc": "$vend.categories",
"stars": $vend.stars
}
#set($comma =",")
#end
#end
]
}
Unconnected Lookup transformation
Configure the Lookup transformation to return the city based on the postal code. The Lookup
transformation must be an unconnected Lookup transformation so that you can call it from the template
in the Velocity transformation.
On the Incoming Fields tab, create an incoming field for the postal code called in_postal_code.
On the Lookup Object tab, select the text file that contains the postal codes and cities. Then, click
Formatting Options and configure the following properties:
Property Value
Delimiter Other: |
Configure the following lookup condition:
postal_code = in_postal_code
Target transformation
Configure the Target transformation to write data to a flat file target created at run time. To ensure that
the target file contains no header and no text qualifier character, configure the formatting options.
On the Target tab, select the connection and set the target type to Single Object. Click Select next to the
Object field. Select Create New at Runtime, and enter the file name "vendors.json."
When you run the mapping, the target file, vendors.json, contains the following data:
{ "Date": "07/08/2020 12:33:44.647037",
"vendors":[
{
"name": "Fujiyama Japanese Cuisine",
"location": "Charlotte, NC",
"desc": "Sushi Bars, Restaurants, Japanese",
"stars": 4.0
}
,
{
"name": "D'Amico's Pizzeria",
"location": "Mentor-on-the-Lake, OH",
"desc": "Italian, Restaurants, Pizza, Chicken Wings",
"stars": 4.0
}
Chapter 35
Verifier transformation
The Verifier transformation adds a verifier asset that you created in Data Quality to a mapping.
A verifier asset defines a template for input and output address data that you can connect to the input and
output fields on the Verifier transformation. Connect the fields in your source data or in upstream
transformations to the corresponding input ports on the Verifier transformation. Connect the output ports on
the Verifier transformation to downstream transformations in the mapping or to the mapping target.
The Verifier transformation performs the following operations on the input address data:
• The transformation compares the address records in the source data to address reference data.
• It fixes errors and completes partial address records. To fix an address, the transformation must find a
positive match with an address in the reference data. The transformation copies the required data
elements from the address reference data to the address records.
• It writes output addresses in the format that the verifier asset specifies. You define a verifier asset in Data
Quality to create address records that suit your business needs and that conform to the structure that the
mail carrier requires.
• It can report on the deliverable status of each address and the nature of any error or ambiguity that the
address contains.
• It can provide suggestions for any ambiguous or incomplete address.
For more information about the types of address information that the Verifier transformation can read and
write, including address status information, consult the verifier asset documentation in the Data Quality
online help.
A Verifier transformation is similar to a Mapplet transformation, as it allows you to add address verification
logic that you created elsewhere to a mapping. Like mapplets, verifiers are reusable assets. A Verifier
transformation shows incoming and outgoing fields. It does not display the address data that the verifier
contains or allow you to edit the verifier. To edit the verifier, open it in Data Quality.
When you run a mapping with a Verifier transformation, the Secure Agent evaluates the input data and
downloads the reference data files that you need. Each reference data file is specific to a single country. The
Secure Agent downloads one or more files for each country that the input address data specifies.
You do not need to take any action to download the files. If the current reference data files already exist on
the system, the Secure Agent does not download them again.
Each reference data file requires a license. You buy the license from Informatica. You enter the license key
information as a system configuration property on the Secure Agent that runs the mapping. Find the Secure
Agent properties in the Administrator service.
For more information on reference data properties, consult the Verifier Guide in the Data Quality online help.
Field map options
Method of mapping fields to the transformation. Select one of the following options:
• Manual. Manually link incoming fields to transformation input fields. Removes links for automatically
mapped fields.
• Automatic. Automatically link fields with the same name. Use when all of the fields that you want to
link share the same name. You cannot manually link fields with this option.
• Completely Parameterized. Use a parameter to represent the field mapping. In the task, you can
configure all field mappings.
Choose the Completely Parameterized option when the verifier in the transformation is parameterized
or any upstream transformation in the mapping is parameterized.
• Partially Parameterized. Configure links in the mapping that you want to enforce and use a parameter
to allow other fields to be mapped in the mapping task. Or, use a parameter to configure links in the
mapping, and allow all fields and links to display in the task for configuration.
Parameter
Select the parameter to use for the field mapping, or create a new parameter. This option appears when
you select Completely Parameterized or Partially Parameterized as the field map option. The parameter
must be of type field mapping.
Do not use the same field mapping parameter in more than one Verifier transformation in a single
mapping.
Options
Controls how fields are displayed in the Incoming Fields and Target Fields lists.
• The fields that appear. You can show all fields, unmapped fields, or mapped fields.
• Field names. You can use technical field names or labels.
Automap
Links fields with matching names. Allows you to link matching fields and to manually configure other
field mappings. The Automap options appear when you select the Manual or Partially Parameterized
field map option.
• Exact Field Name. Data Integration matches fields of the same name.
• Smart Map. Data Integration matches fields with similar names. For example, if you have an incoming field Cust_Name and a target field Customer_Name, Data Integration automatically links the Cust_Name field with the Customer_Name field.
You can use both Exact Field Name and Smart Map in the same field mapping. For example, use Exact
Field Name to match fields with the same name and then use Smart Map to map fields with similar
names.
You can undo all automapped field mappings by clicking Automap > Undo Automap.
To unmap a single field, select the field to unmap and click Actions > Unmap on the context menu for the
field. To unmap one or more fields that you selected, click Unmap Selected on the Target Fields context
menu.
To clear all field mappings from the transformation, click Clear Mapping on the Target Fields context
menu.
The data on the transformation input fields that you map to the verifier asset must reflect the types of
information that the asset inputs expect. If the fields do not correspond, the mapping cannot evaluate the
input data with full accuracy.
The asset inputs may expect a single element of address data, or they may expect multiple elements
organized in a single field. Likewise, the asset outputs may write a single element of address data, or they
may write multiple elements to a single field. If necessary, work with the asset designer to determine the
meaning of the inputs.
The Output Fields tab displays the name, type, precision, and scale for each output field. The output field names are the
names of the output fields on the asset.
Web Services transformation
A web service integrates applications and uses open standards, such as SOAP, WSDL, and XML. SOAP is the
communications protocol for web services. Web Services Description Language (WSDL) is an XML schema
that describes the protocols, formats, and signatures of the web service operations. Web service operations
include requests for information, requests to update data, and requests to perform tasks.
A Web Services transformation connects to a web service as a web service client to access, transform, or
deliver data. The web service client request and the web service response are SOAP messages. The mapping
task processes SOAP messages with document/literal encoding. The Web Service transformation does not
support RPC/encoded or document/encoded WSDL files.
For example, the Web Services transformation sends a SOAP request to a web service to run a web service
operation called getCityWeatherByZIP. The Web Services transformation passes zip codes in the request.
The web service retrieves weather information and then returns the information in a SOAP response.
SOAP request messages and response messages can contain hierarchical data, such as data that follows an
XML schema. For example, a web service client sends a request to add customer orders to a sales database.
The web service returns the following hierarchy in the response:
Response
Order
Order_ID
Order_Date
Customer_ID
Product
Product_ID
Qty
Status
The response has information on orders, including information on each product in the order. The response is
hierarchical because within the Order element, the Product element contains more elements.
To use the Web Services transformation, you need the appropriate license.
1. Create a Web Services Consumer connection and use a WSDL URL and an endpoint URL.
2. Define a business service. A business service is a web service with configured operations.
3. Configure the Web Services transformation in a mapping in the Mapping Designer.
Create a Web Services consumer connection
Connect to a web service using Web Services Description Language (WSDL) and an endpoint URL. You can
also enable security for the web service.
Property Description
Endpoint URL: Endpoint URL for the web service. The WSDL file specifies this URL in the location element.
Username: Applicable to username token authentication. User name to authenticate the web service.
Password: Applicable to username token authentication. Password to authenticate the web service.
Encrypt Password: Applicable to username token authentication. Enables the PasswordDigest property, which combines the password with a nonce and a time stamp. The mapping task applies a SHA hash on the password, encodes it in base64 encoding, and uses the encoded password in the SOAP header.
If you do not select this option, the PasswordText property is enabled and the mapping task does not change the password in the WS-Security SOAP header.
1. Click New > Components > Business Services and then click Create.
2. Enter the business service details and select the Web Services Consumer connection.
3. Select the operation you want to use from the web service.
4. If necessary, configure the operation to specify the choice elements and derived type elements for the
request and the response.
If operation components include choice elements or complexType elements where the abstract attribute
is true, then you must choose one or more elements or derived types when you configure the operation
mapping.
Optionally, for a complexType element where the abstract attribute is false, you can also select a derived
type for a complexType element.
a. For the operation you want to configure, click Configure.
b. From the Configure Operation window, click the Request, Response, or Fault tab and navigate to the
node you need to configure.
Note: If the WSDL uses the anyAttribute element, the element will not appear for the request or the
response.
You can click the icons at the top to navigate to the nodes that you need to configure:
1. Create a mapping and add the source objects you want to work with.
2. Add a Web Services transformation to the canvas.
3. Connect the source to the Web Services transformation.
4. Select the business service and operation in the Web Service tab.
5. On the Request Mapping and Response Mapping tabs, create the field mappings between the source
fields and the web service request.
For an illustration of the mapping process, see “Web Services transformation example” on page 399.
6. On the Output Fields tab, review the success groups, fault group, and field details. You can edit the field
metadata, if needed. The success groups contain the SOAP response from the web service. The fault
group contains SOAP faults with the fault code, string, and object name that caused the fault to occur.
7. Define the advanced properties.
8. Save and run the mapping.
For additional information about the mapping process, see the following sections:
Property Description
Cache Size: Memory available for the web service request and response. If the web service request or response contains a large number of rows or columns, you might want to increase the cache size. Default is 100 KB.
Allow Input Flush: The mapping task creates XML when it has all of the data for a group. When enabled, the mapping task flushes the XML after it receives all of the data for the root value. When not enabled, the mapping task stores the XML in memory and creates the XML after it receives data for all the groups.
Note: You cannot select the option to allow input flush if you are connecting to multiple source objects.
Transaction Commit Control: Control to commit or roll back transactions based on the set of rows that pass through the transformation. Enter an IIF function to specify the conditions to determine whether the mapping task commits, rolls back, or makes no transaction changes to the row. Use the transaction commit control if you have a large amount of data and you want to control how it is processed.
Note: You cannot configure a transaction commit control if you are connecting to multiple source objects.
• If you need to apply an expression to incoming fields, use an Expression transformation upstream of the
Web Services transformation.
• To ensure that a web service request has all the required information, map incoming derived type fields to
fields in the request structure.
You can map the incoming fields to the request mapping as shown in the following image:
Drag each incoming field onto the node in the request structure where you want to map it.
• Any source fields you want to designate as primary key and foreign key must use the data type Bigint or
String. If needed, you can edit the metadata in the Source transformation.
Note: If the Bigint data type is not available for a source, you can convert the data with an Expression
transformation upstream of the Web Services transformation.
• Ensure that the source data is sorted on the primary key for the parent object and sorted on the foreign
key and primary key for child objects.
• Map one of the fields or a group of fields to the recurring elements. In the incoming fields, you can see
where each recurring element is mapped.
• Map at least one field from each child object to the request structure.
• You must map fields from the parent object to fields in the request structure that are higher in the
hierarchy than fields from the child object.
• For child objects, select a primary key and a foreign key.
- On the Incoming Fields tab, select the source object you want to designate as the parent object.
- Right-click on an incoming field in the tree to designate the primary key and foreign key.
When you choose Relational, the transformation generates the following output groups:
• Output group for the parent element.
• FaultGroup, if it is supported by the connection type you are using.
When you choose Denormalized, the element values from the parent group repeat for each child element.
Enter an IIF function to specify the conditions to determine whether the mapping task commits, rolls back, or
makes no transaction changes to the row. When the mapping task issues a commit or roll back based on the
return value of the expression, it begins a new transaction.
Note: You cannot configure a transaction commit control if you are connecting to multiple source objects.
• TC_CONTINUE_TRANSACTION. The mapping task does not perform any transaction change for this row.
This is the default value of the expression.
• TC_COMMIT_BEFORE. The mapping task commits the transaction, begins a new transaction, and writes
the current row to the target. The current row is in the new transaction.
• TC_COMMIT_AFTER. The mapping task writes the current row to the target, commits the transaction, and
begins a new transaction. The current row is in the committed transaction.
• TC_ROLLBACK_BEFORE. The mapping task rolls back the current transaction, begins a new transaction,
and writes the current row to the target. The current row is in the new transaction.
• TC_ROLLBACK_AFTER. The mapping task writes the current row to the target, rolls back the transaction,
and begins a new transaction. The current row is in the rolled back transaction.
If the transaction control expression evaluates to a value other than commit, roll back, or continue, the
mapping is invalid.
Example
You want to use transaction commit control to write order information based on the order entry date. You
want all orders entered on a given date to be committed to the target in the same transaction.
You create a field named New_Date and populate it by comparing the order date of the current row to the
order date of the previous row. If the orders have different dates, then New_Date evaluates to 1.
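Based on the actions listed above, a transaction commit control expression for this scenario might look like the following sketch, which commits the open transaction before writing each row that starts a new date:
IIF(New_Date = 1, TC_COMMIT_BEFORE, TC_CONTINUE_TRANSACTION)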
You can view and locate fields using the following methods:
To search for a particular field, type the field name in the Search text box.
You can filter the incoming fields view to show all fields, keys, mapped fields, or unmapped fields. You
have a similar option in other tree views:
For each node in a hierarchy, you can view the field and field mapping details. Right-click on the node to
show the properties:
In the Request Structure tree, you can clear the mapping, clear the keys, or map and unmap selected
fields:
Pass-through fields
Pass-through fields are response fields that you do not need for the current transformation but you might
want to use later in the mapping.
For example, you might only need the primary key from a source object for the Web Service transformation.
However, you might want to use the other fields in the source object in a downstream transformation.
Incoming fields and response fields pass through the Web Services transformation to the success groups
and fault group. However, when the mapping contains multiple sources, only the fields from the primary
source pass through. The source that includes the primary key is the primary source. For example, in the
following image, NewSource is the primary source and NewSource1 is the secondary source:
Because NewSource1 is a secondary source, fields from this source do not pass through to the success
groups and fault group.
To determine the source of a field in the Success or Fault group, choose the Incoming Fields tab. The Origin
column shows the source for each field.
You can review this example to learn how to configure a Web Services transformation to structure a SOAP
request and response using the NetSuite operation, getItemAvailability. The web service client passes the
item ID in the request. The web service then returns information on the latest item availability in a SOAP
response.
First, to connect to the web service, create a WSConsumer connection. You add a WSDL that describes the
request and response web service messages and configures the necessary security. For this example, we use
the following NetSuite connection:
Second, define a business service that uses the connection and includes the getItemAvailability operation. Third, create a mapping that uses the Web Service transformation. The example mapping includes the
following configuration options:
1. The source is a simple .csv file that includes four fields with the login details and an item ID:
2. On the Web Service tab, select the business service and operation previously defined:
4. In the Request Mapping, the incoming fields are mapped to the SOAP message. The Envelope contains
the credentials and the Body contains the item ID, as shown in the following image. You drag incoming
fields onto items in the request structure to create the mapping:
5. On the Response Mapping tab, you map the SOAP message to the output fields you want to use. You can
choose Relational or Denormalized format for output fields. This example uses Relational format:
6. On the Output Fields tab, you can see each group. If needed, you can edit the type, precision, and scale of
the fields:
When the mapping runs, it returns the item availability and status code. If it cannot successfully run, it
creates a record in the default target that contains the fault information.
B repeatable output 93
seed 94
bulk requests Data Masking transformations
Machine Learning transformation 264 creating 105
Business Services data preview
defining for Web Service transformations 392 sources, targets, and lookup objects 26
data type conversion
Java transformation 204
C databases
configuration in mapping sources 46
caches configuration in mapping targets 71
dynamic lookup cache 247 target update override 74
Rank transformation 300, 304 Deduplicate transformations
Sorter transformation 327 configuring 112
classpath identity population files 111
configuring design time value 203 overview 110
configuring for Java transformation 200 dynamic lookup cache
configuring the Java Classpath session property 203 SQL overrides 251
configuring the JVMClassPath agent property 202
CLASSPATH environment variable
configuration 202
configuring on UNIX 203
E
configuring on Windows 202 elastic mappings
Cleanse transformations hierarchical data 59, 78, 86, 128, 130, 231, 305, 311, 321, 328
configuring 87 Email masking
in mappings 87 masking technique 98
404
error handling
    Machine Learning transformation 267
examples
    Hierarchy Processor transformation 178, 180, 187, 192
    lookup SQL override 244
    Rank transformation 305
    Router transformation 311
expression editor
    expression transformation 118
expression macros in mappings
    configuring a horizontal macro 34
    configuring a vertical macro 29
    horizontal expansion functions 33
    horizontal macro configuration 33
    hybrid macro configuration 37
    macro input field 28
    macro input field configuration for incoming fields in a horizontal macro 34
    macro input field configuration for vertical macros 29
    macro output field configuration for vertical macros 30
    macro types 27
    output field inclusion for vertical macros 31
    overview 27
    transformation output field for horizontal macros 36
    using constants in horizontal macros 35
    vertical macro configuration 28
Expression transformation
    advanced properties 127
    variable fields 27
    window functions 119
Expression transformations
    expression fields 118
    in mappings 118

F
failSession
    Java transformation API method 222
field expression 118
field mapping
    Output transformation 281
field mappings
    for web service connections in Source transformations 53
    for web service connections in Target transformations 76
    in Mapplet transformations 270
    in Target transformations 79
    in the Normalizer transformation 276
field name conflicts
    resolving in mappings 23
    transformations 21
field rules
    field selection criteria 22
    in mappings 21
field selection criteria 22
file lists
    batch files 39
    command sample file 40
    commands 39
    in Lookup transformations 41
    in Source transformations 40
    manually created 38
    overview 38
    rules and guidelines 38
    shell scripts 39
    text file format 38
Filter transformation
    advanced properties 130
Filter transformations
    filter condition 129
    in mappings 129
flat file time stamps
    in Target transformations 68
flat files
    command sample file 40
    configuration in mapping sources 44
    file lists 38
    parsing with a Java transformation 212
FTP/SFTP
    configuration in mapping sources 44

G
generateRow
    Java transformation API method 222
getInRowType
    Java transformation API method 223
group by fields
    Aggregator transformation 82
group filter condition
    configuring 310
    Router transformation 309
groups
    Router transformation 309

H
hierarchical data and multibyte characters 41, 59, 78, 137, 147, 177, 403
hierarchical schema
    overview 142
Hierarchical Schema
    creating 133, 134, 143, 144
Hierarchy Builder transformation
    advanced properties 137
    creating hierarchical schema 134
    example 137
    field mapping 135
    hierarchical schema 133
    intelligent structure model 133
    output format 133
    output precision 133
    output settings 133
    overview 131
    select schema elements 136
    selecting hierarchical schema 134
Hierarchy Parser transformation
    example 147
    field mapping 145
    input field selection 144
    output fields 146
    select output group 147
    select schema elements 146
    selecting hierarchical schema 144
    selecting input settings 143
Hierarchy Parser transformations
    hierarchical schema 142
    overview 141
Hierarchy Processor transformation
    adding all descendants 160
    adding an array as a struct 164
    adding incoming fields to output 158
    adding primitive single occurring children 161
    adding single occurring children 160
    aggregating values in an output field array 167
    configuring data sources 173
    configuring filter conditions 174
    configuring group by fields 176
    configuring join conditions 173
    configuring order by fields 176
    configuring output groups and fields 166
    data processing strategies 151
    data source configuration example 171
    data source conflicts 172
    data sources 169
    defining 155, 157, 158
    examples 178
    field limitations 178
    field threshold exceeded 178
    filter configuration example 174
    flattened output 158
    flattening the selected array 163
    hierarchical output 157
    hierarchical to flattened example 192
    hierarchical to hierarchical example 187
    hierarchical to relational example 178
    inheriting parent's data sources 169
    JSON configuration 177
    order of operations 168
    output data configuration 168
    overview 151
    port threshold exceeded 178
    preserving the incoming field 162, 171
    reading a JSON input file 177
    relational output 155
    relational to hierarchical example 180
    steps to create 155, 157, 158
    writing to a single JSON output file 177

J
Java transformation
    active and passive transformations 207
    advanced properties 205
    API methods 221
    checking for nulls 225
    classpath configuration 200
    compiling Java code 213
    configuring design time classpath 203
    configuring the CLASSPATH environment variable 202
    configuring the Java Classpath session property 203
    configuring the JVMClassPath agent property 202
    creating code snippets 209
    data type conversion 204
    defining 200
    defining end of data behavior 211
    defining helper code 210
    defining input row behavior 211
    defining transaction notification behavior 212
    enabling high precision 207
    example 215
    failing sessions 222
    finding compilation errors 214
    generating output rows 222
    getting input row type 223
    group by fields 205
    importing packages 210
    incoming fields 203
    incrementing the error count 223
    invoking an expression 224
    Java editor sections 208
    logging errors 225
    logging messages 226
    non-user code errors 214
    output fields 203
    overview 199
    parsing flat files 212
    setting nulls 226
    setting update strategy 227
    sort conditions 205
    steps to create 200
    subsecond processing 208
    troubleshooting 214

L
Labeler transformations
    configuring 232
    in mappings 232
    overview 232
logError
    Java transformation API method 225
logInfo
    Java transformation API method 226
lookup caches
    dynamic 247
lookup source filter
    Lookup transformation 246
lookup SQL overrides
    examples 244
    Lookup transformation 244
    query guidelines 246
Lookup transformation
    advanced properties 242
    dynamic cache 247
    dynamic cache inserts and updates 248
    field mapping 250
    file name prefix 252
    generated key fields 250
    ignore fields in comparison 251
    insert else update 248
    lookup return fields 240
    lookup source filter 246
    lookup SQL override example 244
    lookup SQL override guidelines 246
    lookup SQL overrides 244
    NewLookupRow 248
    non-persistent lookup cache 252
    persistent lookup cache 252
    re-cache from lookup source 252
    rebuilding the lookup cache 252
    Sequence-ID field 250
    synchronizing lookup source with dynamic cache 249
Lookup transformations
    :LKP expression syntax 254
    calling unconnected lookups 254
    configuring file lists 41
    configuring unconnected lookups 254
    custom lookup source queries 239
    data preview 26
    in mappings 236
    lookup condition 239
    lookup object 237
    lookup object configuration 237
    lookup object properties 238
    multiple match policy restrictions 238
    unconnected lookup example 256
    unconnected lookups 253

M
machine learning model
    Machine Learning transformation 260
Machine Learning transformation
    API proxy 265
    bulk requests 264
    error handling 267
    machine learning model 260
    request mapping 260
    response fields 263
    troubleshooting 266
macro input fields
    in mappings 28
maintenance outages 16
mapping designer
    transformations 17
mapping tasks
    configuring the Java Classpath session property 203
mappings
    adding cleanse assets 87
    adding mapplets 269
    adding rule specifications 232, 313
    Cleanse transformations 87
    configuring aggregate calculations with the Aggregator transformation 82
    custom lookup source queries 239
    custom source queries 49
    example of using a Union transformation 371
    file target properties 65
    filtering data with a Filter transformation 129
    filtering source data 50
    joining heterogenous sources with a Joiner transformation 228
    Labeler transformations 232
    look up data with a Lookup transformation 236
    lookup object configuration 237
    Mapplet transformations 269
    normalizing data with the Normalizer transformation 274
    output fields in a Union transformation 370
    Parse transformations 283
    performing calculations with an Expression transformation 118
    planning to use a Union transformation 369
    Rule Specification transformations 313
    sorting source data 50
    source configuration 42
    Source transformations 42
    SQL transformation 332
    target configuration 64
    Target transformations 63
    Union transformation 368
    using expression macros 27
    Verifier transformations 386
mapplet
    parameters 271
Mapplet transformations
    configuring 269
    field mappings 270
    in mappings 269
    output fields 272
    purpose 269
    selecting a mapplet 270
mapplets
    selecting in Mapplet transformations 270
mask format
    blurring 96
    key masking 94
    random masking 94
    range 96
    source filter characters 95
    target filter characters 96
masking
    advanced email 98
masking technique
    credit card masking 97
    custom substitution 101
    dictionary 104
    Email masking 98
    IP address masking 99
    Key 99
    phone number masking 100
    Random 100
    Social Insurance number masking 100
    Social Security number masking 101
    substitution 104
    URL masking 104
metadata override
    editing native data types 61
    editing transformation data types 62
    source fields 60
    target fields 78
Microsoft SQL Server
    configuration in mapping sources 46
    configuration in mapping targets 71
multibyte data configuration 41, 59, 78, 137, 147, 177, 403
MySQL
    configuration in mapping sources 46
    configuration in mapping targets 71

N
NEXTVAL 318
normalized fields
    Normalizer transformation 274
Normalizer transformation
    advanced properties 277
Normalizer transformations
    example in mapping 278
    field mapping 276
    field mapping options 276
    field rule for parameterized sources 277
    generated keys 275
    handling unmatched groups of multiple-occurring fields 275
    normalized fields 274
    occurs configuration 274
    overview 274
    target configuration 277

O
operations
    for source web service connections 51
    for target web service connections 75
Oracle
    configuration in mapping sources 46
    configuration in mapping targets 71
output fields
    Output transformation 281
Output transformation
    field mapping 281
    output fields 281

P
Parameters
    Data Masking transformation 105
    mask rule 105
Parse transformations
    configuring 283
    overview 283
partitioning
    source 56
    target 77
partitions
    examples 57
    rules and guidelines 57
pass-through fields 398
passive transformations
    Java 207
phone number masking
    masking technique 100

R
Rank transformation
    advanced properties 304
    caches 300
    case-sensitive string comparison 304
    configuring as optional 304
    configuring cache directory 304
    configuring cache sizes 304
    defining 301
    example 305
    fields 301
    overview 299
    rank groups 303
    rank index 301
    rank order 302
    RANKINDEX field 301
    ranking string values 299
    selecting rows to rank 302
    steps to create 301
    tracing level 304
    transformation scope 304
request mapping
    Machine Learning transformation 260
request messages
    for web service connections 52
response fields
    Machine Learning transformation 263
Router transformation
    advanced properties 311
    configuring a filter condition 310
    examples 311
    group filter condition 309
    groups 309
    output group guidelines 309
    overview 308
routing rows
    transformation for 308
Rule Specification transformations
    configuring 313
    in mappings 313
    overview 313

S
Secure Agent
    configuring CLASSPATH 202
    configuring the JVMClassPath 202
Sequence Generator transformation
    disable incoming fields 321
    example 322
    output fields 318
    properties 319
    rules and guidelines 321
setNull
    Java transformation API method 226
setOutRowType
    Java transformation API method 227
SOAP messages
    for Web Service transformations 390
Social Insurance number masking
    masking technique 100
Social Security number masking
    masking technique 101
sorter cache
    description 327
Sorter transformation
    advanced properties 327
    cache size 327
    caches 327
    overview 326
    sort conditions 326
    work directory 327
Source transformations
    advanced relationships 49
    configuring file lists 40
    custom source queries 49
    data preview 26
    database sources 45
    editing native data types 61
    editing transformation data types 62
    field mapping for web service connections 53
    filtering data 50
    in mappings 42
    joining related objects 47
    partitioning 56
    sorting data 50
    source configuration 42
    source fields 60
    web service connections 50
SQL overrides
    dynamic lookup cache 251
SQL queries
    SQL transformations 339
SQL transformation
    advanced properties 349
    call from an expression 335
    unconnected 334–336
    unconnected SQL transformation 334
SQL transformations
    configuration 344
    configuring the SQL type 345
    dynamic queries 340
    entering a query 346
    field mapping 346
    NumRowsAffected field 347
    output fields 347
    overview 332
    parameterizing a query 346
    passing the full query 340
    passive mode 342
    query guidelines 343
    query processing 339
    selecting a saved query 345
    selecting a stored procedure or function 345
    selecting multiple rows 340
    SQL statements for queries 342
    SQLError field 347
    static queries 339
    stored function processing 332
    stored procedure processing 332, 334–336
    substituting the table name 341
status
    Informatica Intelligent Cloud Services 16
stored functions
    SQL transformation 332
stored procedures
    SQL transformation 332, 334
strings
    ranking string values 299
Structure Parser transformation
    advanced properties 355
    configuration 356
    configuring 357
    example 359
    field mapping 353
    output fields 354
    output type 356
    output type rules and guidelines 356
    overview 352
    rules and guidelines 355
    select output group 358
    selecting 357
subseconds
    Java transformation 208
synchronization
    source fields 60
    target fields 78
system status 16

T
Target transformation
    file target properties 65
Target transformations
    creating a database target at run time 73
    creating a flat file target at run time 70
    data preview 26
    database targets 71
    database targets created at run time 72
    dynamic names for flat file targets 69
    entering a target update statement 75
    field mapping for web services 76
    field mappings 79
    file targets 65
    flat file targets created at run time 67
    flat file time stamps 68
    in mappings 63
    partitioning 77
    specifying targets 80
    static names for flat file targets 67
    target configuration 64
    target fields 78
    target update override 74
    target update override guidelines 74
    update columns 73
    web service connections 75
Transaction Control transformation
    advanced properties 367
    effective and ineffective 364
    in mappings with multiple targets 365
    mapping guidelines 366
    overview 362
    transaction control condition 363
    using in mappings 364
transformations
    active and passive 17
    connecting 17
    field name conflicts 21
    field rules 21, 22
    incoming fields 20
    Java 199
    licensed 19
    overview 17
    previewing fields 20
    Rank 299
    renaming fields 23
    Router 308
    Transaction Control 362
    types 17
    Velocity 375
troubleshooting
    Machine Learning transformation 266
trust site
    description 16

U
Union transformation
    advanced properties 371
    example 371
    field mappings 370
    output fields 370
    overview 368
Union transformations
    comparison with Joiner 368
    guidelines 369
    input group guidelines 369
update columns
    configuring 73
    in Target transformations 73
upgrade notifications 16
URL masking
    masking technique 104

V
variable fields
    in Expression transformations 27
Velocity transformation
    configuring file sources 376
    configuring file targets 379
    examples 379
    input format 376
    JSON example 382
    output 379
    output field precision 379
    overview 375
    parsers 379
    testing 378
    Velocity template 377
    XML example 380
Verifier transformations
    configuring 387
    overview 386

W
web service connections
    operations for Source transformations 51
    operations for Target transformations 75
    request messages 52
    Source transformations 50
    Target transformations 75
Web Services transformations
    advanced properties 393
    configuring 393
    creating a Web Services Consumer connection 391
    defining a business service 392
    filtering fields 397
    mapping incoming and outgoing fields 393
    operations 392
    overview 390
web site 15
work directory
    Sorter Transformation 327
WSConsumer
    Web Services Consumer 391
WSDL URL
    Web Services transformations 390