Informatica Interview Questions [Advanced, Scenario-Based]
Shiksha Online
Updated on May 31, 2023 15:22 IST
While interviewing for data warehousing jobs, you may be asked questions about
Informatica concepts as well as Informatica-based scenarios. Here are the most
commonly-asked Informatica interview questions and answers that will help you ace
your upcoming interview. These Informatica interview questions are suitable for both freshers and experienced professionals. For your convenience, we have divided this list of 100+ Informatica questions into three sections:
Disclaimer: This PDF is auto-generated based on the information available on Shiksha as on 01-Nov-2023.
Q1. Differentiate between a database, a data warehouse, and a data mart?
Ans. A database is a collection of logically related data, which is usually small in size compared to a data warehouse. In contrast, a data warehouse holds assortments of all sorts of data, from which data is extracted only according to the customer's needs. A data mart is also a set of data, designed to cater to the needs of a specific domain.
Q2. What is Informatica PowerCenter?
Ans. This is one of the commonly asked Informatica interview questions. Informatica PowerCenter is a GUI-based ETL (Extract, Transform, Load) tool. This data integration tool extracts data from different OLTP source systems, transforms it into a homogeneous format, and loads the data throughout the enterprise at any speed. It is known for its wide range of applications.
Q3. Explain the difference between Informatica 7.0 and 8.0?
Ans. The main difference between Informatica 8.0 and Informatica 7.0 is that in the 8.0 series, Informatica introduced the PowerExchange concept.
Source Qualifier Transformation: Rows are filtered while reading data from a relational data source.
Filter Transformation: Rows are filtered within a mapping, on data from any source.
Ans. The joiner transformation is an active and connected transformation that helps
to create joins in Informatica. It is used to join two heterogeneous sources.
Ans. In Informatica, the equivalent of a traditional CASE or IF statement is made possible by DECODE. A DECODE in Informatica is a function used within an Expression Transformation.
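In Python, the behavior of DECODE can be sketched as follows (a hypothetical helper for illustration only; Informatica's actual implementation is its own expression engine):

```python
def decode(value, *args):
    """Mimic Informatica's DECODE(value, search1, result1, ..., default):
    compare value against each search term and return the paired result,
    falling back to the optional trailing default (here None)."""
    pairs, default = args, None
    if len(args) % 2 == 1:          # odd argument count -> last one is the default
        pairs, default = args[:-1], args[-1]
    for search, result in zip(pairs[::2], pairs[1::2]):
        if value == search:
            return result
    return default

# Equivalent of DECODE(status, 'A', 'Active', 'I', 'Inactive', 'Unknown')
print(decode('A', 'A', 'Active', 'I', 'Inactive', 'Unknown'))  # Active
print(decode('X', 'A', 'Active', 'I', 'Inactive', 'Unknown'))  # Unknown
```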
Ans. The Router Transformation allows users to split a single pipeline of data into multiple pipelines. It is an active and connected transformation that is similar to the Filter transformation.
Ans. The Rank Transformation is an active and connected transformation used to sort and rank the top or bottom set of records based on a specific port. It filters data based on groups and ranks. The Rank transformation has an output port that assigns a rank to the rows.
Ans. The Filter transformation is used to filter records based on the filter condition. It is an active transformation, as it changes the number of records.
Ans. A master outer join is a specific join type setting within a Joiner transformation. In a master outer join, all records from the detail source are returned by the join, and only matching rows from the master source are returned.
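These semantics can be illustrated with a small Python sketch (the table contents and names are made-up examples, not an actual Informatica mapping):

```python
# Master outer join: keep ALL rows from the detail source, and only the
# matching rows from the master source (non-matches get NULL master columns).
master = {101: "Electronics", 102: "Clothing"}          # master source: id -> category
detail = [(101, "TV"), (102, "Shirt"), (103, "Pen")]    # detail source rows

joined = [(item_id, name, master.get(item_id)) for item_id, name in detail]
print(joined)  # (103, 'Pen', None): detail row kept even without a master match
```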
Q14. What are some examples of Informatica ETL programs?
Mappings
Workflows
Tasks
Ans. This is one of the most important Informatica interview questions. A Dimension table is a table in a star schema of a data warehouse. Dimension tables are used to describe dimensions. They contain attributes that describe fact records in the table.
For example, a product dimension could contain the name of the products, their description, unit price, weight, and other attributes as applicable.
The dimension attributes tend to change slowly with time rather than changing at regular intervals.
Conformed Dimension:
Conformed dimensions are exactly the same in every possible fact table to which they are joined. They are used to maintain consistency.
This dimension is shared among multiple subject areas or data marts. The same dimension can be used in different projects without any modifications.
Junk Dimension:
A junk dimension is a collection of random codes or flags that do not belong in the fact table or in any of the existing dimension tables.
Degenerate Dimension:
It is derived from the fact table and does not have its own dimension table. The attributes are stored in the fact table, not in a separate dimension table.
Role-Playing Dimension:
Role-playing dimensions are dimensions used for multiple purposes within the same database.
Ans. A star schema is the simplest form of data warehouse schema. It consists of one or more dimension and fact tables and is used to develop data warehouses and dimensional data marts.
Now, let’s take a look at some more Informatica Interview questions and answers.
Ans. A natural primary key uniquely identifies each record within a table and relates
records to additional data stored in other tables.
Ans. A surrogate key is a sequentially generated unique number attached to each record in a Dimension table. It is used as a substitute for the natural primary key.
Ans. A repository server controls the complete repository, which includes tables, charts, various procedures, etc.
Ans. Data concatenation is the process of bringing different pieces of the record
together.
Q24. How can one identify whether the mapping is correct without connecting the session?
Ans. Mapping designer, transformation developer, and mapplet designer are used for
creating transformations.
Ans. A session is a set of commands for the server to move data to the target, while a batch is a group of sessions that can include one or more sessions.
Q28. What are the different names of the Data Warehouse System?
Analytic Application
Data Warehouse
Standard Edition
Advanced Edition
Premium Edition
Ans. We can use the Sorter transformation to delete duplicate rows from flat files by selecting the Distinct option.
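The effect of the Distinct option can be simulated in Python (an illustration of the logic, not PowerCenter itself):

```python
rows = ["Amazon", "Walmart", "Snapdeal", "Snapdeal", "Walmart", "Flipkart", "Walmart"]

# Sorting and keeping one row per key mirrors what the Sorter transformation
# does when the Distinct output option is checked.
distinct_rows = sorted(set(rows))
print(distinct_rows)  # ['Amazon', 'Flipkart', 'Snapdeal', 'Walmart']
```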
Joiner: It is an Active transformation. It supports Normal, Master Outer, Detail Outer, and Full Outer joins.
Lookup: It is a Passive transformation. By default, it supports left outer join.
Now, let’s check out some advanced-level Informatica Interview questions and
answers.
Q32. What is the difference between static cache and dynamic cache?
Ans. In the static cache, the data will remain the same for the entire session,
whereas in the dynamic cache, whenever a row is inserted, the cache will also be
updated.
Q34. What are the differences between the ROUTER and FILTER?
Router: Captures data rows that don't meet the conditions in a default output group. It is a single input and multi-output group transformation. The user can specify multiple filter conditions.
Filter: Tests data for one condition and drops the data rows that don't meet the condition. It is a single input and single output group transformation. The user can specify only one filter condition.
Ans. A Domain comprises nodes and services and serves as the fundamental
administrative unit in the Informatica tool. It categorizes all related relationships and
nodes into folders and sub-folders depending upon the administration requirement.
Ans. Partition not only helps optimize a Session but also helps load a colossal
volume of data and improves the server’s operation and efficiency.
Q37. What is Complex Mapping?
Static cache
Dynamic cache
Persistent cache
Shared cache
Recache
Ans. Mapplets are reusable objects that can be created in the Mapplet Designer. A mapplet has a set of transformations that allow the reuse of transformation logic in multiple mappings.
Ans. Source Qualifiers represent the rows that the PowerCenter Integration Service reads during a session. The Source Qualifier transformation converts the source data types to the Informatica native data types, eliminating the need to alter the data types of the ports in the Source Qualifier transformation.
Q44. How many tools are there in the Workflow Manager?
Task Developer – To create tasks that need to be run in the workflow.
Workflow Designer – To create a workflow by connecting tasks with links.
Ans. Also known as Target Load Plan, a Target Load Order specifies the order of
target loading by integration service. It is dependent on the source qualifiers in a
mapping.
Ans. A Command Task runs shell/UNIX commands in Windows during the workflow. It allows a user to specify UNIX commands in the Command task to remove rejected files, create files, copy files, rename files, and archive files, among others.
Ans. Standalone Command Task allows the shell commands to run anywhere during
the workflow.
Ans. A PowerCenter Repository is stored in a relational database such as Oracle or SQL Server. It consists of the following metadata –
Mapping
ODBC Connection
Workflow
PowerCenter Service
PowerCenter Clients
PowerCenter Repository
PowerCenter Domain
Repository Service
Integration Service
Designer
Repository Manager
Ans. Tracing Level refers to the amount of information the server writes in the session log. Tracing Level is created and configured either at –
The transformation level
The session level
The tracing levels include Normal, Terse, Verbose Initialization, and Verbose Data.
Q53. What is a Surrogate Key?
Ans. A Surrogate Key is any column or set of columns attached to every record in a
Dimension table in a Data Warehouse. It is used as a substitute or replacement for
the primary key when the update process becomes difficult for a future requirement.
Q56. Explain the difference between the partitioning of file targets and the partitioning of the relational target?
Ans. Partitions can be accomplished on both relational and flat files. Informatica
holds up the following partitions:
Database partitioning
Round-robin partitioning
Pass-through
Hash-Key partitioning
Ans. The following are the unsupported repository objects for a mapplet:
COBOL source definition
Ans. The following are the differences between direct and indirect loading options in sessions:
Direct loading is used for a single file or transformation, whereas indirect loading can be used for multiple transformations or files.
With direct loading, we can perform the recovery process, but with indirect loading, we cannot.
Q59. What is the difference between static and dynamic cache? Explain with one example.
Static – Once the data is cached, it will not change. For example, an unconnected lookup uses a static cache.
Dynamic – The cache is updated to reflect the updates in the table (or source) to which it refers (e.g., a connected lookup).
Ans. It is not possible to start a batch within a batch. If you want to start a batch that resides in a batch, create a new independent batch and copy the necessary sessions into the new batch.
Ans. We can use the PowerExchange tool to convert VSAM files to Oracle tables, and then map as usual to the target table.
Q62. Mention how many types of facts there are and what are they.
Additive fact: A fact that can be summarized by any one dimension or by all dimensions. Ex: QTY, REVENUE.
Non-additive fact: A fact that cannot be summarized by any of the dimensions. Ex: percentage of profit.
Ans. There are two methods used for creating reusable transformations:
Ans. In the Command task, we can write a suitable pmcmd command to run the workflow.
Q65. What is the default join that the source qualifier provides?
Ans. Inner equi join is the default join provided by the source qualifier.
Q66. Explain the aggregate fact table and where is it used?
Factless fact table – The factless fact table doesn't consist of aggregated columns; it only has FKs to the Dimension tables.
Q67. To provide support for mainframe source data, which files are used as source definitions?
Ans. By using SCD Type 1/2/3, we can load any dimensions based on the
requirement. We can also use the procedure to populate the time dimension.
Q69. Explain the difference between the summary filter and the details filter?
Ans. Summary Filter – we can apply a record group comprising common values.
Q70. What are the differences between connected lookup and unconnected lookup?
Ans. The differences between connected lookup and unconnected lookup are:
Connected Lookup: Gets the input directly from the other transformations and participates in the data flow.
Unconnected Lookup: Takes the input values from the result or the function of the :LKP expression.
Ans. This is one of the commonly asked Informatica interview questions. The different partition algorithms for the implementation of parallel processing are:
Pass-through Partitioning: In this partitioning, the Integration Service passes all rows from one partition point to the next partition point without redistributing them.
Database Partitioning: In this partitioning, the Integration Service queries the database system for table partition information and reads partitioned data from the corresponding nodes in the database.
Key Range Partitioning: It enables you to specify one or more ports to form a compound partition key for a source or target. The Integration Service then passes data to each partition depending on the ranges you specify for each port.
Ans: The following are some of the mapping development practices in Informatica:
Aggregator
Expressions
Filter
Lookup
Joiner
Ans. An event can be any action or functionality implemented in a workflow. There are two types of event tasks:
Event Wait Task: It waits until an event occurs. The specific event for which the Event Wait task should wait can be defined. Once the event is triggered, this task gets accomplished and assigns the next task in the workflow.
Event Raise Task: It triggers the specific event in the workflow.
Q76. What is a Fact Table? What are its different types?
Ans. This is one of the most frequently asked Informatica interview questions. A Fact
table is a centralized table in the star schema. It contains summarized numerical and
historical data (facts). There are two types of columns in a Fact table:
Semi-Additive: The facts can be summed up for only some of the dimensions in the fact table.
Ans: OLAP stands for Online Analytical Processing. It is used to analyze database information from multiple database systems at one time. It offers a multi-dimensional analysis of data for business decisions.
Offers consistency of information and calculations.
Q80. What are the different types of lookup transformation in Informatica?
Pipeline Lookup
Cached/Uncached Lookup
Connected/Unconnected Lookup
Ans. You can call a Command task as the pre- or post-session shell command for a Session task. It can be called in the Components tab of the session. It can be run as a Pre-Session Command, a Pre-Session Success Command, or a Post-Session Failure Command. The application of shell commands can be changed as per the use case.
Q82. Name the different types of groups in router transformation.
Input group
Output group
Ans. A Junk Dimension is a collection of some random codes or flags that do not belong in the fact table or any of the existing dimension tables. These attributes are unrelated to any particular dimension. The nature of these attributes is like random codes or flags, for example, non-generic comments or just yes/no.
Q84. What are the output files created by the Informatica server during the session running?
Ans. The following are the output files created by the Informatica server during the session running.
Informatica server log: This file is created for all status and error messages, with the default name pm.server.log. An error log for error messages is also created.
Session log file: Session log files are created for each session. It writes information about sessions into log files, such as the initialization process, creation of SQL commands for reader and writer threads, etc.
Session detail file: The session detail file contains load statistics for each target in the mapping. It includes the table name and the number of rows written or rejected.
Performance detail file: This file contains session performance details that help identify areas where performance can be improved.
Reject file: It contains the rows of data that the writer does not write to targets.
Control file: A control file and a target file are created when you run a session that uses the external loader. The control file has information about the target flat file, such as data format and loading instructions, etc.
Post session email: With the help of this file, you can automatically communicate information about a session run to designated recipients.
Indicator file: The Informatica server can be configured to create an indicator file while using a flat file as a target. The indicator file contains a number for each target row to indicate whether the row was marked for insert, update, delete, or reject.
Output file: If a session writes to a target file, a target file based on file properties entered in the session property sheet is created.
Cache files: When the Informatica server creates a memory cache, it also creates cache files.
Workflow log: It contains the high-level and detailed information of sessions, nodes, integration services, repository information, etc.
Q85. Name the files that are created during the session runs.
Errors log
Session log
Bad file
Mapping: It is developed with different transformations.
Mapplet: It can be reused with other mappings and mapplets.
Ans. It is a passive transformation that populates and maintains databases. It helps you to use or call stored procedures inside the Informatica workflow. It can be used in connected as well as unconnected mode.
Check the status of a target database before loading data into it.
Ans. The Data Transformation Manager (DTM) process is started by the PowerCenter Integration Service to run a session. The main role of the DTM process is to create and manage threads that carry out the session tasks. The DTM process performs various tasks, including:
Q89. What is the difference between a fact table and a dimension table?
Ans. The differences between a fact table and a dimension table are:
Fact Table: A fact table contains more records and fewer attributes.
Dimension Table: The dimension table contains more attributes and fewer records.
Let’s take a look at the most commonly-asked Informatica scenario-based questions. The following are some of the frequently asked scenario-based Informatica interview questions.
In a scenario-based interview, you will first be offered a scenario and then asked questions related to it. Your response to Informatica scenario-based questions will show your technical skills as well as your soft skills, such as problem-solving and critical thinking.
Now that you are just one step away from landing your dream job, you must prepare well for all the likely interview questions. Remember that every interview round is different, especially when scenario-based Informatica interview questions are asked.
Q90. How do you load the last N rows from a flat-file into a target table in Informatica?
ABC
DEF
GHI
JKL
MNO
Now follow the below steps to load the last 3 rows into a target table:
Step 1
Assign row numbers to each record by using an Expression transformation. Name the row-number port N_calculate.
Create a dummy output port and assign 1 to the port in the same Expression transformation.
V_calculate = V_calculate + 1
N_calculate = V_calculate
N_dummy = 1
Outputs in Expression Transformation
ABC, 1, 1
DEF, 2, 1
GHI, 3, 1
JKL, 4, 1
MNO, 5, 1
Step 2
Pass the output of the Expression transformation to an Aggregator transformation without specifying any group-by condition. The dummy port will hold the value 1, and the N_total_records port will keep the value of the total number of records available in the source.
N_dummy
N_calculate
N_total_records = N_calculate
Output: N_total_records, N_dummy
5, 1
Step 3
Now pass the output of the Expression and Aggregator transformations to a Joiner transformation.
Check the property Sorted Input in the Joiner transformation to connect both the Expression and Aggregator transformations.
Now the join condition will be N_dummy (port from Aggregator transformation) = N_dummy (port from Expression transformation).
ABC, 1, 5
DEF, 2, 5
GHI, 3, 5
JKL, 4, 5
MNO, 5, 5
Step 4
Thus, the filter condition in the Filter transformation will be N_total_records - N_calculate <= 2.
Output
GHI, 3, 5
JKL, 4, 5
MNO, 5, 5
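The four steps above can be simulated in Python to verify the logic (a sketch only; the port names mirror the steps, not an actual mapping):

```python
rows = ["ABC", "DEF", "GHI", "JKL", "MNO"]

# Step 1: expression transformation - row number and dummy port per row
numbered = [(row, i + 1, 1) for i, row in enumerate(rows)]   # (row, N_calculate, N_dummy)

# Step 2: aggregator without group-by - last row number = total record count
n_total_records = numbered[-1][1]

# Step 3: joiner on the dummy port attaches the total to every row
joined = [(row, n_calc, n_total_records) for row, n_calc, _ in numbered]

# Step 4: filter condition N_total_records - N_calculate <= 2 keeps the last 3 rows
target = [r for r in joined if r[2] - r[1] <= 2]
print(target)  # [('GHI', 3, 5), ('JKL', 4, 5), ('MNO', 5, 5)]
```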
Data
Amazon
Walmart
Snapdeal
Snapdeal
Walmart
Flipkart
Walmart
Situation – Give steps to load all unique names in one table and duplicate names in another table.
Ans.
Expression Transformation Output
Name, N_dummy
Amazon, 1
Walmart, 1
Walmart, 1
Walmart, 1
Snapdeal, 1
Snapdeal, 1
Flipkart, 1
Pass the acquired Expression transformation output to the Aggregator transformation
Amazon, 1
Walmart, 3
Snapdeal, 2
Flipkart, 1
Pass the Expression and Aggregator transformation outputs to the Joiner transformation
Review the property Sorted Input to connect both transformations to the Joiner transformation
Joiner Transformation Output
Amazon, 1, 1
Walmart, 1, 3
Walmart, 1, 3
Walmart, 1, 3
Snapdeal, 1, 2
Snapdeal, 1, 2
Flipkart, 1, 1
Data
Amazon
Walmart
Snapdeal
Snapdeal
Walmart
Flipkart
Walmart
Situation – Load each name once in one table and duplicate products in another
table.
Ans.
Table 1
Amazon
Walmart
Snapdeal
Flipkart
Table 2
Walmart
Walmart
Snapdeal
Assign Z_calculate to this output port
Z_curr_name = name
N_calculate = Z_calculate
Amazon, 1
Walmart, 1
Walmart, 2
Walmart, 3
Snapdeal, 1
Snapdeal, 2
Flipkart, 1
Form a group
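The per-name running counter above (Z_calculate) can be checked with a small Python sketch; this illustrates the routing logic only, not Informatica itself:

```python
rows = ["Amazon", "Walmart", "Walmart", "Walmart", "Snapdeal", "Snapdeal", "Flipkart"]

# Per-name running counter, mirroring the Z_calculate variable port
seen = {}
table1, table2 = [], []
for name in rows:
    seen[name] = seen.get(name, 0) + 1       # N_calculate for this row
    # Router groups: first occurrence -> Table 1, repeats -> Table 2
    (table1 if seen[name] == 1 else table2).append(name)

print(table1)  # ['Amazon', 'Walmart', 'Snapdeal', 'Flipkart']
print(table2)  # ['Walmart', 'Walmart', 'Snapdeal']
```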
ABC 80 85 90 95
DEF 60 65 70 75
Ans. This is one of the popularly asked Informatica Interview questions that you must
prepare for your upcoming interview.
If you want to transform a single row into multiple rows, the Normalizer Transformation will help. Also, it is used for converting multiple rows into a single row to make data look organized. As per the above scenario-based Informatica interview question, we want the solution to look as:
ABC 1 80
ABC 2 85
ABC 3 90
ABC 4 95
DEF 1 60
DEF 2 65
DEF 3 70
DEF 4 75
Follow the steps to achieve the desired solution by using the Normalizer transformation:
Step 1 –
Step 2 –
From the tab, click on the icon; this will create two columns.
Select OK
Step 3 –
In the mapping, link all four columns in the Source Qualifier of the four quarters to the Normalizer.
State Name Quarter Purchase
ABC 1 80
ABC 2 85
ABC 3 90
ABC 4 95
DEF 1 60
DEF 2 65
DEF 3 70
DEF 4 75
Remove all the unconnected input ports to the Normalizer transformation.
If OCCURS is present, check that the number of input ports is equal to the number of OCCURS.
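The Normalizer's column-to-row pivot described above can be sketched in Python (an illustration using the scenario's data, not an actual mapping):

```python
# One input row per student with four quarterly purchase columns ...
source = [
    ("ABC", 80, 85, 90, 95),
    ("DEF", 60, 65, 70, 75),
]

# ... becomes one output row per (student, quarter), as the Normalizer does
normalized = [
    (name, quarter, purchase)
    for name, *quarters in source
    for quarter, purchase in enumerate(quarters, start=1)
]
print(normalized)  # [('ABC', 1, 80), ('ABC', 2, 85), ..., ('DEF', 4, 75)]
```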
Q95. What are the steps to create, design, and implement SCD Type 1 mapping in Informatica using the ETL tool?
Ans. The SCD Type 1 mapping helps in the situation when you don’t want to store
historical data in the Dimension table as this method overwrites the previous data
with the latest data.
Update it in the dimension table
For example:
Student_Id Number,
Student_Name Varchar2(60),
Place Varchar2(60)
Now we require using the SCD Type 1 method to load the data present in the
source table into the student dimension table.
Stud_Key Number,
Student_Id Number,
Student_Name Varchar2(60),
Location Varchar2(60)
Create or import the source definition in the Mapping Designer tool's Source Analyzer.
Import the target definition from the Warehouse Designer or Target Designer.
Enter the name and click on Create.
Connect the port to the Source Qualifier transformation's Student_Id port.
From the LKP transformation's Condition tab, enter the lookup condition as Student_Id = IN_Student_Id.
Click OK.
Now, connect the Source Qualifier transformation's Student_Id port to the LKP transformation's IN_Student_Id port.
Create an Expression transformation using the input ports Stud_Key, Name, Location, Src_Name, and Src_Location.
In the Expression transformation's output ports, enter the below-mentioned expressions:
New_Flag = IIF(ISNULL(Stud_Key), 1, 0)
Changed_Flag = IIF(NOT ISNULL(Stud_Key) AND (Name != Src_Name OR Location != Src_Location), 1, 0)
Also, connect the Source Qualifier transformation ports to the Expression transformation ports.
Form a Filter transformation and move the ports of the Source Qualifier transformation.
Edit the Filter transformation and set the new filter condition as New_Flag = 1 from the Edit Filter Transformation option.
Press OK.
Connect all the Filter transformation ports, except the New_Flag port.
From the Properties tab of the Update Strategy, enter DD_INSERT as the strategy expression.
In the Filter transformation, drag the LKP transformation's port (Stud_Key), the Source Qualifier transformation ports (Name, Location), and the Expression transformation (Changed_Flag) ports.
Click OK.
From the Update Strategy, connect all the appropriate ports to the target definition.
infacmd
infasetup
pmcmd
pmrep
Schedule workflows
Q97. How to configure the target load order in Informatica?
Create a mapping containing multiple target load order groups in the PowerCenter Designer.
From the toolbar, click on Mappings and then click on Target Load Plan.
You will see a pop-up with a list of the Source Qualifier transformations in the mapping, along with the targets that receive data from each Source Qualifier.
Using the Up and Down buttons, move the Source Qualifiers within the load order.
Click OK.
Q98. Using the incremental aggregation in the below table, what will
be the output in the next table?
Ans. When the first load is finished the table will become:
Ans. This function is used to capitalize the first character of each word in the string
and makes all other characters in lowercase.
Below is the syntax:
INITCAP(string_name)
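Its behavior can be approximated in Python (a rough sketch; Informatica's INITCAP also treats non-space delimiters as word boundaries, which this simple version ignores):

```python
def initcap(text):
    """Approximate Informatica's INITCAP: capitalize the first letter of
    each space-separated word and lowercase the remaining characters."""
    return " ".join(word.capitalize() for word in text.split(" "))

print(initcap("ram KUMAR gupta"))  # Ram Kumar Gupta
```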
Ans. We can generate sequence numbers using the Expression transformation by following the below steps:
Create a variable port and increment it by 1 for each row. Assign the variable port to an output port. The two ports in the Expression transformation are: V_count = V_count + 1 and O_count = V_count.
Q101. How will you load the first 4 rows from a flat-file into a target?
Ans. The first 4 rows can be loaded from a flat-file into a target using the following steps:
Create the row numbers by using the Expression transformation or by using the Sequence Generator transformation.
Pass the output to a Filter transformation and specify the filter condition as O_count <= 4.
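The two steps can be simulated in Python (a sketch of the logic, not an actual mapping):

```python
rows = ["ABC", "DEF", "GHI", "JKL", "MNO"]

# Expression/Sequence Generator step: attach a row number O_count to each row
numbered = [(i + 1, row) for i, row in enumerate(rows)]

# Filter step: keep rows where O_count <= 4
target = [row for o_count, row in numbered if o_count <= 4]
print(target)  # ['ABC', 'DEF', 'GHI', 'JKL']
```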
Q102. What is the difference between Source Qualifier and Filter Transformation?
Ans. The differences between Source Qualifier and Filter Transformation are:
Source Qualifier Transformation:
2. It can filter rows only from relational sources.
4. It reduces the number of rows used in mapping, thereby enhancing performance.
5. The filter condition uses standard SQL to run in the database.
Filter Transformation:
2. This can filter rows from any type of source system at the mapping level.
4. To maximize performance, the Filter Transformation is added close to the source to filter out the unwanted data early.
5. It defines a condition using any statement or transformation function that returns either a TRUE or FALSE value.
employee_id, salary
1, 2000
2, 3000
3, 4000
4, 5000
1, 2000, 2000
2, 3000, 5000
3, 4000, 9000
4, 5000, 14000
1, 2000, 14000
2, 3000, 14000
3, 4000, 14000
4, 5000, 14000
Step 1:
Create a dummy port in the Expression transformation and assign the value 1 to it. The ports will be:
employee_id
salary
O_dummy = 1
Step 2:
Provide the output of the expression transformation to the aggregator transformation. The ports will be:
salary
O_dummy
O_sum_salary=SUM(salary)
Step 3:
Provide the output of the expression transformation and the aggregator transformation to the joiner transformation.
Check the sorted input property and connect the expression and aggregator to the joiner transformation.
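The three steps above (dummy port, aggregator, joiner) can be sketched in Python to show why every target row ends up carrying the grand total (an illustration, not Informatica code):

```python
def total_with_each_row(rows):
    # Step 1 (expression): add a dummy port with the constant value 1.
    with_dummy = [{**r, "O_dummy": 1} for r in rows]
    # Step 2 (aggregator): SUM(salary) — since every row shares the
    # same dummy value, there is a single aggregate for the whole set.
    o_sum_salary = sum(r["salary"] for r in with_dummy)
    # Step 3 (joiner): join on the dummy port, attaching the total to each row.
    return [{**r, "O_sum_salary": o_sum_salary} for r in with_dummy]

rows = [{"employee_id": i, "salary": s}
        for i, s in [(1, 2000), (2, 3000), (3, 4000), (4, 5000)]]
for r in total_with_each_row(rows):
    print(r["employee_id"], r["salary"], r["O_sum_salary"])  # total is 14000
```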
Step 4:
1, 2000, Null
2, 3000, 2000
3, 4000, 3000
4, 5000, 4000
Connect the Source Qualifier to the expression transformation.
employee_id
salary
V_count=V_count+1
V_salary=IIF(V_count=1,NULL,V_prev_salary)
V_prev_salary=salary
O_prev_salary=V_salary
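Because variable ports evaluate top to bottom, V_salary reads V_prev_salary before it is overwritten with the current row's salary. A Python sketch of that ordering (illustrative only, not Informatica syntax):

```python
def previous_salary(rows):
    v_count = 0
    v_prev_salary = None
    out = []
    for row in rows:
        v_count += 1                                    # V_count = V_count + 1
        # V_salary = IIF(V_count = 1, NULL, V_prev_salary): read the
        # previous row's value BEFORE overwriting it below.
        v_salary = None if v_count == 1 else v_prev_salary
        v_prev_salary = row["salary"]                   # V_prev_salary = salary
        out.append({**row, "O_prev_salary": v_salary})  # O_prev_salary = V_salary
    return out

rows = [{"salary": s} for s in (2000, 3000, 4000, 5000)]
print([r["O_prev_salary"] for r in previous_salary(rows)])  # [None, 2000, 3000, 4000]
```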
Q106. In which scenario does the Informatica server reject files?
Ans. The Informatica server rejects files when rows are rejected by the Update Strategy transformation. In this rare scenario, the database containing the information and data can also be disrupted.
Q107. What happens if the SELECT list COLUMNS in the custom override SQL query and the OUTPUT PORTS order in the SQ transformation do not match?
Ans. A scenario where the SELECT list COLUMNS in the custom override SQL query and the OUTPUT PORTS order in the SQ transformation do not match may result in session failure.
Q108. What can be done to enhance the performance of the joiner
condition?
Ans. The following can be done to enhance the performance of the joiner condition:
If the data is unsorted, then consider the source with fewer rows as the master source.
If joins cannot be performed for some tables, then the user can create a stored procedure and then join the tables in the database.
Q109. How do you load alternate records into different tables
through mapping flow?
Ans. To load alternate records into different tables through mapping flow, just add a sequence number to the records and then divide the record number by 2. If it divides evenly, then move it to one target. If not, then move it to the other target.
Add the next value of a sequence generator to the expression transformation.
Make two ports, Odd and Even, in the expression transformation.
Connect a router transformation and drag the ports (products, v_count) from the expression into the router transformation.
Give the conditions
Send the two groups to different targets.
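The routing logic can be sketched in Python: number the rows with a sequence, then use MOD(seq, 2) as the router condition (an illustration, not Informatica code):

```python
def route_alternate(rows):
    target_even, target_odd = [], []
    for v_count, row in enumerate(rows, start=1):  # sequence generator
        tagged = {**row, "v_count": v_count}
        if v_count % 2 == 0:                       # router group: MOD(v_count, 2) = 0
            target_even.append(tagged)
        else:                                      # router group: MOD(v_count, 2) = 1
            target_odd.append(tagged)
    return target_even, target_odd

rows = [{"product": p} for p in "ABCD"]
even, odd = route_alternate(rows)
print([r["product"] for r in even], [r["product"] for r in odd])  # ['B', 'D'] ['A', 'C']
```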
Repository Privileges
Q111. How can you store previous session logs in Informatica?
Ans. The following steps will enable you to store previous session logs in Informatica:
Save session log for these runs –> change the number to the number of log files you want to save (the default is 0).
If you want to save all of the log files created by every run, then select the option
Save session log for these runs –> Session TimeStamp
Ans. The following are the performance considerations while working with the Aggregator Transformation:
To reduce unnecessary aggregation, filter the unnecessary data before aggregating.
To minimize the size of the data cache, connect only the needed input/output ports to the succeeding transformations.
Use sorted input to minimize the amount of data cached and to enhance session performance.
We hope that this interview blog covering Informatica interview questions for freshers
and experienced candidates, as well as scenario-based Informatica interview questions,
will help you crack your upcoming interview.
FAQs
What are the prerequisites to learn Informatica?