
100+ Informatica Interview Questions and Answers [Basic, Advanced, Scenario-Based]

Shiksha Online
Updated on May 31, 2023 15:22 IST
While interviewing for data warehousing jobs, you may be asked questions about
Informatica concepts as well as Informatica-based scenarios. Here are the most
commonly asked Informatica interview questions and answers that will help you ace
your upcoming interview. These Informatica interview questions suit both freshers
and experienced professionals. For your convenience, we have divided this list of
100+ Informatica questions into 3 sections:

Basic Informatica Interview Questions

Advanced Informatica Interview Questions

Informatica Scenario Based Interview Questions

Basic Informatica Interview Questions

Q1. Differentiate between a database, a data warehouse, and a data mart.

Ans. A database is a collection of logically related data, which is usually small in
size compared to a data warehouse. In contrast, a data warehouse holds
assortments of all sorts of data, from which data is extracted only according to the
customer’s needs. A data mart is also a set of data, but one designed to cater to the
needs of a specific domain.

Q2. Explain Informatica PowerCenter.

Ans. This is one of the commonly asked Informatica interview questions. Informatica
PowerCenter is a GUI-based ETL (Extract, Transform, Load) tool. This data
integration tool extracts data from different OLTP source systems, transforms it
into a homogeneous format, and loads the data throughout the enterprise at any
speed. It is known for its wide range of applications.

Q3. Explain the difference between Informatica 7.0 and 8.0.

Ans. The main difference between Informatica 8.0 and Informatica 7.0 is that in the
8.x series, Informatica introduced the PowerExchange concept.


Q4. How will you filter rows in Informatica?

Ans. In Informatica, rows can be filtered in two ways (see the sketch below):

Source Qualifier Transformation: Rows are filtered while reading data from a relational
data source.

Filter Transformation: Rows are filtered within the mapping, for data from any source.
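As a hedged illustration of the two approaches (table, port, and condition names are hypothetical): the Source Qualifier filter is appended to the generated SQL and runs in the database, while the Filter transformation evaluates an expression on each row inside the mapping.

Source Qualifier source filter (runs in the database):

EMPLOYEES.DEPT_ID = 10

Filter transformation condition (runs in the mapping):

SALARY > 30000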

Q5. What is a Sorter Transformation?

Ans. Sorter transformation is used to sort the data in ascending or descending
order based on single or multiple keys. It sorts collections of data by port or ports.

Q6. What is Expression Transformation?

Ans. An expression transformation is a common PowerCenter mapping
transformation. It is a connected, passive transformation that calculates values on a
single row and can also be used to test conditional statements before passing the
data to other transformations.
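For instance, a hedged sketch of a conditional expression inside an Expression transformation (the port names are hypothetical):

O_salary_band = IIF(salary > 50000, 'HIGH', 'LOW')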

Q7. What is Joiner Transformation?

Ans. The joiner transformation is an active and connected transformation that helps
to create joins in Informatica. It is used to join two heterogeneous sources.


Q8. What is a Decode in Informatica?

Ans. A decode in Informatica is a function used within an Expression
Transformation. It provides the equivalent of a traditional CASE or IF construct.
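A hedged sketch of a DECODE expression that maps status codes to labels (the column and label values are hypothetical):

DECODE(status_code,
       'A', 'Active',
       'I', 'Inactive',
       'Unknown')

The final argument acts as the default value when no search value matches.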

Q9. What is a Router Transformation?

Ans. The Router Transformation allows users to split a single pipeline of data into
multiple pipelines. It is an active and connected transformation that is similar to the
filter transformation.

Q10. What is a Rank Transformation?

Ans. The Rank Transformation is an active and connected transformation used to
sort and rank the top or bottom set of records based on a specific port. It filters
data based on groups and ranks. The rank transformation has an output port that
assigns a rank to the rows.

Q11. What is Filter Transformation?

Ans. Filter transformation is used to filter records based on a filter condition. It
is an active transformation, as it changes the number of records.

Q12. What is a Sequence Generator Transformation?

Ans. Sequence Generator Transformation generates primary key values or
a range of sequence numbers for calculations or processing. It is passive and
connected.


Q13. What is a Master Outer Join?

Ans. A master outer join is a specific join type setting within a joiner transformation.
In a master outer join, all records from the detail source are returned by the join,
and only matching rows from the master source are returned.

Q14. What are some examples of Informatica ETL programs?

Ans. Some examples of Informatica ETL programs are:

Mappings

Workflows

Tasks

Q15. What is a dimensional table? What are the different dimensions?

Ans. This is one of the most important Informatica interview questions. A Dimension table is
a table in a star schema of a data warehouse. Dimension tables are used to describe
dimensions. They contain attributes that describe fact records in the table.

For example, a product dimension could contain the name of the products, their description,
unit price, weight, and other attributes as applicable.

The different types of dimension tables are:

SCD (Slowly Changing Dimension):

The dimension attributes tend to change slowly with time rather than changing at regular
intervals of time.

Conformed Dimension:

Conformed dimensions are exactly the same with every possible fact table to which they are
joined. It is used to maintain consistency.

This dimension is shared among multiple subject areas or data marts. The same can be used
in different projects without any modifications.

Junk Dimension:

A junk dimension is a collection of attributes of low cardinality. It contains different
transactional code flags or text attributes unrelated to any other attribute. A junk dimension
is a structure that provides a convenient place to store the junk attributes.

Degenerated Dimension:

It is derived from the fact table and does not have its own dimension table. The attributes
are stored in the fact table, not as a separate dimension table.

Role-playing dimension:

Role-playing dimensions are the dimensions used for multiple purposes within the same
database.

Q16. What is star schema?

Ans. It is the simplest form of data warehouse schema that consists of one or more
dimensions and fact tables. It is used to develop data warehouses and dimensional
data marts.
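As a hedged illustration, a minimal star schema might look like this in SQL (all table and column names are hypothetical):

CREATE TABLE dim_product (
  product_key  NUMBER PRIMARY KEY, -- surrogate key
  product_name VARCHAR2(60),
  unit_price   NUMBER
);

CREATE TABLE dim_date (
  date_key NUMBER PRIMARY KEY,
  cal_date DATE
);

CREATE TABLE fact_sales (
  product_key NUMBER REFERENCES dim_product (product_key),
  date_key    NUMBER REFERENCES dim_date (date_key),
  qty         NUMBER, -- additive fact
  revenue     NUMBER  -- additive fact
);

Each fact row points to its dimensions through foreign keys, and queries typically aggregate the numeric facts grouped by dimension attributes.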

Q17. Describe snowflake schema.

Ans. A snowflake schema is a fact table connected to several dimensional tables
such that the entity-relationship diagram resembles a snowflake shape. It is an
extension of a Star Schema and adds additional dimensions. The dimension tables
are normalized, which splits data into additional tables.
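Continuing the hypothetical sketch above, snowflaking the product dimension would normalize its category out into a separate table:

CREATE TABLE dim_category (
  category_key  NUMBER PRIMARY KEY,
  category_name VARCHAR2(60)
);

CREATE TABLE dim_product_sf (
  product_key  NUMBER PRIMARY KEY,
  product_name VARCHAR2(60),
  category_key NUMBER REFERENCES dim_category (category_key)
);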

Now, let’s take a look at some more Informatica Interview questions and answers.

Q18. What is a Mapplet?

Ans. A Mapplet is a reusable object containing a set of transformations that can be
used to create reusable mappings in Informatica.

Q19. What is a natural primary key?

Ans. A natural primary key uniquely identifies each record within a table and relates
records to additional data stored in other tables.

Q20. What is a surrogate key?

Ans. A surrogate key is a sequentially generated unique number attached to each
record in a Dimension table. It is used as a substitute for the natural primary key.


Q21. What is the difference between a repository server and a powerhouse?

Ans. A repository server controls the complete repository, which includes tables,
charts, various procedures, etc.

A powerhouse server governs the implementation of various processes among the
factors of the server’s database repository.

Q22. How many repositories can be created in Informatica?

Ans. We can create as many repositories in Informatica as required.

Q23. Describe Data Concatenation.

Ans. Data concatenation is the process of bringing different pieces of the record
together.

Q24. How can one identify whether the mapping is correct without connecting the
session?

Ans. With the help of the debugging options in the Designer.


Q25. Name the designer tools for creating transformations.

Ans. Mapping designer, transformation developer, and mapplet designer are used for
creating transformations.

Q26. Differentiate between sessions and batches.

Ans. A session is a set of commands for the server to move data to the target, while
a batch is a set of sessions grouped to run sequentially or concurrently.

Q27. What is Enterprise Data Warehousing?

Ans. Enterprise data warehousing is the process of creating a centralized repository of
operational data that can be used as per reporting and analytics
requirements. It has a single access point, and data is provided to the server via
a single source store.

Q28. What are the different names of the Data Warehouse System?

Ans. The Data Warehouse System has the following names:

Analytic Application

Business Intelligence Solution

Data Warehouse

Decision Support System (DSS)

Executive Information System

Management Information System


Q29. Name different available editions of INFORMATICA PowerCenter.

Ans. Different editions of INFORMATICA PowerCenter are:

Standard Edition

Advanced Edition

Premium Edition

Q30. How to delete duplicate rows from flat files?

Ans. We can use the sorter transformation to delete duplicate rows from flat files
and select the distinct option.

Q31. What is the difference between Joiner and Lookup transformations?

Ans. The differences between Joiner and Lookup transformations are:

Joiner:

It is an Active transformation.

It is used to join data from different sources.

It is not possible to override the SQL query.

Only the ‘=’ operator is used.

It supports Normal, Master, Detail, and Full Outer joins.

Lookup:

It is a Passive transformation.

It is used to get related values from another table. It also helps in checking for updates in
the target table.

It is possible to override the query by writing a customized SQL query.

All operators, such as =, <, >, <=, >=, are available.

By default, it supports left outer join.


Now, let’s check out some advanced-level Informatica Interview questions and
answers.

Advanced Informatica Interview Questions

Q32. What is the difference between static cache and dynamic cache?

Ans. In the static cache, the data will remain the same for the entire session,
whereas in the dynamic cache, whenever a row is inserted, the cache will also be
updated.

Q33. What is the command used to run a batch?

Ans. To run a batch in Informatica, we use the pmcmd command.
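For instance, a hedged sketch following the pmcmd syntax shown later in this article (the service, domain, credentials, folder, and workflow names are placeholders):

pmcmd startworkflow -service informatica-integration-Service -d Domain_Demo -u admin -p admin_pwd -f Folder_Demo -w wf_daily_load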

Q34. What are the differences between the ROUTER and FILTER?

Ans. Differences between the Router and Filter are:

Router:

Captures data rows that don’t meet the conditions in a default output group.

Single input and multi-output group transformation.

The user can specify multiple filter conditions.

It does not block input rows and failed records.

Acts like the IIF() function in Informatica or CASE.

Filter:

Tests data for one condition and drops the data rows that don’t meet the condition.

Single input and single output group transformation.

The user can specify only one filter condition.

There are chances that records can get blocked.

Works as the SQL WHERE clause.
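A hedged sketch of router group conditions (the group names, port, and values are hypothetical):

Group_North: region = 'NORTH'

Group_South: region = 'SOUTH'

Rows satisfying neither condition land in the default output group, whereas a Filter transformation with the single condition region = 'NORTH' would simply drop them.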

Q35. What is a Domain?

Ans. A Domain comprises nodes and services and serves as the fundamental
administrative unit in the Informatica tool. It categorizes all related relationships and
nodes into folders and sub-folders depending upon the administration requirement.


Q36. Why should we partition a Session?

Ans. Partition not only helps optimize a Session but also helps load a colossal
volume of data and improves the server’s operation and efficiency.

Q37. What is Complex Mapping?

Ans. Complex Mapping is a mapping with huge requirements based on many
dependencies. It doesn’t necessarily need to have hundreds of transformations;
it can be a complex map even with five odd transformations. A mapping is complex if
the requirement has many business restrictions and constraints.

Q38. What are the features of Complex Mapping?

Ans. The features of Complex Mapping are:

Complicated and huge requirements

Complex business logic

Multiple transformations

Q39. What is a Lookup Transformation?

Ans. Lookup Transformations are passive transformations with access rights to
RDBMS-based data sets. A lookup is used on a source, source qualifier, or target to
get the relevant data.

Q40. What are the different Lookup Caches?

Ans. Lookups can be cached or uncached, and the caches can be divided as:

Static cache

Dynamic cache

Persistent cache

Shared cache

Recache

Q41. What are Mapplets?

Ans. Mapplets are reusable objects that can be created in the Mapplet Designer. A
mapplet has a set of transformations that allows the reuse of transformation logic
in multiple mappings.

Q42. What is the use of the Source Qualifier?

Ans. The Source Qualifier represents the rows that the PowerCenter Integration
Service reads during a session. The source qualifier transformation converts the
source data types to the Informatica native data types, eliminating the need to alter
the data types of the ports in the source qualifier transformation.

Q43. Define Workflow.

Ans. Workflow is a set of multiple tasks enabling a server to communicate and
implement the tasks. These tasks are connected with the start task link and trigger
the required sequence to start a process.


Q44. How many tools are there in the Workflow Manager?

Ans. There are three types of tools in the Workflow Manager:

Task Developer – To create tasks that need to be run in the workflow.

Workflow Designer – To create a workflow by connecting tasks with links.

Worklet Designer – To create a worklet.

Q45. What is a Target Load Order?

Ans. Also known as Target Load Plan, a Target Load Order specifies the order of
target loading by integration service. It is dependent on the source qualifiers in a
mapping.

Q46. What is the Command Task?

Ans. A Command Task runs shell (UNIX) or batch (Windows) commands during the
workflow. It allows a user to specify commands in the command task to
remove rejected files, create files, copy files, rename files, and archive files, among
others.

Q47. What is a Standalone Command Task?

Ans. Standalone Command Task allows the shell commands to run anywhere during
the workflow.

Q48. What is the PowerCenter Repository?

Ans. A PowerCenter Repository is a relational database, such as Oracle or SQL Server.
It consists of the following Metadata:

Mapping

ODBC Connection

Session and session logs

Source Definition

Target Definition

Workflow

Q49. What is the Snowflake Schema? What is its advantage?

Ans. Snowflake Schema is a logical arrangement where dimension tables are
normalized in a multidimensional database. It is designed in a manner that looks like
a snowflake, thus the name. It contributes to improving Select query
performance.

Q50. What are the Different Components of PowerCenter?

Ans. This is an important Informatica interview question for experienced candidates.
PowerCenter has eight crucial components:

PowerCenter Service

PowerCenter Clients

PowerCenter Repository

PowerCenter Domain

Repository Service

Integration Service

PowerCenter Administration Console

Web Service Hub

Q51. What does the PowerCenter Client application consist of?

Ans. The PowerCenter Client application comprises the following tools:

Designer

Mapping Architect for Visio

Repository Manager

Workflow Manager

Workflow Monitor

Q52. How will you define the Tracing Level?

Ans. Tracing Level refers to the amount of information the server writes in the
session log. The Tracing Level is created and configured at the transformation level,
at the session level, or at both levels.

The different types of Tracing Level are:

None

Terse

Verbose Initialization

Verbose Data

Q53. What is a Surrogate Key?

Ans. A Surrogate Key is any column or set of columns attached to every record in a
Dimension table in a Data Warehouse. It is used as a substitute or replacement for
the primary key when the update process becomes difficult for a future requirement.


Q54. What is a Session?

Ans. A session in Informatica is a set of instructions to be followed when data is
transferred from source to target using a Session Command. A Session Command
can be a pre-session command or a post-session command.

Q55. What is a User-Defined Event?

Ans. A User-Defined Event is a flow of tasks in a workflow. It allows users to create
and name an event.

Q56. Explain the difference between the partitioning of file targets and the
partitioning of relational targets.

Ans. Partitioning can be done on both relational and flat-file targets. Informatica
supports the following partitions:

Database partitioning

Round-robin

Pass-through

Hash-key partitioning

Key-range partitioning

Q57. Mention the unsupported repository objects for a mapplet.

Ans. The following are the unsupported repository objects for a mapplet:

COBOL source definitions

Normalizer transformations

Pre- or post-session stored procedures

Target definitions

Non-reusable sequence generator transformations

Joiner transformations

IBM MQ source definitions

PowerMart 3.5-style LOOKUP functions

XML source definitions

Q58. Explain what direct and indirect loading options in sessions are.

Ans. The following are the differences between direct and indirect loading options in
sessions:

Direct loading is used for a single transformation, whereas indirect loading can be
used for multiple transformations or files.

With direct loading, we can perform the recovery process, but with indirect loading, we
cannot.

Q59. What is the difference between static and dynamic cache? Explain with one
example.

Ans. The differences between static and dynamic cache are:

Static – Once the data is cached, it will not change. For example, an unconnected
lookup uses a static cache.

Dynamic – The cache is updated to reflect updates in the table (or source) to which
it refers (for example, a connected lookup).

Q60. Is it possible to start Batches within a batch?

Ans. It is not possible to start a batch within a batch. If you want to start a batch that
resides in a batch, create a new independent batch and copy the necessary
sessions into the new batch.

Q61. What is the procedure to import VSAM files from source to target? Do I need a
special plugin?

Ans. Yes; by using the PowerExchange tool, VSAM files can be converted to Oracle
tables, which are then mapped as usual to the target table.

Q62. Mention how many types of facts there are and what they are.

Ans. There are three types of facts:

Additive fact: A fact that can be summarized across any or all dimensions,
e.g., QTY, REVENUE.

Semi-additive fact: A fact that can be summarized for a few dimensions, not for all
dimensions, e.g., current balance.

Non-additive fact: A fact that cannot be summarized by any of the dimensions, e.g.,
percentage of profit.

Q63. Mention the methods for creating reusable transformations.

Ans. There are two methods used for creating reusable transformations:

By using the transformation developer tool.

By converting a non-reusable transformation into a reusable transformation in the mapping.

Q64. What is the procedure for using the pmcmd command in a workflow or to run a
session?

Ans. In the command task, we can write a suitable pmcmd command to run the
workflow.

Q65. What is the default join that the source qualifier provides?

Ans. Inner equi join is the default join provided by the source qualifier.
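As a hedged illustration, when one Source Qualifier reads two related relational sources, the generated default join would resemble the following SQL (the table and column names are hypothetical):

SELECT orders.order_id, customers.customer_name
FROM orders, customers
WHERE orders.customer_id = customers.customer_id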

Q66. Explain the aggregate fact table and where it is used.

Ans. There are two types of fact tables:

Aggregated fact table – The aggregated fact table consists of aggregated columns,
for example, Total-Sal, Dep-Sal.

Factless fact table – The factless fact table doesn’t consist of aggregated columns; it
only has FKs to the Dimension tables.

Q67. To provide support for mainframe source data, which files are used as source
definitions?

Ans. COBOL copybook files are used as source definitions.

Q68. What is the procedure to load the time dimension?

Ans. By using SCD Type 1/2/3, we can load any dimensions based on the
requirement. We can also use the procedure to populate the time dimension.

Q69. Explain the difference between the summary filter and the detail filter.

Ans. Summary Filter – can be applied to a group of records containing common values.

Detail Filter – can be applied to every record in a database.

Q70. What are the differences between connected lookup and unconnected lookup?

Ans. The differences between connected lookup and unconnected lookup are:

Connected Lookup:

1. Gets the input directly from the other transformations and participates in the data flow.

2. It can return multiple output ports.

3. It can be both dynamic and static.

Unconnected Lookup:

1. It takes the input values from the result of the :LKP expression.

2. It returns only one output port.

3. It cannot be dynamic.

Q71. How many input parameters can be included in an unconnected lookup?

Ans. Any number of input parameters can be included in an unconnected lookup.
However, the return value would only be one. For example, parameters like column 1,
column 2, column 3, and column 4 can be provided in an unconnected lookup, but
there will be only one return value.


Q72. Mention the advantages of a partitioned session.

Ans. The advantages of a partitioned session in Informatica are:

Increases the manageability, efficiency, and operation of the server.

Involves the solo implementation sequences in the session.

Simplifies common administration tasks.

Q73. Explain the different methods for the implementation of parallel processing.

Ans. This is one of the commonly asked Informatica interview questions. The different
partition algorithms for the implementation of parallel processing are:

Pass-through Partitioning: In this partitioning, the Integration Service passes all rows
from one partition point to the next partition point without redistributing them.

Database Partitioning: In this partitioning, the Integration Service queries the database
system for table partition information and reads partitioned data from the
corresponding nodes in the database.

Round-Robin Partitioning: The Integration Service distributes data evenly among all
partitions.

Key Range Partitioning: It enables you to specify one or more ports to form a
compound partition key for a source or target. The Integration Service then passes data
to each partition depending on the ranges you specify for each port.

Hash Auto-Keys Partitioning: The hash function groups rows of data among
partitions. The Integration Service uses all grouped or sorted ports as a compound
partition key with the hash auto-key partition.

Hash User-Keys Partitioning: It groups rows of data among partitions based on a
user-defined partition key.

Q74. Name some of the mapping development practices.

Ans. The following are some of the mapping development practices in Informatica:

Source Qualifier

Aggregator

Expressions

Filter

Lookup

Joiner

Q75. Explain the Event and what its types are.

Ans. An event can be any action or functionality implemented in a workflow. There
are two types of events:

Event Wait Task: It waits until an event occurs. The specific event for which the Event
Wait task should wait can be defined. Once the event is triggered, this task gets
accomplished and assigns the next task in the workflow.

Event Raise Task: It triggers the specific event in the workflow.

Q76. What is a Fact Table? What are its different types?

Ans. This is one of the most frequently asked Informatica interview questions. A Fact
table is a centralized table in the star schema. It contains summarized numerical and
historical data (facts). There are two types of columns in a Fact table:

Columns that contain the measures, called facts

Columns that are foreign keys to the dimension tables

The different types of Fact Tables are:

Additive: These facts can be summed up through all of the dimensions in the fact table.

Semi-Additive: These facts can be summed up for only some of the dimensions in the fact
table.

Non-Additive: These facts cannot be summed up for any of the dimensions present in
the fact table.

Q77. Explain OLAP.

Ans. OLAP stands for Online Analytical Processing. It is used to analyze database
information from multiple database systems at one time. It offers a
multi-dimensional analysis of data for business decisions.

Q78. What are the different types of OLAP?

Ans. There are three types of OLAP techniques, namely:

MOLAP (Multi-dimensional OLAP)

ROLAP (Relational OLAP)

HOLAP (Hybrid OLAP)

Q79. What are the advantages of using OLAP services?

Ans. The advantages of using OLAP services are as follows:

It is a single platform for all types of analytical business needs.

It offers consistency of information and calculations.

It complies with regulations to safeguard sensitive data.

It applies security restrictions on users and objects to protect data.

Q80. What are the different types of lookup transformation in Informatica?

Ans. The different types of lookup transformation in Informatica are:

Relational Lookup (Flat File)

Pipeline Lookup

Cached/Uncached Lookup

Connected/Unconnected Lookup

Q81. Explain pre-session and post-session shell commands.

Ans. A Command task can be called as the pre- or post-session shell command for a
Session task. It can be called in the Components tab of the session, and can run as a
Pre-Session Command, a Post-Session Success Command, or a Post-Session
Failure Command. The application of shell commands can be changed as per the
use case.

Q82. Name the different types of groups in router transformation.

Ans. The different types of groups in router transformation are:

Input group

Output group

Default group

Q83. Explain Junk Dimensions.

Ans. A Junk Dimension is a collection of random codes or flags that do not
belong in the fact table or any of the existing dimension tables. These attributes are
unrelated to any particular dimension. The nature of these attributes is like random
codes or flags, for example, non-generic comments or just yes/no values.


Q84. What are the output files created by the Informatica server during a session run?

Ans. The following are the output files created by the Informatica server during a session
run:

Informatica server log: This file is created for all status and error messages (default
name: pm.server.log). An error log for error messages is also created.

Session log file: Session log files are created for each session. The server writes information
about the session into the log file, such as the initialization process, creation of SQL
commands for reader and writer threads, etc.

Session detail file: The session detail file contains load statistics for each target in
the mapping, including the table name and the number of rows written or rejected.

Performance detail file: This file contains session performance details that help
identify areas where performance can be improved.

Reject file: It contains the rows of data that the writer does not write to targets.

Control file: A control file and a target file are created when you run a session that uses
the external loader. The control file has information about the target flat file, such as
data format and loading instructions, etc.

Post-session email: With the help of this file, you can automatically communicate
information about a session run to designated recipients.

Indicator file: The Informatica server can be configured to create an indicator file while
using a flat file as a target. The indicator file contains a number for each target row to
indicate whether the row was marked for insert, update, delete, or reject.

Output file: If a session writes to a target file, a target file based on the file properties
entered in the session property sheet is created.

Cache files: When the Informatica server creates a memory cache, it also creates
cache files.

Workflow log: It contains the high-level and detailed information of sessions, nodes,
integration services, repository information, etc.

Bad file cache: It contains the bad records or rejected records.

Q85. Name the files created during session runs.

Ans. The following files are created during session runs:

Error log

Session log

Bad file

Workflow log

Q86. What is the difference between Mapping and Mapplet?

Ans. The differences between Mapping and Mapplet are:

Mapping:

It is a collection of source, target, and transformation.

A mapping is developed with different transformations.

It is not reusable.

It focuses on what data moves to the target and what modifications are done upon that
data.

Mapplet:

It is a collection of transformations only.

It can be reused with other mappings and mapplets.

Mapplets are reusable components.

A mapplet is developed for complex calculations used in multiple mappings.

Q87. What is a Stored Procedure Transformation? What are its uses?

Ans. It is a passive transformation that populates and maintains databases. It helps you
use or call stored procedures inside the Informatica workflow. It can be used in connected
as well as unconnected mode.

The major uses of Stored Procedure Transformation are:

Check the status of a target database before loading data into it.

Identify if enough space exists in a database.

Carry out a complex calculation.

Drop and recreate indexes.
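As a hedged sketch, an unconnected Stored Procedure transformation can be invoked from an expression port using the :SP reference syntax (the procedure and port names here are hypothetical):

:SP.CHECK_DISK_SPACE(in_db_name, PROC_RESULT)

PROC_RESULT captures the procedure’s return value so it can be assigned to an output port.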

Q88. What is the DTM Process?

Ans. The Data Transformation Manager (DTM) process is started by the PowerCenter
Integration Service to run a session. The main role of the DTM process is to create and
manage the threads that carry out the session tasks. The DTM process performs various
tasks, including:

Reading the session information

Forming dynamic partitions

Creating partition groups

Validating code pages

Running the processing threads

Running post-session operations

Sending post-session email

Q89. What is the difference between a fact table and a dimension table?

Ans. The differences between a fact table and a dimension table are:

Fact Table:

It contains summarized numerical and historical data (facts).

It contains more records and fewer attributes.

It is defined by the data grain.

The primary key in the fact table is mapped as foreign keys to dimensions.

It can have data in numeric as well as textual format.

It does not contain hierarchy and grows vertically.

Dimension Table:

It is one of the companion tables to a fact table in the star schema and contains the
dimensions of a fact.

It contains more attributes and fewer records.

It is descriptive, complete, and wordy.

It has primary key columns that uniquely identify each dimension.

It contains attributes in textual format.

It contains hierarchy and grows horizontally; the dimension can also contain one or more
hierarchical relationships.

Let’s take a look at the most commonly asked Informatica scenario based questions.

Informatica Scenario Based Interview Questions

The following are some of the frequently asked Informatica interview questions that
are scenario based.

What are Informatica Scenario Based interview questions?

In a scenario based interview, you will first be offered a scenario and then asked questions
related to it. Your response to Informatica scenario based questions will show your
technical skills as well as your soft skills, such as problem-solving and critical thinking.

Now that you are just one step away from landing your dream job, you must prepare well for
all the likely interview questions. Remember that every interview round is different, especially
when scenario-based Informatica interview questions are asked.

Q90. How do you load the last N rows from a flat-file into a target table in
Informatica?

Ans. This is an important Informatica scenario based question.

Considering that the source has the following data:

Col

ABC

DEF

GHI

JKL

MNO

Now follow the below steps to load the last 3 rows into a target table.

Step 1

Assign row numbers to each record by using an expression transformation. Name the
row-count port N_calculate.

Create a dummy output port in the same expression transformation and assign 1 to it.

This will return 1 for each row.

Ports in Expression Transformation:

V_calculate=V_calculate+1

N_calculate=V_calculate

N_dummy=1

Outputs in Expression Transformation:

col, N_calculate, N_dummy

ABC, 1, 1

DEF, 2, 1

GHI, 3, 1

JKL, 4, 1

MNO, 5, 1

Step 2

Pass the expression transformation output to an aggregator transformation.

Do not specify any group by condition.

Create an N_total_records output port in the aggregator and assign the N_calculate
port to it.

By default, the aggregator will return the last row. It will also carry the dummy port,
which holds the value 1, and the N_total_records port (which keeps the total number of
records available in the source).

Ports in Aggregator Transformation:

N_dummy

N_calculate

N_total_records=N_calculate

Outputs in Aggregator Transformation:

N_total_records, N_dummy

5, 1

Step 3

Now pass the outputs of the expression and aggregator transformations to a joiner
transformation.

Join on the dummy port.

Check the property sorted input in the joiner transformation to connect both the
expression and aggregator transformations.

The join condition will be N_dummy (port from aggregator transformation) =
N_dummy (port from expression transformation).

Outputs in Joiner Transformation:

col, N_calculate, N_total_records

ABC, 1, 5

DEF, 2, 5

GHI, 3, 5

JKL, 4, 5

MNO, 5, 5

Step 4

Pass the joiner transformation output to a filter transformation.

Mention the filter condition as N_total_records (port from aggregator) - N_calculate (port
from expression) <= 2.

Thus, the filter condition in the filter transformation will be N_total_records - N_calculate
<= 2.

Output

Outputs in Filter Transformation:

col, N_calculate, N_total_records

GHI, 3, 5

JKL, 4, 5

MNO, 5, 5


Q91. Solve the below situations if the data has duplicate rows.

Data

Amazon

Walmart

Snapdeal

Snapdeal

Walmart

Flipkart

Walmart

Situation – Give steps to load all unique names in one table and duplicate names in
another table.

Solution 1 – We want the solution tables as:

Amazon and Flipkart in one table

and

Walmart, Walmart, Walmart, Snapdeal, and Snapdeal in another table.

Follow the below steps:

Sort the name data by using a sorter transformation.

Pass the sorted output to an expression transformation.

Form a dummy port N_dummy and assign 1 to the port.

Now, for each row, the dummy output port will return 1.

Expression Transformation Output:

Name, N_dummy

Amazon, 1

Walmart, 1

Walmart, 1

Walmart, 1

Snapdeal, 1

Snapdeal, 1

Flipkart, 1

Pass the acquired expression transformation output to an aggregator transformation.

Check ‘group by’ on the name port.

Create an output port N_calculate_of_each_name in the aggregator and write the
expression COUNT(name).

Aggregator Transformation Output:

name, N_calculate_of_each_name

Amazon, 1

Walmart, 3

Snapdeal, 2

Flipkart, 1

Pass the expression and aggregator transformation outputs to a joiner transformation.

Join on the name ports.

Check the property sorted input to connect both transformations to the joiner
transformation.

Joiner Transformation Output:

name, N_dummy, N_calculate_of_each_name

Amazon, 1, 1

Walmart, 1, 3

Walmart, 1, 3

Walmart, 1, 3

Snapdeal, 1, 2

Snapdeal, 1, 2

Flipkart, 1, 1

Move the joiner output to a router transformation.

Create one group.

Specify its condition as N_dummy=N_calculate_of_each_name.

Connect the group to one table.

Connect the default output group to another table.

You will get separate tables for both.

Q92. Situation 2 – Solve the below situations if the data has duplicate rows.

Data

Amazon

Walmart

Snapdeal

Snapdeal

Walmart

Flipkart

Walmart

Situation – Load each name once in one table and duplicate products in another
table.

Ans.

Solution 2 – We want the output as:

Table 1

Amazon

Walmart

Snapdeal

Flipkart

Table 2

Walmart

Walmart

Snapdeal

The below steps will give the desired solution:

Sort the name data by using a sorter transformation.

Pass the name output to an expression transformation.

Create a variable port Z_curr_name and assign the name port to it.

Create a Z_calculate port and write IIF(Z_curr_name=Z_prev_name, Z_calculate+1, 1)
in the expression editor.

Form another variable port Z_prev_name and assign the name port to it.

Form the output port N_calculate and assign Z_calculate to it.

Expression Transformation ports:

Z_curr_name=name

Z_calculate=IIF(Z_curr_name=Z_prev_name, Z_calculate+1, 1)

Z_prev_name=name

N_calculate=Z_calculate

Expression Transformation Output:

Amazon, 1

Walmart, 1

Walmart, 2

Walmart, 3

Snapdeal, 1

Snapdeal, 2

Flipkart, 1

Route the expression transformation output to a router transformation.

Form a group.

Specify the condition as N_calculate=1.

Connect the group to one table.

Connect the default group output to another table.


Q93. In Informatica, how do you use Normalizer Transformation for the
below-mentioned condition?

State | Quarter 1 Purchase | Quarter 2 Purchase | Quarter 3 Purchase | Quarter 4 Purchase

ABC | 80 | 85 | 90 | 95

DEF | 60 | 65 | 70 | 75

Ans. This is one of the popularly asked Informatica interview questions that you must
prepare for your upcoming interview.

If you want to transform a single row into multiple rows, Normalizer Transformation
will help. It can also be used to convert multiple rows into a single row to make the data
look organized. As per the above scenario-based Informatica interview question, we
want the solution to look as follows:

State Name Quarter Purchase

ABC 1 80

ABC 2 85

ABC 3 90

ABC 4 95

DEF 1 60

DEF 2 65

DEF 3 70

DEF 4 75

Follow the steps to achieve the desired solution by using normalizer transformation:

Step 1 –

Create a source table “purchase_source” and assign a target table “purchase_target”.

Import the tables into Informatica.

Create a mapping for both tables, having “purchase_source” as the source and
“purchase_target” as the target.

Create a new transformation from the transformation menu.

Enter the name “xyz_purchase”.

Select the create option.

Select done (now the transformation is created).

Step 2 –

Double-click on the normalizer transformation.

Go to the normalizer tab and select it.

From the tab, click on the icon; this will create two columns.

Enter the names of the columns.

Fix the number of occurrences to 4 for purchase and 0 for the state name.

Select OK.

4 columns will be generated and appear in the transformation.

Step 3 –

In the mapping, link all four columns in the source qualifier of the four quarters to the
normalizer.

Link the state name column to the normalizer column.

Link the state_name and purchase columns to the target table.

Link the lkp_purchase column to the target table.

Create the session and workflow.

Save the mapping and execute it.

You will get the desired rearranged output.

State Name Quarter Purchase

ABC 1 80

ABC 2 85

ABC 3 90

ABC 4 95

DEF 1 60

DEF 2 65

DEF 3 70

DEF 4 75

Q94. What to do when you get the below error?

AA_10000 Normalizer Transformation: Initialization Error: [Cannot match AASid with
BBTid.]

Ans. Follow the below process –

Remove all the unconnected input ports to the normalizer transformation.

If OCCURS is present, check that the number of input ports is equal to the number of
OCCURS.

Q95. What are the steps to create, design, and implement SCD Type 1 mapping in
Informatica using the ETL tool?

Ans. The SCD Type 1 mapping helps in the situation when you don’t want to store
historical data in the Dimension table, as this method overwrites the previous data
with the latest data.

The process to be followed:

Identify new records

Insert them into the dimension table

Identify the changed records

Update them in the dimension table

For example, if the source table looks like:

CREATE TABLE Students (
Student_Id Number,
Student_Name Varchar2(60),
Place Varchar2(60)
);

Now we need to use the SCD Type 1 method to load the data present in the
source table into the student dimension table:

CREATE TABLE Students_Dim (
Stud_Key Number,
Student_Id Number,
Student_Name Varchar2(60),
Location Varchar2(60)
);

Follow the steps to generate SCD Type 1 mapping in Informatica:

In the database, create the source and dimension tables.

Create or import the source definition in the mapping designer tool’s source analyzer.

Import the Target Definition from the Warehouse designer or Target designer.

Create a new mapping from the mapping designer tab.

Drag and drop the source.

Select the Create option from the toolbar’s Transformation section.

Select Lookup Transformation.

Enter the name and click on create.

From the window, select the Student dimension table and click OK.

Edit the lkp transformation and add a new port In_Student_Id from the properties tab.

Connect this port to the source qualifier transformation’s Student_Id port.

From the lkp transformation’s condition tab, enter the lookup condition as Student_Id =
IN_Student_Id.

Click OK.

Now, connect the source qualifier transformation’s student_id port to the lkp
transformation’s In_Student_Id port.

Create an expression transformation using the input ports Stud_Key, Name, Location,
Src_Name, Src_Location.

Create output ports New_Flag and Changed_Flag.

In the expression transformation’s output ports, enter the below-mentioned expressions:

New_Flag = IIF(ISNULL(Stud_Key), 1, 0)

Changed_Flag = IIF(NOT ISNULL(Stud_Key)
AND (Name != Src_Name
OR Location != Src_Location),
1, 0)

Connect the lkp transformation ports to the expression transformation ports.

Also, connect the source qualifier transformation ports to the expression transformation
ports.

Form a filter transformation and move the ports of the source qualifier transformation
to it.

Edit the filter transformation and set the new filter condition as New_Flag=1 from the edit
filter transformation option.

Press OK.

Create an update strategy transformation.

Connect all the filter transformation ports except the New_Flag port.

From the properties tab of the update strategy, enter DD_INSERT as the strategy
expression.

Drag the target definition to the mapping.

Connect the relevant ports from the update strategy to the target definition.

Create a sequence generator transformation.

Connect the NEXTVAL port to the target surrogate key port (stud_key).

Create a different filter transformation.

In the filter transformation, drag the lkp transformation’s port (Stud_Key), the source
qualifier transformation ports (Name, Location), and the expression transformation port
(Changed_Flag).

Go to the properties tab to edit the filter transformation.

Mention the filter condition as Changed_Flag=1.

Click OK.

Create the update strategy.

Connect the ports of the filter transformation to the update strategy.

From the update strategy properties tab, enter the expression DD_UPDATE.

In this mapping, drag the target definition.

From the update strategy, connect all the appropriate ports to the target definition.


Q96. Give steps to use the PMCMD Utility Command.

Ans. There are 4 different built-in command-line programs:

infacmd

infasetup

pmcmd

pmrep

The pmcmd command helps with the following functions:

Start workflows

Schedule workflows

Start a workflow from a specific task

Stop and abort workflows and sessions

Below are the steps to use the pmcmd command:

Start a workflow:

pmcmd startworkflow -service informatica-integration-Service -d domain-name -u
user-name -p password -f folder-name -w workflow-name

Schedule a workflow:

pmcmd scheduleworkflow -service informatica-integration-Service -d domain-name -u
user-name -p password -f folder-name -w workflow-name

Start a workflow from a specific task:

pmcmd starttask -service informatica-integration-Service -d domain-name -u user-name
-p password -f folder-name -w workflow-name -startfrom task-name

Abort a workflow or task:

pmcmd abortworkflow -service informatica-integration-Service -d domain-name -u
user-name -p password -f folder-name -w workflow-name

pmcmd aborttask -service informatica-integration-Service -d domain-name -u user-name
-p password -f folder-name -w workflow-name task-name

Q97. How to configure the target load order in Informatica?

Ans. Follow the below steps:

Create a mapping containing multiple target load order groups in the PowerCenter designer.

From the toolbar, click on Mappings and then click on Target Load Plan.

You will see a pop-up with a list of the source qualifier transformations in the mapping,
along with the targets that receive data from each source qualifier.

From the list, pick a source qualifier.

Using the Up and Down buttons, move the source qualifier within the load order.

Click OK.

You will get the desired output.

Q98. Using the incremental aggregation in the below table, what will be the output in
the next table?

Product ID | Bill Number | Cost | Date

101 | 1 | 100 | 01/01/2020

201 | 2 | 150 | 01/01/2020

301 | 3 | 200 | 01/01/2020

101 | 4 | 300 | 05/01/2020

101 | 5 | 400 | 05/01/2020

201 | 6 | 500 | 05/01/2020

555 | 7 | 550 | 05/01/2020

151 | 8 | 600 | 05/01/2020

Ans. When the first load is finished, the table will become:

Product ID | Bill Number | Load_Key | Cost

101 | 1 | 20011 | 100

201 | 2 | 20011 | 150

301 | 3 | 20011 | 200

Q99. What is the syntax of the INITCAP function?

Ans. This function capitalizes the first character of each word in the string
and makes all other characters lowercase.

Below is the syntax:

INITCAP(string_name)
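For instance, following the definition above:

INITCAP('hello WORLD') returns 'Hello World'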

These were some of the most popular scenario-based Informatica interview questions.

Q100. How will you generate sequence numbers using expression transformation?

Ans. We can generate sequence numbers using expression transformation by following the
below steps:

Create a variable port and increment it by 1.

Assign the variable port to an output port. The two ports in the expression
transformation are: V_count=V_count+1 and O_count=V_count.


Q101. How will you load the first 4 rows from a flat-file into a target?

Ans. The first 4 rows can be loaded from a flat-file into a target using the following steps:

Allocate row numbers to each record.

Create the row numbers by using the expression transformation or by using the sequence
generator transformation.

Pass the output to the filter transformation and specify the filter condition as
O_count <= 4.

Q102. What is the difference between Source Qualifier and Filter Transformation?

Ans. The differences between Source Qualifier and Filter Transformation are:

Source Qualifier Transformation:

1. It filters rows while reading the data from a source.

2. It can filter rows only from relational sources.

3. The Source Qualifier limits the row sets extracted from a source.

4. It reduces the number of rows used in mapping, thereby enhancing performance.

5. The filter condition uses standard SQL to run in the database.

Filter Transformation:

1. It filters rows from within a mapping.

2. It can filter rows from any type of source system at the mapping level.

3. It limits the row set sent to a target.

4. To maximize performance, the Filter Transformation is added close to the source to
filter out the unwanted data early.

5. It defines a condition using any statement or transformation function that returns
either a TRUE or FALSE value.

Q103. Create a mapping to load the cumulative sum of salaries of employees into the
target table. Consider the following employee data as a source.

employee_id, salary

1, 2000

2, 3000

3, 4000

4, 5000

The target table data should look like the following:

employee_id, salary, cumulative_sum

1, 2000, 2000

2, 3000, 5000

3, 4000, 9000

4, 5000, 14000

Ans. The following steps need to be followed to get the desired output (the resulting
ports are sketched below):

Connect the source qualifier to the expression transformation.

Create a variable port V_cum_sal in the expression transformation.

Write V_cum_sal+salary in the expression editor.

Create an output port O_cum_sal and assign V_cum_sal to it.
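For clarity, a sketch of the expression transformation ports consistent with the steps above:

employee_id

salary

V_cum_sal=V_cum_sal+salary

O_cum_sal=V_cum_sal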

Q104. Create a mapping to find the sum of salaries of all employees. The sum should
repeat for all the rows. Consider the employee data provided in Q103 as a source.

The output should look like:

employee_id, salary, salary_sum

1, 2000, 14000

2, 3000, 14000

3, 4000, 14000

4, 5000, 14000

Ans. The following steps should be followed to get the desired output:

Step 1:

Connect the source qualifier to the expression transformation.

Create a dummy port in the expression transformation and assign the value 1 to it. The
ports will be:

employee_id

salary

O_dummy=1

Step 2:

Provide the output of the expression transformation to the aggregator transformation.

Create a new port O_sum_salary.

Write SUM(salary) in the expression editor.

The ports will be:

salary

O_dummy

O_sum_salary=SUM(salary)

Step 3:

Provide the outputs of the expression transformation and aggregator transformation to
the joiner transformation.

Join on the dummy port.

Check the property sorted input and connect the expression and aggregator to the joiner
transformation.

Step 4:

Provide the output of the joiner to the target table.

Q105. Create a mapping to get the previous row salary for the current row. In case
there is no previous row for the current row, the previous row salary should be
displayed as null.

The output should look like:

employee_id, salary, pre_row_salary

1, 2000, Null

2, 3000, 2000

3, 4000, 3000

4, 5000, 4000

Ans. The following steps will be followed to get the desired output:

Connect the source qualifier to the expression transformation.

Create a variable port V_count in the expression transformation and increment it by 1
for each row.

Create a variable port V_salary and assign IIF(V_count=1,NULL,V_prev_salary) to it.

Create a variable port V_prev_salary and assign salary to it.

Create an output port O_prev_salary and assign V_salary to it.

Connect the expression transformation to the target ports.

The ports in the expression transformation will be:

employee_id

salary

V_count=V_count+1

V_salary=IIF(V_count=1,NULL,V_prev_salary)

V_prev_salary=salary

O_prev_salary=V_salary

Q106. What is the scenario in which the Informatica server rejects files?

Ans. The Informatica server rejects files when there is a rejection by the update strategy
transformation. In such a rare scenario, the database comprising the information and
data also gets interrupted.


Q107. What will happen in the following scenario:

If the SELECT list COLUMNS in the custom override SQL query and the OUTPUT
PORTS order in the SQ transformation do not match?

Ans. A scenario where the SELECT list COLUMNS in the custom override SQL query
and the OUTPUT PORTS order in the SQ transformation do not match may result in
session failure.

Q108. What can be done to enhance the performance of the joiner condition?

Ans. The joiner condition performance can be enhanced by the following:

Sort the data before applying the join.

If the data is unsorted, consider the source with fewer rows as the master source.

Perform joins in a database.

If joins cannot be performed for some tables, the user can create a stored
procedure and then join the tables in the database.

Q109. How do you load alternate records into different tables through mapping flow?

Ans. To load alternate records into different tables through mapping flow, just add a
sequence number to the records and then divide the record number by 2. If it is divisible,
move the record to one target; if not, move it to the other target.

It involves the following steps:

Drag the source and connect it to an expression transformation.

Add the next value of a sequence generator to the expression transformation.

Make two ports, Odd and Even, in the expression transformation.

Write the expressions below:

v_count (variable port) = v_count+1

o_count (output port) = v_count

Connect a router transformation and drag the ports (products, v_count) from the
expression into the router transformation.

Make two groups in the router.

Give the conditions, for example MOD(o_count, 2) = 0 for the even group and
MOD(o_count, 2) = 1 for the odd group.

Send the two groups to different targets.

Q110. How do you implement Security Measures using a Repository Manager?

Ans. There are 3 ways to implement security measures:

Folder permissions within owners, groups, and users.

Locking (Read, Write, Retrieve, Save, and Execute).

Repository privileges.

Q111. How can you store previous session logs in Informatica?

Ans. The following steps will enable you to store previous session logs in Informatica:

Go to Session Properties > Config Object > Log Options.

Select the properties:

Save session log by –> Session runs

Save session log for these runs –> Change the number to the number of log files you want
to save (the default is 0).

If you want to save all of the log files created by every run, select the option
Save session log for these runs –> Session TimeStamp.

Q112. Mention the performance considerations while working with Aggregator
Transformation.

Ans. The following are the performance considerations while working with Aggregator
Transformation:

To reduce unnecessary aggregation, filter the unnecessary data before aggregating.

To minimize the size of the data cache, connect only the needed input/output ports to the
succeeding transformations.

Use sorted input to minimize the amount of data cached and enhance session
performance.

We hope that this interview blog, covering Informatica interview questions for freshers
and experienced candidates as well as scenario-based Informatica interview questions,
will help you crack your upcoming interview.

FAQs

Is Informatica worth learning?

Is Informatica difficult to learn?

How should I learn Informatica?

Where is Informatica used?

What are the responsibilities of an Informatica developer?

Is Informatica Developer a good career?

Which job profiles are available for Informatica experts?

Which companies are using the Informatica tool?

What are the skills required for an Informatica Developer?

Is Informatica certification useful?

What are the prerequisites to learn Informatica?
