Sprint Case - Study IICS - Updated
Group 1
1. Configure a data transfer task that reads the source data, matches columns
from a lookup source, and writes the result to the target. Create, deploy, and
run the Data Transfer task.
2. Create two different mappings: one for a one-time load of customer data, and
one that captures the incremental load by detecting row uniqueness with the
MD5 function and using the insert/update/upsert options on the target.
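The MD5-based change detection above can be sketched outside IICS as follows. This is an illustrative Python sketch, not the IICS implementation itself: the column names and the customer-key position are assumptions, and in a real mapping the hash would come from an expression such as MD5(col1 || col2 || ...) compared against the stored hash.

```python
import hashlib

def row_md5(row):
    """Concatenate all column values with a delimiter and hash them,
    mirroring an IICS expression like MD5(col1 || '|' || col2 || ...)."""
    return hashlib.md5("|".join(str(v) for v in row).encode()).hexdigest()

def classify(incoming, existing_hashes):
    """Decide insert / update / skip per row. The natural key is assumed
    to be in column 0 (a hypothetical customer_id) for illustration."""
    actions = []
    for row in incoming:
        key, digest = row[0], row_md5(row)
        if key not in existing_hashes:
            actions.append((key, "insert"))      # new customer
        elif existing_hashes[key] != digest:
            actions.append((key, "update"))      # attributes changed
        else:
            actions.append((key, "skip"))        # unchanged row
    return actions
```

In the mapping itself, the same comparison drives the Update Strategy (or the target's insert/update/upsert setting) rather than a Python branch.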
3. Implement Hierarchy Builder and Hierarchy Parser mappings to convert
hierarchical input to relational output and, conversely, relational input to
hierarchical output.
4. Configure a taskflow to process the *.txt files coming from the file listener
and archive them upon successful execution. Create the steps needed to
archive the files and send an email notification upon completion.
5. Create a mapping using the sample JSON schema to convert the hierarchical
input to relational output, writing two target files: one in .csv format and
one in .xml format.
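The hierarchical-to-relational conversion above can be sketched as follows. This is an illustrative stand-in for a Hierarchy Parser: the sample schema (a customer with a nested orders array) and the field names id/sku are assumptions made up for the example, not the actual sample schema from the exercise.

```python
import json, csv, io
import xml.etree.ElementTree as ET

SAMPLE = '{"customer": {"id": 1, "orders": [{"sku": "A"}, {"sku": "B"}]}}'

def flatten(doc):
    """Explode the nested orders array into one relational row per order,
    repeating the parent key, as a Hierarchy Parser would."""
    cust = doc["customer"]
    return [{"id": cust["id"], "sku": o["sku"]} for o in cust["orders"]]

def to_csv(rows):
    """Write the flattened rows as CSV text (the .csv target)."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["id", "sku"])
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

def to_xml(rows):
    """Write the same rows as simple XML (the .xml target)."""
    root = ET.Element("rows")
    for r in rows:
        ET.SubElement(root, "row", {k: str(v) for k, v in r.items()})
    return ET.tostring(root, encoding="unicode")

rows = flatten(json.loads(SAMPLE))
```

In IICS the flattening is configured declaratively against the uploaded schema; the sketch only shows the row shape the two targets receive.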
6. Capture the REST API login and activity log entries, invoke a mapping
configuration task from Postman, compare the results with the cloud UI, and
capture screenshots.
Group 2
1. Configure a taskflow to process the *.csv files coming from the file listener
and archive them upon successful execution. Create the steps needed to
archive the files and send an email notification upon completion.
2. Configure a mapping task with the source and target connections parameterized
at the global level, and use the incremental-load approach to capture the last
modified date of the flat-file source.
3. Create a mapping to load duplicate records to one flat file and unique records
to another. Also create a mapping that removes leading and trailing spaces
from a flat-file source using expression macros and loads the result to the
target file using dynamic linking.
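The two requirements above (duplicate/unique routing and trimming) can be sketched together. This is an illustrative Python sketch under assumed two-column rows; in the mapping itself the trim is an expression macro applying LTRIM/RTRIM across ports, and the routing is an Aggregator count feeding a Router.

```python
def split_and_trim(rows):
    """Trim leading/trailing spaces from every field, then route rows
    whose trimmed form occurs more than once to the duplicates file
    and the rest to the uniques file."""
    trimmed = [tuple(str(v).strip() for v in r) for r in rows]
    counts = {}
    for r in trimmed:
        counts[r] = counts.get(r, 0) + 1   # occurrence count per row
    uniques = [r for r in trimmed if counts[r] == 1]
    dups = [r for r in trimmed if counts[r] > 1]
    return uniques, dups
```

Note that trimming happens before counting, so " a " and "a" correctly collapse into one duplicate group.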
4. Configure a data transfer task with a second source as a lookup, and use the
filter/sort options to sort the data and load it to the target table.
5. Configure a data synchronization task to read Account, Lead, and Opportunity
data from Salesforce, load it to flat files in .csv format, and also perform a
reverse sync back to Salesforce.
6. Capture the REST API login and activity log entries, invoke a mapping
configuration task from Postman, compare the results with the cloud UI, and
capture screenshots.
Group 3
1. Configure a taskflow to process the *.json files coming from the file listener
using a data task. Add a decision task that treats more than 1,000 processed
source/target rows as success and ends the process otherwise, archiving the
files upon successful execution. Create the steps needed to archive the files
and send an email notification upon completion.
2. Configure a mapping task with the source and target connections parameterized
at the global level, and use the incremental-load approach to capture the last
modified date of the flat-file source.
3. Create a mapping and a mapping configuration task with a table as both source
and target, and implement logic that loads only the last two records into the
target. Use transformations wherever applicable.
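The last-two-records logic above can be sketched as follows. This is an illustrative Python sketch; in the mapping the same effect typically comes from numbering rows (a Sequence Generator or Rank transformation) and filtering against the total row count from an Aggregator, but the exact transformation choice is left to the exercise.

```python
def last_two(rows):
    """Keep only the final two source rows by comparing each row's
    1-based position against total_count - 2."""
    total = len(rows)
    return [r for i, r in enumerate(rows, start=1) if i > total - 2]
```

The filter condition i > total - 2 is the piece that maps onto the Filter transformation's expression.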
4. Create a mapping to display the average salary of all employees against each
employee record.
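The average-salary requirement above can be sketched as follows. This is an illustrative Python sketch with assumed field names (name, sal); in a mapping the average usually comes from an Aggregator with no group-by, joined back to the full detail stream so every record carries the company-wide figure.

```python
def with_avg_salary(emps):
    """Compute one overall average salary and append it to every
    employee row, mirroring an Aggregator-plus-Joiner pattern."""
    avg = sum(e["sal"] for e in emps) / len(emps)
    return [{**e, "avg_sal": avg} for e in emps]
```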
5. Create a replication task to include all the objects from the EMP schema in a
new schema.
6. Capture the REST API login and activity log entries, invoke a mapping
configuration task from Postman, compare the results with the cloud UI, and
capture screenshots.
Group 4
1. Configure a taskflow to process the *.json files coming from the file listener
using a data task. Add a decision task that treats more than 100 processed
source/target rows as success; otherwise, use a jump step to send a failure
notification to the email recipients. Archive the files upon successful
execution, and create the steps needed to capture failures, archive the files,
and send an email notification upon completion.
2. Create a replication task to replicate Account, Opportunity, Lead, and Contact
into the database, using filter conditions to remove any null values before
loading into the target table.
3. Create a mapping and mapping configuration task to route alternate records to
alternate targets: records 1, 3, 5, ... should go to the ODD target and records
2, 4, 6, ... to the EVEN target.
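The odd/even routing above can be sketched as follows. This is an illustrative Python sketch; in the mapping the row position typically comes from a Sequence Generator, with a Router splitting on an expression like MOD(NEXTVAL, 2).

```python
def route_alternate(rows):
    """Send rows at odd 1-based positions (1, 3, 5, ...) to the ODD
    target and even positions (2, 4, 6, ...) to the EVEN target."""
    odd = [r for i, r in enumerate(rows, start=1) if i % 2 == 1]
    even = [r for i, r in enumerate(rows, start=1) if i % 2 == 0]
    return odd, even
```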
4. Create a mapping that uses parameters for both the connections and the data
objects of the source and target, and load the data into the target table.
Create the appropriate mapping configuration task.
5. Design a mapping to load only the last two records into the target table.
6. Capture the REST API login and activity log entries, invoke a mapping
configuration task from Postman, compare the results with the cloud UI, and
capture screenshots.
Group 5
Group 6
1. Configure a data transfer task that reads the source data, matches columns
from a lookup source, and writes the result to the target. Create, deploy, and
run the Data Transfer task.
2. Configure a taskflow to process the *.csv files coming from the file listener
using a data task. Add a decision task that treats more than 100 processed
source/target rows as success; otherwise, use a jump step to send a failure
notification to the email recipients. Archive the files upon successful
execution, and create the steps needed to capture failures, archive the
files, and send an email notification upon completion.
3. Implement Hierarchy Builder and Hierarchy Parser mappings to convert
hierarchical input to relational output and, conversely, relational input to
hierarchical output.
4. Create a replication task to include all the objects from the EMP schema in a
new schema.
5. Design a mapping to load only the last two records into the target table.
6. Capture the REST API login and activity log entries, invoke a mapping
configuration task from Postman, compare the results with the cloud UI, and
capture screenshots.
Group 7
1. Configure a data transfer task with a second source as a lookup, and use the
filter/sort options to sort the data and load it to the target table.
2. Create two different mappings: one for a one-time load of customer data, and
one that captures the incremental load by detecting row uniqueness with the
MD5 function and using the insert/update/upsert options on the target.
3. Create a mapping to display the average salary of all employees against each
employee record.
4. Configure a taskflow to process the *.txt files coming from the file listener
and archive them upon successful execution. Create the steps needed to
archive the files and send an email notification upon completion.
5. Create a mapping using the sample JSON schema to convert the hierarchical
input to relational output, writing two target files: one in .csv format and
one in .xml format.
6. Capture the REST API login and activity log entries, invoke a mapping
configuration task from Postman, compare the results with the cloud UI, and
capture screenshots.