The source is a flat file, and you want to load unique records and duplicate records separately into two targets, right?
Answer posted by nitin
Create the mapping below to load unique records and duplicate records into separate targets:

Source -> SQ -> Sorter -> Aggregator -> Router -> Tgt_Unique
                                               -> Tgt_Duplicate

In the Aggregator, group by all ports and define an output port OUTPUT_COUNT = COUNT(*).

In the Router, define two groups: OUTPUT_COUNT > 1 and OUTPUT_COUNT = 1. Connect the output of the OUTPUT_COUNT > 1 group to Tgt_Duplicate and the output of the OUTPUT_COUNT = 1 group to Tgt_Unique.
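The routing logic above can be sketched outside Informatica as plain Python (the function name `route_records` is ours, for illustration only): group identical rows, count occurrences, and send groups with a count greater than 1 to the duplicates target and groups with a count of 1 to the unique target.

```python
from collections import Counter

def route_records(rows):
    # Counter groups identical rows, mirroring "group by all ports"
    # with OUTPUT_COUNT = COUNT(*) in the Aggregator.
    counts = Counter(rows)
    unique = [row for row, n in counts.items() if n == 1]       # OUTPUT_COUNT = 1
    duplicates = [row for row, n in counts.items() if n > 1]    # OUTPUT_COUNT > 1
    return unique, duplicates

rows = [("1", "X"), ("1", "X"), ("2", "Y"), ("3", "Z")]
unique, dups = route_records(rows)
# unique -> [("2", "Y"), ("3", "Z")]; dups -> [("1", "X")]
```

Note one consequence of this design, visible in the sketch: because the Aggregator groups by all ports, only one row per duplicate group reaches Tgt_Duplicate, not every copy of the duplicated record.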
Source and target are flat files. The source data is:

ID,NAME
1,X
1,X
2,Y
2,Y

On the target flat file, the data should be loaded as:

ID,NAME,REPEAT
1,X,2
1,X,2
2,Y,2
2,Y,2

How can this be achieved? Can I get a mapping structure?
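For the REPEAT-count scenario above (every row is kept and annotated with how many times it occurs), a minimal Python sketch of the intended output follows; the helper name `add_repeat_count` is ours, not Informatica's. In a mapping, the equivalent is typically an Aggregator computing COUNT(*) per key, joined back to the original source so no rows are lost.

```python
from collections import Counter

def add_repeat_count(rows):
    # Count occurrences of each (ID, NAME) pair, then append the count
    # to every original row -- duplicates are preserved, not collapsed.
    counts = Counter(rows)
    return [(*row, counts[row]) for row in rows]

rows = [("1", "X"), ("1", "X"), ("2", "Y"), ("2", "Y")]
print(add_repeat_count(rows))
# -> [("1", "X", 2), ("1", "X", 2), ("2", "Y", 2), ("2", "Y", 2)]
```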
What is a repository manager?
In Informatica Workflow Manager, how many repositories can be created?
What is meant by lookup transformation?
What are the limitations on where a Joiner transformation can be used in a mapping pipeline?
What are the settings that you use to configure the joiner transformation?
Differentiate between reusable transformation and mapplet.
What is a difference between complete, stop and abort?
How to generate sequence numbers without using the sequence generator transformation?
How do you manage the Parameter files while migrating your data from one environment to another environment?
Given several flat files, some containing duplicate records, how do you eliminate the duplicates while loading into targets?
Which do you think is better, a Joiner or a Lookup?
What is a predefined event?
What is a surrogate key?
What are the performance considerations when working with aggregator transformation?