Source is a flat file and we want to load unique and duplicate records separately into two separate targets. How can this be done?

Answers were Sorted based on User's Feedback




Answer / nitin

Create the mapping below to load unique records and duplicate records into two separate targets:

Source -> SQ -> Sorter -> Aggregator -> Router -> Tgt_Unique
                                               -> Tgt_Duplicate

In the Aggregator, group by all ports and define an output port OUTPUT_COUNT = COUNT(*).
In the Router, define two groups: OUTPUT_COUNT > 1 and OUTPUT_COUNT = 1. Connect the OUTPUT_COUNT > 1 group to Tgt_Duplicate and the OUTPUT_COUNT = 1 group to Tgt_Unique.
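
As a minimal sketch of the same count-and-route logic outside Informatica (the file name source.txt and the comma delimiter are assumptions, not part of the original answer), in Python:

from collections import Counter
import csv

# Count occurrences of each full record -- the equivalent of the Aggregator
# grouping on all ports with an OUTPUT_COUNT port.
with open("source.txt", newline="") as f:        # hypothetical flat file
    rows = [tuple(r) for r in csv.reader(f)]
counts = Counter(rows)

# Router logic: OUTPUT_COUNT = 1 -> Tgt_Unique, OUTPUT_COUNT > 1 -> Tgt_Duplicate.
tgt_unique = [row for row, n in counts.items() if n == 1]
tgt_duplicate = [row for row, n in counts.items() if n > 1]

print("Tgt_Unique:", tgt_unique)
print("Tgt_Duplicate:", tgt_duplicate)

Note that, as with the Aggregator output, each group of duplicates reaches Tgt_Duplicate only once.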

Is This Answer Correct ?    1 Yes 0 No


Answer / ankit kansal

Hi,
What I have understood from your problem is this: if your source contains 1, 2, 1, 2, 3, then only 3 is taken as unique, and 1 and 2 (every occurrence of them) are considered duplicate values.

SRC -> SQ -> SRT -> EXP (to set flags for duplicates) -> RTR -> JNR -> EXP -> RTR -> 2 TGTS

http://deepinopensource.blogspot.in/
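
Under that interpretation, the split can be sketched in Python as below (a rough sketch only; the flag, joiner and second router steps are collapsed into a single counting pass, and the sample values come from the example above):

from collections import Counter

source = [1, 2, 1, 2, 3]            # sample values from the example above
counts = Counter(source)

# A value is unique only if it appears exactly once; every occurrence of a
# repeated value is routed to the duplicate target.
tgt_unique = [v for v in source if counts[v] == 1]      # -> [3]
tgt_duplicate = [v for v in source if counts[v] > 1]    # -> [1, 2, 1, 2]

print(tgt_unique, tgt_duplicate)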

Is This Answer Correct ?    1 Yes 1 No


Answer / mohank106

Refer to the link below; the answer is explained very clearly there:

http://www.bullraider.com/database/informatica/scenario/11-informatica-scenario3

Is This Answer Correct ?    0 Yes 0 No


Answer / rani

Take a Source Qualifier, then place a Sorter transformation, select the Distinct option in the Sorter, and load the result into Unique_target.

Take a Lookup transformation, look up on the target and compare it with the source; when a record occurs more than once, delete that record from the target using an Update Strategy with DD_DELETE (2) and load it into Duplicate_target. In another pipeline, use the source with an unconnected Lookup and write a lookup override like count(*) ... having count > 1, then load those records into Duplicate_target.
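
A rough Python sketch of the distinct-plus-count idea described above (the sample rows and target names are placeholders, and the Update Strategy/DD_DELETE step is omitted):

from collections import Counter

source = ["A", "B", "A", "C", "B"]      # placeholder rows for illustration

# Sorter with the Distinct option: one copy of every row goes to Unique_target.
unique_target = list(dict.fromkeys(source))               # -> ["A", "B", "C"]

# Lookup-override style count(*) ... having count > 1: rows that occur more
# than once go to Duplicate_target.
counts = Counter(source)
duplicate_target = [r for r in counts if counts[r] > 1]   # -> ["A", "B"]

print(unique_target, duplicate_target)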

Is This Answer Correct ?    0 Yes 2 No


More Informatica Interview Questions

Suppose we have records 1 to 10. In a Router transformation we have given two conditions, A >= 5 and A <= 5. What will be the output?

2 Answers   emc2,


I have 10 columns in a flat file and 10 rows corresponding to those columns. I want columns 5 and 6 for the last five records, in Unix as well as in Informatica.

0 Answers   CTS,


I have 1000 records in my source table and the same in the target, but a new column "batchno" was added in the target. This column gets 10 for the first 100 records, 20 for the next 100 records, 30 for the next 100 records, and so on. How can this be achieved?

6 Answers   Thomson Reuters,


How/where can I install Informatica software with Oracle or Teradata as the database?

0 Answers  


Explain the use of the aggregator cache file.

0 Answers  


What are the internal processes of the Integration Service in Informatica? How is data extracted and loaded into the target?

0 Answers   TCS,


What is the difference between Union, Joiner, and Lookup?

4 Answers  


What is the difference between .NET and Informatica?

1 Answers  


How can one know that a table has indexes and is partitioned? How will data be pulled from partitions in Oracle for Informatica?

1 Answers  


Does an informatica transformation support only aggregate expressions?

0 Answers  


If a flat file contains n records and we have to load records 51 to 100 into the target, how can we use expressions in Informatica to do this?

2 Answers  


Source-1 (No, name): 1 satish, 2 karthik, 3 swathi, 4 keerthi
Source-2 (No, name): 1 satish, 2 karthik, 5 santhose, 6 vasu
Target: 3 swathi, 4 keerthi, 5 santhose, 6 vasu
I want only the non-matching records in the target. How do I achieve that?

5 Answers   TCS,

