The source is a flat file. How do you load unique records and duplicate records separately into two different targets?
Answers were Sorted based on User's Feedback
Answer / nitin
Create the mapping as below to load unique records and duplicate records into separate targets:

Source -> SQ -> Sorter -> Aggregator -> Router -> Tgt_Unique
                                               -> Tgt_Duplicate

In the Aggregator, group by all ports and define an output port OUTPUT_COUNT = COUNT(<any port>).
In the Router, define two groups: OUTPUT_COUNT > 1 and OUTPUT_COUNT = 1. Connect the output of the OUTPUT_COUNT > 1 group to Tgt_Duplicate and the output of the OUTPUT_COUNT = 1 group to Tgt_Unique.
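For clarity, here is a minimal SQL sketch of the same split logic, assuming a hypothetical source table SRC_EMP with columns EMPNO, ENAME and SAL (in PowerCenter the split is done with the Aggregator and Router above, not SQL):

-- Combinations that occur exactly once go to the unique target
INSERT INTO TGT_UNIQUE (EMPNO, ENAME, SAL)
SELECT EMPNO, ENAME, SAL
FROM SRC_EMP
GROUP BY EMPNO, ENAME, SAL
HAVING COUNT(*) = 1;

-- Combinations that occur more than once go to the duplicate target
-- (like the Aggregator, this emits one representative row per duplicated group)
INSERT INTO TGT_DUPLICATE (EMPNO, ENAME, SAL)
SELECT EMPNO, ENAME, SAL
FROM SRC_EMP
GROUP BY EMPNO, ENAME, SAL
HAVING COUNT(*) > 1;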
| Is This Answer Correct ? | 1 Yes | 0 No |
Answer / ankit kansal
Hi,
What I understand from your problem is this: if your source contains 1, 2, 1, 2, 3, then only 3 is treated as unique, and every occurrence of 1 and 2 is treated as a duplicate.
SRC -> SQ -> Sorter -> Expression (to set duplicate flags) -> Router -> Joiner -> Expression -> Router -> 2 targets
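Under that interpretation, a rough SQL sketch of the split (hypothetical table T with a single column VAL) would be:

-- Values that occur exactly once go to the unique target
INSERT INTO TGT_UNIQUE (VAL)
SELECT VAL
FROM T
GROUP BY VAL
HAVING COUNT(*) = 1;

-- Every occurrence of a value that occurs more than once goes to the duplicate target
INSERT INTO TGT_DUPLICATE (VAL)
SELECT VAL
FROM T
WHERE VAL IN (SELECT VAL FROM T GROUP BY VAL HAVING COUNT(*) > 1);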
http://deepinopensource.blogspot.in/
| Is This Answer Correct ? | 1 Yes | 1 No |
Answer / mohank106
Refer to the link below; the answer is explained clearly there.
http://www.bullraider.com/database/informatica/scenario/11-informatica-scenario3
| Is This Answer Correct ? | 0 Yes | 0 No |
Answer / rani
Take a Source Qualifier, then place a Sorter transformation, select the Distinct option in the Sorter, and load the result into Unique_target.
Take a Lookup transformation, look up on the target and compare it with the source; when a record occurs more than once, delete it from the target using an Update Strategy with DD_DELETE (2) and load it into Duplicate_target. In another pipeline, use the same source with an unconnected Lookup whose lookup override uses GROUP BY ... HAVING COUNT(*) > 1, and load those records into Duplicate_target (a sketch of such an override follows).
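A hedged sketch of what such a lookup SQL override might look like, assuming a source table SRC_EMP keyed on EMPNO (both names are illustrative):

-- Return only the keys that occur more than once, with their counts
SELECT EMPNO, COUNT(*) AS DUP_COUNT
FROM SRC_EMP
GROUP BY EMPNO
HAVING COUNT(*) > 1;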
| Is This Answer Correct ? | 0 Yes | 2 No |
My Source Qualifier has EMPNO and SAL. My mapping is SQ (EMPNO) -> AGGR -> EXP -> TARGET, with SAL connected directly to the target. Is this mapping valid, or are there any issues if we design it like this?
Many times I have seen a "$PM parser error". What does PM stand for?
What is the data cache size?
Hi, in a Router transformation I created two groups: the first is Passthrough => TRUE, and the second is CorrectIds => Invest > 50000. Here I have one doubt: can't I treat the default group as the pass-through (first) group? Is there any difference between the default group and the pass-through group in this scenario? Let me know if you want more information about this scenario. Thanks in advance.
How many ways are there to remove duplicate records in Informatica?
Describe data concatenation.
What are cumulative sum and moving sum?
In which scenario did you use pushdown optimization?
Why is the Union transformation an active transformation?
What is meant by lookup caches?
Tell me about the MD5 function in Informatica.
Explain the tuning parameters.