My source looks like this:
10,asd,2000
10,asd,2000
10,asd,2000
20,dsf,3000
20,dsf,3000
20,dsf,3000
My requirement is that the first occurrence of each record is inserted into the first target, and its duplicates are inserted into the second target. How can I achieve this?
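For reference, here is a minimal Python sketch of the required split, outside Informatica and not any answerer's method, using the sample rows above:

    rows = [
        ("10", "asd", "2000"), ("10", "asd", "2000"), ("10", "asd", "2000"),
        ("20", "dsf", "3000"), ("20", "dsf", "3000"), ("20", "dsf", "3000"),
    ]
    seen = set()
    target1, target2 = [], []
    for row in rows:
        if row not in seen:        # first occurrence of this record
            seen.add(row)
            target1.append(row)    # goes to the first target
        else:
            target2.append(row)    # duplicates go to the second target
    print(target1)  # [('10', 'asd', '2000'), ('20', 'dsf', '3000')]
    print(target2)  # the four remaining duplicate rows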
Answer / Chandra

Take two Aggregator transformations after the Source Qualifier and drag all the ports to both Aggregators. In the first Aggregator, enable group-by on all ports; in the second, add one output port with the expression COUNT(no). Then add a Filter transformation with the condition COUNT(no) > 1.

SQ ----> AGG (group by all ports) --> Tar1
   ----> AGG (count(no)) --> FIL (count(no) > 1) --> Tar2
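For clarity, a hedged Python stand-in for what the two Aggregator paths compute; the port name no is taken from the answer, and the rows mirror the sample source:

    from collections import Counter

    rows = [("10", "asd", "2000")] * 3 + [("20", "dsf", "3000")] * 3

    # Path 1: AGG grouping by all ports collapses exact duplicates -> Tar1
    tar1 = list(dict.fromkeys(rows))

    # Path 2: AGG computes count(no) per group; FIL keeps count(no) > 1 -> Tar2
    counts = Counter(rows)
    tar2 = [row for row, n in counts.items() if n > 1]

    print(tar1)  # one copy of each distinct row
    print(tar2)  # one row per duplicated group (the Aggregator emits one row per group)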
Answer / Santosh Kumar Sarangi

1. First sort the data with a Sorter transformation on the ID key column, then create the ports below in an Expression transformation (variable ports evaluate top to bottom, so var2 picks up the previous row's id before var1 is overwritten):

   var2 = var1
   id
   var1 = id
   rec_count = IIF(var1 = var2, rec_count + 1, 1)

2. Pass all the ports to a Router transformation with the group conditions below:

   rec_count = 1 -> first target
   default -> second target

Let me know if anything is wrong.
Thanks & Regards
Santosh Kumar Sarangi
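As a sanity check, a minimal Python sketch of the sorted-scan logic those variable ports implement (assuming rows arrive sorted on the id column, which the Sorter guarantees):

    rows = sorted([("10", "asd", "2000")] * 3 + [("20", "dsf", "3000")] * 3)

    prev_id = None
    rec_count = 0
    tar1, tar2 = [], []
    for row in rows:
        key = row[0]
        # mirrors: var2 = var1; var1 = id; rec_count = IIF(var1 = var2, rec_count + 1, 1)
        rec_count = rec_count + 1 if key == prev_id else 1
        prev_id = key
        # Router: rec_count = 1 -> first target, default -> second target
        (tar1 if rec_count == 1 else tar2).append(row)

    print(tar1)  # first occurrence of each id
    print(tar2)  # the duplicates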
Answer / Baswaraj

Take a Normalizer transformation, set the occurs value to 2 or 3, whichever you require, and drag the remaining ports through as-is.
Answer / ramakrishna
SQ---->AGGR(CREATE O/P PORT AS RCOUNT)WRITE EXPRESSION AS
COUNT(EMPNO)SELECT GROUPBY TO EMPNO
---->ROUTER(GRP1- TRUE)--- TRG1
(GRP2-RCOUNT>1)--TRG2
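A hedged Python stand-in for what this Aggregator/Router combination computes; EMPNO is taken to be the first column of the sample data:

    from collections import Counter

    rows = [("10", "asd", "2000")] * 3 + [("20", "dsf", "3000")] * 3

    # AGGR: RCOUNT = COUNT(EMPNO), group by EMPNO -> one row per EMPNO
    agg = list(Counter(r[0] for r in rows).items())

    # ROUTER: GRP1 (TRUE) passes every aggregated row -> TRG1
    trg1 = list(agg)
    # GRP2 (RCOUNT > 1) passes the groups that had duplicates -> TRG2
    trg2 = [(empno, rcount) for empno, rcount in agg if rcount > 1]

    print(trg1)  # [('10', 3), ('20', 3)]
    print(trg2)  # [('10', 3), ('20', 3)]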