How do you send duplicate rows to one target and unique rows to another target? (Assume both targets are initially empty.)
Answers are sorted based on users' feedback.
Answer / kesava reddy

Use Source Qualifier transformations.

Explanation:
1. Take two Source Qualifier transformations.
2. Connect the first SQ's ports to the unique target, then write a SQL override:
SELECT DISTINCT EMPNO, ENAME
FROM EMP;
3. Take the second SQ, connect all of its ports to the duplicates target, then write this SQL override:
SELECT * FROM EMP
WHERE ROWID IN (SELECT ROWID FROM EMP
                MINUS
                SELECT MAX(ROWID) FROM EMP
                GROUP BY EMPNO, ENAME);

Is This Answer Correct? | 7 Yes | 2 No
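Note that ROWID and MINUS are Oracle-specific. On any database with window functions, a minimal sketch of the same split (assuming the EMP/EMPNO/ENAME sample names from the answer above) can use ROW_NUMBER() in the two overrides:

-- Number each copy of a row; rn = 1 is the first copy, rn > 1 are extras.
WITH numbered AS (
    SELECT EMPNO, ENAME,
           ROW_NUMBER() OVER (PARTITION BY EMPNO, ENAME ORDER BY EMPNO) AS rn
    FROM EMP
)
SELECT EMPNO, ENAME FROM numbered WHERE rn = 1;   -- unique target
-- SELECT EMPNO, ENAME FROM numbered WHERE rn > 1;  -- duplicates target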
Answer / shiva

S --> SQ --> AGG --> RTR --> TGT1
                        '--> TGT2

In the Aggregator, group by the key column and add a COUNT(*) output port. Send the rows to the Router and create two groups: COUNT = 1 for the unique target and COUNT > 1 for the duplicate target. (Note that the Aggregator returns one row per group, so the duplicate target receives one representative row per duplicate key, not every copy.)

Is This Answer Correct? | 5 Yes | 2 No
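For clarity, the Aggregator/Router logic corresponds to this SQL sketch (EMP and EMPNO are assumed sample names; the Router conditions map onto the HAVING clauses):

SELECT EMPNO, COUNT(*) AS reccount
FROM EMP
GROUP BY EMPNO
HAVING COUNT(*) = 1;      -- unique group
-- HAVING COUNT(*) > 1;   -- duplicates group (one representative row per key)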
Answer / mohan

Here is a minor modification of Answer #6, which I posted earlier:

SQ --> Sorter --> Expression --> Router --> Targets

Sorter transformation: sort by the key column (EMPNO).
Expression transformation (variable ports evaluate top to bottom, so the comparison must run before the variable is updated):
V_MATCH (variable)     = IIF(EMPNO = V_OLD_EMPNO, 1, 0)
V_OLD_EMPNO (variable) = EMPNO
O_EMPNO (output)       = V_MATCH
Router transformation:
Create two groups under the Groups tab:
Original  : O_EMPNO = 0
Duplicates: O_EMPNO = 1

Is This Answer Correct? | 3 Yes | 0 No
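For reference, this sorted previous-row comparison is equivalent to a LAG() window function in SQL (a sketch with the assumed EMP/EMPNO names):

-- 0 = first occurrence of the key (Original group), 1 = repeat (Duplicates group).
SELECT EMPNO,
       CASE WHEN EMPNO = LAG(EMPNO) OVER (ORDER BY EMPNO)
            THEN 1 ELSE 0 END AS O_EMPNO
FROM EMP;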
Answer / naresh araveti
source> dynamic lookup>router,2 conditions 1. condition
if column_lkp port is null then insert into target1(unqie)
2. condtion if COLumn_lkp port is not null then insert into
target2(duplicates)
Is This Answer Correct ? | 4 Yes | 3 No |
Answer / ram mohan reddy
we can do this process by 2 ways ....
1)by dynamic lookup option in lookup(new lookup row) we can
load duplicate rows in one target table and unique rows in
one target table
do to this we need to have router transformation (add to
group ports one is for unique(new lookup row=1) and other is
for duplicate(new lookup row =2))after the lookup trans.
2)we can perform this by aggregator transformation using
coutnt(*) >1 for duplicate rows .here also we need to use
router transformion.
Is This Answer Correct ? | 3 Yes | 2 No |
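The dynamic-lookup cache effectively asks, for each incoming row, whether an earlier copy of the same key has already been seen. A rough Oracle-style SQL sketch of that routing test (EMP/EMPNO assumed, using ROWID to stand in for arrival order):

SELECT e.EMPNO,
       CASE WHEN EXISTS (SELECT 1 FROM EMP p
                         WHERE p.EMPNO = e.EMPNO
                           AND p.ROWID < e.ROWID)
            THEN 'duplicate' ELSE 'unique' END AS route
FROM EMP e;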
Answer / venkat
S->S.Q->Aggr->Rtr->T1
->T2
Where Aggr take group by option
Rtr group1 condition reccount>1..........>T1
Rtr default group to....................>T2
T1 records are unic records
T2 records are duplicate records
Is This Answer Correct ? | 3 Yes | 2 No |
Answer / vaas
Hi All,
One tbl has duplicate values means it does not has PK.In
this case can we use dynamic lookup.
Pl let me know, on vaas31@yahoo.in
Is This Answer Correct ? | 0 Yes | 2 No |
Answer / dwhlabs
1> using dynamiclookup concept
2> using variable concept
First solution
source > sorter >dynamic lookup > filter > Target1 and
Target2
for more abt informatica mappings ... www.dwhlabs.in
Is This Answer Correct ? | 1 Yes | 8 No |