How do we eliminate duplicate records in a flat file without using Sorter and Aggregator?
Answers were Sorted based on User's Feedback
Answer / kiran
We can use a dynamic lookup cache in a Lookup transformation to eliminate duplicates.
Is This Answer Correct ? | 11 Yes | 0 No |
Answer / joe
Option 1: Use a Unix command on the flat file.
Option 2: Use a checksum function in an Expression transformation to generate a unique hexadecimal code for each record, and compare it with the adjacent record's code.
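For the Unix option, a minimal sketch (the file names are placeholders, not from the answer):
awk '!seen[$0]++' <file_name> > <file_name>.new   # drops repeated lines without sorting
sort <file_name> | uniq > <file_name>.new         # alternative if record order does not matter
For the checksum option, a hedged sketch of the Expression ports, assuming duplicates arrive next to each other and that the port and column names are illustrative (MD5 is the checksum function in the expression language):
v_CHECKSUM      = MD5(COL1 || '|' || COL2 || '|' || COL3)
v_IS_DUP        = IIF(v_CHECKSUM = v_PREV_CHECKSUM, 1, 0)
v_PREV_CHECKSUM = v_CHECKSUM
o_IS_DUP        = v_IS_DUP
A downstream Filter with the condition o_IS_DUP = 0 then passes only the first record of each duplicate group.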
Is This Answer Correct ? | 5 Yes | 2 No |
Answer / ankur saini
Solution: Sequence Generator --> Rank --> Filter
Add a Sequence Generator to number every row.
For example, the input is:
1 a
1 b
2 a
2 b
After the Sequence Generator:
1 a 1
1 b 2
2 a 3
2 b 4
Then rank it: group by the port(s) that identify a duplicate (the first column in this example) and rank on the Sequence Generator key.
input seq rank
1 a 1 1
1 b 2 2
2 a 3 1
2 b 4 2
Add a Filter with the condition rank = 1, so only the first row of each group passes.
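A hedged sketch of the settings this implies (RANKINDEX is the Rank transformation's built-in output port; which ports define a duplicate is an assumption based on the example above):
Rank t/f   : Group By = the column(s) that identify a duplicate (column 1 above)
             Rank port (R) = NEXTVAL from the Sequence Generator, Top/Bottom = Bottom
Filter t/f : RANKINDEX = 1   (rank 1 = lowest sequence number = first occurrence in the group)
If Number of Ranks is set to 1, the Rank itself already keeps only one row per group and the Filter is just a safeguard.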
enjoy!!!!!
Is This Answer Correct ? | 2 Yes | 0 No |
Answer / harish konda
Give a SQL query in the Source Qualifier t/f to sort the data on the key columns (this needs a relational source; a flat file would have to be sorted before the session, e.g. by a pre-session command).
Then connect it to an Expression t/f and add one more port (say, FLAG) to generate numbers: when the previous row and the current row have the same key values, increment the number; otherwise set it to 1.
Next connect it to a Filter t/f and give the condition FLAG = 1.
Then route the data to the target.
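A minimal sketch of the flag ports described above, assuming the data arrives sorted on the key and that the port and column names are illustrative:
v_FLAG     = IIF(KEY_COL = v_PREV_KEY, v_FLAG + 1, 1)
v_PREV_KEY = KEY_COL
o_FLAG     = v_FLAG
v_FLAG resets to 1 whenever the key changes and counts up while it repeats, so the Filter condition o_FLAG = 1 keeps exactly one row per key.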
Is This Answer Correct ? | 2 Yes | 1 No |
Answer / isha
Select all source rows.
The dynamic Lookup transformation builds its cache from the target table.
When the lookup evaluates a row from the source that does not exist in the lookup cache, it inserts the row into the cache and assigns the NewLookupRow output port the value of 1. When the lookup evaluates a row from the source that exists in the lookup cache, it does not insert the row into cache and assigns the NewLookupRow output port the value of 0.
The filter in this mapping checks if the row is a duplicate or not by evaluating the NewLookupRow output port from the Lookup. If the value of the port is 0, the row is filtered out, as it is a duplicate row. If the value of the port is not equal to 0, then the row is passed out to the target table.
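A hedged sketch of that Filter condition (NewLookupRow is the standard output port of a dynamic lookup; whether updated rows should also pass depends on the requirement):
NewLookupRow != 0   (pass inserts, value 1, and updates, value 2; drop unchanged duplicates, value 0)
NewLookupRow = 1    (stricter: pass only rows that were newly inserted into the cache)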
Is This Answer Correct ? | 1 Yes | 0 No |
Answer / priyank
There are several ways of achieving this. We can do it through an Expression transformation, or through a lookup on the target.
Expression transformation:
Create the ports in this order (variable ports are evaluated top to bottom, so the check must run before the previous key is overwritten):
Var_CHK_DUPLICATE --> IIF(Key=Var_PREV_KEY,'DUP','NODUP')
Var_PREV_KEY=Key
OUT_DUPLICATE --> Var_CHK_DUPLICATE
A downstream Filter on OUT_DUPLICATE = 'NODUP' then passes only the first occurrence (the data has to arrive sorted on the key for every duplicate to be adjacent).
Note: I have taken a scenario where the target table contains only 1 Key. In case of multiple keys, we will have to create one more Var_PREV port per key and combine the checks in the Var_CHK_DUPLICATE port with an 'AND' operator. E.g. for 2 keys,
Var_CHK_DUPLICATE --> IIF(Key1=Var_PREV_KEY1 AND Key2=Var_PREV_KEY2,'DUP','NODUP')
Var_PREV_KEY1=Key1
Var_PREV_KEY2=Key2
OUT_DUPLICATE --> Var_CHK_DUPLICATE
If Informatica is installed on Unix, then in the pre-session command you can give a Unix command to remove the duplicates from the file, such as
sort <file_name> | uniq > <file_name>.new
Hope it helps.
Is This Answer Correct ? | 4 Yes | 12 No |