How to delete duplicate records by using a Filter transformation?
Answer posted by devendar pd:
If we use group-by on the key columns in an Aggregator transformation, duplicates are removed automatically; there is no need to create a separate output port. I don't think duplicates can be removed using only a Filter transformation.
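To make the idea concrete, here is a minimal Python sketch (not Informatica itself) of what an Aggregator with group-by ports effectively does: rows are grouped on the key columns, and by default the last row of each group is returned, so rows that duplicate the key collapse to one. The function name, column names, and sample rows are illustrative assumptions, not part of any Informatica API.

```python
def dedupe_by_key(rows, key_cols):
    """Keep one row per unique combination of key-column values.

    Later rows overwrite earlier ones, mimicking the Aggregator's
    default "last row per group" behavior.
    """
    seen = {}
    for row in rows:
        key = tuple(row[col] for col in key_cols)
        seen[key] = row  # last row with this key wins
    return list(seen.values())

rows = [
    {"item_no": 1, "item_name": "pen",  "price": 10},
    {"item_no": 1, "item_name": "pen",  "price": 12},  # duplicate key
    {"item_no": 2, "item_name": "book", "price": 50},
]
result = dedupe_by_key(rows, ["item_no", "item_name"])
print(result)  # two rows remain; the duplicate keeps price 12
```

A Filter transformation, by contrast, evaluates each row in isolation against a condition, so it has no way to know whether an earlier row carried the same key, which is why a Filter alone cannot deduplicate.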
What are the main issues while working with flat files as source and as targets ?
Suppose we configure sorter transformations in the master and detail pipelines with the following sorted ports in order: item_no, item_name, price. When we configure the join condition, what are the guidelines we need to follow to maintain the sort order?
Mention some types of transformations.
Which transformation is needed while using the Cobol sources as source definitions?
Explain pushdown optimization and the $pushdownconfig parameter in Informatica.
Describe data concatenation?
If Informatica has its own scheduler, why use a third-party scheduler?
Name the different lookup cache(s)?
What is the size of your source (e.g., a file or table)?
How does a sorter cache work?
Tell me 5 real-time session failures and how you solved them in your project.
What is difference between a gateway node and worker node?
What are the types of persistent cache in a Lookup transformation?
What is a lookup cache?
What are active and passive transformations?