How can we use the Sorter transformation in joins?
Answers were Sorted based on User's Feedback
Answer / prasuna
You can improve session performance by configuring the Joiner transformation to use sorted input. To do this, establish and maintain a sort order in the mapping, typically by placing a Sorter transformation on each pipeline feeding the Joiner and sorting on the ports used in the join condition, so the Integration Service can use the sorted data when it processes the Joiner transformation. A conceptual sketch of why this is faster follows this answer.
| Is This Answer Correct ? | 13 Yes | 3 No |
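Informatica itself is set up in the Designer rather than in code, but the reason sorted input helps is that the Joiner can then stream both pipelines and merge them on the join key instead of caching the whole master source. A minimal Python sketch of that sort-merge idea (the column names and sample rows here are purely illustrative, not from Informatica):

```python
def merge_join(master, detail, key):
    """Join two lists of dicts that are already sorted on `key`.

    Because both inputs arrive in key order, each row is read once and
    no full cache of the master source is needed -- the same reason the
    Joiner transformation runs faster with Sorted Input enabled.
    """
    joined = []
    i = j = 0
    while i < len(master) and j < len(detail):
        if master[i][key] < detail[j][key]:
            i += 1
        elif master[i][key] > detail[j][key]:
            j += 1
        else:
            # Emit every detail row that shares this master row's key.
            k = j
            while k < len(detail) and detail[k][key] == master[i][key]:
                joined.append({**master[i], **detail[k]})
                k += 1
            i += 1
    return joined

# Illustrative data, already sorted on the join key DEPT_ID.
master = [{"DEPT_ID": 10, "DEPT": "HR"}, {"DEPT_ID": 20, "DEPT": "IT"}]
detail = [{"DEPT_ID": 10, "EMP": "Asha"}, {"DEPT_ID": 20, "EMP": "Ravi"}]
print(merge_join(master, detail, "DEPT_ID"))
```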
Answer / x
Hi,
In the Joiner transformation properties there is an option called Sorted Input. Enabling it (when both inputs are already sorted on the join keys) improves session performance.
| Is This Answer Correct ? | 9 Yes | 3 No |
Answer / m
To improve performance, enable the "Sorted Input" option in the Joiner transformation properties.
| Is This Answer Correct ? | 2 Yes | 0 No |
Answer / sreekanth
The Sorter transformation can also be used to eliminate duplicates, by enabling its Distinct option (see the sketch after this answer).
| Is This Answer Correct ? | 2 Yes | 0 No |
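The Distinct option works because once rows are sorted, duplicate key values sit next to each other and can be dropped in a single pass. A small Python illustration of that idea (the column name CUST_ID and the sample rows are made up for this sketch):

```python
def sort_distinct(rows, keys):
    """Mimic a Sorter with the Distinct option: sort on the key ports,
    then drop any row that repeats the previous row's key values."""
    rows = sorted(rows, key=lambda r: tuple(r[k] for k in keys))
    out, prev = [], object()
    for r in rows:
        cur = tuple(r[k] for k in keys)
        if cur != prev:
            out.append(r)
            prev = cur
    return out

# Illustrative data with a duplicate CUST_ID.
rows = [{"CUST_ID": 2}, {"CUST_ID": 1}, {"CUST_ID": 2}]
print(sort_distinct(rows, ["CUST_ID"]))  # keeps CUST_ID 1 and 2 once each
```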