I want to send all duplicate records to one target and all unique records to another target. How do we perform this? Please explain.
Example:
Input data
eid
251
251
456
456
951
985
Output / Target 1
251
251
456
456
Output / Target 2
951
985
How do we achieve this?
Answers were Sorted based on User's Feedback
First use a Sequential File stage, then a Copy stage. From the Copy stage, one link goes to an Aggregator stage, where we calculate the count of records per key. After the Aggregator, use a Filter stage to find the unique records; those records are connected to a Lookup stage as the reference link. The main link for the Lookup comes from the Copy stage. Records that match the reference are unique, the rest are duplicates.
              Agg ---> Filter
               ^          |
               |          v (reference)
Seq --> Copy --+-----> Lookup ---> Unique
                          |
                          +------> Duplicate
In the Aggregator, use Count Rows.
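To make the logic concrete, here is a minimal Python sketch of the same idea (illustrative only, not DataStage syntax): count rows per eid, keep the keys whose count is 1 as the lookup reference, and route each input row to the unique or duplicate target depending on whether it matches the reference. The data and the column name eid come from the example above.

from collections import Counter

rows = [251, 251, 456, 456, 951, 985]   # input eid values from the example

# Aggregator: count rows per eid
counts = Counter(rows)

# Filter: keep only the keys that occur exactly once (lookup reference link)
unique_keys = {eid for eid, cnt in counts.items() if cnt == 1}

# Lookup: rows matching the reference are unique, the rest are duplicates
target_unique    = [eid for eid in rows if eid in unique_keys]      # [951, 985]
target_duplicate = [eid for eid in rows if eid not in unique_keys]  # [251, 251, 456, 456]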
Is This Answer Correct ? | 14 Yes | 3 No |
According to his logic, the output from the Aggregator stage is:
251,2
456,2
951,1
985,1
The main data is:
251
251
456
456
951
985
If you join these two links then the output will be:
251,2
251,2
456,2
456,2
951,1
985,1
Then, if you specify count = 1, you get the unique records, i.e. 951 and 985. On the other link, count <> 1 gives you:
251
251
456
456
This is our desired output.
The logic explained by Siva is correct.
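As a rough Python sketch of this join-and-filter variant (illustrative only): the group count is attached to every input row by joining on eid, and a filter on the count then splits the stream.

from collections import Counter

rows = [251, 251, 456, 456, 951, 985]

# Aggregator link: count per eid
counts = Counter(rows)

# Join stage: attach the count to every row of the main link
joined = [(eid, counts[eid]) for eid in rows]
# [(251, 2), (251, 2), (456, 2), (456, 2), (951, 1), (985, 1)]

# Filter stage: count = 1 -> unique target, count <> 1 -> duplicate target
target_unique    = [eid for eid, cnt in joined if cnt == 1]   # [951, 985]
target_duplicate = [eid for eid, cnt in joined if cnt != 1]   # [251, 251, 456, 456]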
Is This Answer Correct ? | 9 Yes | 0 No |
Answer / siva
src --> copy --> agg
          |        |
          v        v
         join --> filter --> uniq
                     |
                     +-----> dup
This is the answer to the scenario. Please try it and you will get the correct result.
Thanks,
siva
Is This Answer Correct ? | 3 Yes | 2 No |
Answer / binayak mohapatra
seq --> agg --> copy --> filt1 --> tgt1
                  |
                  +----> filt2 --> tgt2
In the Aggregator: group = empid, count = row count, output = empid. Also create another column, e.g. count_empid, and map it.
In filt1, use the condition count_empid > 1 and map those rows to tgt1.
Then take the other link from the Copy stage into filt2, put the condition count_empid = 1, and map those rows to tgt2.
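A rough Python sketch of this flow as described (illustrative only): the Aggregator emits one row per empid with its count, the Copy stage duplicates that aggregated stream, and the two filters split it on count_empid. Note that the aggregated stream carries each key only once.

from collections import Counter

rows = [251, 251, 456, 456, 951, 985]

# Aggregator: one output row per empid with its row count
aggregated = list(Counter(rows).items())   # [(251, 2), (456, 2), (951, 1), (985, 1)]

# Copy feeds the same aggregated stream to both filters
tgt1 = [empid for empid, count_empid in aggregated if count_empid > 1]    # [251, 456]
tgt2 = [empid for empid, count_empid in aggregated if count_empid == 1]   # [951, 985]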
Is This Answer Correct ? | 1 Yes | 0 No |
I appreciate your answer, but it is wrong, because the Aggregator stage gives only unique records together with a record count, so how can you print all the duplicate records?
In the above scenario, the input to the Aggregator stage is 251, 251, 456, 456, 951, 985.
I used Count Rows, so the output is:
251,2
456,2
951,1
985,1
If you use the Filter stage on this, the output is:
O/P1:
251
456
O/P2:
951
985
Unfortunately this is not the answer we require. This is not as simple as you think; I implemented it before posting!
Is This Answer Correct ? | 4 Yes | 4 No |
Answer / pnvnreddy
Hi Siva, you are right.
seq --> copy --> Agg ---+
          |             v
          +---------> join --> filter --> tar1
                                  |
                                  +-----> tar2
In the Aggregator: aggregation type = count.
In the Filter:
count = 1 -----> tar1
count > 1 -----> tar2
Is This Answer Correct ? | 0 Yes | 0 No |
Hi guys,
Here is the answer.
Seq --> Copy --> Agg ---+
          |             v
          +----------> Join --> Filter --> DS1
                                   |
                                   +-----> DS2
Explanation:
From the Sequential File stage go to a Copy stage. From the Copy stage send one link to the Aggregator for counting rows, and the other to the Join stage for clubbing. At the Join, club the two links on the input key column. From the Join go to a Filter, and in the Filter set the where-clause conditions count = 1 on output link 0 and count > 1 on output link 1.
Finally, use Datasets DS1 and DS2 as targets.
OK, that's it.
Is This Answer Correct ? | 0 Yes | 0 No |
                                          +--> Unique
Seq File --> Aggregator --> Transformer --+
                                          +--> Duplicate
1. Read the data in a Sequential File stage.
2. In the Aggregator stage, group by EID and count the number of rows into a Count column.
3. In the Transformer stage, create the loop condition @ITERATION <= DSLink6.Count with the derivation EID = EMPNO, so each key is written out as many times as it occurred in the input.
4. On the unique output tab use the constraint Count <= 1, and on the duplicate output tab use the constraint Count > 1.
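A small Python sketch of this Transformer-loop idea (illustrative only, not Transformer syntax): the aggregated rows are expanded back to their original multiplicity by looping Count times, and the constraint on Count decides which target each row goes to.

from collections import Counter

rows = [251, 251, 456, 456, 951, 985]

# Aggregator: group by EID and count rows
aggregated = Counter(rows)              # {251: 2, 456: 2, 951: 1, 985: 1}

unique, duplicate = [], []
for eid, count in aggregated.items():
    # Transformer loop: emit the key 'count' times (@ITERATION <= Count)
    for _ in range(count):
        # Output constraints: Count <= 1 -> unique link, Count > 1 -> duplicate link
        (unique if count <= 1 else duplicate).append(eid)

# unique    == [951, 985]
# duplicate == [251, 251, 456, 456]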
Is This Answer Correct ? | 0 Yes | 0 No |
Unfortunately, I did not get the logic behind your answer.
According to your logic, the output from the Aggregator stage is:
251,2
456,2
951,1
985,1
The main data is:
251
251
456
456
951
985
If you join these two links then:
251
251
456
456
951
985
251,2
456,2
951,1
985,1
Then you are specifying count = 1, so you get the unique records. On the other link, count <> 1 means you get:
251
456
Unfortunately, this is not our desired output. Our desired output is:
251
251
456
456
So, your logic is not acceptable. Try to think about the desired output!
In order to get the desired output, I am using the Lookup stage. In the Lookup, I look up the main data against the output of the Filter stage, which is 951 and 985, and I set the lookup failure condition to Reject, so the rejected rows become the duplicate output. That way you get the desired output!
If you can prove the above logic wrong, send me a mail!
Is This Answer Correct ? | 2 Yes | 3 No |