One file contains:

col1
100
200
300
400
500
100
300
600
300

From this I want to retrieve only the duplicates, like this:

tr1
100
100
300
300
300

How is this possible in DataStage? Can anyone please explain clearly?


Answer / vinod upputuri

In order to collect the duplicate values:

First, calculate the count with an Aggregator stage:
group by the column,
aggregation type: count rows,
and define a count output column.

Next, use a Filter stage to separate the keys with multiple occurrences.

Finally, use a Join stage or a Lookup stage to map the two tables back together, with join type INNER.

Then you get the desired output.
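Outside DataStage, the same aggregate / filter / inner-join flow can be sketched in Python (a minimal illustration of the logic only; the data and the column name col1 come from the question, nothing here is DataStage syntax):

from collections import Counter

# Source data from the question (column col1)
rows = [100, 200, 300, 400, 500, 100, 300, 600, 300]

# Aggregator stage: group by the column and count rows per value
counts = Counter(rows)

# Filter stage: keep only the values that occur more than once
dup_keys = {key for key, cnt in counts.items() if cnt > 1}

# Join/Lookup stage (INNER): map the duplicate keys back to the source,
# so every occurrence of a duplicated value is kept
duplicates = [row for row in rows if row in dup_keys]

print(sorted(duplicates))  # [100, 100, 300, 300, 300]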


Answer / chandu

Use an Aggregator stage to calculate the count of the source column, then use a Filter or Transformer stage with the condition count > 1. That gives only the duplicate records.


Thanks
Chandu-9538627859


Answer / prasad

       |--->Agg--->Filter1--->|
       |                      |
file-->cp-------------------->Join---->Filter2---->Target1
                                             |
                                             |-->Target2

Agg: use an Aggregator stage, select Aggregation type = count rows, and give the count output column = Count (user defined). It produces:

col1  Count
100   2
200   1
300   3
400   1
500   1
600   1

Filter1: give the condition Count = 1 (you get the unique keys out of Filter1).

Join stage: take a left outer join, with the source data as the left input.

Filter2, first output link:
Where column_name = '' (null) -> you get the duplicate records.

Target1 o/p:
100
100
300
300
300

Filter2, second output link:
Where column_name <> '' -> you get the unique records.

Target2 o/p:

200
400
500
600

Please correct me if I am wrong :)
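The left-outer-join-then-null-check design above can be mimicked in Python (a hedged sketch of the logic only; None stands in for the null that DataStage would produce on the unmatched right side):

from collections import Counter

rows = [100, 200, 300, 400, 500, 100, 300, 600, 300]

# Agg + Filter1: keep only the keys whose count is exactly 1 (the unique keys)
unique_keys = {k for k, c in Counter(rows).items() if c == 1}

# Join (left outer): each source row either finds a match among the unique
# keys or carries a None on the right side, like a null after the join
joined = [(r, r if r in unique_keys else None) for r in rows]

# Filter2: right side null  -> duplicate records (Target1)
#          right side value -> unique records   (Target2)
target1 = [r for r, match in joined if match is None]
target2 = [r for r, match in joined if match is not None]

print(sorted(target1))  # [100, 100, 300, 300, 300]
print(sorted(target2))  # [200, 400, 500, 600]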


Answer / sudheer

       |--->Aggregator--->|
       |                  |
file-->cp---------------->Join---->Filter---->seq. file (output)

In the Aggregator: count rows.
In the Join: left outer join, with the sequential file data as the left input.
In the Filter: where condition cnt > 1.
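This variant joins the count onto every source row first and filters afterwards. Roughly, in Python (an illustrative sketch, not DataStage syntax):

from collections import Counter

rows = [100, 200, 300, 400, 500, 100, 300, 600, 300]

# Aggregator: count rows per key
counts = Counter(rows)

# Left outer join: attach each row's count (cnt) to the row itself
joined = [(r, counts[r]) for r in rows]

# Filter: where cnt > 1 -> only the duplicate records remain
duplicates = [r for r, cnt in joined if cnt > 1]
print(duplicates)  # [100, 300, 100, 300, 300]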


Answer / reddyvaraprasad

Job Design:

       |--->Agg--->Filter1--->|
       |                      |
file-->cp-------------------->Join---->Filter2---->target

Agg: use an Aggregator stage, select Aggregation type = count rows, and give the count output column = Count (user defined).

Filter1: give the condition Count <> 1 (only the duplicated keys pass through).

Join: select a left outer join, with the source data as the left input; rows with no match from Filter1 come through with Count null/0.

Filter2: give the condition Count <> 0.

You will get the right output: all of the duplicate records.

If you want the unique records instead, give the condition Count = 0.


Answer / reddymkl.dwh

Job Design:

       |--->Agg--->Filter1--->|
       |                      |
file-->cp-------------------->Join---->Filter2---->Target1 (unique)
                                             |
                                             |-->Target2 (duplicate)

Agg: use an Aggregator stage, select Aggregation type = count rows, and give the count output column = Cnt (user defined).

Filter1: give the condition Where Cnt = 1.

You will get the unique keys: 200, 400, 500, 600.

Join (or Lookup) stage: select a left outer join.

Filter2 (two output links):

Where column_name = '' (duplicate values: 100, 100, 300, 300, 300)
Where column_name <> '' (unique values: 200, 400, 500, 600)

You will get the right output: all of the duplicate records.

Please correct me if I am wrong.


Answer / pooja

Follow these steps:

1. Sequential File stage: read the input data from the sequential file input1.txt.
2. Aggregator stage: count the number of rows (say CountRow) for each ID (group = ID).
3. Filter stage: filter the data where CountRow <> 1.
4. Join the output of step 3 back to input1.txt.

You will get the result :)
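These four steps map naturally onto pandas; a hedged sketch (the input is built inline here instead of being read from input1.txt, and the column name ID comes from step 2 above):

import pandas as pd

# Step 1: the input data (stands in for reading input1.txt)
df = pd.DataFrame({"ID": [100, 200, 300, 400, 500, 100, 300, 600, 300]})

# Step 2: count the number of rows per ID
counts = df.groupby("ID").size().reset_index(name="CountRow")

# Step 3: keep only IDs where CountRow <> 1
dups = counts[counts["CountRow"] != 1]

# Step 4: join back to the original data; only duplicated IDs survive
result = df.merge(dups[["ID"]], on="ID", how="inner")
print(sorted(result["ID"].tolist()))  # [100, 100, 300, 300, 300]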


Answer / me

seq----> Copy

From the Copy stage, send one link to an Aggregator (apply the count rows option) ---> Filter (on the count output, count > 1), and send the result as the reference link to a Lookup.
From the Copy stage, send the second link to the Lookup as the stream input.

Then apply the filter on the Lookup output.
