Q1. Create a job to load all unique products into one table and the duplicate rows into another table.
The first table should contain the following output
A
D
The second table should contain the following output
B
B
B
C
C
Q2. Create a job to load each product once into one table and the remaining products which are duplicated into another table.
The first table should contain the following output
A
B
C
D
The second table should contain the following output
B
B
C
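Judging from the expected outputs, the source data is assumed to be the rows A B B B C C D. A minimal Python sketch of both splits (illustrative only, not a DataStage job):

```python
from collections import Counter

rows = ["A", "B", "B", "B", "C", "C", "D"]  # assumed input, inferred from the outputs

# Q1: rows whose product occurs exactly once vs. all duplicated rows
counts = Counter(rows)
q1_unique = [r for r in rows if counts[r] == 1]      # A D
q1_duplicates = [r for r in rows if counts[r] > 1]   # B B B C C

# Q2: first occurrence of each product vs. the remaining repeats
seen = set()
q2_first, q2_rest = [], []
for r in rows:
    if r in seen:
        q2_rest.append(r)    # B B C
    else:
        seen.add(r)
        q2_first.append(r)   # A B C D
```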
Answer / sudheer
Answer for Q2: use a Sort stage with the Create Key Change Column option enabled. Then use a Filter stage: route rows whose key-change column has value 1 to one file, which gives A B C D, and rows with value 0 to another file, which gives B B C.
Answer / unknown
For Q1: first use an Aggregator stage with the Count Rows property to count occurrences of each product, then a Filter stage to separate the rows with a count of 1 from those with a count greater than 1.
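This aggregate-then-filter topology can be sketched in Python (the count per key is joined back to the detail rows before filtering, as the job would do):

```python
from collections import Counter

rows = ["A", "B", "B", "B", "C", "C", "D"]  # assumed input

# Aggregator stage output: a row count per product key
agg = Counter(rows)                          # {'A': 1, 'B': 3, 'C': 2, 'D': 1}

# Join the counts back to the detail rows, then filter on the count
joined = [(r, agg[r]) for r in rows]
unique_rows = [r for r, n in joined if n == 1]   # A D
dup_rows = [r for r, n in joined if n > 1]       # B B B C C
```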
Answer / nams
It can be done in Unix with `uniq -u file_name` (lines that occur exactly once) and `uniq -d file_name` (one copy of each repeated line). Note that `uniq` only compares adjacent lines, so the file must be sorted first, and `uniq -d` prints each duplicated line once rather than every duplicate row (GNU `uniq -D` prints them all). Alternatively, the same command can be supplied in the Filter option of the DataStage Sequential File stage properties.
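The `uniq -u` / `uniq -d` behaviour can be mimicked in Python (a sketch; note that `-d` yields one copy per repeated line, not every duplicate row, so it does not exactly reproduce Q1's second output):

```python
from collections import Counter

def uniq_u(lines):
    """Like `sort file | uniq -u`: lines that appear exactly once."""
    counts = Counter(lines)
    return sorted(l for l in set(lines) if counts[l] == 1)

def uniq_d(lines):
    """Like `sort file | uniq -d`: one copy of each repeated line."""
    counts = Counter(lines)
    return sorted(l for l in set(lines) if counts[l] > 1)

lines = ["A", "B", "B", "B", "C", "C", "D"]
```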
Differentiate between Join, Merge and Lookup stage?
What are normalization and denormalization?
What is a project in DataStage?
Explain the scenarios where the sequential file stage runs in parallel?
How can we improve performance in DataStage?
1. What is the repartitioning technique? 2. What deliverables are transferred to the client using DataStage? 3. How do you write loop statements using a nested-loop sequence?
Can we use a sequential file as the source for a hash file? Have you done it? If so, what error does it give?
What is the difference between 8.1, 8.5 and 9.1?
How many dimension and fact tables were used in your project, and what are their names?
Where do we use the config file as a parameter?
1. When did the 8.1 parallel edition come into existence? 2. What is the difference between 7.5 and 8.1?
Define data aggregation?