How can we convert rows to columns in DataStage?
Answers are sorted based on user feedback.
Answer / krishna
Hi Vinod,
This is Krishna. When I tried your method using the Copy and Funnel
stages, it did not work: after splitting the Copy stage output into
sub-copies and then combining them in the Funnel stage, the Funnel
would not accept the output link from cp1; it only offered a reject link.
| Is This Answer Correct ? | 3 Yes | 0 No |
Answer / rams
Use the Pivot stage.
Example input:
name, ph1, ph2
ramu, 9849, 9948
raja, 9987, 9967
touch, 9898, 0909
In the Pivot stage, go to Output > Columns and set the derivations:
ColumnName   Derivation
name
phone        PH1, PH2
The output is:
name, phone
ramu, 9849
ramu, 9948
raja, 9987
raja, 9967
touch, 9898
touch, 0909
| Is This Answer Correct ? | 7 Yes | 6 No |
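The Pivot stage's behavior on this example can be sketched in plain Python (this is an illustration of the transformation, not DataStage code; the `pivot` function name is my own):

```python
# Simulate a horizontal pivot: each input row (name, ph1, ph2)
# becomes two output rows (name, phone).

rows = [
    ("ramu", "9849", "9948"),
    ("raja", "9987", "9967"),
    ("touch", "9898", "0909"),
]

def pivot(rows):
    """One output row per (name, phone) pair, ph1 first, then ph2."""
    out = []
    for name, ph1, ph2 in rows:
        out.append((name, ph1))
        out.append((name, ph2))
    return out

for name, phone in pivot(rows):
    print(name, phone)
```

This produces the same six (name, phone) rows shown in the answer above.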
Answer / manohar palla
I am going to use the Transformer and Funnel stages to implement the
pivot functionality.

name ph1 ph2 ---> Transformer ---> tf1 (name, ph1)
                       |                          funnel ----> o/p
                       |_________> tf2 (name, ph2)

tf1 and tf2 are the two output links from the Transformer:
tf1 carries name and ph1, while tf2 carries name and ph2.
In the Funnel stage I combine the two links and send the result to
the output.
Thanks :)
| Is This Answer Correct ? | 2 Yes | 2 No |
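The split-and-funnel idea above can be sketched in Python (again only an illustration of the data flow, not DataStage code; the variable names `tf1`/`tf2` mirror the link names in the answer):

```python
# Simulate the Transformer-split + Funnel-combine approach:
# link tf1 carries (name, ph1), link tf2 carries (name, ph2),
# and the Funnel unions both links into one (name, phone) stream.

rows = [
    ("ramu", "9849", "9948"),
    ("raja", "9987", "9967"),
]

tf1 = [(name, ph1) for name, ph1, ph2 in rows]  # link 1: name + ph1
tf2 = [(name, ph2) for name, ph1, ph2 in rows]  # link 2: name + ph2

output = tf1 + tf2  # the Funnel stage: combine the two links

for name, phone in output:
    print(name, phone)
```

Note that a Continuous Funnel in DataStage does not guarantee row order, so the output rows may interleave differently than in this sequential concatenation.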
Answer / upputuri.vinod
Bro, that's right. I have another answer as well.
I am going to use the Copy and Funnel stages to implement the
pivot functionality.

name ph1 ph2 ---> Copy ---> cp1 (name, ph1)
                   |                       funnel ----> o/p
                   |______> cp2 (name, ph2)

cp1 and cp2 are the two output links from the Copy stage:
cp1 carries name and ph1, while cp2 carries name and ph2.
In the Funnel stage I combine the two links and send the result to
the output.
The benefit of this approach is that the Pivot stage works only up
to about 3250 rows; beyond that it won't work, so we can use this
method instead.
| Is This Answer Correct ? | 3 Yes | 6 No |
How, or from where, can we get reference data in an SCD Type 2 implementation?
What is a process model?
1. When did the 8.1 parallel edition come into existence? 2. What is the difference between 7.5 and 8.1?
A sequential file has one record and I want 100 records in the target. How can we do that? Please explain which stages are needed and what logic to use.
There are 8000 jobs and I issued a commit; suddenly a job aborts. What happens? 2) What is the difference between the Transformer stage and the Filter stage? 3) How do you load the data into the source?
If a job aborted in a sequencer, how can we restart it from the previous successful job?
A flat file contains 200 records. I want to load the first 50 records on the first run of the job, the second 50 records on the second run, and so on. How would you develop this job? Please give the steps.
Input: 2 7 8 9 5 1 7 3 6; output: 2 5 6. How do you derive this output? Please explain.
In which situations can we use the Normal and Sparse Lookup stages?
What is metadata? Explain. Where is it used?
How do you read a sequential file from job control?
Why do we use the Link Partitioner and Link Collector in DataStage?