How do you load dimension data and fact data? Which is loaded first?
Answers were Sorted based on User's Feedback
Answer / usha
First we load data into the dimension tables, then we load into the fact table.
Dimensions and facts have a parent-child relationship: many fact rows relate to
one dimension row (a many-to-one relation). A sketch of this load order follows below.
| Is This Answer Correct ? | 9 Yes | 1 No |
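A minimal sketch of this dimension-first load order, assuming a simple star schema with one customer dimension and one sales fact. All table names, column names, and sample values here are illustrative assumptions, not taken from the answer above.

import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# The dimension table is created and loaded first, so its surrogate keys
# exist before any fact rows that reference them.
cur.execute("""
    CREATE TABLE dim_customer (
        customer_key  INTEGER PRIMARY KEY,  -- surrogate key
        customer_id   TEXT,                 -- natural/business key
        customer_name TEXT
    )
""")
cur.executemany(
    "INSERT INTO dim_customer (customer_id, customer_name) VALUES (?, ?)",
    [("C001", "Vijay"), ("C002", "Kumar")],
)

# The fact table is loaded second; each fact row points at one dimension row,
# which is the many-to-one (fact -> dimension) relationship described above.
cur.execute("""
    CREATE TABLE fact_sales (
        customer_key INTEGER REFERENCES dim_customer(customer_key),
        sale_amount  REAL
    )
""")
cur.execute("""
    INSERT INTO fact_sales (customer_key, sale_amount)
    SELECT d.customer_key, 2000.0
    FROM dim_customer d
    WHERE d.customer_id = 'C001'
""")
conn.commit()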
Answer / manikandan
1. First, load the data into the dimension tables from the source tables.
2. Then load the data into the fact table from the source, using a lookup on the
dimension tables to fetch their surrogate keys (see the sketch below). If it is a
factless fact table you only need to load the dimension keys; otherwise, also load
the measures from the source.
| Is This Answer Correct ? | 1 Yes | 0 No |
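A rough sketch of the lookup step described in point 2, assuming the dimensions are already loaded and modelling the lookup against them as a simple in-memory dictionary (in DataStage this would be a Lookup stage). All names, keys, and sample records are illustrative assumptions.

# Dimension data already loaded: natural key -> surrogate key.
customer_dim = {"C001": 1, "C002": 2}
product_dim = {"P10": 101, "P20": 102}

# Incoming source rows for the fact load.
source_rows = [
    {"customer_id": "C001", "product_id": "P10", "amount": 2000.0},
    {"customer_id": "C002", "product_id": "P20", "amount": 3000.0},
]

FACTLESS = False  # set True for a factless fact table (dimension keys only)

fact_rows = []
for row in source_rows:
    fact = {
        # Look up each dimension's surrogate key from the natural key.
        "customer_key": customer_dim[row["customer_id"]],
        "product_key": product_dim[row["product_id"]],
    }
    if not FACTLESS:
        # Otherwise also carry the measures from the source.
        fact["sale_amount"] = row["amount"]
    fact_rows.append(fact)

print(fact_rows)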
What were your project steps and highlights, the hardest project-level problems, and the methods you used to solve them?
Unix question asked in a DataStage interview: I have different types of files (.txt, .tmp, .bat, etc.) in 4 different directories. I want to move all '.txt' files from the 4 directories to another folder, and I need to delete all the files except those created today. How do I do this?
Hi, I have a scenario like this.
Source table (ename, sal):
vijay, 2000
kumar, 3000
ravi, 4000
Target table (empno, ename, sal):
1, vijay, 2000
2, kumar, 3000
3, ravi, 4000
How can I get the target table like that without using a Transformer stage?
I am getting an input value like X = Iconv("31 DEC 1967", "D"). What is the value of X, and how is it derived? In what situations do we use Iconv() and Oconv()?
Source:
I_D, F1, F2
100, N, Y
100, N, N
100, Y, N
101, Y, Y
101, N, Y
102, Y, N
103, N, N
104, Y, Y
105, N, N
106, N, Y
102, N, Y
105, Y, Y
Expected output:
ID, flag1, flag2
101, Y, Y
101, N, Y
102, Y, N
102, N, Y
104, Y, Y
106, N, Y
Tell me about DataStage triggers.
What is IBM DataStage Flow Designer?
In a run of 30 jobs, one job is very slow and, because of it, the entire run is slow. How can you identify which job is the slow one?
Hi, I am Bhavani. In real-time DataStage projects, who provides the source data, and how does the developer receive it? Please send me the answer.
What are the differences between the hash and modulus partitioning methods?
Which stage is used for the following?
Input columns: dept | mgr | employee | salary
Output columns: mgr | count of employees per mgr | avg salary per dept
Note: each dept has one mgr, and each mgr has many employees.
How can we perform a second extraction from the client database without picking up the data that was already loaded during the first extraction?