How do you implement complex jobs in DataStage?
A DataStage interview question: given the source value below, produce the target rows.
Source: 12345
Target: 1, 2, 3, 4, 5 (one output row per digit)
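In DataStage this column-to-rows split would typically be done with a Pivot stage or a Transformer with multiple output links. As a plain-Python sketch of the intended logic (the function name is illustrative, not a DataStage API):

```python
def explode(value):
    """Turn one source value into one output row per digit,
    like a column-to-rows pivot."""
    return [int(ch) for ch in str(value)]

print(explode(12345))  # [1, 2, 3, 4, 5]
```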
How do you convert table data into an XML file using the XML Output stage? Please explain step by step.
How can you do error handling in DataStage?
I have two files: the first contains only duplicate records, the second contains unique records. For example:
File1:
1 subhash 10000
1 subhash 10000
2 raju 20000
2 raju 20000
3 chandra 30000
3 chandra 30000
File2:
1 subhash 10000
5 pawan 15000
7 reddy 25000
3 chandra 30000
Output file (capture all duplicates across both files, with a count):
1 subhash 10000 3
1 subhash 10000 3
1 subhash 10000 3
2 raju 20000 2
2 raju 20000 2
3 chandra 30000 3
3 chandra 30000 3
3 chandra 30000 3
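In DataStage this would usually be built with a Funnel to pool both files, an Aggregator to count each record, and a Join back to the pooled rows. A plain-Python sketch of that logic, using the sample data from the question (file names and the helper function are illustrative):

```python
from collections import Counter

# Records as (id, name, salary) tuples; in DataStage these would come
# from two Sequential File stages. Data copied from the question above.
file1 = [(1, "subhash", 10000), (1, "subhash", 10000),
         (2, "raju", 20000), (2, "raju", 20000),
         (3, "chandra", 30000), (3, "chandra", 30000)]
file2 = [(1, "subhash", 10000), (5, "pawan", 15000),
         (7, "reddy", 25000), (3, "chandra", 30000)]

def duplicates_with_count(*sources):
    """Pool all records, keep every occurrence whose total count
    exceeds 1, and append that count to each surviving row."""
    pool = [rec for src in sources for rec in src]
    counts = Counter(pool)
    return sorted(rec + (counts[rec],) for rec in pool if counts[rec] > 1)

for row in duplicates_with_count(file1, file2):
    print(*row)
```

Records appearing only once across both files (pawan, reddy) are dropped, matching the expected output in the question.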
At the source level I have 40 columns, but I want only 20 columns at the target. What are the various ways to achieve this?
I want the job to abort after some records are loaded into the output, using only a Sequential File stage and a Data Set stage.
What types of jobs have you developed?
In which location is the data stored when using a Data Set stage as the target?
1) What is the structure of your configuration file?
2) I have two Oracle databases. Loading data from source to target takes 30 minutes, but I want to load it in less time. How?
A flat file contains 200 records. I want to load the first 50 records the first time the job runs, the second 50 records the second time, and so on. How would you develop this job? Please give the steps.
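One common approach is to persist a run counter between runs (in DataStage, typically via a job parameter set by a sequence, or a value kept in a hashed file) and read a 50-record window based on it. A minimal Python sketch of the windowing logic, assuming the counter is supplied by the caller (the function name is illustrative):

```python
def batch_for_run(records, run_number, batch_size=50):
    """Return the slice of records for the given 1-based run number.
    In DataStage the run number would come from a job parameter or a
    value persisted between runs, not from an argument like this."""
    start = (run_number - 1) * batch_size
    return records[start:start + batch_size]

records = list(range(1, 201))          # stand-in for the 200-record flat file
print(batch_for_run(records, 1)[:3])   # first run: records 1..50
print(batch_for_run(records, 2)[:3])   # second run: records 51..100
```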
What is the difference between a hashed file and a sequential file?
How do you set a default value for a column if the column value is NULL?
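In a DataStage Transformer this is typically a derivation along the lines of `If IsNull(in.col) Then default Else in.col`. A plain-Python sketch of that substitution, with illustrative sample data:

```python
def null_to_value(value, default):
    """Replace a NULL (None) column value with a default,
    mirroring an 'If IsNull(...) Then ... Else ...' derivation."""
    return default if value is None else value

rows = [("a", 10), ("b", None), ("c", 30)]
cleaned = [(name, null_to_value(qty, 0)) for name, qty in rows]
print(cleaned)  # [('a', 10), ('b', 0), ('c', 30)]
```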