A flat file contains 200 records. I want to load the first 50
records on the first run of the job, the next 50 records on the
second run, and so on. How would you develop the job? Please
give the steps.
Answer Posted / varun
Design the job like this:
1. Read the records from the input flat file and enable the Row
Number Column option on the file stage. It will generate a unique
number for each record in that file.
2. Use a Filter stage and write the conditions like this:
   a. rownumbercolumn <= 50 (on the 1st link, to load the records
   into the target file/database)
   b. rownumbercolumn > 50 (on the 2nd link, to write the remaining
   records back to a file with the same name as the input file, in
   overwrite mode)
So, the first time your job runs, the first 50 records are
loaded into the target, and at the same time the input file is
overwritten with the remaining records, i.e. 51 to 200.
The second time your job runs, the next 50 records (i.e. 51-100)
are loaded into the target, and the input file is overwritten
with the remaining records, i.e. 101 to 200.
And so on: each run loads the next batch of 50 records into
the target.
How many types of sorting methods are available in datastage?
How do you run a Sequential File stage in parallel when the stage is used on the TARGET side?
What is the command line function to import and export the ds jobs?
How do you remove duplicates (RD) using a Transformer stage?
What are the repository tables in datastage?
Explain the situation where you have applied SCD in your project?
What are the components of ascential data stage?
Explain datastage architecture?
Where do the datastage jobs get stored?
Hi, I am facing a typical problem in every interview: "I need some critical scenarios faced in real time." Please help me out.
Why we use surrogate key?
Whom do you report to?
How can you write parallel routines in datastage PX?
What is staging variable?