

A flat file contains 200 records. I want to load the first 50
records the first time the job runs, the next 50 records the
second time, and so on. How can you develop this job?




Answer / subhash

1st Way:
1. Add a 'row number' column in the Sequential File stage, so that each
record has a number associated with it.
2. Add a job parameter through which we can provide the record number
from which the job should start loading. This can be passed either from
a sequence Start Loop activity (list-type variable: 50, 100, 150, 200)
or from a shell script.
3. In the Transformer, use a stage variable to count the records and
pass through only the 50 records starting at that record number (see
the sketch after this list).
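
The same record-windowing idea can be sketched outside DataStage in a few
lines of Python; the file name, chunk size and start-row values below are
illustrative assumptions, not part of the original answer:

CHUNK_SIZE = 50

def load_chunk(path, start_row):
    """Load only rows start_row .. start_row + CHUNK_SIZE - 1 (1-based)."""
    loaded = []
    with open(path) as src:
        for row_number, record in enumerate(src, start=1):   # the 'row number' column
            if row_number >= start_row + CHUNK_SIZE:
                break                                         # past this run's window
            if row_number >= start_row:
                loaded.append(record.rstrip("\n"))            # stand-in for the target load
    return loaded

# Run 1: load_chunk("flatfile.txt", 1)   -> records 1-50
# Run 2: load_chunk("flatfile.txt", 51)  -> records 51-100, and so on.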

2nd Way:
Design the job like this:
1. Add a 'row number' column in the Sequential File stage, so that each
record has a number associated with it.
2. Use a Filter stage with two conditions:
a. row number column <= 50 (1st link, to load the records into the
target file/database)
b. row number column > 50 (2nd link, to write the remaining records
back to a file with the same name as the input file, in overwrite mode)


So, the first time the job runs, the first 50 records are loaded into
the target and, at the same time, the input file is overwritten with
the remaining records, i.e. 51 to 200.
The second time the job runs, the next 50 records (i.e. 51-100) are
loaded into the target and the input file is again overwritten with
the remaining records, i.e. 101 to 200.
And so on: each run loads the next 50 records into the target. A sketch
of this approach follows below.
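
A minimal sketch of the 2nd way, assuming illustrative file names that are
not part of the original answer: each run appends the first 50 records to
the target and overwrites the input file with the remainder, so the next
run naturally picks up the next 50.

CHUNK_SIZE = 50

def load_and_trim(input_path, target_path):
    # Read the whole input file (at most 200 records in this scenario).
    with open(input_path) as src:
        records = src.readlines()

    first_chunk, remainder = records[:CHUNK_SIZE], records[CHUNK_SIZE:]

    # Link 1: row number <= 50 -> load into the target (append keeps earlier runs).
    with open(target_path, "a") as target:
        target.writelines(first_chunk)

    # Link 2: row number > 50 -> rewrite the input file in overwrite mode.
    with open(input_path, "w") as src:
        src.writelines(remainder)

# After run 1 the input file holds records 51-200, after run 2 it holds
# 101-200, and so on.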



More DataStage Interview Questions

What is an isolation level and when do you use it?

1 Answers   HP, IBM,


What is orabulk stage?

0 Answers  


Given i/p: 1, 1, 1, 2, 2, 2, 3, 4, 5, 6 with o/p1: 1, 1, 1, 2, 2, 2, 3 and o/p2: 4, 5, 6 — how do you populate the i/p rows into o/p1 and o/p2 using DataStage stages? And how would you handle the same scenario using SQL?

8 Answers   IBM,


What is the difference between DataStage Server Edition and Parallel Edition?

2 Answers   Tech Mahindra,


What is a container and what are its types?

1 Answers  


What is a custom stage in DataStage? How can we implement one?

0 Answers   Accenture,


If you enter values in a schema file for RCP and also enter values in the Sequential File stage, which one will it take?

1 Answers   TIAA CREF,


Hi, I am Sundar. I have data like 00023-1010, 00086-1010, 00184F2-1010, . . . ., SCH-AS-1010, 200-0196-039, . . . Now I want the result "SCH-AS" in one column and "1010" in another column. Can anyone tell the answer?

5 Answers  


What is DataStage Designer?

0 Answers  


Define APT_CONFIG in Datastage?

0 Answers  


If we are using two sources having the same metadata, how do we check whether the data in the two sources is the same or not? And if the data is not the same, I want to abort the job; how can we do this?

1 Answers   IBM,


How can we read the latest records in a text file named file1.txt using the Sequential File stage only? file1 has 100 records, of which 5 records are the latest. How can we read those latest records?

3 Answers   Caterpillar,

