

A flat file contains 200 records. I want to load the first 50 records the first time the job runs, the next 50 records the second time, and so on. How would you develop the job? Please give the steps.

Answers were Sorted based on User's Feedback




Answer / varun

Design the job like this:
1. Read the records from the input flat file and enable the Row Number Column option in the Sequential File stage. It generates a unique number for each record in the file.
2. Use a Filter stage with two conditions:
   a. rownumbercolumn <= 50 (1st link, to load the records into the target file/database)
   b. rownumbercolumn > 50 (2nd link, to write the remaining records to a file with the same name as the input file, in overwrite mode)


So, the first time your job runs, the first 50 records are loaded into the target, and at the same time the input file is overwritten with the remaining records, i.e. 51 to 200.
The second time the job runs, the first 50 records of that file (i.e. 51-100) are loaded into the target, and the input file is again overwritten with the remaining records, i.e. 101 to 200.
And so on: 50 records are loaded into the target on each run.
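
Outside the DataStage canvas, the same split-and-overwrite logic can be sketched as follows (a minimal illustration only; the file names Source.txt/Target.txt and the batch size of 50 are assumptions, not part of an actual job design):

```python
BATCH_SIZE = 50
SOURCE = "Source.txt"   # assumed input flat file
TARGET = "Target.txt"   # assumed target file

# Read whatever records are still left in the source file.
with open(SOURCE) as f:
    records = f.readlines()

# 1st link: load the first 50 records into the target.
with open(TARGET, "a") as tgt:
    tgt.writelines(records[:BATCH_SIZE])

# 2nd link: overwrite the source with the remaining records,
# so the next run starts from record 51, then 101, and so on.
with open(SOURCE, "w") as src:
    src.writelines(records[BATCH_SIZE:])
```

Each run consumes 50 records and leaves the remainder for the next run; note that this approach overwrites (and eventually empties) the original source file.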

Is This Answer Correct ?    19 Yes 9 No


Answer / sreekanth reddy peddakkagari

The answer given by Varun is essentially correct, but it needs one small correction: we cannot use the same file name as both the source and the target at the same time. So we need to read from the actual file (e.g. Source.txt), write the remaining records to a new file (e.g. New_Source.txt), and then rename the new file back to the old name using an after-job subroutine (mv New_Source.txt Source.txt).
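
A minimal sketch of that corrected flow, assuming the file names from the answer (Source.txt and New_Source.txt); the final rename plays the role of the after-job subroutine's mv:

```python
import os

BATCH_SIZE = 50

with open("Source.txt") as f:
    records = f.readlines()

# Load the first 50 records into the target (stand-in shown as a print).
for rec in records[:BATCH_SIZE]:
    print(rec, end="")

# Write the leftover records to a *different* file, because the same
# file cannot be read and overwritten within the same job.
with open("New_Source.txt", "w") as f:
    f.writelines(records[BATCH_SIZE:])

# After-job subroutine equivalent of "mv New_Source.txt Source.txt".
os.replace("New_Source.txt", "Source.txt")
```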

Is This Answer Correct ?    7 Yes 1 No


Answer / subhash

An alternative to Varun's solution:

1. Add a 'Row Number' column in the Sequential File stage, so that each record has a number associated with it.
2. Add a job parameter through which we can provide the record number from which the job should start loading. We can pass this either from a Sequence Start Loop activity (list-type variable: 50,100,150,200) or from a shell script.
3. In the Transformer, use a stage variable to count the records and pass through only the 50 records starting from that record number.
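
A rough sketch of that parameter-driven variant (illustration only; the start_row parameter name and the list of start values are assumptions that mirror the answer, not DataStage syntax):

```python
BATCH_SIZE = 50

def load_batch(records, start_row):
    """Mimic the Transformer stage variable: count each record and pass
    through only the BATCH_SIZE records starting at start_row."""
    return [rec for row_number, rec in enumerate(records, start=1)
            if start_row <= row_number < start_row + BATCH_SIZE]

records = [f"record {i}" for i in range(1, 201)]

# One iteration per job run; start_row would come from the job parameter,
# fed either by the Sequence Start Loop list or by a shell script.
for start_row in (1, 51, 101, 151):
    batch = load_batch(records, start_row)
    print(f"run starting at row {start_row}: {len(batch)} records")
```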

Is This Answer Correct ?    2 Yes 0 No


Answer / sarath

I think we can achieve this in the Transformer using a global (persistent) parameter. First set the parameter value to 0 or 1, and at the end of the run set it to max(rownum) so that the value is updated. On the next run, use that parameter as the starting point for loading the data.

Sorry if this is wrong.
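
A small sketch of that persistent-parameter idea (purely illustrative; here the last loaded row number is persisted in an assumed checkpoint file, whereas in the tool it would live in a persistent parameter):

```python
import os

BATCH_SIZE = 50
CHECKPOINT = "last_rownum.txt"   # assumed place to remember max(rownum)

# Starting point: 0 on the first run, otherwise the last loaded row number.
last = int(open(CHECKPOINT).read()) if os.path.exists(CHECKPOINT) else 0

with open("Source.txt") as f:
    records = f.readlines()

batch = records[last:last + BATCH_SIZE]   # rows last+1 .. last+50
# ... load `batch` into the target here ...

# Save max(rownum) of this run as the next run's starting point.
with open(CHECKPOINT, "w") as f:
    f.write(str(last + len(batch)))
```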

Is This Answer Correct ?    1 Yes 1 No


Answer / siva

I think we can solve this problem using a mapping parameter and a workflow variable.

Is This Answer Correct ?    0 Yes 0 No


Answer / msk22

Follow these steps (using a mapping parameter and a persistent workflow variable):
1. Import the source definition (relational or flat file).
2. Create a mapping parameter $$m_count with datatype double.
3. Create an Expression transformation and map all the output ports of the Source Qualifier to it.
4. Create a variable port v_cnt with the expression v_cnt + 1.
5. Create an output port o_cnt and assign v_cnt to it.
6. Add a Filter transformation, map all the output ports to it, and use the filter condition o_cnt <= $$m_count AND o_cnt > $$m_count - 50.
7. Map all the output ports to the target and save the mapping.
8. Create a workflow for the mapping, then create a workflow variable $$wf_count with double datatype and enable the persistent option.
9. Create an Assignment task; in its expression tab choose $$wf_count as the user-defined variable and set the expression to $$wf_count + 50, i.e. $$wf_count = $$wf_count + 50.
10. Create the session for the mapping, open the session properties, go to the Components tab, open the pre-session variable assignment value, and assign the parent workflow variable to the mapping parameter, i.e. $$m_count = $$wf_count.
11. Save the workflow and execute.
That's it.
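
The counter logic of steps 4-10 can be simulated like this (a sketch only; the persistence of $$wf_count across workflow runs is emulated here with a plain variable over four simulated runs):

```python
records = list(range(1, 201))   # stand-in for the 200 source rows
wf_count = 0                    # persistent workflow variable $$wf_count

for run in range(1, 5):         # four workflow runs
    wf_count += 50              # Assignment task: $$wf_count = $$wf_count + 50
    m_count = wf_count          # pre-session assignment: $$m_count = $$wf_count

    # Expression + Filter: o_cnt is the running row count; keep rows where
    # o_cnt <= $$m_count AND o_cnt > $$m_count - 50
    batch = [r for o_cnt, r in enumerate(records, start=1)
             if m_count - 50 < o_cnt <= m_count]
    print(f"run {run}: rows {batch[0]}..{batch[-1]}")
```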

Is This Answer Correct ?    0 Yes 0 No


Answer / lakshmi

I think this can be done using a scheduling process in server jobs.

Is This Answer Correct ?    1 Yes 4 No


More Data Stage Interview Questions

Whom do you report to?

0 Answers   NTT Data,


What is a DataStage macro?

0 Answers  


I have 2 files: the 1st contains only duplicate records, the 2nd contains unique records.
File1: 1 subhash 10000 | 1 subhash 10000 | 2 raju 20000 | 2 raju 20000 | 3 chandra 30000 | 3 chandra 30000
File2: 1 subhash 10000 | 5 pawan 15000 | 7 reddy 25000 | 3 chandra 30000
Output file -> capture all the duplicates in both files with a count:
1 subhash 10000 3 | 1 subhash 10000 3 | 1 subhash 10000 3 | 2 raju 20000 2 | 2 raju 20000 2 | 3 chandra 30000 3 | 3 chandra 30000 3 | 3 chandra 30000 3

2 Answers   TCS,


How to cleanse data?

6 Answers   Cap Gemini,


How to find the difference between 2 dates without using Icon... functions?

1 Answers  


CHANGE CAPTURE

0 Answers   CTS,


Hi, in DataStage how do you explain your project in an interview? Please explain with any domain as an example.

1 Answers   Wipro,


1) What is your project architecture?
2) How to move a project from development to UAT?
3) What is the difference between DataStage 6, 7.1 and DataStage 7.5?
4) How to do error handling in DataStage?
5) What is unit testing, system testing and integration testing?
6) What is the exact difference between the BASIC Transformer and the NORMAL Transformer? When would we go for the BASIC or the NORMAL Transformer?
7) Why do we use third-party tools in DataStage?
8) What is the purpose of the debugging stages? Where would we use them in real time?

6 Answers   CTS, HCL, IBM, Wipro,


How to use environment variables in DataStage? (use of process)

1 Answers   CSC,


Can we see the data in a fixed-width file? How can you change the datatype of fixed-width files?

1 Answers   Infosys,


I have source data from the UK and North America. How can I pass the data to two tables based on the locations?

2 Answers  


A source file has 5 records; while moving them into the target, they should become 10 records.

4 Answers   IBM,

