

1) What is the structure of your configuration file? 2) I have two
databases, both Oracle. While loading data from source to target the
job takes 30 minutes, but I want it to load in less time. How?




Answer / subhash

1) Configuration file structure:
{
  node "node1"
  {
    fastname "<SERVER_NAME>"
    pools ""
    resource disk "</dsg/IBM/..DATASETS Path>" {pools ""}
    resource scratchdisk "</dsg/..Scratch disk/Buffer Path>" {pools ""}
  }
  node "node2"
  {
    fastname "mctux034"
    pools ""
    resource disk "/dsg/IBM/IBM/InformationServer/Server/Datasets" {pools ""}
    resource scratchdisk "/dsg/IBM/IBM/InformationServer/Server/Scratch" {pools ""}
  }
}
2)
SRC ---> COPY ---> TGT
Here the Copy stage improves performance and acts as a buffer when the
read record count and the write record count differ.
Also set/check the following in the target Oracle Connector:
Record Count = 6000 (default is 2000; should be a multiple of Array Size)
Array Size = 3000 (default is 2000)
Write mode = Bulk Load
Rebuild Indexes After Bulk Load
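
To see why Record Count (the commit interval) is kept as a multiple of Array Size (the rows sent per round trip), here is a rough Python sketch of the same batching idea outside DataStage. The cx_Oracle driver, the target_tbl table, its columns, and the connection arguments are assumptions made only for illustration; this is not the connector's internal code.

# Illustrative batching sketch (not DataStage internals): ARRAY_SIZE rows
# are sent to the server per executemany() call, and a commit is issued
# every RECORD_COUNT rows, so a commit never lands in the middle of a batch.
import cx_Oracle

ARRAY_SIZE = 3000      # like the connector's Array Size
RECORD_COUNT = 6000    # like Record Count; a multiple of ARRAY_SIZE

def bulk_load(rows, user, password, dsn):
    """rows: list of (id, val) tuples; table and columns are hypothetical."""
    conn = cx_Oracle.connect(user, password, dsn)
    cur = conn.cursor()
    since_commit = 0
    for start in range(0, len(rows), ARRAY_SIZE):
        batch = rows[start:start + ARRAY_SIZE]
        cur.executemany("INSERT INTO target_tbl (id, val) VALUES (:1, :2)", batch)
        since_commit += len(batch)
        if since_commit >= RECORD_COUNT:
            conn.commit()
            since_commit = 0
    conn.commit()   # commit whatever is left
    conn.close()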


More DataStage Interview Questions

What are the steps needed to create a simple, basic DataStage job?

0 Answers  


Table A contains 100 records and table B contains 20 records. We join the two tables with a left outer join, expecting 100 records in the target, but the target contains 101 records. What issue causes this?

3 Answers   Polaris,
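
For intuition on the question above: a left outer join can return more rows than the left input only when the right input has duplicate join keys. A small Python sketch with made-up keys shows the effect:

# 100 left rows, 20 right rows, one duplicated join key on the right.
left = [{"id": i} for i in range(1, 101)]                 # 100 records
right = [{"id": i, "val": i * 10} for i in range(1, 21)]  # 20 records
right.append({"id": 5, "val": 999})                       # duplicate key 5

joined = []
for l in left:
    matches = [r for r in right if r["id"] == l["id"]]
    for r in (matches or [None]):                         # left outer join
        joined.append((l["id"], r["val"] if r else None))

print(len(joined))  # 101 -- the duplicated key contributes one extra row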


Hi, this is Vijay. How can you read data from a sequential file in parallel?

5 Answers   Semantic Space,


Drop duplicate records. The source looks like:
ID  flag1  flag2
100 N Y
100 N N
100 Y N
101 Y Y
101 N Y
102 Y N
103 N N
104 Y Y
105 N N
106 N Y
102 N Y
105 Y Y
In the above file, if any ID has a record with both flags as "N", then all records for that ID should be dropped. The output should be:
ID  flag1  flag2
101 Y Y
101 N Y
102 Y N
102 N Y
104 Y Y
106 N Y
Steps to do:
1) Identify the IDs that have a record with both flag values as "N".
2) Look up these IDs against the source IDs and drop the matching records.

2 Answers  
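
A small Python sketch of the two steps described in the question above (in a DataStage job this would typically be a filter/aggregation to flag the "N/N" IDs followed by a lookup, but the row-level logic is the same). The sample rows are taken from the question:

# Step 1: find IDs that have at least one record with both flags = "N".
# Step 2: drop every record belonging to those IDs.
rows = [
    (100, "N", "Y"), (100, "N", "N"), (100, "Y", "N"),
    (101, "Y", "Y"), (101, "N", "Y"), (102, "Y", "N"),
    (103, "N", "N"), (104, "Y", "Y"), (105, "N", "N"),
    (106, "N", "Y"), (102, "N", "Y"), (105, "Y", "Y"),
]

bad_ids = {rid for rid, f1, f2 in rows if f1 == "N" and f2 == "N"}  # {100, 103, 105}
kept = [r for r in rows if r[0] not in bad_ids]

for rid, f1, f2 in kept:
    print(rid, f1, f2)  # 101 Y Y, 101 N Y, 102 Y N, 104 Y Y, 106 N Y, 102 N Y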


When should we go for a Sequential File stage and when for a Data Set in DataStage?

1 Answers  


If we take two tables (like emp and dept) and use a Join stage, how can we improve the performance?

5 Answers   Cap Gemini,


Create a job to get the previous row's salary for the current row. If no previous row exists for the current row, the previous salary should be displayed as null.
empid   salary   previoussalary
10      1000     null
20      2000     1000
30      3000     2000
40      4000     3000

5 Answers   Genpact,
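
As a sketch of the logic only (in DataStage this is commonly done by sorting on empid and carrying the prior row's salary in a Transformer stage variable), here is the LAG-style idea in Python, using the sample data from the question:

# Assumes the rows arrive sorted by empid; None stands in for null.
rows = [(10, 1000), (20, 2000), (30, 3000), (40, 4000)]

prev_salary = None
for empid, salary in rows:
    print(empid, salary, prev_salary)  # previoussalary is None for the first row
    prev_salary = salary               # carry this row's salary to the next row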


This is a UNIX question asked in a DataStage interview. Say I have n records in a text file. I want the first 3 records in the 1st file, the last 3 records in the 3rd file, and the remaining n-6 records in the 2nd file. (Note: we don't know how many records are in the file. I get one file on a daily basis and I want three target files as described above.)

2 Answers   CTS,
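
The usual interview answer is built from head and tail (or awk); as a hedged sketch of the same split, here is the logic in Python. The file names are made up for illustration, and it assumes the input has at least 6 records:

# Split input.txt into first 3 / middle n-6 / last 3 records.
with open("input.txt") as src:
    lines = src.readlines()

n = len(lines)
with open("file1.txt", "w") as f1, \
     open("file2.txt", "w") as f2, \
     open("file3.txt", "w") as f3:
    f1.writelines(lines[:3])        # first 3 records
    f2.writelines(lines[3:n - 3])   # remaining n-6 records
    f3.writelines(lines[-3:])       # last 3 records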


What modeling tool do you use?

6 Answers   HP,


How can loading data into sequential files be made to perform faster?

1 Answers   Polaris,


What is the difference between the ODBC stage and the DRS stage?

0 Answers  


What is a surrogate key? What is the use of a surrogate key? How do you create a Surrogate Key Generator in SCD2 in 8.5?

5 Answers   SLK Software,

