How to do performance tuning in DataStage?

Answer Posted / venugopal [patni]

1. By using the Hashed File stage we can improve performance.
In a Hashed File stage we can define the read cache size
and write cache size; the default size is 128 MB.
2. On active-to-active links we can also improve performance
by enabling row buffering; the default row buffer size
is 128 KB.
3. By removing unwanted columns.
4. By selecting appropriate update actions.
5. In parallel jobs, performance can be improved by replacing a
Transformer with a Copy or Filter stage. If a job uses more
than about five Transformers, performance degrades, so where a
Transformer only copies or filters rows, use a Copy or Filter
stage instead (see the sketch after this list).
6. In server jobs, the Link Partitioner, Link Collector, and IPC
stages can also improve performance.
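
To illustrate point 5: a Transformer whose only purpose is to drop rows can
often be replaced by a Filter stage. A minimal sketch of the Filter stage
properties, with an illustrative column name (not from the original answer):

    Where Clause   = min_lvl >= 0 And min_lvl <= 500   (matching rows go to the output link)
    Output Rejects = True                              (non-matching rows go to the reject link)

The Filter stage evaluates a simple predicate without the overhead of a
compiled Transformer, which is where the gain typically comes from.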






Please Help Members By Posting Answers For The Questions Below

What are stage variables and constants?



Could anyone give a brief explanation about DataStage administration?



How to implement SCD Type 2 in DataStage 7.5 with the Lookup stage?



What are the types of containers in DataStage?



How is a routine called in a DataStage job?

What's the difference between an operational data store (ODS) and a data warehouse?



How are complex jobs implemented in DataStage to improve performance?



What is the difference between DataStage 7.5 and 7.0?



How can we improve the performance of DataStage jobs?



What is the flow of a project?



Create a job that splits the data in the Jobs.txt file into four output files. You will direct the data to the different output files using constraints.
• Job name: JobLevels
• Source file: Jobs.txt
• Target file 1: LowLevelJobs.txt
  − min_lvl between 0 and 25 inclusive.
  − Same column types and headings as Jobs.txt.
  − Include column names in the first line of the output file.
  − The job description column should be preceded by the string "Job Title:" and embedded within square brackets. For example, if the job description is "Designer", the derived value is: "Job Title: [Designer]".
• Target file 2: MidLevelJobs.txt
  − min_lvl between 26 and 100 inclusive.
  − Same format and derivations as Target file 1.
• Target file 3: HighLevelJobs.txt
  − min_lvl between 101 and 500 inclusive.
  − Same format and derivations as Target file 1.
• Rejects file: JobRejects.txt
  − min_lvl is out of range, i.e., below 0 or above 500.
  − This file has only two columns: job_id and reject_desc.
  − reject_desc is a variable-length text field, maximum length 100. It should contain a string of the form "Level out of range: <min_lvl>", where <min_lvl> is the value in the min_lvl field.
My question is: how do you write the stage variable for the reject rows?
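
A minimal sketch of the Transformer expressions for this lab, assuming the
input link is named lnk_jobs (the link and variable names are illustrative,
not from the original question):

    Output link constraints (one per target file):
      LowLevelJobs:  lnk_jobs.min_lvl >= 0   And lnk_jobs.min_lvl <= 25
      MidLevelJobs:  lnk_jobs.min_lvl >= 26  And lnk_jobs.min_lvl <= 100
      HighLevelJobs: lnk_jobs.min_lvl >= 101 And lnk_jobs.min_lvl <= 500
      JobRejects:    lnk_jobs.min_lvl < 0 Or lnk_jobs.min_lvl > 500

    Derivations (":" is the DataStage concatenation operator):
      job_desc (targets 1-3): 'Job Title: [' : lnk_jobs.job_desc : ']'
      reject_desc (rejects):  'Level out of range: ' : lnk_jobs.min_lvl

Strictly speaking no stage variable is required; the reject condition can sit
directly in the reject link's constraint. If you want one anyway, define
svReject = lnk_jobs.min_lvl < 0 Or lnk_jobs.min_lvl > 500 and use svReject
as that constraint.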



What are the types of views in DataStage Director?



In Informatica, for a given table I can find the corresponding dependent mappings. Likewise, in DataStage, can I find the dependent jobs, with all their information, by using the table name?



Source has 2 columns (country, city):
USA,NEWYORK
INDIA,MUMBAI
INDIA,DELHI
USA,CHICAGO
INDIA,PUNE
I want the data in the target like below:
INDIA,MUMBAI1
INDIA,DELHI2
INDIA,PUNE3
USA,NEWYORK1
USA,CHICAGO2
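
One common approach is a sort followed by Transformer stage variables. A
sketch, assuming the rows are first sorted on the country column and that the
input link name lnk_in is illustrative:

    Stage variables, in this order (they evaluate top to bottom,
    so the comparison runs before svPrevCountry is updated):
      svCount:       If lnk_in.country = svPrevCountry Then svCount + 1 Else 1
      svPrevCountry: lnk_in.country

    Output column derivation:
      city_out: lnk_in.city : svCount

This appends 1, 2, 3, ... to the city within each country group and restarts
the counter whenever the country changes.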



What is the use of skid in reporting?
