How to do performance tuning in DataStage?
Answer Posted / venugopal [patni]
1. Use the Hashed File stage. On a Hashed File stage you can set the read cache size and write cache size; the default is 128 MB.
2. On active-to-active links, enable row buffering; the default row buffer size is 128 KB. Batching rows this way amortizes the per-row handoff cost between stages (see the buffering sketch after this list).
3. Remove unwanted columns as early as possible.
4. Select the appropriate update action on database stages.
5. In parallel jobs, replace Transformer stages with Copy or Filter stages where the logic allows. Using more than five Transformers in a job degrades performance, so when a stage only needs to pass rows through or drop them, use Copy or Filter instead (see the Copy/Filter sketch after this list).
6. In server jobs, the Link Partitioner, Link Collector, and IPC stages also improve performance.
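On point 2: row buffering pays off because every handoff between two active stages has a fixed synchronization cost, and a buffer spreads that cost over many rows. Here is a minimal Python sketch of the idea (an analogy only, not DataStage code; the batch size stands in for the 128 KB row buffer):

import queue
import threading
import time

ROWS = 200_000

def run(batch_size):
    # Move ROWS rows from a producer "stage" to a consumer "stage",
    # batch_size rows per queue handoff.
    q = queue.Queue(maxsize=8)

    def producer():
        batch = []
        for i in range(ROWS):
            batch.append(i)
            if len(batch) == batch_size:
                q.put(batch)          # one synchronized handoff per batch
                batch = []
        if batch:
            q.put(batch)
        q.put(None)                   # end-of-data marker

    t = threading.Thread(target=producer)
    t.start()
    start = time.perf_counter()
    while (batch := q.get()) is not None:
        for _ in batch:
            pass                      # downstream processing would go here
    t.join()
    return time.perf_counter() - start

print(f"row by row: {run(1):.2f}s   buffered: {run(1000):.2f}s")

The buffered run finishes far faster even though it moves the same rows, which is the effect row buffering has on an active-to-active link.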
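On point 5: Copy and Filter do strictly less work per row than a Transformer. A sketch of what each amounts to, again in Python as an analogy (the job_id/min_lvl columns are made up for illustration):

from itertools import tee

def copy_stage(rows, n=2):
    # A Copy stage just fans one stream out to n links; no per-column work.
    return tee(rows, n)

def filter_stage(rows, where):
    # A Filter stage evaluates a single predicate per row -- far lighter
    # than a Transformer's full per-column derivation machinery.
    return (row for row in rows if where(row))

rows = [{"job_id": 1, "min_lvl": 10}, {"job_id": 2, "min_lvl": 300}]
main, audit = copy_stage(iter(rows))
print(list(filter_stage(main, lambda r: r["min_lvl"] > 100)))
print(list(audit))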
Related questions:
What are stage variables and constants?
Could anyone give a brief explanation about DataStage admin?
How to implement SCD2 in DataStage 7.5 with the Lookup stage?
What are the types of containers in datastage?
How is a routine called in a DataStage job?
What's the difference between an operational data store (ODS) and a data warehouse?
How are complex jobs implemented in DataStage to improve performance?
What is the difference between Datastage 7.5 and 7.0?
How can we improve performance of data stage jobs?
What is the flow of a project?
Create a job that splits the data in the Jobs.txt file into four output files. You will direct the data to the different output files using constraints.
• Job name: JobLevels
• Source file: Jobs.txt
• Target file 1: LowLevelJobs.txt
− min_lvl between 0 and 25 inclusive.
− Same column types and headings as Jobs.txt.
− Include column names in the first line of the output file.
− The job description column should be preceded by the string “Job Title:” and embedded within square brackets. For example, if the job description is “Designer”, the derived value is: “Job Title: [Designer]”.
• Target file 2: MidLevelJobs.txt
− min_lvl between 26 and 100 inclusive.
− Same format and derivations as Target file 1.
• Target file 3: HighLevelJobs.txt
− min_lvl between 101 and 500 inclusive.
− Same format and derivations as Target file 1.
• Rejects file: JobRejects.txt
− min_lvl is out of range, i.e., below 0 or above 500.
− This file has only two columns: job_id and reject_desc.
− reject_desc is a variable-length text field, maximum length 100. It should contain a string of the form: “Level out of range:
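In DataStage this exercise is built with constraints on the Transformer's output links, but the splitting logic itself can be sketched directly. A minimal Python version, assuming Jobs.txt has header columns job_id, job_desc, and min_lvl (the real layout may differ), and assuming the truncated reject message ends with the offending value:

import csv

# Assumed layout of Jobs.txt: header row with job_id, job_desc, min_lvl
# (comma-separated). The real file's column names may differ.
ranges = [
    ("LowLevelJobs.txt", 0, 25),
    ("MidLevelJobs.txt", 26, 100),
    ("HighLevelJobs.txt", 101, 500),
]

with open("Jobs.txt", newline="") as src:
    reader = csv.DictReader(src)

    outputs = []                      # (lo, hi, writer) per target file
    handles = []
    for name, lo, hi in ranges:
        f = open(name, "w", newline="")
        w = csv.DictWriter(f, fieldnames=reader.fieldnames)
        w.writeheader()               # column names on the first line
        outputs.append((lo, hi, w))
        handles.append(f)

    rej = open("JobRejects.txt", "w", newline="")
    rejects = csv.writer(rej)
    rejects.writerow(["job_id", "reject_desc"])
    handles.append(rej)

    for row in reader:
        lvl = int(row["min_lvl"])
        for lo, hi, w in outputs:
            if lo <= lvl <= hi:
                # Derivation: "Designer" -> "Job Title: [Designer]"
                row["job_desc"] = f"Job Title: [{row['job_desc']}]"
                w.writerow(row)
                break
        else:
            # The spec's reject message is truncated; assuming it ends
            # with the offending value. Capped at 100 characters.
            rejects.writerow([row["job_id"], f"Level out of range: {lvl}"[:100]])

    for f in handles:
        f.close()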
What are the types of views in datastage director?
In Informatica, for a table I can find the corresponding dependent mappings. Likewise, can I find the dependent jobs, with all their information, by using the table name?
Source has 2 columns:
USA,NEWYORK
INDIA,MUMBAI
INDIA,DELHI
USA,CHICAGO
INDIA,PUNE
I want data in the target like below:
INDIA,MUMBAI1
INDIA,DELHI2
INDIA,PUNE3
USA,NEWYORK1
USA,CHICAGO2
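One reading of the expected output: group the rows by country and append a running sequence number to the city within each group. In DataStage this is typically done with a sort plus a stage-variable counter; a minimal Python sketch of the same logic:

from itertools import groupby

rows = [("USA", "NEWYORK"), ("INDIA", "MUMBAI"), ("INDIA", "DELHI"),
        ("USA", "CHICAGO"), ("INDIA", "PUNE")]

# Sort on the country column so groupby sees each group contiguously;
# sorted() is stable, so city order within a country is preserved.
for country, group in groupby(sorted(rows, key=lambda r: r[0]),
                              key=lambda r: r[0]):
    for n, (_, city) in enumerate(group, start=1):
        print(f"{country},{city}{n}")

Running this prints exactly the target rows listed above.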
What is the use of skid in reporting?