What is operator combinability in DataStage?

Answer Posted / ankit gosain

Hi All,

It's a property of DataStage in which it combines two or
more operators into a single process.
You can explicitly enable/disable it with the help of the
APT_DISABLE_COMBINATION environment variable, at the job
level or at the project level as well.
You can also enable/disable it at the stage level:
Stage Properties >> Advanced >> Operator Combinability.
By default it's Auto, so DataStage decides whether
to combine or not.

You can see this combinability behaviour (the mapping of
operators to processes) by setting the following environment variable:
APT_DUMP_SCORE
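
As a rough sketch, you might override both variables for a single run from a script like the one below. It assumes dsjob is on the PATH of the engine host, that the two environment variables have been added as job parameters in the job's properties (so they can be overridden with -param), and that the project and job names are placeholders:

import subprocess

# Disable operator combination for this run and dump the score so the
# operator-to-process mapping is written to the job log.
# "MyProject" and "MyParallelJob" are placeholder names.
subprocess.run(
    [
        "dsjob", "-run",
        "-param", "$APT_DISABLE_COMBINATION=True",  # turn combination off
        "-param", "$APT_DUMP_SCORE=True",           # print the score
        "MyProject", "MyParallelJob",
    ],
    check=True,  # raise if dsjob reports a failure
)

With combination disabled, the score shows one process per operator; with the default Auto setting, several combinable operators typically share a process.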

For further queries, write me @ ankitgosian@gmail.com

Cheers,
Ankit :)


Please Help Members By Posting Answers For The Questions Below

How many types of stages are there?

Hi, can anyone give a few examples of scenarios and their corresponding designs in DataStage? I am new to this tool and get confused about the design when my manager asks me to design a job. Please post the URL if there is one, so I can go through it. Thanks in advance.

How to find a value in a column of a dataset?

How can we perform a second extraction from the client database without picking up the data that was already loaded in the first extraction?

What is the difference between ‘Validated OK’ and ‘Compiled’ in DataStage?

How to read multiple files using a single DataStage job if the files have the same metadata?

How can you write parallel routines in DataStage PX?

What is the difference between server jobs and parallel jobs?

Which warehouse are you using in your data warehouse?

Can you explain the engine tier in Information Server?

Define APT_CONFIG in Datastage?

How to reverse a string using Unix?

sed,awk,head

How to write server routine code?

Create a job that splits the data in the Jobs.txt file into four output files. You will direct the data to the different output files using constraints.
• Job name: JobLevels
• Source file: Jobs.txt
• Target file 1: LowLevelJobs.txt
  − min_lvl between 0 and 25 inclusive.
  − Same column types and headings as Jobs.txt.
  − Include column names in the first line of the output file.
  − The job description column should be preceded by the string “Job Title:” and embedded within square brackets. For example, if the job description is “Designer”, the derived value is: “Job Title: [Designer]”.
• Target file 2: MidLevelJobs.txt
  − min_lvl between 26 and 100 inclusive.
  − Same format and derivations as Target file 1.
• Target file 3: HighLevelJobs.txt
  − min_lvl between 101 and 500 inclusive.
  − Same format and derivations as Target file 1.
• Rejects file: JobRejects.txt
  − min_lvl is out of range, i.e., below 0 or above 500.
  − This file has only two columns: job_id and reject_desc.
  − reject_desc is a variable-length text field, maximum length 100. It should contain a string of the form “Level out of range: <min_lvl>”, where <min_lvl> is the value in the min_lvl field.
My question is: how do you write the stage variable for the reject rows?

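For the JobLevels question above, the split and reject logic can be prototyped outside DataStage. The following is a rough Python sketch (not DataStage Transformer syntax), assuming Jobs.txt is a comma-delimited file with a header row containing job_id, job_desc and min_lvl columns:

import csv

def derive_desc(desc):
    # "Designer" -> "Job Title: [Designer]"
    return "Job Title: [" + desc + "]"

with open("Jobs.txt", newline="") as src, \
     open("LowLevelJobs.txt", "w", newline="") as low, \
     open("MidLevelJobs.txt", "w", newline="") as mid, \
     open("HighLevelJobs.txt", "w", newline="") as high, \
     open("JobRejects.txt", "w", newline="") as rej:

    reader = csv.DictReader(src)
    low_w = csv.DictWriter(low, fieldnames=reader.fieldnames)
    mid_w = csv.DictWriter(mid, fieldnames=reader.fieldnames)
    high_w = csv.DictWriter(high, fieldnames=reader.fieldnames)
    rej_w = csv.DictWriter(rej, fieldnames=["job_id", "reject_desc"])
    for w in (low_w, mid_w, high_w, rej_w):
        w.writeheader()                        # column names in the first line

    for row in reader:
        lvl = int(row["min_lvl"])
        if 0 <= lvl <= 500:
            row["job_desc"] = derive_desc(row["job_desc"])
            if lvl <= 25:
                low_w.writerow(row)            # 0-25 inclusive
            elif lvl <= 100:
                mid_w.writerow(row)            # 26-100 inclusive
            else:
                high_w.writerow(row)           # 101-500 inclusive
        else:
            # out of range: only job_id and reject_desc, max length 100
            rej_w.writerow({"job_id": row["job_id"],
                            "reject_desc": ("Level out of range: " + row["min_lvl"])[:100]})

In the Transformer itself, the reject rule would typically be either an Otherwise constraint on the reject link or an explicit constraint such as min_lvl < 0 Or min_lvl > 500, with reject_desc derived along the lines of "Level out of range: " : min_lvl.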