If I make any changes in the parallel job, do I need to implement the changes in the sequencer job as well, or will the changes be reflected automatically?
Answer Posted / rajesh
Once the modifications are done, simply compile the job and run the sequencer. There is no need to make any changes in the sequencer.
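Purely as an illustration of that answer, here is a minimal sketch of re-running the sequence from the command line after the modified parallel job has been recompiled in Designer. It assumes the dsjob command-line utility is available on the PATH; "MyProject" and "MySequence" are hypothetical names.

import subprocess

# Minimal sketch: the sequence picks up the recompiled parallel job
# automatically, so it only needs to be re-run, not edited.
# "MyProject" and "MySequence" are hypothetical names.
def run_sequence(project, sequence):
    """Run a job sequence via the dsjob CLI and return its exit code."""
    result = subprocess.run(
        ["dsjob", "-run", "-jobstatus", project, sequence],
        capture_output=True, text=True,
    )
    print(result.stdout)
    return result.returncode

run_sequence("MyProject", "MySequence")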
What are the different plug-ins stages used in your projects?
What can we do with DataStage Director?
Create a job that splits the data in the Jobs.txt file into four output files. You will direct the data to the different output files using constraints (a rough sketch of the split logic follows this exercise).
• Job name: JobLevels
• Source file: Jobs.txt
• Target file 1: LowLevelJobs.txt
− min_lvl between 0 and 25 inclusive.
− Same column types and headings as Jobs.txt.
− Include column names in the first line of the output file.
− The job description column should be preceded by the string “Job Title:” and embedded within square brackets. For example, if the job description is “Designer”, the derived value is: “Job Title: [Designer]”.
• Target file 2: MidLevelJobs.txt
− min_lvl between 26 and 100 inclusive.
− Same format and derivations as Target file 1.
• Target file 3: HighLevelJobs.txt
− min_lvl between 101 and 500 inclusive.
− Same format and derivations as Target file 1.
• Rejects file: JobRejects.txt
− min_lvl is out of range, i.e., below 0 or above 500.
− This file has only two columns: job_id and reject_desc.
− reject_desc is a variable-length text field, maximum length 100. It should contain a string of the form: “Level out of range:
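For clarity, here is a rough sketch of the same split logic in plain Python; it is not the DataStage solution itself. It assumes Jobs.txt is comma-delimited with a header row and uses assumed column names job_desc and min_lvl; because the reject message in the question is cut off after “Level out of range:”, the value appended to it is only a guess.

import csv

def split_jobs(source="Jobs.txt"):
    # Range boundaries and file names taken from the exercise text above.
    targets = {
        "low":  (0,   25,  open("LowLevelJobs.txt",  "w", newline="")),
        "mid":  (26,  100, open("MidLevelJobs.txt",  "w", newline="")),
        "high": (101, 500, open("HighLevelJobs.txt", "w", newline="")),
    }
    rejects = open("JobRejects.txt", "w", newline="")

    with open(source, newline="") as src:
        reader = csv.DictReader(src)
        writers = {key: csv.DictWriter(f, fieldnames=reader.fieldnames)
                   for key, (_, _, f) in targets.items()}
        for w in writers.values():
            w.writeheader()                  # column names in the first line
        reject_writer = csv.DictWriter(
            rejects, fieldnames=["job_id", "reject_desc"])
        reject_writer.writeheader()

        for row in reader:
            level = int(row["min_lvl"])      # "min_lvl" column name assumed
            # Derivation: "Designer" becomes "Job Title: [Designer]"
            row["job_desc"] = "Job Title: [%s]" % row["job_desc"]
            for key, (low, high, _) in targets.items():
                if low <= level <= high:
                    writers[key].writerow(row)
                    break
            else:
                # The reject text after "Level out of range:" is cut off in
                # the question, so appending the level here is a guess.
                reject_writer.writerow({
                    "job_id": row["job_id"],
                    "reject_desc": "Level out of range: %d" % level,
                })

    for _, _, f in targets.values():
        f.close()
    rejects.close()

split_jobs()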
Hi, I am facing a typical problem in every interview: I am asked for critical scenarios faced in real time. Please help me out.
What are the areas of application?
What is the purpose of the Pivot stage, and what are the types of containers in DataStage?
How do you see hidden files in Linux?
1) There are 8000 jobs and I have given a commit when suddenly the job aborts. What happens? 2) What is the difference between the Transformer stage and the Filter stage? 3) How do you load the data in the source?
What is the Sort Merge collector?
What are all the types of jobs you have developed?
DB2 Connector > Transformer > Sequential File: the data is exported in CSV format to a sequential file, and this file is then sent in an e-mail using a sequence job. The problem is how to avoid sending a blank CSV file. When I run the job, there is a chance that it returns zero records, but the sequence job still sends the CSV file blank. How can I avoid this? Thanks.
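One possible approach, sketched here in plain Python as an assumption rather than a confirmed solution: before triggering the notification, check whether the exported CSV contains any data rows beyond the header (in a real sequence, a check of this kind could be run from an Execute Command activity whose result drives the e-mail branch). The file name is illustrative.

def has_data_rows(path="export.csv"):
    """Return True if the CSV has at least one row besides the header."""
    with open(path) as f:
        non_empty = [line for line in f if line.strip()]
    return len(non_empty) > 1

# Hypothetical file name; only send the mail when real data was exported.
if has_data_rows("export.csv"):
    print("send the e-mail with the CSV attached")
else:
    print("skip the notification: the export contains no data rows")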
How many areas for files does DataStage have?
What are sequencers?
What are the steps required to kill the job in Datastage?
What are the benefits of DataStage?