My source sequential file has one column:

col1
1
2
3
4
5
6
7
8
9
I have 4 targets (t1, t2, t3, t4), and the rows should be distributed across them like this:

t1 t2 t3 t4
1  2  3  4
5  6  7  8
9

How can we get this?
Answer posted by bharath:
It works with round-robin partitioning as well; please try it.
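The round-robin distribution the answer refers to can be sketched outside DataStage; a minimal Python sketch (the function name and the in-memory lists standing in for the four target links are illustrative assumptions, not DataStage APIs):

```python
# Round-robin a single-column source into 4 targets (here: lists).
# Mirrors DataStage round-robin partitioning: row i goes to target i % 4.

def round_robin(rows, n_targets=4):
    targets = [[] for _ in range(n_targets)]
    for i, row in enumerate(rows):
        targets[i % n_targets].append(row)
    return targets

source = [1, 2, 3, 4, 5, 6, 7, 8, 9]
t1, t2, t3, t4 = round_robin(source)
print(t1, t2, t3, t4)  # t1=[1, 5, 9], t2=[2, 6], t3=[3, 7], t4=[4, 8]
```

Reading the four targets side by side reproduces the layout in the question: row 1 is 1 2 3 4, row 2 is 5 6 7 8, and the leftover 9 lands in t1.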
What are the primary usages of the DataStage tool?
Differentiate between an operational data store (ODS) and a data warehouse.
What is quality stage?
Which algorithm did you use for your hash file?
What are the types of containers in datastage?
How many rows are sorted in the Sort stage by default in server jobs?
Create a job that splits the data in the Jobs.txt file into four output files. You will direct the data to the different output files using constraints.
• Job name: JobLevels
• Source file: Jobs.txt
• Target file 1: LowLevelJobs.txt
− min_lvl between 0 and 25 inclusive.
− Same column types and headings as Jobs.txt.
− Include column names in the first line of the output file.
− Job description column should be preceded by the string “Job Title:” and embedded within square brackets. For example, if the job description is “Designer”, the derived value is: “Job Title: [Designer]”.
• Target file 2: MidLevelJobs.txt
− min_lvl between 26 and 100 inclusive.
− Same format and derivations as Target file 1.
• Target file 3: HighLevelJobs.txt
− min_lvl between 101 and 500 inclusive.
− Same format and derivations as Target file 1.
• Rejects file: JobRejects.txt
− min_lvl is out of range, i.e., below 0 or above 500.
− This file has only two columns: job_id and reject_desc.
− reject_desc is a variable-length text field, maximum length 100. It should contain a string of the form: “Level out of range:
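In DataStage this would be done with Transformer-stage constraints and an output-column derivation, but the routing logic itself can be sketched in plain Python. The column names (job_id, job_desc, min_lvl) and the sample rows below are assumptions about the layout of Jobs.txt, and the reject message simply appends min_lvl since the spec above is truncated:

```python
# Route each job row to a level band using the constraints above,
# and derive the decorated job-title value for the non-reject files.

def derive_title(job_desc):
    # e.g. "Designer" -> "Job Title: [Designer]"
    return f"Job Title: [{job_desc}]"

def route(row):
    lvl = int(row["min_lvl"])
    if 0 <= lvl <= 25:
        return "low"      # LowLevelJobs.txt
    if 26 <= lvl <= 100:
        return "mid"      # MidLevelJobs.txt
    if 101 <= lvl <= 500:
        return "high"     # HighLevelJobs.txt
    return "reject"       # JobRejects.txt

rows = [  # hypothetical sample data
    {"job_id": "1", "job_desc": "Designer", "min_lvl": "25"},
    {"job_id": "2", "job_desc": "Manager", "min_lvl": "75"},
    {"job_id": "3", "job_desc": "Architect", "min_lvl": "200"},
    {"job_id": "4", "job_desc": "Intern", "min_lvl": "600"},
]
for r in rows:
    dest = route(r)
    if dest == "reject":
        print(r["job_id"], f"Level out of range: {r['min_lvl']}")
    else:
        print(dest, derive_title(r["job_desc"]))
```

Each `if` branch corresponds to one output-link constraint; the final `return "reject"` plays the role of the Transformer's reject link for rows no constraint accepts.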
Hi, what is the use of macros, functions, and routines? In what situations are they used? If you know the answer, please explain. Thanks.
What is a 'reconsideration error', how can I respond to this error, and how do I debug it?
What is a DataStage macro?
What is the difference between an operational data store and a data warehouse?
What is the flow of loading data into fact & dimensional tables?
Why do we use link partitioner and link collector in datastage?
What is the difference between 'validated' and 'compiled' in DataStage?
What is the difference between hashfile and sequential file?