Answer posted by disney:
In the Data Set stage, on the Input tab under Partitioning, select key partitioning instead of auto, then check Perform sort and check Unique. This removes the duplicates from the dataset.
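The partition-sort-unique technique above can be sketched outside DataStage as well. The following is a minimal Python equivalent, assuming rows are dictionaries; the hash-based partitioning and the partition count of 4 are illustrative assumptions, not DataStage internals:

```python
from itertools import groupby

def dedupe(rows, key, num_partitions=4):
    """Mimic key partitioning + 'perform sort' + 'unique':
    hash-partition rows on the key, sort each partition by the key,
    and keep only the first row of each key group."""
    partitions = {}
    for row in rows:
        # hash partitioning on the key column (illustrative, not DataStage's hash)
        partitions.setdefault(hash(row[key]) % num_partitions, []).append(row)
    result = []
    for part in partitions.values():
        part.sort(key=lambda r: r[key])          # 'perform sort' within the partition
        for _, group in groupby(part, key=lambda r: r[key]):
            result.append(next(group))           # 'unique': first row per key only
    return result

rows = [{"id": 1, "v": "a"}, {"id": 2, "v": "b"}, {"id": 1, "v": "c"}]
print(dedupe(rows, "id"))
```

Note that, as in DataStage, duplicates are only guaranteed to be removed if rows with the same key land in the same partition, which is why key partitioning (not auto) matters.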
What are the types of jobs you have developed?
How to convert RGB Value to Hexadecimal values in datastage?
What is process model?
Define orabulk and bcp stages?
How do you write server routine code?
Why do we use link partitioner and link collector in datastage?
Define Merge?
How can you write parallel routines in datastage PX?
Where do you see different stages in the designer?
Is it possible to create skid in dim, fact tables?
Can you highlight the main features of ibm infosphere information server?
Create a job that splits the data in the Jobs.txt file into four output files. You will direct the data to the different output files using constraints.
• Job name: JobLevels
• Source file: Jobs.txt
• Target file 1: LowLevelJobs.txt
− min_lvl between 0 and 25 inclusive.
− Same column types and headings as Jobs.txt.
− Include column names in the first line of the output file.
− The job description column should be preceded by the string “Job Title:” and embedded within square brackets. For example, if the job description is “Designer”, the derived value is: “Job Title: [Designer]”.
• Target file 2: MidLevelJobs.txt
− min_lvl between 26 and 100 inclusive.
− Same format and derivations as Target file 1.
• Target file 3: HighLevelJobs.txt
− min_lvl between 101 and 500 inclusive.
− Same format and derivations as Target file 1.
• Rejects file: JobRejects.txt
− min_lvl is out of range, i.e., below 0 or above 500.
− This file has only two columns: job_id and reject_desc.
− reject_desc is a variable-length text field, maximum length 100. It should contain a string of the form: “Level out of range:
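As a cross-check of the constraint logic in the exercise above, here is a minimal Python sketch (not a DataStage job). It assumes Jobs.txt is comma-delimited with header columns job_id, job_desc, min_lvl (a hypothetical layout), and, since the exercise text cuts off mid-string, it assumes the reject message ends with the offending level:

```python
import csv

def split_jobs(src="Jobs.txt"):
    """Split src into three range-constrained files plus a rejects file."""
    # Constraints from the exercise: inclusive min_lvl ranges per target file.
    targets = {
        "LowLevelJobs.txt":  (0, 25),
        "MidLevelJobs.txt":  (26, 100),
        "HighLevelJobs.txt": (101, 500),
    }
    with open(src, newline="") as f:
        reader = csv.DictReader(f)
        open_files, writers = [], {}
        for name in targets:
            fh = open(name, "w", newline="")
            open_files.append(fh)
            w = csv.DictWriter(fh, fieldnames=reader.fieldnames)
            w.writeheader()                      # column names on the first line
            writers[name] = w
        rej_fh = open("JobRejects.txt", "w", newline="")
        open_files.append(rej_fh)
        rej = csv.DictWriter(rej_fh, fieldnames=["job_id", "reject_desc"])
        rej.writeheader()
        for row in reader:
            lvl = int(row["min_lvl"])
            for name, (lo, hi) in targets.items():
                if lo <= lvl <= hi:
                    out = dict(row)
                    # Derivation: prefix with "Job Title:" and wrap in brackets.
                    out["job_desc"] = "Job Title: [%s]" % row["job_desc"]
                    writers[name].writerow(out)
                    break
            else:
                # min_lvl below 0 or above 500: assumed reject-string ending.
                rej.writerow({"job_id": row["job_id"],
                              "reject_desc": "Level out of range: %s" % lvl})
        for fh in open_files:
            fh.close()
```

In a real DataStage job the same split would be done with a Transformer stage: one output link per target file with the range as its constraint, the reject link catching everything else, and the bracketed derivation applied on the job_desc column.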
Define DataStage Designer?
What is the flow of loading data into fact & dimensional tables?
What is usage analysis in datastage?