Hi, this is Vijay.
How can you remove the duplicates in a sequential file?
Answer Posted / prasu
Hi Manoj,
Where will you give the Distinct keyword? This is not a
table, it is a file...
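In DataStage the usual answer is a Remove Duplicates stage, or a Sort stage configured to drop duplicate key values, placed before the target Sequential File stage. As a language-neutral illustration of the same idea, here is a minimal Python sketch that copies a flat file and keeps only the first record seen for each key; the file names, delimiter, and key column are assumptions for the example, not part of the question.

# Minimal sketch: de-duplicate a sequential (flat) file by a key column.
# Assumptions: comma-delimited file, first line is a header, key is column 0.
# The file names "input.txt" / "deduped.txt" are placeholders.

def dedupe_file(src="input.txt", dst="deduped.txt", key_col=0, delim=","):
    seen = set()
    with open(src) as fin, open(dst, "w") as fout:
        header = fin.readline()
        fout.write(header)                      # keep the header line as-is
        for line in fin:
            key = line.rstrip("\n").split(delim)[key_col]
            if key not in seen:                 # first occurrence wins
                seen.add(key)
                fout.write(line)

if __name__ == "__main__":
    dedupe_file()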
Create a job that splits the data in the Jobs.txt file into four output files. You will direct the data to the different output files using constraints (a sketch of one possible approach follows the list).
• Job name: JobLevels
• Source file: Jobs.txt
• Target file 1: LowLevelJobs.txt
− min_lvl between 0 and 25 inclusive.
− Same column types and headings as Jobs.txt.
− Include column names in the first line of the output file.
− Job description column should be preceded by the string “Job Title:” and embedded within square brackets. For example, if the job description is “Designer”, the derived value is: “Job Title: [Designer]”.
• Target file 2: MidLevelJobs.txt
− min_lvl between 26 and 100 inclusive.
− Same format and derivations as Target file 1.
• Target file 3: HighLevelJobs.txt
− min_lvl between 101 and 500 inclusive.
− Same format and derivations as Target file 1.
• Rejects file: JobRejects.txt
− min_lvl is out of range, i.e., below 0 or above 500.
− This file has only two columns: job_id and reject_desc.
− reject_desc is a variable-length text field, maximum length 100. It should contain a string of the form: “Level out of range:
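Below is a minimal sketch of the split logic in Python (not an actual DataStage job), assuming Jobs.txt is a comma-delimited file whose header includes job_id, job_desc and min_lvl; the exact reject message is also an assumption, since the specification text above is cut off.

import csv

# Sketch of the JobLevels split: route each row of Jobs.txt to one of three
# level files or to JobRejects.txt based on min_lvl.
# Assumptions: comma-delimited input with header columns job_id, job_desc,
# min_lvl; the reject message text is assumed because the spec is truncated.

RANGES = [
    ("LowLevelJobs.txt", 0, 25),
    ("MidLevelJobs.txt", 26, 100),
    ("HighLevelJobs.txt", 101, 500),
]

def split_jobs(src="Jobs.txt"):
    with open(src, newline="") as fin:
        reader = csv.DictReader(fin)
        writers, files = {}, []
        for name, _, _ in RANGES:
            f = open(name, "w", newline="")
            files.append(f)
            w = csv.DictWriter(f, fieldnames=reader.fieldnames)
            w.writeheader()                       # column names in the first line
            writers[name] = w
        rej_f = open("JobRejects.txt", "w", newline="")
        files.append(rej_f)
        rejects = csv.writer(rej_f)
        rejects.writerow(["job_id", "reject_desc"])

        for row in reader:
            lvl = int(row["min_lvl"])
            for name, lo, hi in RANGES:
                if lo <= lvl <= hi:
                    # derive "Job Title: [<description>]" per the spec
                    row["job_desc"] = "Job Title: [{}]".format(row["job_desc"])
                    writers[name].writerow(row)
                    break
            else:
                # min_lvl below 0 or above 500 goes to the rejects file
                rejects.writerow([row["job_id"],
                                  "Level out of range: {}".format(lvl)[:100]])
        for f in files:
            f.close()

if __name__ == "__main__":
    split_jobs()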
Can you explain the tagbatch restructure operator?
Hi guys, please design a job with derivations (a solution) for this requirement. My source table is like this:

emp_no  qualification
1       a
1       c
2       a
3       c
3       b

It should be loaded to the target like this:

emp_no  qualification
1       b
2       b
2       c
3       a

The requirement is that every employee should have three qualifications, i.e. a, b and c. Whatever qualification is missing in the source table should be moved to the target system. Hope you got the requirement. Thanks.
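One common way to get this result is to generate the full set of (emp_no, qualification) combinations and keep only the pairs that are not present in the source, for example via a join or lookup that passes the non-matching rows through. Here is a minimal Python sketch of that logic, assuming the qualification domain is exactly a, b, c; the function name and the hard-coded sample rows are illustrative only.

from collections import defaultdict

# Sketch: for each employee, emit the qualifications (out of a, b, c) that are
# missing from the source rows, mirroring a "full set minus existing" join.

ALL_QUALS = {"a", "b", "c"}          # assumed fixed domain from the requirement

source_rows = [(1, "a"), (1, "c"), (2, "a"), (3, "c"), (3, "b")]

def missing_qualifications(rows):
    have = defaultdict(set)
    for emp_no, qual in rows:
        have[emp_no].add(qual)
    target = []
    for emp_no, quals in sorted(have.items()):
        for q in sorted(ALL_QUALS - quals):   # qualifications absent in source
            target.append((emp_no, q))
    return target

print(missing_qualifications(source_rows))
# -> [(1, 'b'), (2, 'b'), (2, 'c'), (3, 'a')]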
How did you implement SCD Type 1 and Type 2 in your project?
In workload management there are three options (Low priority, Medium priority and High priority jobs) that can be used for resource management. Why was this feature developed when jobs are already prescheduled by a scheduler or AutoSys? What is the use of workload management then?
Define data aggregation?
What is a merge?
What is active and passive stage?
How do you convert columns to rows in DataStage?
How do you find a value from a column in a dataset?
How do you run a DataStage job from the command line?
Differentiate between validated and compiled in DataStage?
Is it possible to implement parallelism in mainframe jobs? If yes, how? If not, why?
Define Job control?
What are the features of DataStage Flow Designer?