How to generate a surrogate key without using the Surrogate Key stage?
Answer posted by nish:
Sudhir, your answer is wrong: simply incrementing a counter will not work in a multi-node configuration, because each partition keeps its own counter and the generated keys collide across nodes. The answers that use the @PARTITIONNUM system variable are correct.
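For reference, here is the usual partition-safe derivation in a parallel Transformer stage, as a minimal sketch (the output column name SurrogateKey is hypothetical; the @-variables are standard DataStage system variables):

* @PARTITIONNUM  = 0-based number of the partition this instance runs on
* @NUMPARTITIONS = total number of partitions in the job
* @INROWNUM      = 1-based row number within the current partition
* Offsetting by the partition number and striding by @NUMPARTITIONS
* keeps the generated keys unique across all nodes.
SurrogateKey derivation:
@PARTITIONNUM + (@NUMPARTITIONS * (@INROWNUM - 1)) + 1

With 4 partitions this yields 1, 5, 9, ... on partition 0 and 2, 6, 10, ... on partition 1, so no two nodes ever produce the same key.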
What is the DataStage engine?
Create a job that splits the data in the Jobs.txt file into four output files. You will direct the data to the different output files using constraints. (Example constraint and derivation expressions are sketched after this list.)
• Job name: JobLevels
• Source file: Jobs.txt
• Target file 1: LowLevelJobs.txt
− min_lvl between 0 and 25 inclusive.
− Same column types and headings as Jobs.txt.
− Include column names in the first line of the output file.
− The job description column should be preceded by the string “Job Title:” and embedded within square brackets. For example, if the job description is “Designer”, the derived value is: “Job Title: [Designer]”.
• Target file 2: MidLevelJobs.txt
− min_lvl between 26 and 100 inclusive.
− Same format and derivations as Target file 1.
• Target file 3: HighLevelJobs.txt
− min_lvl between 101 and 500 inclusive.
− Same format and derivations as Target file 1.
• Rejects file: JobRejects.txt
− min_lvl is out of range, i.e., below 0 or above 500.
− This file has only two columns: job_id and reject_desc.
− reject_desc is a variable-length text field, maximum length 100. It should contain a string of the form: “Level out of range: …
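A minimal sketch of the Transformer expressions this exercise calls for (link names are hypothetical; min_lvl and job_desc are the column names implied above; ":" is the DataStage string-concatenation operator):

* Constraint on the LowLevelJobs.txt output link:
min_lvl >= 0 And min_lvl <= 25
* Constraint on the MidLevelJobs.txt output link:
min_lvl >= 26 And min_lvl <= 100
* Constraint on the HighLevelJobs.txt output link:
min_lvl >= 101 And min_lvl <= 500
* Constraint on the JobRejects.txt output link:
min_lvl < 0 Or min_lvl > 500
* Derivation for the job description column on the three target links:
'Job Title: [' : job_desc : ']'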
Explain the following functions: Field, NVL, INDEX, REPLACE, TRANSLATE, COALESCE.
Which commands are used to import and export DataStage jobs?
A file has the following input, and we have to get 3 outputs (o/p1, o/p2, o/p3) from it using the same job:
Input: 1 1 1 2 3 4 4 4
o/p1  o/p2  o/p3
1     1     2
2     1     3
3     1     4
4     4
If you want to use the same piece of code in different jobs, how will you achieve it?
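One common approach, besides wiring the shared logic into a shared container, is a reusable transform routine that any job's Transformer can call. A minimal sketch in DataStage BASIC, with a hypothetical routine name FormatJobTitle and a single argument Arg1 (a transform function returns whatever is assigned to Ans):

* Routine: FormatJobTitle(Arg1), created once under Routines in the
* Designer and then shared by every job that needs it.
Ans = 'Job Title: [' : Arg1 : ']'

In any Transformer derivation it can then be invoked as FormatJobTitle(job_desc).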
New records are inserted correctly, but changed records are not being applied to the target; I want matching records to be updated instead (the key is a composite natural key). Can anyone explain how to do this?
What are the partitioning techniques available in link partitioner?
How will you load your daily/monthly data into fact and dimension tables using DataStage?
Can you explain the Kafka connector?
Describe link sort.
Explain the Unix commands sed, awk, and head.
How can we improve performance in DataStage?
If we are using two sources having the same metadata, how do we check whether the data in the two sources is the same or not? And if the data is not the same, I want to abort the job. How can we do this?
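One hedged approach for a parallel job: feed both sources into a Change Capture stage, which appends a change_code column to each row (by default 0 = copy, 1 = insert, 2 = delete, 3 = edit), then run the result through a Transformer:

* Output-link constraint: pass only the rows where the sources differ.
change_code <> 0
* Setting that link's "Abort After Rows" property to 1 makes the job
* abort as soon as a single differing row arrives, i.e. as soon as the
* data in the two sources turns out not to be the same.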
How do you implement complex jobs in DataStage?