1) How much data do you get every day?
2) What data does your project contain?
3) What is the source in your project? What is the biggest
table, and what is its size, in your schema or project?
Answer Posted / kiran
1. How much data do you get every day?
Ans: No one can say exactly how much data you will get; it
varies from company to company and client to client. First,
consider whether you are on the development or the production
side: data arrives only on the production side, not in
development. Approximately, you can say about 5 lakh
(500,000) records per day.
2) What data does your project contain?
Ans: Flat files.
3) What is the source in your project? What is the biggest
table, and what is its size, in your schema or project?
Ans: First you receive all the data as text in a source
file, then populate it into Oracle, then cleanse it, and
finally populate it into the data warehouse.
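The flow described in that answer (source flat file, staging table, cleansing, warehouse) can be illustrated outside DataStage. This is a minimal Python sketch of the same three steps; the file names, the `id` key column, and the cleansing rules are assumptions for illustration, not the project's actual design:

```python
import csv

def load_stage(source_path):
    """Read the raw source flat file into memory (simulates staging)."""
    with open(source_path, newline="") as f:
        return list(csv.DictReader(f))

def cleanse(rows):
    """Trim whitespace and drop rows missing the key column (assumed rule)."""
    cleaned = []
    for row in rows:
        row = {k: v.strip() for k, v in row.items()}
        if row.get("id"):  # keep only rows with a non-empty id
            cleaned.append(row)
    return cleaned

def load_warehouse(rows, target_path):
    """Write the cleansed rows to the warehouse target file."""
    if not rows:
        return
    with open(target_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=rows[0].keys())
        writer.writeheader()
        writer.writerows(rows)
```

In a real DataStage job these steps would be separate stages wired by links; the sketch only shows the order of operations the answer describes.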
Create a job that splits the data in the Jobs.txt file into
four output files. You will direct the data to the
different output files using constraints.
• Job name: JobLevels
• Source file: Jobs.txt
• Target file 1: LowLevelJobs.txt
− min_lvl between 0 and 25 inclusive.
− Same column types and headings as Jobs.txt.
− Include column names in the first line of the output file.
− The job description column should be preceded by the
string “Job Title:” and embedded within square brackets.
For example, if the job description is “Designer”, the
derived value is: “Job Title: [Designer]”.
• Target file 2: MidLevelJobs.txt
− min_lvl between 26 and 100 inclusive.
− Same format and derivations as Target file 1.
• Target file 3: HighLevelJobs.txt
− min_lvl between 101 and 500 inclusive.
− Same format and derivations as Target file 1.
• Rejects file: JobRejects.txt
− min_lvl is out of range, i.e., below 0 or above 500.
− This file has only two columns: job_id and reject_desc.
− reject_desc is a variable-length text field, maximum
length 100. It should contain a string of the form:
“Level out of range:
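The exercise above is a direct exercise in transformer constraints plus one derivation. The same logic can be sketched in Python; the column names (`job_id`, `job_desc`, `min_lvl`) and the comma delimiter are assumptions about Jobs.txt, and because the exercise text is cut off after “Level out of range:”, the rest of the reject message here is also an assumption:

```python
import csv

def split_jobs(source_path):
    """Split Jobs.txt rows into four buckets by min_lvl, mirroring the
    exercise's constraints: 0-25 low, 26-100 mid, 101-500 high,
    anything else rejected."""
    buckets = {"low": [], "mid": [], "high": [], "reject": []}
    with open(source_path, newline="") as f:
        for row in csv.DictReader(f):
            lvl = int(row["min_lvl"])
            # Derivation from the exercise: wrap the description
            # as "Job Title: [<description>]".
            row["job_desc"] = "Job Title: [%s]" % row["job_desc"]
            if 0 <= lvl <= 25:
                buckets["low"].append(row)
            elif 26 <= lvl <= 100:
                buckets["mid"].append(row)
            elif 101 <= lvl <= 500:
                buckets["high"].append(row)
            else:
                # The reject text after "Level out of range:" is an
                # assumption; the original exercise text is truncated.
                buckets["reject"].append(
                    {"job_id": row["job_id"],
                     "reject_desc": "Level out of range: %d" % lvl})
    return buckets
```

In the DataStage job each bucket would be a separate output link with its constraint expression, and the derivation would sit on the job_desc column of the three target links.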
List the various types of routines in datastage.
Can you filter data in hashed file?
A file has the following input, and we have to produce 3 outputs using the same job. Input: 1 1 1 2 3 4 4 4. Outputs: o/p1 o/p2 o/p3 1 1 2 2 1 3 3 1 4 4 4
If you want to use the same piece of code in different jobs, how will you achieve it?
What is the difference between datastage and informatica?
What are the types of views in datastage director?
What is the Change Capture stage?
How do you read 100 records at a time from the source: a) when the metadata is the same, and b) when the metadata is not the same?
Can you explain kafka connector?
How did you implement SCD Type 1 & Type 2 in your project?
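For the SCD question above, the difference between the two types is that Type 1 overwrites the dimension row (no history) while Type 2 expires the current row and inserts a new current version. A minimal Python sketch of both, assuming a simple dimension layout with `start_date`, `end_date`, and `current` flag columns (the column names are illustrative, not any particular project's design):

```python
from datetime import date

def scd_type1(dim, key, new_row):
    """Type 1: overwrite the dimension row in place; no history kept."""
    dim[key] = dict(new_row)

def scd_type2(dim_rows, key, new_row, today=None):
    """Type 2: expire the current row for the key, then insert a new
    current version stamped with today's date."""
    today = today or date.today().isoformat()
    for row in dim_rows:
        if row["key"] == key and row["current"]:
            row["current"] = False      # close out the old version
            row["end_date"] = today
    dim_rows.append({"key": key, **new_row,
                     "start_date": today, "end_date": None,
                     "current": True})
```

In DataStage this is typically done with a lookup against the dimension followed by insert/update links (or the built-in SCD stage in later versions); the sketch only shows the row-level bookkeeping.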
How do you reject records in a transformer?
Is the value of staging variable stored temporarily or permanently?
What is a merge in datastage?
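On the last question: a Merge stage combines a sorted master dataset with one or more update datasets on a key, enriching matched master rows and optionally sending unmatched update rows to a reject link. A rough Python analogue, simplified to a single update input (the single-link shape and the `id` key name are assumptions for illustration):

```python
def merge_stage(master, updates, key="id"):
    """Join update columns onto matching master rows (like a Merge
    stage); update rows with no master match go to the reject link."""
    by_key = {row[key]: dict(row) for row in master}
    rejects = []
    for upd in updates:
        if upd[key] in by_key:
            by_key[upd[key]].update(upd)  # enrich the master row
        else:
            rejects.append(upd)           # no master match: reject
    return list(by_key.values()), rejects
```

The real stage additionally requires sorted inputs and supports several update links with one reject link per update link; the sketch only shows the match-and-enrich behaviour.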