I am running a job with 1000 records. If the job aborts after loading 400 records into the target, I want the next run to resume loading from record 401. How can we do this? This scenario is not for a sequence job; it is for a single job, e.g. Seq File --> Transformer --> Dataset.
Answers were Sorted based on User's Feedback
Answer / sree
By using a Lookup stage we can get the answer.
There are two inputs: the 1000-record source table and the 400-record target table.
Take the source table as the primary (stream) input and the 400 already-loaded records as the reference input to the Lookup stage; only the rows that fail the lookup still need to be loaded.
reference (400 loaded records)
.
.
.
source............. look-up......... target
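The lookup-and-filter idea above can be sketched outside DataStage in plain Python; the record sets here are hypothetical stand-ins for the source and target tables:

```python
# Sketch of the Lookup approach: keep only source records that are
# not already present in the target (hypothetical in-memory data).
source = [{"id": i} for i in range(1, 1001)]   # 1000 source records
target = [{"id": i} for i in range(1, 401)]    # 400 already loaded

loaded_ids = {row["id"] for row in target}     # reference (lookup) set
remaining = [row for row in source if row["id"] not in loaded_ids]

print(len(remaining))       # 600 records still to load
print(remaining[0]["id"])   # loading resumes at record 401
```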
Is This Answer Correct ? | 10 Yes | 4 No |
Answer / phani kumar
With the help of the environment variable APT_CHECKPOINT_DIR, we can run the job for the remaining records.
It is set in DataStage Administrator: project-wide defaults for general environment variables are set per project in the Projects tab, under Properties -> General tab -> Environment Variables.
Enable this checkpoint variable there; then the job can load the remaining records.
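Assuming the variable is exported in the engine's dsenv file rather than through the Administrator GUI, the setting would look roughly like this (the directory path is hypothetical):

```shell
# Hypothetical dsenv entry: point APT_CHECKPOINT_DIR at a writable
# directory so the parallel engine can keep checkpoint state there.
APT_CHECKPOINT_DIR=/opt/IBM/InformationServer/Server/checkpoints
export APT_CHECKPOINT_DIR
```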
Is This Answer Correct ? | 6 Yes | 3 No |
Answer / bharath
In DataStage Administrator we can enable the add-checkpoint-on-failure option, so the run can restart from record 401.
Is This Answer Correct ? | 7 Yes | 5 No |
Answer / shar
OK, now we have a restartable design with stages like
Seq --> Tx --> Dataset1; let's say this is Job1.
Now take Job2 (with a job dependency from Job1 to Job2). In Job2, read the data from the sequential file and take the Transformer output as the primary input; take a second Dataset stage as the secondary (reference) input, pointing it at the file of Dataset1. Connect both inputs to a Lookup stage and capture the unmatched records on the reject link. Connect the reject link to Dataset3, whose file name is the same as Dataset1, with update policy = Append.
That's it, dude.
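The two-job flow above amounts to: read the full source, look up which rows already exist in Dataset1, and append only the rejects back to the same file. A minimal sketch in Python, with hypothetical file names and a single-column record layout:

```python
# Sketch of Job2: look up already-loaded rows from Dataset1 and append
# only the rejects (unmatched rows) back to the same dataset file.
import csv, os, tempfile

tmp = tempfile.mkdtemp()
dataset1 = os.path.join(tmp, "dataset1.csv")

# Dataset1 holds the 400 rows loaded before the abort.
with open(dataset1, "w", newline="") as f:
    csv.writer(f).writerows([[i] for i in range(1, 401)])

source = [[i] for i in range(1, 1001)]              # full 1000-row source

with open(dataset1) as f:
    loaded = {int(row[0]) for row in csv.reader(f)}  # reference input

rejects = [row for row in source if row[0] not in loaded]  # unmatched rows

with open(dataset1, "a", newline="") as f:           # update policy = Append
    csv.writer(f).writerows(rejects)

with open(dataset1) as f:
    total = sum(1 for _ in f)
print(total)  # 1000 rows now in the dataset
```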
Is This Answer Correct ? | 2 Yes | 0 No |
Answer / abc
Our design: Src --> Tfm --> Dataset.
To load the records from 401 onwards, just set the Use Existing (Discard records) option in the Dataset stage's update-action property.
Is This Answer Correct ? | 1 Yes | 0 No |
1) First take the sequential file, then add one extra column whose value increments by one for each row, and send it to the target.
2) Now store the maximum value of that extra column and pass this value to the Transformer through a hash file, adding it to the extra column as (SNO + MAX).
Example:
SNO: 1, 2, 3, 4, 5, 6, 7
The max value here is 7. Now add the max value, i.e. add 7 to each SNO:
1+7
2+7
3+7
4+7
5+7
6+7
7+7
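The max-value arithmetic above can be sketched in Python; the stored maximum and the incoming rows are hypothetical:

```python
# Sketch: continue a surrogate key from the maximum stored in the
# previous run (hard-coded here; the answer passes it via a hash file).
previous_max = 7                        # MAX from the last successful load
new_rows = ["a", "b", "c"]              # rows arriving in the next run

# Each new row gets key SNO + MAX, so the sequence resumes after 7.
keyed = [(previous_max + sno, row) for sno, row in enumerate(new_rows, start=1)]
print(keyed)  # [(8, 'a'), (9, 'b'), (10, 'c')]
```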
Is This Answer Correct ? | 2 Yes | 3 No |
Answer / jagannath karmakar
Using a save point can prevent this problem.
Is This Answer Correct ? | 3 Yes | 7 No |
By using:
1. job level
2. job sequencing level
Is This Answer Correct ? | 1 Yes | 8 No |