I am running a job with 1000 records. If the job gets aborted after loading 400 records into the target, I want the next run to resume loading into the target from record 401. How can we do this? This scenario is not for a sequence job; it applies only to a single job, e.g. Seq File --> Transformer --> Dataset.
Answers were Sorted based on User's Feedback
Answer / sree
We can do this with a Lookup stage. There are two inputs: the 1000-record source table and the 400-record target table that was loaded before the abort. Take the source table as the primary (stream) input and the 400-record table as the reference input of the Lookup stage; rows that fail the lookup are the ones not yet loaded, so only those are passed on to the target.

          reference table (400 already-loaded records)
                    |
source ---------> Lookup ---------> target
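The lookup here is effectively an anti-join: keep only the source rows whose key is not already present in the target. A minimal sketch of that idea in Python, assuming CSV extracts and a hypothetical key column named rec_id (neither of which is part of the original answer):

import csv

# Hypothetical file names standing in for the source and the
# partially loaded target of the aborted run.
SOURCE_FILE = "source_1000.csv"
TARGET_FILE = "target_400.csv"
OUTPUT_FILE = "remaining_records.csv"

def load_keys(path, key="rec_id"):
    """Collect the key values already present in a file."""
    with open(path, newline="") as f:
        return {row[key] for row in csv.DictReader(f)}

def main():
    loaded = load_keys(TARGET_FILE)               # keys of the 400 loaded rows
    with open(SOURCE_FILE, newline="") as src, \
         open(OUTPUT_FILE, "w", newline="") as out:
        reader = csv.DictReader(src)
        writer = csv.DictWriter(out, fieldnames=reader.fieldnames)
        writer.writeheader()
        for row in reader:
            if row["rec_id"] not in loaded:       # lookup failed -> not loaded yet
                writer.writerow(row)

if __name__ == "__main__":
    main()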
| Is This Answer Correct ? | 10 Yes | 4 No |
Answer / phani kumar
With the help of the environment variable APT_CHECKPOINT_DIR, we can run the job for the remaining records. It is set in DataStage Administrator: project-wide defaults for general environment variables are set per project on the Projects tab, under Properties -> General tab -> Environment Variables. Enable this checkpoint variable there, and the job can then load the remaining records.
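Whatever actually holds the checkpoint, the restart logic it implies is simple: persist how many records were committed and, on rerun, skip that many before resuming. A rough, generic sketch of that idea (this is not DataStage's internal implementation; the checkpoint file name and the load_record callback are purely illustrative):

import os

CHECKPOINT_FILE = "job_checkpoint.txt"   # illustrative name only

def read_checkpoint():
    """How many records were already committed; 0 if there is no checkpoint."""
    if os.path.exists(CHECKPOINT_FILE):
        with open(CHECKPOINT_FILE) as f:
            return int(f.read().strip() or 0)
    return 0

def run_job(records, load_record):
    done = read_checkpoint()             # e.g. 400 after the aborted run
    for i, rec in enumerate(records, start=1):
        if i <= done:
            continue                     # already loaded before the abort
        load_record(rec)                 # may raise and abort this run too
        with open(CHECKPOINT_FILE, "w") as f:
            f.write(str(i))              # checkpoint 401, 402, ... as they commit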
| Is This Answer Correct ? | 6 Yes | 3 No |
Answer / bharath
In DataStage Administrator we can enable the "add checkpoint on failure" option, and then the run can start from record 401.
| Is This Answer Correct ? | 7 Yes | 5 No |
Answer / shar
The job that aborted is, say, job1, with stages seq --> tx --> dataset1. Now build job2 (with a job dependency from job1 to job2). In job2, read the data from the sequential file and use the Transformer output as input 1 (primary); as the second, reference input use dataset2, which points at the file path of dataset1. Connect both inputs to a Lookup stage and send the unmatched records down the reject link; connect the reject link to dataset3, whose file name is the same as dataset1 and whose update policy is Append. That's it.
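The key detail in this design is that the rejected (unmatched) records are appended onto the existing dataset rather than replacing it. A small sketch of that reject-and-append step, again with hypothetical file names and a hypothetical rec_id key:

import csv

TARGET_FILE = "dataset1.csv"      # hypothetical stand-in for dataset1/dataset3
SOURCE_FILE = "source_1000.csv"   # hypothetical source extract

def append_unloaded_rows():
    # Keys already present in the target (the 400 rows loaded by job1).
    with open(TARGET_FILE, newline="") as f:
        reader = csv.DictReader(f)
        fieldnames = reader.fieldnames
        loaded = {row["rec_id"] for row in reader}

    # Append only the rows whose lookup against the target fails,
    # mirroring "reject link -> dataset with update policy = Append".
    with open(SOURCE_FILE, newline="") as src, \
         open(TARGET_FILE, "a", newline="") as out:
        writer = csv.DictWriter(out, fieldnames=fieldnames)
        for row in csv.DictReader(src):
            if row["rec_id"] not in loaded:
                writer.writerow(row)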
| Is This Answer Correct ? | 2 Yes | 0 No |
Answer / abc
Our design: src --> tfm --> Data Set.
To load the records from 401 onwards, just set the "Use existing (discard records)" option in the Data Set update action property.
| Is This Answer Correct ? | 1 Yes | 0 No |
1) First read the seq file, add one extra column whose value is incremented by one for each row, and send it to the target.
2) Now store the maximum value of that extra column and pass it to the Transformer through a hash file, adding it to the extra column, i.e. (SNO + MAX).
Example: suppose SNO runs 1, 2, 3, 4, 5, 6, 7, so the maximum value is 7. Adding that maximum to SNO on the next run gives 1+7, 2+7, 3+7, 4+7, 5+7, 6+7, 7+7.
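A short sketch of that surrogate-number continuation (the stored-max file and the SNO column name are illustrative, not DataStage artifacts):

import os

MAX_FILE = "last_sno.txt"   # illustrative stand-in for the hash file holding MAX

def read_max():
    """Highest SNO written so far; 0 on the first run."""
    if os.path.exists(MAX_FILE):
        with open(MAX_FILE) as f:
            return int(f.read().strip() or 0)
    return 0

def assign_snos(rows):
    """Number rows as SNO = row_number + MAX, continuing the previous run."""
    offset = read_max()                          # e.g. 7 after the previous run
    numbered = [{**row, "SNO": offset + i} for i, row in enumerate(rows, start=1)]
    if numbered:
        with open(MAX_FILE, "w") as f:
            f.write(str(numbered[-1]["SNO"]))    # persist the new maximum
    return numbered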
| Is This Answer Correct ? | 2 Yes | 3 No |
Answer / jagannath karmakar
Using a save point can prevent this problem.
| Is This Answer Correct ? | 3 Yes | 7 No |
By using:
1. job level
2. job sequencing level
| Is This Answer Correct ? | 1 Yes | 8 No |