I am running a job with 1000 records. If the job aborts after loading 400 records into the target, I want the next run to resume loading from record 401. How can we do this? This scenario is not for a sequence job; it is for a single parallel job, e.g. Seq File --> Transformer --> Dataset.
Answers were Sorted based on User's Feedback
Answer / sree
We can do this with a Lookup stage. There are two record sets: the 1000-record source and the 400 records already loaded into the target. Take the source as the primary (stream) input and the already-loaded target records as the reference input to the Lookup stage; the rows that find no match in the reference are the ones still to be loaded.
reference (400 already-loaded records)
.
.
.
source ............. Lookup ......... target
| Is This Answer Correct ? | 10 Yes | 4 No |
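The Lookup idea above can be sketched in plain Python (this is an illustration of the logic, not DataStage code; the key column "sno" and the record counts are assumptions taken from the question):

```python
# Illustrative sketch of a Lookup-stage delta load: keep only the
# source rows whose key is not already present in the target.

def delta_load(source_rows, loaded_rows, key="sno"):
    """Return the source rows still to be loaded (no match in reference)."""
    loaded_keys = {row[key] for row in loaded_rows}      # reference input
    return [row for row in source_rows if row[key] not in loaded_keys]

source = [{"sno": i} for i in range(1, 1001)]   # 1000 source records
target = [{"sno": i} for i in range(1, 401)]    # 400 already loaded

remaining = delta_load(source, target)
print(len(remaining), remaining[0]["sno"])      # 600 records, starting at 401
```

The unmatched rows are exactly records 401-1000, which is what the second run should load.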
Answer / phani kumar
With the help of the environment variable APT_CHECKPOINT_DIR, we can run the job for the remaining records. It is set in DataStage Administrator: project-wide defaults for environment variables are defined per project on the Projects tab, under Properties -> General -> Environment Variables. Enable the checkpoint variable there, and the job can load the remaining records.
| Is This Answer Correct ? | 6 Yes | 3 No |
Answer / bharath
In DataStage Administrator we can enable the "add checkpoint on failure" option, so the run can restart from record 401.
| Is This Answer Correct ? | 7 Yes | 5 No |
Answer / shar
ok now we have an reject job with stages like
seq--tx--dataset1 lets say job1
now take job2 (job) dependency from job1 to job2) in that
get the data from seq use tx as input 1 (primary) and dataset2 as secondary (ref) in which we use path of dataset1 file this as second input. now connect these inputs to lookup stage and reject the unmatched records connect the reject link to dataset3 in which file name should be same as dataset1 with update policy = append.
thats it dude.
| Is This Answer Correct ? | 2 Yes | 0 No |
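The two-job design above can be sketched in plain Python, with the DataStage stages modeled as ordinary file operations (the CSV layout, the file path, and the key column "sno" are assumptions for illustration, not from the original):

```python
# Hedged sketch of job2: look up source rows against dataset1's file and
# append the rejects (rows with no match, i.e. not yet loaded) to the
# same file, mimicking reject link -> dataset with update policy Append.
import csv
import os
import tempfile

def job2(source_rows, dataset1_path, key="sno"):
    """Append to dataset1_path every source row not already in it."""
    already = set()
    if os.path.exists(dataset1_path):
        with open(dataset1_path, newline="") as f:
            already = {row[key] for row in csv.DictReader(f)}
    # "Reject link": unmatched records only.
    rejects = [r for r in source_rows if str(r[key]) not in already]
    # "Update policy = Append": same file name as dataset1, append mode.
    with open(dataset1_path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=[key])
        if not already:
            writer.writeheader()
        writer.writerows({key: str(r[key])} for r in rejects)
    return len(rejects)

# Simulate job1 aborting after loading 400 of 1000 records.
path = os.path.join(tempfile.mkdtemp(), "dataset1.csv")
with open(path, "w", newline="") as f:
    w = csv.DictWriter(f, fieldnames=["sno"])
    w.writeheader()
    w.writerows({"sno": str(i)} for i in range(1, 401))

loaded_now = job2([{"sno": i} for i in range(1, 1001)], path)
print(loaded_now)  # 600 remaining records appended
```

Rerunning job2 after another failure is safe for the same reason the lookup design is: already-loaded keys are filtered out before the append.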
Answer / abc
Our design: src --> tfm --> Data Set.
To load the records from 401 onward, set the Data Set stage's Update Policy option to "Use existing (discard records)".
| Is This Answer Correct ? | 1 Yes | 0 No |
1) First read the seq file, add an extra column in the Transformer, increment its value by one for each row, and send it to the target.
2) Store the maximum value of that extra column and pass it to the Transformer through a hashed file on the next run, adding it to the new key (sno + MAX).
Example:
SNO = 1, 2, 3, 4, 5, 6, 7
Here the max value is 7, so on the next run add 7 to each SNO:
1+7, 2+7, 3+7, 4+7, 5+7, 6+7, 7+7
| Is This Answer Correct ? | 2 Yes | 3 No |
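The max-plus-offset arithmetic above can be sketched as follows (the column name SNO and the stored maximum are taken from the example; everything else is an assumption for illustration):

```python
# Minimal sketch of the max-offset key generation: add the previous
# run's maximum key (kept in the hashed file) to each new row number,
# so keys continue where the last run stopped.

def offset_keys(row_numbers, stored_max):
    """Shift each new row number by the stored maximum key."""
    return [sno + stored_max for sno in row_numbers]

print(offset_keys([1, 2, 3, 4, 5, 6, 7], 7))  # [8, 9, 10, 11, 12, 13, 14]
```

With a stored max of 7, the next run's keys begin at 8, continuing the sequence without gaps or collisions.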
Answer / jagannath karmakar
Using a save point can prevent this problem.
| Is This Answer Correct ? | 3 Yes | 7 No |
By using restartability at:
1. the job level
2. the job-sequence level
| Is This Answer Correct ? | 1 Yes | 8 No |
What are the types of hashed files in data stage
What is the difference between an operational datastage and a data warehouse?
Field, NVL, INDEX, REPLACE, TRANSLATE, COALESCE
tab1 has rows (1,a) and (2,b); tab2 has rows (1,d) and (3,c). If we perform an outer join, what is the output? Write the SQL query for the outer join.
I have two source files, A and B. I want the records that are in file A but not in file B to go to target 1, and the records that are in file B but not in file A to go to target 2. How can I do this?
How can we run the same job twice in one day?
Define Routines and their types?
What are the types of views in datastage director?
Hi, I did what you mentioned in the answer, i.e. source -> Transformer -> 3 datasets. I am able to see the data in the datasets, but it is not in sorted order. Can you tell me how to sort the data? I also tried hash partitioning with perform-sort.
A source file contains 100 records and I want only 10 records in the target file. How is this possible in DataStage?
What are the different database update actions available?
Tell me the syntax of Configuration file?