I am running a job with 1000 records. If the job gets aborted after loading 400 records into the target, I want the next run to start loading from record 401. How can we do this? This scenario is not for a sequence job, only for a single job, e.g. Seq File --> Transformer --> Dataset.
Answers were Sorted based on User's Feedback
Answer / sree
By using a Lookup stage we can get the answer.
There are two tables: the source table (1000 records) and the target table (the 400 records already loaded).
Take the source table as the primary input and the 400-record target table as the reference input to the Lookup stage, and load only the unmatched records into the target:

                reference table (400 loaded)
                          |
source -----------> Lookup -----------> target
| Is This Answer Correct ? | 10 Yes | 4 No |
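The Lookup-stage idea above is essentially an anti-join: keep only the source records whose key is absent from the target. A minimal sketch in plain Python (not DataStage; the `id` key and record layout are hypothetical):

```python
# Anti-join sketch: load only the source records not already in the target.
def delta_load(source_records, target_records, key=lambda r: r["id"]):
    """Return the source records whose key is absent from the target."""
    loaded_keys = {key(r) for r in target_records}  # the reference input
    return [r for r in source_records if key(r) not in loaded_keys]

# 1000 source records, 400 already loaded -> 600 remain, starting at id 401.
source = [{"id": i} for i in range(1, 1001)]
target = [{"id": i} for i in range(1, 401)]
remaining = delta_load(source, target)
```

This assumes the target rows carry a key that identifies which source rows were loaded; without such a key, a lookup cannot tell old rows from new ones.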
Answer / phani kumar
With the help of the environment variable APT_CHECKPOINT_DIR, we can run the job for the remaining records.
This is set in the DataStage Administrator: project-wide defaults for general environment variables are set per project on the Projects tab under Properties -> General tab -> Environment Variables.
Enable the checkpoint variable there; then we can load the remaining records.
| Is This Answer Correct ? | 6 Yes | 3 No |
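DataStage manages its checkpoint files internally, but the underlying restart idea can be illustrated in plain Python: persist a count of loaded rows, and on restart skip that many. A hedged sketch (the checkpoint file name and `write_row` callback are hypothetical):

```python
# Checkpoint-restart sketch: resume from the last recorded row count.
import os

CHECKPOINT = "job1.checkpoint"  # hypothetical checkpoint file

def load_with_checkpoint(records, write_row):
    """Skip rows already loaded per the checkpoint, then update it as we go."""
    start = 0
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT) as f:
            start = int(f.read().strip())      # rows already loaded
    for n, rec in enumerate(records[start:], start=start + 1):
        write_row(rec)
        with open(CHECKPOINT, "w") as f:       # record progress after each row
            f.write(str(n))
    return start                               # rows skipped on this run
```

If the first run dies after 400 rows, the checkpoint file holds 400, and the next run begins with record 401.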
Answer / bharath
In the DataStage Administrator we can enable the 'add checkpoint on failure' option, so the restarted run can start from record 401.
| Is This Answer Correct ? | 7 Yes | 5 No |
Answer / shar
OK, say the job that aborted (job1) has stages Seq File --> Transformer --> Dataset1.
Now build job2, with a job dependency from job1 to job2. In job2, read the data from the sequential file through the Transformer as the primary input, and use Dataset2 as the secondary (reference) input, pointing it at the same file path as Dataset1. Connect both inputs to a Lookup stage, send the unmatched records down the reject link, and connect the reject link to Dataset3, whose file name is the same as Dataset1 and whose update policy is Append.
That's it.
| Is This Answer Correct ? | 2 Yes | 0 No |
Answer / abc
Our design: src --> tfm --> Dataset.
To load the records from 401 onward, just set the option 'Use Existing (Discard records)' in the Dataset's Update Action property.
| Is This Answer Correct ? | 1 Yes | 0 No |
1) First, take the sequential file and, in the Transformer, add an extra column whose value increments by one for each row; send that to the target.
2) Store the maximum value of the extra column and pass it back to the Transformer through a hash file, adding it to the counter (SNO + MAX).
Example: if SNO runs 1, 2, 3, 4, 5, 6, 7, the maximum value is 7. On the next run, add that max to each new SNO:
1+7, 2+7, 3+7, 4+7, 5+7, 6+7, 7+7
so the numbering continues from where the previous run stopped.
| Is This Answer Correct ? | 2 Yes | 3 No |
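The max-plus-offset numbering above can be sketched in plain Python, with the hash file simulated by a dict (the `max_sno` key and function names are hypothetical):

```python
# Sketch of the "max value via hash file" idea: continue numbering
# from the highest value written by the previous run.
def renumber(new_rows, max_store):
    """Number rows starting after the stored maximum, then update it."""
    offset = max_store.get("max_sno", 0)       # e.g. 7 after the first run
    out = [(sno + offset, row) for sno, row in enumerate(new_rows, start=1)]
    if out:
        max_store["max_sno"] = out[-1][0]      # persist the new maximum
    return out

store = {}                                     # stands in for the hash file
first = renumber(["a", "b", "c"], store)       # numbered 1..3
second = renumber(["d", "e"], store)           # numbered 4..5, continuing on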
Answer / jagannath karmakar
Using a save point can prevent this problem.
| Is This Answer Correct ? | 3 Yes | 7 No |
By using checkpointing at:
1. the job level
2. the job-sequencing level
| Is This Answer Correct ? | 1 Yes | 8 No |