If a session fails after loading 10,000 records into the
target, how can we load from the 10,001st record when we run
the session the next time?
Answers were sorted based on users' feedback
Answer / rama krishna
Hey man, first of all you need to mention whether you
applied a commit or not. Without a commit, how can you
recover the session?
Your question is incomplete as stated: if you want to
recover (and improve performance), you must use commit
points and commit intervals.
If you did apply a commit and the session fails at 10,000
records as you say, then you can recover the remaining
records using the OPB_SRVR_RECOVERY repository table, as in
the sketch below.
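For illustration only, a minimal sketch of inspecting that table. OPB_SRVR_RECOVERY is a PowerCenter repository table, but the column names below are assumptions and vary by version, so verify them against your own repository schema before use:

    -- Hypothetical sketch: check the recovery state recorded at the last
    -- commit point. Column names are assumed for illustration only;
    -- confirm them against your PowerCenter repository schema.
    SELECT sess_inst_id,    -- assumed: id of the failed session instance
           recovery_ver,    -- assumed: recovery version / checkpoint
           row_count        -- assumed: rows committed to the target so far
    FROM   opb_srvr_recovery
    WHERE  sess_inst_id = 1234;  -- placeholder session instance id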
Hi friends, am I correct? Think it over.
| Is This Answer Correct ? | 11 Yes | 0 No |
Answer / srikanth
Set the recovery strategy in the session properties to
"Resume from last checkpoint".
| Is This Answer Correct ? | 10 Yes | 4 No |
Answer / guest
Use session recovery to load the target.
There are three ways to set it up:
1) enable the recovery option in the session
2) set a commit interval in the session
3) truncate the target and load it once again
| Is This Answer Correct ? | 3 Yes | 0 No |
Answer / lathagarat
First, find out how many records were already loaded into
the target, i.e. max(records) (an Aggregator transformation
can compute this), and continue the load from there onwards,
as sketched below.
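As a sketch of that idea, assuming the target has a monotonically increasing key column (rec_id here, with src_table and tgt_table as placeholder names):

    -- Restartable-load sketch: pick up only the rows beyond the highest
    -- key already committed to the target. All names are placeholders.
    SELECT s.*
    FROM   src_table s
    WHERE  s.rec_id > (SELECT COALESCE(MAX(t.rec_id), 0)
                       FROM   tgt_table t);

Note this only works if the source rows arrive in key order; otherwise rows below the max key that were never loaded would be skipped.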
| Is This Answer Correct ? | 4 Yes | 1 No |
Answer / chaitanya
By selecting the lookup cache as PERSISTENT, we can use the same cache for the next session run as well.
| Is This Answer Correct ? | 3 Yes | 0 No |
Answer / gayathri
In the session properties there is a "Suspend on error"
option; with it, the run starts from the failed task, not from the beginning.
| Is This Answer Correct ? | 7 Yes | 7 No |
Answer / basu
I think what he actually wants is an incremental load, so you can go for change data capture (CDC) by using a mapping variable/parameter in the Source Qualifier query override, as in the sketch below.
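A minimal sketch of such an override, assuming an Oracle source, a mapping variable $$LAST_RUN_TS, and a LAST_UPDATED audit column (the variable and column names are illustrative):

    -- Source Qualifier SQL override sketch for CDC. $$LAST_RUN_TS and
    -- LAST_UPDATED are assumed names; adapt them to your mapping/source.
    SELECT *
    FROM   src_table
    WHERE  last_updated > TO_DATE('$$LAST_RUN_TS', 'YYYY-MM-DD HH24:MI:SS')

Inside the mapping, SETMAXVARIABLE($$LAST_RUN_TS, LAST_UPDATED) in an Expression transformation advances the variable after each successful run, so the next run picks up only newer rows.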
| Is This Answer Correct ? | 0 Yes | 0 No |
Answer / nidhi
As per me, Mr. Rama Krishna is right; everyone else is ...
Ms. Latha, he already mentioned that 10,000 records were
loaded, so there is no need to check max(records) again, but
incremental loading is also one of the ways, I guess.
1) Session recovery
2) Incremental loading
These are the ways, as per my knowledge.
If anyone thinks I am wrong, please let me know.
| Is This Answer Correct ? | 1 Yes | 6 No |