If a session fails after loading 10,000 records into the
target, how can we load from the 10,001st record when we run
the session the next time?
Answers were Sorted based on User's Feedback
Answer / rama krishna
Hey man, first of all you need to mention whether you
applied any commit or not.
Without any commit, how can you recover the session?
Your question is incomplete as stated. If you want to
improve recoverability and performance, you must use commit
points and commit intervals.
If you did apply commits and the session fails at 10,000
records, then you can recover the remaining records by
using the OPB_SRVR_RECOVERY table.
Hi friends, am I correct? Think it over.
| Is This Answer Correct ? | 11 Yes | 0 No |
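The commit-interval idea above can be sketched in a few lines of Python. This is only a simulation of the mechanism, not PowerCenter code: `load_with_recovery`, `LoadFailure`, and the `state` dict (which stands in for the server's recovery table) are all hypothetical names.

```python
class LoadFailure(Exception):
    """Simulated mid-session failure; uncommitted rows are rolled back."""

def load_with_recovery(source_rows, target, state, commit_interval, fail_at=None):
    """Append source rows to target, committing every commit_interval rows.

    state['last_committed'] plays the role of the recovery table:
    on restart the loader skips everything already committed.
    """
    pending = []
    for i in range(state.get("last_committed", 0), len(source_rows)):
        if fail_at is not None and i == fail_at:
            raise LoadFailure(f"session failed at row {i}")
        pending.append(source_rows[i])
        if len(pending) == commit_interval:
            target.extend(pending)            # commit this batch to the target
            state["last_committed"] = i + 1   # durable checkpoint
            pending = []
    target.extend(pending)                    # final partial batch
    state["last_committed"] = len(source_rows)
```

With a commit interval of 1,000 and a failure at row 10,200, only the first 10,000 rows survive in the target; the rerun resumes at row 10,000 instead of reprocessing from the start, which is exactly what the checkpoint buys you.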
Answer / srikanth
Select the recovery strategy in the session properties as
"Resume from last checkpoint".
| Is This Answer Correct ? | 10 Yes | 4 No |
Answer / guest
Use session recovery to load the target.
There are 3 ways:
1) Enable the recovery option in the session
2) Set a commit interval in the session
3) Truncate the target and load once again
| Is This Answer Correct ? | 3 Yes | 0 No |
Answer / lathagarat
First, find how many records were already loaded into the
target, i.e. MAX(key) (we can get it using an Aggregator
transformation), and load incrementally from there onwards.
| Is This Answer Correct ? | 4 Yes | 1 No |
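The MAX(key) restart above can be sketched as follows. This is a minimal illustration, assuming each record is a dict with a monotonically increasing `id` column and that rows were loaded in key order before the failure; the function name is made up for the example.

```python
def resume_from_max_key(source_rows, target_rows):
    """Return the source rows whose key is above the highest loaded key.

    Assumes 'id' is monotonically increasing and that rows were
    loaded in id order before the failure; otherwise gaps below
    the maximum would be silently skipped.
    """
    last_loaded = max((r["id"] for r in target_rows), default=0)
    return [r for r in source_rows if r["id"] > last_loaded]
```

In PowerCenter terms, the `max(...)` line is what the Aggregator (or a lookup on the target) computes, and the filter corresponds to a `WHERE id > $$LAST_LOADED` condition in the Source Qualifier.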
Answer / chaitanya
By selecting the lookup cache as PERSISTENT, we can use the same cache for the next session run also.
| Is This Answer Correct ? | 3 Yes | 0 No |
Answer / gayathri
In the session properties there is a "suspend on error"
option. It restarts from the failed task, not from the beginning.
| Is This Answer Correct ? | 7 Yes | 7 No |
Answer / basu
I think his question is that he wants to load incrementally, so you can go for change data capture by using a mapping variable/parameter in the Source Qualifier query override.
| Is This Answer Correct ? | 0 Yes | 0 No |
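The change-data-capture approach can be sketched like this. It is a hedged, file-free analog of a mapping variable in the SQ override: the persisted `state` dict stands in for the repository-stored variable, and the field name `modified_ts` is an assumption about the source schema.

```python
def incremental_extract(source_rows, state):
    """Return rows modified since the last run and advance the watermark.

    state['last_run_ts'] mimics a mapping variable such as $$LAST_RUN:
    it is read at session start and saved back after a successful run.
    """
    watermark = state.get("last_run_ts", 0)
    changed = [r for r in source_rows if r["modified_ts"] > watermark]
    if changed:
        state["last_run_ts"] = max(r["modified_ts"] for r in changed)
    return changed
```

The first run extracts everything; each later run extracts only rows with a modification timestamp above the saved watermark, which is exactly the filter a `WHERE modified_ts > $$LAST_RUN` override would apply.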
Answer / nidhi
In my opinion Mr. Rama Krishna is right.
Ms. Latha, he already mentioned that 10,000 records were
loaded, so there is no need to check MAX(records) again, but
incremental loading is also one of the ways, I guess.
1) Session recovery
2) Incremental loading
are the ways as per my knowledge.
If anyone thinks I am wrong, please let me know.
| Is This Answer Correct ? | 1 Yes | 6 No |
If the source has duplicate records with id and name columns, values: 1 a, 1 b, 1 c, 2 a, 2 b, the target should be loaded as 1 a+b+c or 1 a||b||c. What transformations should be used for this?
I have the source like:
col1 col2
a    l
b    p
a    m
a    n
b    q
x    y
How to get the target data like below?
col1 col2
a    l,m,n
b    p,q
x    y
Differentiate between source qualifier and filter transformation?
There are 2 files, Master and User. We need to compare the 2 files and prepare an output log file which lists the missing Rolename for each UserName between the Master and User files. Please find the sample data:
MASTER.csv
----------
Org|Tmp_UsrID|ShortMark|Rolename
---|---------|---------|------------
AUS|0_ABC_PW |ABC PW   |ABC Admin PW
AUS|0_ABC_PW |ABC PW   |MT Deny all
GBR|0_EDT_SEC|CR Edit  |Editor
GBR|0_EDT_SEC|CR Edit  |SEC MT103
GBR|0_EDT_SEC|CR Edit  |AB User
USER.csv
--------
Org|UserName|ShortMark|Rolename
---|--------|---------|------------
AUS|charls  |ABC PW   |ABC Admin PW
AUS|amudha  |ABC PW   |MT Deny all
GBR|sandya  |CR Edit  |Editor
GBR|sandya  |CR Edit  |SEC MT103
GBR|sandya  |CR Edit  |AB User
GBR|sarkar  |CR Edit  |Editor
GBR|sarkar  |CR Edit  |SEC MT103
Required output file:
---------------------
Org|Tmp_UsrID|UserName|Rolename    |Code
---|---------|--------|------------|-------
AUS|0_ABC_PW |charls  |ABC Admin PW|MATCH
AUS|0_ABC_PW |charls  |MT Deny all |MISSING
AUS|0_ABC_PW |amudha  |ABC Admin PW|MISSING
AUS|0_ABC_PW |amudha  |MT Deny all |MATCH
GBR|0_EDT_SEC|sandya  |Editor      |MATCH
GBR|0_EDT_SEC|sandya  |SEC MT103   |MATCH
GBR|0_EDT_SEC|sandya  |AB User     |MATCH
GBR|0_EDT_SEC|sarkar  |Editor      |MATCH
GBR|0_EDT_SEC|sarkar  |SEC MT103   |MATCH
GBR|0_EDT_SEC|sarkar  |AB User     |MISSING
Both files are mapped through Org and ShortMark. So, based on each Org and ShortMark, for each UserName from USER.csv, we need to find the matching and missing Rolenames. I am able to bring the matching records into the output, but I cannot find any logic to achieve the "MISSING" records which are present in Master but not in USER.csv for each UserName. Please help out, guys. Let me know if you need any more information.
Note: in the USER.csv file there are n number of Organizations, under which come n number of ShortMarks, each of which has n number of UserNames.
What are the differences between a connected lookup and unconnected lookup?
What are the values that are passed between the Informatica server and a stored procedure?
How can we reverse the record order, so the first record comes as the last record and the last record comes as the first record, when loading into the target file?
How to implement CASE and LIKE logic in Informatica (my source is XML)? e.g. CASE WHEN OS LIKE '%Windows%' AND OS LIKE '%200%' THEN 'Windows 200'; CASE WHEN OS LIKE '%Windows%' AND OS LIKE '%200%' AND OS LIKE '%64%' THEN 'Windows 200 64 bit', etc.
Which quality process can you approach in your project?
What is the difference between warehouse key and surrogate key?
In an update strategy, which gives more performance as the target, a table or a flat file? Why?
How many ways can you update a relational source definition, and what are they?