How does the server recognise if the session fails after loading 100 records into the target?
Answers were Sorted based on User's Feedback
Answer / harikrishna
The session commits records into the target based on the commit interval. For example, if the commit interval is 1000 and the session fails after 100 records, it won't insert a single record into the target, because no commit point was ever reached.
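A minimal Python sketch of the commit-interval behaviour described above (an illustration only, not Informatica's actual code; the function and exception names are invented for the example):

class SessionFailure(Exception):
    pass

def load_with_commit_interval(rows, commit_interval, fail_after=None):
    """Buffer rows and 'commit' them only at each commit-interval boundary."""
    committed = []   # rows durably written to the target
    buffered = []    # rows loaded but not yet committed (lost on failure)
    for i, row in enumerate(rows, start=1):
        if fail_after is not None and i > fail_after:
            raise SessionFailure(
                f"session failed after {fail_after} rows; "
                f"{len(committed)} rows survive in the target")
        buffered.append(row)
        if i % commit_interval == 0:
            committed.extend(buffered)   # commit point reached
            buffered.clear()
    committed.extend(buffered)           # final commit at end of source
    return committed

# Commit interval 1000, failure after 100 rows: no commit point was
# reached, so the target keeps 0 rows, exactly as the answer says.
try:
    load_with_commit_interval(range(5000), commit_interval=1000, fail_after=100)
except SessionFailure as e:
    print(e)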
Answer / bsgsr
I couldn't exactly understand your question. Server recognising in the sense: if the session fails in between and you have configured the session to enable recovery, it uses database logging to find the row ID of the last record committed to the target, and loads from the next record onwards. I am sorry if I am wrong.
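A rough Python sketch of the recovery idea in this answer: checkpoint the row ID of the last committed row, then on restart skip everything already loaded. The recovery_state dict merely stands in for the repository's recovery table; all names here are invented for illustration:

target = []            # rows that reached the target database
recovery_state = {}    # session name -> row ID of the last committed row

def run_session(name, rows, commit_interval, fail_after=None):
    last_committed = recovery_state.get(name, 0)   # resume point
    pending = []
    for row_id, row in enumerate(rows, start=1):
        if row_id <= last_committed:
            continue                       # skip rows loaded in the failed run
        if fail_after is not None and row_id > fail_after:
            raise RuntimeError(f"{name} failed at row {row_id}")
        pending.append(row)
        if row_id % commit_interval == 0:
            target.extend(pending)         # commit to the target ...
            recovery_state[name] = row_id  # ... and checkpoint the row ID
            pending = []
    target.extend(pending)                 # commit the final partial batch
    recovery_state[name] = len(rows)

rows = list(range(1, 251))                 # 250 source rows
try:
    run_session("s_demo", rows, commit_interval=100, fail_after=120)
except RuntimeError:
    pass                                   # first run dies mid-load
run_session("s_demo", rows, commit_interval=100)   # recovery run resumes at row 101
print(len(target), recovery_state["s_demo"])       # -> 250 250, no duplicates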
Where do you create/define mapping parameters and mapping variables?
What is a system requirement (SR) & a business requirement (BR)?
How to display session logs based upon particular dates? If I want to display session logs for 1 week from a particular date, how can I do it without using Unix?
My flat file source is:
  C_Id   1-Nov-2011   8-Nov-2011
  100    2000         1500
  101    2500         2000
I want my target as:
  C_Id   Week_Num   Amt
  100    45         2000
  100    46         1500
  101    45         2500
  101    46         2000
What are the advantages of Informatica?
How do your source files come to your ETL server? At which stage of your mapping does this actually happen?
1. Why do we need to use an unconnected transformation? 2. Where can we use a static cache and a dynamic cache?
How to remove the first 3 records & the last 3 records in Informatica?
What happens if we use an Aggregator without using group by?
Load data to multiple targets according to date. The first time the session runs it should send data to the 1st target, the second time it runs it should send to the 2nd target, and so on. How to achieve this?
I have a source, a Router transformation, and two targets in my mapping. I gave two conditions in the Router: 1) sal > 500, 2) sal < 5000. My source has two sal records: (1) 1000, (2) 2000. Then which target will load first? Will both targets get loaded, or only a single target? Why?
If my source has 30 million records, obviously the cache cannot be allocated with sufficient memory. What needs to be done in this case?