If our source contains 1 terabyte of data, what should we keep in mind while loading the data into the target?
Answers were Sorted based on User's Feedback
Answer / chinni
One terabyte is a huge amount of data, so if we use a normal load type it takes a very long time. To overcome this it is better to use bulk loading, and setting a large commit interval also gives good performance. One more technique is external loading.
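To make the commit-interval point concrete, here is a rough Python sketch; sqlite3 is only a stand-in for the real target database, and the table name and batch size are assumptions. In Informatica this is set as the session's commit interval rather than written in code, but the effect is the same: commit once per large batch instead of once per row.

```python
import sqlite3

# Stand-in target; in practice this would be the warehouse database.
conn = sqlite3.connect("target.db")
conn.execute("CREATE TABLE IF NOT EXISTS sales (id INTEGER, amount REAL)")

COMMIT_INTERVAL = 100_000  # large commit interval = fewer, cheaper commits

def load(rows):
    buffer = []
    for i, row in enumerate(rows, start=1):
        buffer.append(row)
        if i % COMMIT_INTERVAL == 0:
            conn.executemany("INSERT INTO sales VALUES (?, ?)", buffer)
            conn.commit()          # commit once per batch, not per row
            buffer.clear()
    if buffer:                     # flush the final partial batch
        conn.executemany("INSERT INTO sales VALUES (?, ?)", buffer)
        conn.commit()

load((i, i * 1.5) for i in range(250_000))
conn.close()
```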
I agree with you, Chinni.
But my suggestion is that instead of loading the huge amount of data in one go, we can split the source (if it is a file) using unix, read all the resulting files with an indirect file type, and load them into the target; see the sketch below.
Please let me know if you have any suggestions.
Thanks.
James.
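A minimal sketch of that split-and-indirect idea, assuming hypothetical file names (big_source.dat, filelist.txt) and an assumed chunk size. In practice the split is often done with the unix split command, and the list file is what an Informatica indirect source would read.

```python
import os

SOURCE = "big_source.dat"      # hypothetical large flat file
LINES_PER_CHUNK = 5_000_000    # chunk size is an assumption

def split_source(source, lines_per_chunk):
    chunk_names = []
    with open(source, "r") as src:
        chunk, count, out = 0, 0, None
        for line in src:
            if count % lines_per_chunk == 0:
                if out:
                    out.close()
                name = f"chunk_{chunk:04d}.dat"
                chunk_names.append(name)
                out = open(name, "w")
                chunk += 1
            out.write(line)
            count += 1
        if out:
            out.close()
    # The indirect file simply lists the chunk files, one per line.
    with open("filelist.txt", "w") as listing:
        listing.write("\n".join(chunk_names) + "\n")
    return chunk_names

if os.path.exists(SOURCE):
    split_source(SOURCE, LINES_PER_CHUNK)
```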
Answer / chiranjeevi k
We can use pushdown optimization to handle the huge volume of data.
Thanks
Chiranjeevi
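Pushdown optimization essentially converts the mapping logic into SQL that runs inside the database, so the rows never have to travel through the Integration Service. A rough sketch of that idea is below; Informatica generates this kind of SQL itself when pushdown is enabled, and the tables and columns here are made up, with sqlite3 again just a stand-in.

```python
import sqlite3

conn = sqlite3.connect("warehouse.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS stg_orders (order_id INTEGER, amount REAL, status TEXT);
CREATE TABLE IF NOT EXISTS fact_orders (order_id INTEGER, amount REAL);
""")

# Filter + load expressed as a single set-based statement executed inside
# the database, instead of reading rows out and writing them back one by one.
conn.execute("""
    INSERT INTO fact_orders (order_id, amount)
    SELECT order_id, amount
    FROM stg_orders
    WHERE status = 'COMPLETE'
""")
conn.commit()
conn.close()
```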
Answer / bakshu shaik
One more thing we need to take into consideration, along with the above suggestions:
If the target has an index, loading a huge amount of data takes much longer and becomes a performance hazard, because every insert must also maintain the index. So it is always better to drop the index before loading (for example with a Stored Procedure transformation) and recreate it after the load completes. A sketch of this pattern follows below.
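A small sketch of the drop-index / reload / rebuild-index pattern, using sqlite3 only as a stand-in for the warehouse with made-up table and index names. In Informatica this would typically be wired up as pre- and post-load steps (such as the Stored Procedure transformation mentioned above) rather than written by hand.

```python
import sqlite3

conn = sqlite3.connect("warehouse.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS fact_sales (sale_id INTEGER, amount REAL);
CREATE INDEX IF NOT EXISTS idx_sale_id ON fact_sales (sale_id);
""")

# Pre-load step: drop the index so every insert does not also maintain it.
conn.execute("DROP INDEX IF EXISTS idx_sale_id")

# Bulk load (stand-in for the actual session run).
conn.executemany(
    "INSERT INTO fact_sales VALUES (?, ?)",
    ((i, i * 2.0) for i in range(100_000)),
)
conn.commit()

# Post-load step: rebuild the index once, after all rows are in.
conn.execute("CREATE INDEX idx_sale_id ON fact_sales (sale_id)")
conn.commit()
conn.close()
```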
Along with the above suggestions:
It is better to use partitioning (single or multiple threads) and increase the commit interval, e.g. to 50 lakh (5 million) rows. It is also better to load into a flat file first instead of loading directly into the table; then, if your database is Oracle, use SQL*Loader to load the data from the file into the table (sketched below), and if it is DB2, use the DB2 load utility.
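A hedged sketch of the flat-file-plus-SQL*Loader approach for Oracle: write a control file and invoke sqlldr with a direct path load. The table name, data file, and connect string below are placeholders, and the script assumes sqlldr is installed and on the PATH.

```python
import subprocess

# Hypothetical names: adjust the table, data file, and connect string.
control = """\
LOAD DATA
INFILE 'sales.dat'
APPEND
INTO TABLE fact_sales
FIELDS TERMINATED BY ','
(sale_id, amount)
"""

with open("load.ctl", "w") as f:
    f.write(control)

# direct=true uses the direct path load, which avoids much of the
# conventional SQL engine overhead for large volumes.
subprocess.run(
    ["sqlldr", "userid=scott/tiger@orcl", "control=load.ctl", "direct=true"],
    check=True,
)
```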
Answer / upendra
There is a large amount of data in the source file, so we can divide it into partitions and choose the commit interval as we wish, then connect each partition to a different target table; each target table is best loaded with bulk load. A rough sketch of the idea follows below.
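A rough sketch of the partition-and-parallel-load idea, with the number of partitions, row counts, and target names as assumptions; each worker process stands in for one session partition writing to its own target table.

```python
from multiprocessing import Pool

NUM_PARTITIONS = 4  # number of partitions/targets is an assumption

def load_partition(part):
    # Stand-in for one session partition bulk-loading its own target table,
    # e.g. fact_sales_p0 .. fact_sales_p3.
    rows = [(i, i * 1.0) for i in range(part, 1_000_000, NUM_PARTITIONS)]
    print(f"partition {part}: would bulk load {len(rows)} rows")

if __name__ == "__main__":
    with Pool(NUM_PARTITIONS) as pool:
        pool.map(load_partition, range(NUM_PARTITIONS))
```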
To improve the performance of an Aggregator we use the sorted input option and place a Sorter transformation before the Aggregator. But by doing so we are adding one more cache to the mapping, i.e. the Sorter cache. So how can you convince someone that this actually improves performance?
How can you generate reports in Informatica?
If there are 1 lakh flat files in the source, how do we load them into the target at a time?
We have 6 records in the source; I need the 2nd record in one target and the 5th record in another target, or the 2nd and 5th records in the same target.
What is a correlated query?
What types of documents do you receive from the client, what do you do with them afterwards, and what types of documents do you have to prepare?
How to load the name of the currently processing flat file along with the data into the target using an Informatica mapping?
Can anyone explain a real-time healthcare project for an interview? I am new to Informatica. Thanks in advance.
My source is like this: 10,asd,2000 / 10,asd,2000 / 10,asd,2000 / 20,dsf,3000 / 20,dsf,3000 / 20,dsf,3000. The requirement is that the first occurrence of each record is inserted into the first target and its duplicates are inserted into the second target. How do we achieve this?
How can we create indexes after completing the load process?
What is the difference between Informatica 8.6 and 9?