

If our source contains 1 terabyte of data, what are the things we should keep in mind while loading the data into the target?

Answers are sorted based on users' feedback




Answer / chinni

1 TB is a huge amount of data, so a normal load type would take a very long time. To overcome this, it is better to use bulk loading. Using large commit intervals also gives good performance, and one more technique is external loading.
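
As a rough illustration of why a large commit interval helps (this is not Informatica itself, just the underlying idea), here is a minimal Python sketch assuming a generic DB-API connection; the table name, columns, rows iterable and the '?' placeholder style are made-up assumptions.

# Minimal sketch: batched inserts with a large commit interval.
# target_table, its columns and the qmark ('?') paramstyle are assumptions.

COMMIT_INTERVAL = 100_000   # rows per commit; far larger than row-by-row commits
BATCH_SIZE = 10_000         # rows per executemany() round trip

def load_rows(conn, rows):
    """Insert rows in batches, committing only every COMMIT_INTERVAL rows."""
    cur = conn.cursor()
    since_commit = 0
    batch = []
    for row in rows:
        batch.append(row)
        if len(batch) >= BATCH_SIZE:
            cur.executemany(
                "INSERT INTO target_table (col1, col2) VALUES (?, ?)", batch)
            since_commit += len(batch)
            batch = []
            if since_commit >= COMMIT_INTERVAL:
                conn.commit()          # fewer, larger commits reduce overhead
                since_commit = 0
    if batch:
        cur.executemany(
            "INSERT INTO target_table (col1, col2) VALUES (?, ?)", batch)
    conn.commit()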


Answer / james

I agree with you, Chinni.
But my suggestion is that instead of loading a huge amount of data at once, we can split the source (if it is a file) using unix, extract the data from all of the split files using the indirect file type, and load it into the target.

Please let me know if you have any suggestions.

Thanks.
James.
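
A minimal Python sketch of that idea (the unix split command could do the same job): it chops a large flat file into smaller chunks and writes the plain list file that an indirect source filetype points to. The paths, chunk size and file names are illustrative assumptions.

# Minimal sketch: split a large delimited file into chunks and write a
# file list for an indirect source filetype. Paths and sizes are assumptions.

import os

SOURCE_FILE = "/data/source/big_file.dat"     # hypothetical 1 TB source file
CHUNK_DIR = "/data/source/chunks"
LINES_PER_CHUNK = 5_000_000
FILE_LIST = "/data/source/filelist.txt"       # the indirect file list

os.makedirs(CHUNK_DIR, exist_ok=True)
chunk_paths = []

with open(SOURCE_FILE, "r") as src:
    chunk_no, line_no, out = 0, 0, None
    for line in src:
        if line_no % LINES_PER_CHUNK == 0:     # start a new chunk file
            if out:
                out.close()
            chunk_no += 1
            path = os.path.join(CHUNK_DIR, f"chunk_{chunk_no:04d}.dat")
            chunk_paths.append(path)
            out = open(path, "w")
        out.write(line)
        line_no += 1
    if out:
        out.close()

# The indirect file list simply contains one source file path per line.
with open(FILE_LIST, "w") as fl:
    fl.write("\n".join(chunk_paths) + "\n")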


Answer / vasu

Partitioning is always the best option for huge volumes.


Answer / chiranjeevi k

We can use pushdown optimization to handle the huge volume of data.

Thanks
Chiranjeevi
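
For context, pushdown optimization means the transformation logic is converted to SQL and executed inside the database rather than pulling the rows through the ETL server. A rough Python sketch of the same idea with a single set-based statement; the connection, table and column names are assumptions.

# Rough sketch of what pushdown amounts to: instead of moving 1 TB of rows
# through the ETL server, run one set-based statement inside the database.
# The table and column names below are hypothetical.

PUSHDOWN_SQL = """
    INSERT INTO target_table (cust_id, order_total)
    SELECT cust_id, SUM(order_amt)        -- transformation logic runs in the DB
    FROM   source_table
    GROUP  BY cust_id
"""

def run_pushdown(conn):
    cur = conn.cursor()
    cur.execute(PUSHDOWN_SQL)   # the database does the heavy lifting
    conn.commit()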


Answer / bakshu shaik

One more thing we need to take into consideration, along with the above suggestions:

If the target has an index, loading a huge amount of data takes longer and becomes a performance hazard. So it is always better to drop the index before loading (for example, via a Stored Procedure transformation) and recreate it after the load has completed.
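
A minimal sketch of that drop-index / load / recreate-index pattern, assuming a generic DB-API connection; the index and table names are made up, and inside Informatica the same statements would typically run as pre- and post-load SQL or via a Stored Procedure transformation.

# Minimal sketch of the drop-index / load / recreate-index pattern.
# Index and table names are hypothetical placeholders.

DROP_INDEX_SQL = "DROP INDEX idx_target_cust_id"
CREATE_INDEX_SQL = "CREATE INDEX idx_target_cust_id ON target_table (cust_id)"

def load_with_index_rebuild(conn, load_fn):
    cur = conn.cursor()
    cur.execute(DROP_INDEX_SQL)        # avoid index maintenance during the load
    try:
        load_fn(conn)                  # the actual bulk load
    finally:
        cur.execute(CREATE_INDEX_SQL)  # rebuild the index once, after the load
        conn.commit()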


Answer / sudheer113

Along with the above suggestions:

It is better to use partitioning (single or multiple threads) and increase the commit interval to around 50 lakh (5 million) rows. It is also better to load into a flat file first instead of loading directly into a table; then, if your database is Oracle, use SQL*Loader to load the data from the file into the table, and if it is DB2, use the DB2 loader.
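
As a rough sketch of the Oracle path, once the flat file is staged, SQL*Loader can be invoked from Python like this; the paths, control file and credentials are assumptions, and direct=true requests a direct-path (bulk) load.

# Rough sketch of calling Oracle SQL*Loader on a staged flat file.
# Paths, credentials and the control file contents are assumptions.

import subprocess

CONTROL_FILE = "/data/load/target_table.ctl"   # hypothetical control file
DATA_FILE = "/data/load/target_table.dat"
LOG_FILE = "/data/load/target_table.log"

def run_sqlldr(userid):
    """Invoke sqlldr with a direct-path load for better bulk performance."""
    cmd = [
        "sqlldr",
        f"userid={userid}",
        f"control={CONTROL_FILE}",
        f"data={DATA_FILE}",
        f"log={LOG_FILE}",
        "direct=true",                 # direct-path load bypasses normal inserts
    ]
    subprocess.run(cmd, check=True)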


Answer / upendra

There is a large amount of data in the source file, so we can divide it into partitions and set the commit level as we wish, then connect each partition to a different target table and use bulk load for each target table.
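
A minimal sketch of that layout: one worker per partition, each writing to its own target table. The partition-to-table mapping and the load_partition() helper are placeholder assumptions.

# Minimal sketch of loading partitions in parallel, one worker per partition,
# each writing to its own target table. Names below are assumptions.

from concurrent.futures import ThreadPoolExecutor

PARTITIONS = {
    "/data/source/chunks/chunk_0001.dat": "target_table_p1",
    "/data/source/chunks/chunk_0002.dat": "target_table_p2",
    "/data/source/chunks/chunk_0003.dat": "target_table_p3",
}

def load_partition(source_path, target_table):
    # Placeholder for the real bulk load of one partition into one table.
    print(f"bulk loading {source_path} -> {target_table}")

with ThreadPoolExecutor(max_workers=len(PARTITIONS)) as pool:
    futures = [pool.submit(load_partition, src, tgt)
               for src, tgt in PARTITIONS.items()]
    for f in futures:
        f.result()   # surface any load failures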
