How to identify bottlenecks in sources, targets, mappings, workflows and the system, and how to increase the performance?
Answers were Sorted based on User's Feedback
Answer / praveenkumar.b
Source:
Create a Filter transformation right after each Source Qualifier and set the filter condition to FALSE so that no data goes beyond this transformation. Run the session and note the time taken to read from the source. If the read is slow, the source is the bottleneck; consider creating the necessary indexes on the source tables through the Pre-Session SQL (see the sketch below).
Note: If the source is a flat file, there is normally no performance problem on the source side.
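As a minimal sketch of that Pre-Session SQL idea (the table, column and index names here are hypothetical, and the exact DDL depends on your database):

-- Pre SQL on the session: index the column used in the Source Qualifier
-- filter/join so the read query can use the index instead of a full scan.
CREATE INDEX idx_src_orders_cust ON src_orders (customer_id);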
Target:
Replace the target table in the mapping with a flat-file target of the same structure. Run the session and note the time taken to write the file; if writing to the file is much faster, the relational target is the bottleneck. In that case, drop the indexes on the target table before loading the data and recreate the same indexes after the load in the Post-Session SQL.
Note: If the target is a flat file, there is normally no performance problem on the target side.
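A hedged sketch of that drop/recreate approach using the session's Pre SQL and Post SQL attributes (index, table and column names are made up; DROP INDEX syntax varies by database, this is Oracle-style):

-- Pre SQL (before the load): drop the index so bulk inserts are not slowed down
DROP INDEX idx_tgt_sales_date;

-- Post SQL (after the load): recreate the same index
CREATE INDEX idx_tgt_sales_date ON tgt_sales (sale_date);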
Mapping:
The below steps need to be consider
#1. Delete all the transformations and make it as single
pass through
#2. Avoid using more number of transformations
#3. If you want to use more filter transformation, then use
Router transformation instead of it
#4. Calculate the index and data cache properly for
Aggregator, Joiner, Ranker, Sorter if the Power center is
lower version. Advance version, Power center itself will
take care of this
#5. Always pass the sorted i/p's to Aggregator
#6. Use incremental aggregation
#7. Dont do complex calculation in Aggregator
transformation.
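For #5, one common way to feed sorted input to an Aggregator is to sort in the database through the Source Qualifier's SQL override and then enable the Aggregator's Sorted Input option. A sketch, with hypothetical table and column names:

SELECT customer_id, order_date, amount
FROM   src_orders
ORDER BY customer_id   -- must match the order of the Aggregator's group-by ports

With Sorted Input enabled, the Integration Service can aggregate each group as it arrives instead of caching all input rows first.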
Session:
Increase the DTM buffer size.
System:
#1. Increase the RAM capacity
#2. Avoid paging
Answer / kalyan
Run your session in verbose mode and check the Busy Percentage of each thread in the session log. If it is highest for the Reader thread, your source query is the bottleneck: tune your SQ (Source Qualifier).
If it is the Writer thread, check your target; you may need to drop and recreate the indexes on the target table around the load.
If it is the Transformation thread, check your mapping logic. Concentrate more on the Aggregator part and fine-tune your logic. Don't drag fields that are not used through all the transformations, and try to use as few transformations as possible.
Cache your lookups, and whenever possible use the persistent lookup cache concept.
This should help, guys.
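A related trick when a lookup table is very large: restrict what gets cached with a Lookup SQL Override so only the rows the mapping can actually match are read. A sketch with made-up names:

SELECT cust_key, cust_name
FROM   dim_customer
WHERE  active_flag = 'Y'   -- cache only active customers instead of the whole table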
Bottleneck in Informatica
A bottleneck in ETL processing is the point at which the ETL process slows down.
When the ETL process is in progress, first log in to the Workflow Monitor and observe the performance statistics, i.e. the rows processed per second. (In SSIS and DataStage you can likewise see, at every stage, how many rows per second the server is processing when you run the job.)
Bottlenecks mostly occur at the Source Qualifier while fetching data from the source, and at Joiner, Aggregator and Lookup transformations while building caches during the session.
Removing the bottleneck is performance tuning.
Answer / srinu
Identification of bottlenecks:
Target: configure the session to write to a flat-file target; if the session then runs much faster, the original target is the bottleneck.
Source: add a Filter transformation after the Source Qualifier with the condition set to FALSE so that no data is processed past the filter; if the time taken to run this new session remains about the same as the original session, there is a source bottleneck.
Mapping: add a Filter transformation before each target and set the filter condition to FALSE, similar to the source test.
Session: use the Collect Performance Data option to identify session bottlenecks; if the readfromdisk and writetodisk counters are other than zero, the caches are spilling to disk and there is a bottleneck.
How to recover sessions in concurrent batches?
What do you mean by worklet?
Get me the output where, for input rows (1,x), (1,y), (1,z), (2,a), (2,b), (3,c), the o/p is 1 -> x,y,z; 2 -> a,b; 3 -> c.
What are the types of error logs available in Informatica?
What are the types of mapping wizards that are provided in Informatica?
What are the limitations of bulk loading in Informatica for all kinds of databases and transformations?
Explain the types of lookup transformation?
Explain lookup transformation source types in informatica
What is domain and gateway node?
How can we improve session performance in aggregator transformation?
What are the advantages of denormalized data?
If I have a source with 100 records and a target with 100 records, and we look up on another database table that has 10 million records, what is the method of limiting how many records are cached from the lookup table?