What is tested at the mapping level? Please give a brief
explanation.
Answer / sambasivarao.m
Hi,
Please find below the testing points at mapping level that
might help you.
Verify the mapping is available
Verify parameters are defined with the proper datatypes
Verify whether the shortcut to the source table (from the
replica connection) is used as the source in the mapping
Verify whether the WHERE clause in the Source Qualifier is
used properly to implement the delta condition
Verify whether the primary key is selected properly in the
target table
Verify whether versioning is maintained
Verify the source name used in the mapping
Verify the target name used in the mapping
Verify changes made to the existing mapping (if applicable)
Verify whether new fields are handled for NULL values (if the
column is NOT NULL in the target table)
Verify whether a lookup is added to the mapping (if applicable)
Verify whether the Lookup override used is proper
Verify whether the condition for Insert/Update is used in the
UPDATE STRATEGY transformation
Verify whether the filter condition used is proper
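Two of the checklist items above (NULL handling for NOT NULL target columns, and the delta condition in the Source Qualifier's WHERE clause) can be spot-checked with a small script. This is only a minimal Python sketch of the idea; the row/column structures, the column name LAST_UPDATE_DT, and the $$LastRunDate parameter are illustrative assumptions, not a real Informatica API.

```python
# Hypothetical helpers for two mapping-level checks.
# Data shapes and names are assumptions for illustration only.

def check_not_null_handling(rows, not_null_columns):
    """Return (row, column) pairs that would violate a NOT NULL target column."""
    violations = []
    for row in rows:
        for col in not_null_columns:
            if row.get(col) is None:
                violations.append((row, col))
    return violations

def has_delta_condition(sq_where_clause, delta_column="LAST_UPDATE_DT"):
    """Crude check that the SQ override actually filters on the delta column."""
    return delta_column.upper() in (sq_where_clause or "").upper()

# Example: the second row is missing a value for a NOT NULL column.
rows = [{"EMP_ID": 1, "NAME": "A"}, {"EMP_ID": 2, "NAME": None}]
print(check_not_null_handling(rows, ["NAME"]))   # one violation reported
print(has_delta_condition("WHERE LAST_UPDATE_DT > $$LastRunDate"))  # True
```

In practice the same checks would be run against the actual target DDL and the exported mapping XML rather than hand-built dictionaries.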
Regards,
Sambasivarao.m
What is the purpose of a surrogate key, and what is the difference between a primary key and a surrogate key?
In an Aggregator transformation we want to get the middle record; how do we implement this? The source contains empno, name, sal, deptno, address.
What is song in Informatica...?
How will you convert rows into columns, or columns into rows?
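In a mapping, columns-to-rows is typically done with a Normalizer transformation and rows-to-columns with an Aggregator plus conditional expressions. As a language-neutral illustration of the reshaping itself, here is a small Python sketch; the field names (emp, quarter, amt) are assumptions, not from the question.

```python
# Pivot (rows to columns) and unpivot (columns to rows) sketches.
# Stand-ins for Aggregator-based pivoting and Normalizer-based unpivoting.

def rows_to_columns(rows, key, pivot, value):
    """Pivot: one output record per key, one entry per pivot value."""
    out = {}
    for r in rows:
        out.setdefault(r[key], {})[r[pivot]] = r[value]
    return out

def columns_to_rows(record, key, columns):
    """Unpivot: one output row per listed column."""
    return [{key: record[key], "attr": c, "value": record[c]} for c in columns]

sales = [
    {"emp": "A", "quarter": "Q1", "amt": 10},
    {"emp": "A", "quarter": "Q2", "amt": 20},
]
print(rows_to_columns(sales, "emp", "quarter", "amt"))
# {'A': {'Q1': 10, 'Q2': 20}}
```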
How much memory (size) is occupied by a session at runtime?
Hi, I have two sources, A and B. Source A contains 10000 million records, from which I need 20 attributes. Source B also has the same 10000 million records, from which I need only 1 attribute. Using a Joiner, I have to load into the target. Is there any issue with this? If there is, how do I best tune this mapping?
A Dimension object created in Oracle can be imported into Designer. Cubes contain measures.
My source is like this: VENKATESH,101||RAJESH,102||SIVA,103||SWATHI,104. My requirement is: NAME VENKATESH, ID 101; NAME RAJESH, ID 102; NAME SIVA, ID 103; NAME SWATHI, ID 104. Please provide me the solution.
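In a mapping this kind of delimited string is usually split with expression functions (SUBSTR/INSTR) or a Java transformation. The parsing logic itself, applied to the exact string from the question, can be sketched in Python:

```python
# Split the '||'-delimited source string into (NAME, ID) rows.
src = "VENKATESH,101||RAJESH,102||SIVA,103||SWATHI,104"

rows = [tuple(pair.split(",")) for pair in src.split("||")]
for name, emp_id in rows:
    print(name, emp_id)
# VENKATESH 101
# RAJESH 102
# SIVA 103
# SWATHI 104
```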
Do you find any difficulty while working with flat files as source and target?
How can you complete unrecoverable sessions?
If my source has 30 million records, the cache obviously cannot be allocated sufficient memory. What needs to be done in this case?
What are partitions in informatica and which one is used for better performance?