What is the difference between Connected and Unconnected
Lookup Transformation? Give me one or two examples, please.
Answer Posted / jaimeen shah
1) A connected lookup receives input values directly from the
pipeline, whereas an unconnected lookup receives input values
from the result of a :LKP expression in another transformation.
2) You can use a dynamic or static cache in a connected
lookup, whereas in an unconnected lookup you can use a static
cache only.
3) In a connected lookup, the cache includes all lookup columns
used in the mapping (that is, lookup source columns
included in the lookup condition and lookup source columns
linked as output ports to other transformations), whereas in
an unconnected lookup, the cache includes all lookup/output
ports in the lookup condition and the lookup/return port.
4) A connected lookup can return multiple columns from the
same row or insert into the dynamic lookup cache, whereas an
unconnected lookup designates one return port (R) and returns
one column from each row.
5) In a connected lookup, if there is no match for the lookup
condition, the PowerCenter Server returns the default value
for all output ports. If you configure dynamic caching, the
PowerCenter Server inserts rows into the cache or leaves it
unchanged.
In an unconnected lookup, if there is no match for the lookup
condition, the PowerCenter Server returns NULL.
6) A connected lookup passes multiple output values to another
transformation: you link lookup/output ports to another
transformation.
An unconnected lookup passes one output value to another
transformation: the lookup/output/return port passes the value
to the transformation calling the :LKP expression.
7) A connected lookup supports default values in the ports,
whereas an unconnected lookup doesn't support default
values.
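As a minimal sketch of points 1 and 5 above, here is how an unconnected lookup is typically called from another transformation. The transformation and port names (LKP_GET_CUSTOMER, CUST_ID, the 'UNKNOWN' default) are hypothetical; the :LKP syntax and the IIF/ISNULL functions are standard PowerCenter expression language.

```
-- Assume an unconnected Lookup transformation named LKP_GET_CUSTOMER
-- whose lookup condition matches on an input port, with the customer
-- name column designated as the return port (R).

-- In an output port of, say, an Expression transformation, call it with:
:LKP.LKP_GET_CUSTOMER(CUST_ID)

-- Because an unconnected lookup returns NULL when there is no match
-- (point 5), default handling must be done by the caller, e.g.:
IIF(ISNULL(:LKP.LKP_GET_CUSTOMER(CUST_ID)), 'UNKNOWN',
    :LKP.LKP_GET_CUSTOMER(CUST_ID))
```

A connected lookup, by contrast, is simply wired into the pipeline: you link its input ports from the upstream transformation and its lookup/output ports to the downstream one, with no :LKP expression involved.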
How to create Target definition for flat files?
Write the unconnected lookup syntax and how to return more than one column.
What is deployment group?
Write the unconnected lookup syntax?
What are the measure objects?
How identifying bottlenecks in various components of informatica and resolving them?
My source is a delimited flat file. The flat file data is: H|Date, D1|ravi|bangalore, D2|raju|pune, T|4. The data should be sent to the target only if the following two conditions are satisfied: 1. The Date column in the first row equals SYSDATE. 2. The second port of the last record equals the number of records. How to achieve this?
Design time vs. run time: if you don't create a parameter, what will happen?
Suppose on 1st Nov 2010 you created a mapping that includes huge aggregator calculations, and it remains under process for the next two days; even on the 3rd day it is still calculating. Without changing the logic or the mapping, how will you troubleshoot or run that mapping? Explain the steps.
In SCD Type 1, the source has 10 billion records. On the first day it loads successfully, but on the second day the load takes time because some records may need to be updated and new records inserted. As a developer, what would be the better solution for this?
If I have 10 flat files as source, all named abc.txt but with different timestamps, and I need to load them into a target Oracle table, but the job fails mid-execution and rows are not loaded into the target, how can I load them into that target even if my job fails?
Explain in detail about scd type 1 through mapping.
How did you prepare reports for OLAP?
What are the uses of etl tools?
I have two tables with different source structures, but I want to load them into a single target table. How do I go about it? Explain in detail through the mapping flow.