I have a scenario with a lookup table. That lookup table has 50 columns, but I need to process only 10 of those columns. Please tell me how we can achieve this?
Answers were sorted based on users' feedback
Answer / prabhu
You can delete all the unused columns from the lookup transformation. The lookup will then build its cache only on the columns that remain in the lookup. This will improve performance too.
Prabhu
www.iskilltech.com
| Is This Answer Correct ? | 17 Yes | 0 No |
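To make the caching idea concrete, here is a minimal, hypothetical Python sketch (not Informatica code) of how a lookup cache shrinks when it is built only on the columns the mapping actually needs. The table contents, column names, and the NEEDED_COLUMNS list are assumptions for illustration only.

```python
# Hypothetical sketch: build a lookup cache on only the columns that are needed,
# instead of caching all 50 columns of the lookup table.

NEEDED_COLUMNS = ["CUST_ID", "CUST_NAME", "CITY"]   # assumed subset (condition + return ports)

def build_lookup_cache(rows, key_column, needed_columns):
    """Cache only the needed columns, keyed by the lookup condition column."""
    cache = {}
    for row in rows:                      # each row is a dict holding every lookup column
        key = row[key_column]
        cache[key] = {col: row[col] for col in needed_columns}  # drop the unused columns
    return cache

def lookup(cache, key):
    """Return the cached columns for a key, or None if there is no match."""
    return cache.get(key)

if __name__ == "__main__":
    lookup_rows = [
        {"CUST_ID": 1, "CUST_NAME": "GANGA", "CITY": "HYD", "UNUSED_COL": "x"},
        {"CUST_ID": 2, "CUST_NAME": "RAJU",  "CITY": "BLR", "UNUSED_COL": "y"},
    ]
    cache = build_lookup_cache(lookup_rows, "CUST_ID", NEEDED_COLUMNS)
    print(lookup(cache, 1))   # {'CUST_ID': 1, 'CUST_NAME': 'GANGA', 'CITY': 'HYD'}
```

The smaller the set of cached columns, the smaller the cache and the cheaper each lookup, which is the performance benefit the answer above refers to.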
Answer / sairam
In a connected Lookup transformation, uncheck the output port for the 40 columns you do not need; you will then get only the 10 required columns in the result.
| Is This Answer Correct ? | 11 Yes | 1 No |
Answer / priyank
Apart from the 10 required columns, if any other column is not being used in a matching/filter condition, remove it from the lookup table. For the columns that are used in the lookup/filter condition but are not carried forward to the target, uncheck the output port and keep them as input-only ports.
| Is This Answer Correct ? | 2 Yes | 0 No |
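As a rough illustration of the distinction between condition-only (input) ports and output ports, here is a small Python sketch; the port names and the lookup_row helper are assumptions, not Informatica behaviour. The condition ports are used only to find the matching row, while only the ports left checked as output are passed downstream.

```python
# Hypothetical sketch: condition ports are used only for matching; only the
# ports left checked as "output" are passed on to the target.

CONDITION_PORTS = ["PRODUCT_ID", "REGION"]      # assumed lookup condition columns (input-only)
OUTPUT_PORTS    = ["PRODUCT_NAME", "PRICE"]     # assumed columns carried to the target

def lookup_row(lookup_table, **condition_values):
    """Find the first row matching the condition ports and return only the output ports."""
    for row in lookup_table:
        if all(row[col] == condition_values[col] for col in CONDITION_PORTS):
            return {col: row[col] for col in OUTPUT_PORTS}   # input-only ports are not returned
    return None

if __name__ == "__main__":
    table = [
        {"PRODUCT_ID": 10, "REGION": "SOUTH", "PRODUCT_NAME": "Pen",  "PRICE": 5},
        {"PRODUCT_ID": 20, "REGION": "NORTH", "PRODUCT_NAME": "Book", "PRICE": 50},
    ]
    print(lookup_row(table, PRODUCT_ID=20, REGION="NORTH"))  # {'PRODUCT_NAME': 'Book', 'PRICE': 50}
```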
Answer / ankur saini er.ankur861@gmail.
I think we can use a dynamic lookup too. In a dynamic lookup, mark the columns you do not want to participate in the comparison as ignored; for each of the remaining ports, specify the associated expression for the lookup.
Let me know your comments on this.
Thanks
| Is This Answer Correct ? | 0 Yes | 0 No |
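For what a dynamic cache with ports ignored in the comparison roughly does, here is a hedged Python sketch under assumed column names: incoming rows either insert a new cache entry or update an existing one, and the ignored ports are excluded when deciding whether a row has actually changed.

```python
# Hypothetical sketch of a dynamic lookup cache: incoming rows either insert a
# new entry or update an existing one, and ports marked as "ignored in the
# comparison" do not count when deciding whether the row has changed.

COMPARE_PORTS = ["CUST_NAME", "CITY"]       # assumed ports that participate in the comparison
IGNORE_PORTS  = ["LOAD_TIMESTAMP"]          # assumed ports ignored in the comparison

def apply_row(cache, key, row):
    """Return 'insert', 'update', or 'no change', maintaining the cache as a side effect."""
    existing = cache.get(key)
    if existing is None:
        cache[key] = row
        return "insert"
    if any(existing[p] != row[p] for p in COMPARE_PORTS):
        cache[key] = row
        return "update"
    return "no change"                        # differences only in IGNORE_PORTS do not count

if __name__ == "__main__":
    cache = {}
    print(apply_row(cache, 1, {"CUST_NAME": "GANGA", "CITY": "HYD", "LOAD_TIMESTAMP": "t1"}))  # insert
    print(apply_row(cache, 1, {"CUST_NAME": "GANGA", "CITY": "HYD", "LOAD_TIMESTAMP": "t2"}))  # no change
    print(apply_row(cache, 1, {"CUST_NAME": "GANGA", "CITY": "BLR", "LOAD_TIMESTAMP": "t3"}))  # update
```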
Answer / reddy
I think we can use an unconnected Lookup transformation to achieve this.
| Is This Answer Correct ? | 0 Yes | 9 No |
In which transformations can we use a mapping parameter and a mapping variable? And which one is reusable across mappings, a mapping parameter or a mapping variable?
Let's say I have a large number of records in the source table and three target tables A, B, and C. I have to insert records 1 to 10 into A, then 11 to 20 into B, and 21 to 30 into C; then again 31 to 40 into A, 41 to 50 into B, and 51 to 60 into C, and so on up to the last record. How can this be achieved?
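Outside any particular tool, the routing rule in this question boils down to a block-of-ten, round-robin calculation. Here is a small Python sketch of that logic; the row numbers and target names are taken from the question, and everything else is an assumption for illustration.

```python
# Hypothetical sketch: route each record to target A, B, or C in blocks of 10,
# cycling back to A after C (records 1-10 -> A, 11-20 -> B, 21-30 -> C, 31-40 -> A, ...).

def target_for(row_number):
    """row_number is 1-based; returns 'A', 'B', or 'C'."""
    block = (row_number - 1) // 10           # which block of 10 the record falls in
    return "ABC"[block % 3]                  # cycle A -> B -> C -> A ...

if __name__ == "__main__":
    for n in (1, 10, 11, 25, 31, 60, 61):
        print(n, "->", target_for(n))
    # 1 -> A, 10 -> A, 11 -> B, 25 -> C, 31 -> A, 60 -> C, 61 -> A
```

In a mapping, one way to realise the same rule would be to derive a row number (for example from a Sequence Generator), compute this expression, and route on its result.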
How do you move the code from development to production?
How can we validate all the mappings in the repository simultaneously?
Suppose we have a relational source table with columns Product_Id, Month, and Sales, holding one row per product per month:

Product_Id  Month  Sales
1           Jan    x
1           Feb    x
...         ...    ...
1           Dec    x
2           Jan    x
2           Feb    x
...         ...    ...
2           Dec    x
3           Jan    x
...         ...    ...
3           Dec    x

and so on. Assume there can be any number of product keys, and for each product key the sales figure (denoted by 'x') is stored for each of the 12 months from Jan to Dec. We want the result in the target table in the following form:

Product_Id  Jan  Feb  Mar  ...  Dec
1           x    x    x         x
2           x    x    x         x
3           x    x    x         x

So how would you design the ETL mapping for this case? Explain in terms of transformations.
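As a language-neutral illustration of the pivot this question asks for (in a mapping, one common approach is an Aggregator grouped on Product_Id with a conditional expression per month), here is a small Python sketch. The column names come from the question; the sample data and the pivot_sales helper are assumptions.

```python
# Hypothetical sketch: pivot (Product_Id, Month, Sales) rows into one row per
# product with a column per month, mirroring what an aggregation grouped on
# Product_Id with one conditional expression per month would produce.

MONTHS = ["Jan", "Feb", "Mar", "Apr", "May", "Jun",
          "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"]

def pivot_sales(rows):
    """rows: iterable of (product_id, month, sales) -> {product_id: {month: sales}}."""
    result = {}
    for product_id, month, sales in rows:
        result.setdefault(product_id, {m: None for m in MONTHS})[month] = sales
    return result

if __name__ == "__main__":
    source = [(1, "Jan", 100), (1, "Feb", 120), (2, "Jan", 90)]
    for product_id, by_month in pivot_sales(source).items():
        print(product_id, [by_month[m] for m in MONTHS])
```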
Source:

NAME   SAL
GANGA  30000
RAJU   20000
PAVAN  25000

Target:

NAME   SAL    MAXSAL
GANGA  30000  30000
RAJU   20000  30000
PAVAN  25000  30000

How can this be achieved at the mapping level?
Source file (xyz):

a,0,a,a,a
b,b,b,0,b
c,c,c,0,c

The target should look like this:

xyz
a
b
c

How can this be implemented?
What is the capacity of a PowerCube?
What are active and passive transformations?
Why do we use surrogate keys in real-time projects? Please explain.
In the following scenario, from the EMPSAL table I want the employees whose salary is above one lakh, month-wise:

EMPSAL
EMPID  MONTHYEAR  SAL
1      JAN2008    1000
2      MARCH2009  50000
3      APRIL2009  4000
4      FEB2009    100000
5      JUL2009    600000
6      DEC2008    90000
100 rows are coming from the source and the target table has 5 million rows. Which option is better for matching the data?
1. Joiner
2. No cache
3. Static cache
4. Dynamic cache