In SCD Type 1, the source has 10 billion records. On the first day the load completes successfully, but on the second day the load takes a long time because some records need to be updated and new records need to be inserted. As a developer, what would be the better solution for this?
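One common way to approach this, sketched here rather than offered as a definitive answer, is to avoid comparing all 10 billion rows on the second day: capture only the delta, for example by filtering the source on a last-modified timestamp (or database CDC), or by comparing an MD5 checksum of the non-key columns with a checksum stored alongside the target row, and then route rows to insert or update. The Python sketch below illustrates the checksum comparison; the key column name and the idea of persisting a hash with the dimension are assumptions made for illustration. In Informatica this would typically be an MD5() expression port feeding a Lookup and an Update Strategy, with as much of the filtering as possible pushed down to the source database.

```python
import hashlib

KEY_COLS = ("rowid",)  # hypothetical natural key of the dimension

def row_hash(row, key_cols=KEY_COLS):
    """MD5 over the non-key columns, used to detect changed rows cheaply."""
    payload = "|".join(f"{k}={row[k]}" for k in sorted(row) if k not in key_cols)
    return hashlib.md5(payload.encode()).hexdigest()

def split_delta(source_rows, target_hashes, key_cols=KEY_COLS):
    """Classify today's source rows as inserts or SCD1 (overwrite) updates.

    target_hashes maps natural key -> MD5 stored with the existing target
    row (an assumption of this sketch). Rows whose hash is unchanged are
    skipped, so only the true delta is written on the second day.
    """
    inserts, updates = [], []
    for row in source_rows:
        key = tuple(row[c] for c in key_cols)
        current = row_hash(row, key_cols)
        if key not in target_hashes:
            inserts.append(row)      # brand-new record -> insert
        elif target_hashes[key] != current:
            updates.append(row)      # attributes changed -> SCD1 overwrite
        # identical hash -> unchanged, nothing to load
    return inserts, updates
```

Partitioning the session and indexing the target on the natural key also help, but the main win comes from loading only the changed rows instead of reprocessing the full source.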
Can you use the mapping parameters or variables created in one mapping in another mapping?
How can you recognise whether or not the newly added rows in the source get inserted in the target?
How can you complete unrecoverable sessions?
There are 100 lines in a file. How do you print lines 31-50 and 81-90 in Unix with a single command? (See the sketch after this list.)
Between a static and a dynamic lookup cache, which one is better?
If I have records like these in the source table:

rowid  name
10001  gdgfj
10002  dkdfh
10003  fjfgdhgjk
10001  gfhgdgh
10002  hjkdghkfh

then, using an Expression transformation, the target table should look like this:

rowid  name
10001  gdgfj
10002  dkdfh
10003  fjfgdhgjk
xx001  gfhgdgh
xx002  hjkdghkfh

That is, duplicate records should get "xx" in their rowid. (A sketch of this logic follows after this list.)
What are the limitations of the Sorter transformation?
From a source that has 10,000 records, how do you find the average of the 500th to 600th records? (See the sketch after this list.)
If my source has 30 million records, the cache obviously cannot be allocated enough memory. What needs to be done in this case?
How can I schedule an Informatica job with the Unix cron scheduling tool?
For stage table data processing, suppose in the first run we processed 8 out of 10 records; then in the second run we should pick up only the unprocessed records (here, 2 records) along with any new records loaded into the stage table through the real-time mapping. Note: in this example, the 8 records are those for which we got a transaction number after the lookup on the trn_no_cod table, and the 2 records are those for which the lookup returned trn_no as NULL. (See the sketch after this list.)
What are mapplets?
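For the Unix question above (printing lines 31-50 and 81-90 with one command): the classic answer is a single sed call, `sed -n '31,50p;81,90p' filename` (awk with an NR range condition works as well). As a language-neutral illustration of the same range selection, here is a small Python sketch; the file name is a placeholder.

```python
# Print lines 31-50 and 81-90 (1-based), mirroring `sed -n '31,50p;81,90p' file`.
wanted = set(range(31, 51)) | set(range(81, 91))

with open("file.txt") as fh:                      # placeholder file name
    for lineno, line in enumerate(fh, start=1):
        if lineno in wanted:
            print(line, end="")
```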
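For the duplicate-rowid question above: the usual idea in an Expression transformation is to remember which rowids have already passed through (variable ports, typically after a Sorter) and, for any repeat, replace the leading digits with 'xx'. A minimal Python sketch of the same first-occurrence-wins logic; the data comes from the question, and the two-character replacement is the pattern the expected output implies.

```python
def mask_duplicates(rows):
    """Keep the first occurrence of each rowid; rewrite repeats as 'xx' + rest."""
    seen = set()
    out = []
    for rowid, name in rows:
        if rowid in seen:
            out.append(("xx" + rowid[2:], name))   # duplicate -> xx001, xx002, ...
        else:
            seen.add(rowid)
            out.append((rowid, name))
    return out

source = [
    ("10001", "gdgfj"),
    ("10002", "dkdfh"),
    ("10003", "fjfgdhgjk"),
    ("10001", "gfhgdgh"),
    ("10002", "hjkdghkfh"),
]
print(mask_duplicates(source))
# [('10001', 'gdgfj'), ('10002', 'dkdfh'), ('10003', 'fjfgdhgjk'),
#  ('xx001', 'gfhgdgh'), ('xx002', 'hjkdghkfh')]
```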
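For the 500th-to-600th average question above: in a mapping this is typically done by numbering the rows (a Sequence Generator or an Expression variable port), filtering on the range, and aggregating with AVG; in SQL, ROW_NUMBER()/ROWNUM plus AVG does the same. A small Python sketch of the logic, assuming 1-based counting, an inclusive range, and a numeric column:

```python
def average_between(records, start=500, end=600, value=lambda r: r):
    """Average of the start-th through end-th records (1-based, inclusive).

    `value` extracts the numeric column to average; here the records are
    assumed to be plain numbers (an assumption of this sketch).
    """
    window = [value(r) for i, r in enumerate(records, start=1) if start <= i <= end]
    return sum(window) / len(window) if window else None

# With 10,000 dummy values 1..10000, the 500th-600th records average to 550.
print(average_between(range(1, 10001)))   # 550.0
```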
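For the stage-table reprocessing question above: a common pattern is to keep a "processed" marker on the stage table (here, the looked-up trn_no itself) plus a watermark of the last run, and in every run pick up only rows that are still unresolved or that arrived since the previous run. A Python sketch under those assumptions; the column names trn_no and load_ts and the watermark are illustrative, not taken from the original question.

```python
from datetime import datetime

def rows_to_process(stage_rows, last_run_time):
    """Select stage rows for this run.

    A row qualifies if its lookup against trn_no_cod has not resolved yet
    (trn_no is still None) or it was loaded after the previous run's watermark.
    """
    return [
        r for r in stage_rows
        if r["trn_no"] is None or r["load_ts"] > last_run_time
    ]

# Example: 2 unresolved rows from run 1 plus 1 new real-time row are picked up.
stage = [
    {"id": 1,  "trn_no": "T001", "load_ts": datetime(2024, 1, 1, 9)},
    {"id": 9,  "trn_no": None,   "load_ts": datetime(2024, 1, 1, 9)},
    {"id": 10, "trn_no": None,   "load_ts": datetime(2024, 1, 1, 9)},
    {"id": 11, "trn_no": None,   "load_ts": datetime(2024, 1, 1, 12)},
]
print([r["id"] for r in rows_to_process(stage, datetime(2024, 1, 1, 10))])
# [9, 10, 11]
```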