Answer Posted / sagar chowdary
We do the incremental load by using a date column in the source: specify the start date in the Source Qualifier's filter condition, then change that start date in a parameter file for future runs.
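As a sketch of the setup just described (the folder, workflow, session, and parameter names here are hypothetical, not from the original answer), the parameter file and the Source Qualifier filter condition might look like:

```
; hypothetical parameter file (incr_load.prm)
[MyFolder.WF:wf_incremental_load.ST:s_m_incremental_load]
$$LastExtractDate=2024-01-01

-- hypothetical Source Qualifier filter condition referencing the parameter
UPDATED_DATE > TO_DATE('$$LastExtractDate', 'YYYY-MM-DD')
```

After each successful run, the date in the parameter file is moved forward so the next run extracts only newer records.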
With this in mind, I will expect a load pattern like this:
- Every extract from the source will be a full load.
- Every load, in terms of records, will be equal to the last load + new records - deleted records.
- 80-90% of the extracted records will already exist in the valid table instance.
- Every load will be incrementally larger than the previous load, as more records are added to the source.
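The core incremental-extraction logic above (filter source rows by a last-run date kept in a parameter file, then advance the date for the next run) can be sketched outside Informatica in plain Python; all names here are hypothetical, for illustration only:

```python
from datetime import date

def incremental_extract(rows, last_run_date):
    """Return only the rows changed after last_run_date.

    rows: iterable of dicts with an 'updated_date' key, standing in
    for the date column used in the Source Qualifier filter.
    """
    return [r for r in rows if r["updated_date"] > last_run_date]

# Simulated source table with a date column.
source = [
    {"id": 1, "updated_date": date(2024, 1, 1)},
    {"id": 2, "updated_date": date(2024, 2, 1)},
    {"id": 3, "updated_date": date(2024, 3, 1)},
]

# This start date would come from the parameter file; after each run
# it is advanced so the next run picks up only newer records.
last_run = date(2024, 1, 15)
delta = incremental_extract(source, last_run)
print([r["id"] for r in delta])  # -> [2, 3]
```

Only rows with a date after the parameter value make it into the load, which is what keeps each run incremental instead of a full reload.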
How can you validate all mappings in the repository simultaneously?
Source: XML file; target: XML file. How can we check the data loaded into the target XML file by writing a SQL query?
To import the flat file definition into the designer where should the flat file be placed?
What is the difference between Informatica 7.x and 8.x, and what is the latest version?
How do you performance-tune Informatica mappings on UNIX?
What is the need for ETL tools?
In a development project, what is the process an ETL developer should follow from day 1?
If the source has duplicate records on the id column, e.g. (1, a), (1, b), (1, c), (2, a), (2, b), and the target should be loaded as 1 with a+b+c (or 1 with a||b||c), which transformations should be used for this?
What are the different transaction levels available in transaction control transformation?
What are the best practices for extracting data from flat-file sources larger than 100 MB?
What are the limitations on using a Joiner transformation in the mapping pipeline?
What are the main issues when working with flat files as sources and as targets?
If the source is a flat file, how can I store the header and trailer in one target and the data in another target?

          |------> target1 (header + trailer)
source ---|
          |------> target2 (data)
What are multi-group transformations?
How many repositories can be created in Informatica?