Can we have a Mapping without a Source Qualifier?
Answers were Sorted based on User's Feedback
Answer / rj
Why not? If you use an XML file or a VSAM (COBOL) file, you will not have a standard Source Qualifier; the mapping uses an XML Source Qualifier or a Normalizer instead.
| Is This Answer Correct ? | 33 Yes | 2 No |
Answer / arnab
Yes, it is possible. If you have COBOL source files, you will not have a Source Qualifier;
you will have a Normalizer instead (see the sketch after this answer).
| Is This Answer Correct ? | 27 Yes | 2 No |
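As a side note on what the Normalizer does in place of the Source Qualifier for COBOL sources, here is a minimal Python sketch of the idea of pivoting a repeating (OCCURS) group into multiple rows. This is only a conceptual illustration; Informatica performs this internally, and the record layout and helper function below are hypothetical.

```python
# Conceptual sketch only: a Normalizer pivots a COBOL record containing a
# repeating (OCCURS) group into one output row per occurrence. Informatica
# does this internally; this function just illustrates the idea.
def normalize(record: dict, occurs_field: str) -> list[dict]:
    """Expand one record with a repeating group into multiple flat rows."""
    rows = []
    for index, value in enumerate(record[occurs_field], start=1):
        row = {k: v for k, v in record.items() if k != occurs_field}
        row["occurrence"] = index      # generated index, loosely like the Normalizer's GCID
        row[occurs_field] = value
        rows.append(row)
    return rows

# Example: a customer record with three quarterly sales figures (OCCURS 3)
record = {"cust_id": 101, "quarterly_sales": [250.0, 300.0, 275.0]}
for r in normalize(record, "quarterly_sales"):
    print(r)   # prints three flat rows, one per quarterly sales value
```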
Answer / ashish
In addition to Aruna's answer: the Integration Service works with Informatica's native (transformation) datatypes, and the Source Qualifier converts each port to a native datatype so the data is compatible with the Integration Service.
For example, if a source column's datatype is VARCHAR2, the Source Qualifier converts it to the String transformation datatype (a sketch follows this answer).
| Is This Answer Correct ? | 23 Yes | 6 No |
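To make the datatype conversion above concrete, here is a minimal Python sketch of the kind of source-to-native mapping the Source Qualifier performs for an Oracle source. The dictionary and function below are purely illustrative assumptions, not an Informatica API; PowerCenter defines the actual mappings internally.

```python
# Illustrative only: an approximation of how a Source Qualifier maps
# source-specific (here, Oracle) datatypes to Informatica transformation
# datatypes. This dict is a sketch for understanding, not an actual API.
ORACLE_TO_NATIVE = {
    "VARCHAR2": "string",
    "CHAR": "string",
    "NUMBER": "decimal",
    "DATE": "date/time",
    "CLOB": "text",
}

def to_native_type(source_type: str) -> str:
    """Return the (assumed) transformation datatype for a source datatype."""
    return ORACLE_TO_NATIVE.get(source_type.upper(), "string")

if __name__ == "__main__":
    # e.g. a VARCHAR2 column is exposed to downstream transformations as string
    print(to_native_type("Varchar2"))  # -> string
```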
Answer / aruna1105
No. The Source Qualifier is required to make the data compatible with Informatica; it essentially converts the source data into a form that Informatica can understand.
| Is This Answer Correct ? | 17 Yes | 11 No |
Answer / babu
Hi all,
It is possible: whenever your source is a COBOL file, the
Normalizer transformation acts as the Source Qualifier.
| Is This Answer Correct ? | 7 Yes | 1 No |
Answer / amit
Yes, it is very much possible to have a mapping without a
Source Qualifier. If you have COBOL source files, you will
not have a Source Qualifier; you will have a Normalizer
instead.
| Is This Answer Correct ? | 5 Yes | 1 No |
Answer / varun dikshit
What if we connect a Sequence Generator directly to the target? No Source Qualifier is required in such a case. :)
| Is This Answer Correct ? | 3 Yes | 1 No |
Answer / sj
Sorry guys, I wrote my earlier answer wrong. It is not possible without a Source Qualifier.
| Is This Answer Correct ? | 5 Yes | 6 No |
Answer / ankit
I guess you can just create a Stored Procedure transformation
without any source or target in the mapping and define the
connection in the session. I haven't tried this myself.
| Is This Answer Correct ? | 0 Yes | 2 No |
What is the use of the Target Designer?
How does a Rank transformation differ from the Aggregator transformation's MAX and MIN functions?
Which quality process do you follow in your project?
If a session fails after loading 10,000 records into the target, how can we load from the 10,001st record when we run the session the next time?
How can you insert a row into the target if it does not exist there, and update it if it does?
While using an Update Strategy in the mapping, which gives better performance, a flat file or a table? Why? What are the advantages and disadvantages?
In which situations do we go for a persistent lookup cache, and in which situations do we go for a shared lookup cache?
What is the best approach to load 100 different source files (with different structures) into different target tables?
What do you mean by enterprise data warehousing?
How do you update a source definition?
A source table has 3 records that are successfully loaded into the target. Then 4 more records are added to the source, so it now has 7 records. How do we load only the remaining 4 records into the same target table while keeping the original 3 records? Can anyone give me the data flow for this logic, please?
Can I use the same persistent cache (X.dat) for two sessions running in parallel? If it is not possible, why? If yes, how?