What are the uses of a Parameter file?
Answer / d s r krishna
Parameter files are created with an extension of .prm.
They are used to pass values that can be changed for mapping parameters and session parameters at run time.
Mapping Parameters:
A value is defined in the parameter file for a mapping parameter that has already been created in the mapping with a data type, precision, and scale.
The mapping parameter file syntax (xxxx.prm):
[FolderName.WF:WorkFlowName.ST:SessionName]
$$ParameterName1 =Value
$$ParameterName2 =Value
After that, open the Properties tab of the session and set the parameter file name, including the physical path of this xxxx.prm file.
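For illustration, a minimal sketch of such a mapping parameter file, assuming a hypothetical folder DWH_FOLDER, workflow wf_load_sales, session s_m_load_sales, and two made-up mapping parameters already defined in the mapping:
[DWH_FOLDER.WF:wf_load_sales.ST:s_m_load_sales]
$$CountryCode=US
$$LoadDate=12/31/2020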
Session Parameters:
The session parameter file syntax (yyyy.prm):
[FolderName.SessionName]
$InputFileValue1=Path of the source Flat file
After that, open the Properties tab of the session and set the parameter file name, including the physical path of this yyyy.prm file (a sample file is sketched after the attribute list below).
Then make the following changes in the Mapping tab of the session, in the source qualifier's Properties section:
Attribute --------> Value
Source File Type --------> Direct
Source File Directory --------> (leave empty)
Source File Name --------> $InputFileValue1
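A minimal sketch of the corresponding yyyy.prm, assuming a hypothetical folder DWH_FOLDER and session s_m_load_sales (the path is illustrative and should point to the actual flat file):
[DWH_FOLDER.s_m_load_sales]
$InputFileValue1=/data/incoming/sales_20201231.dat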
Thanks
Answer / bhaskar
A parameter file is a file that contains the values of mapping parameters and variables.
Type the following in Notepad (putting the desired value after the equals sign) and save it:
[foldername.sessionname]
$$inputvalue1=
In the session properties, under the parameter file path, specify the path where this file is saved.
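For instance, a hypothetical file saved as C:\infa\param\wf_params.txt might contain (folder, session name, and value are made up):
[dev_folder.s_m_load_orders]
$$inputvalue1=100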
Answer / rahul
Parameter file ---- we have parameters and variables.
We can use them in mappings, sessions, and workflows.
Session: to change the cache path.
Workflow: to know the status of the previous task.
Mapping: to maintain a constant value or to use the last updated value.
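As a rough sketch of the session-level and mapping-level uses, assuming a hypothetical folder SALES and session s_m_load_cust, and assuming the built-in $PMCacheDir variable can be overridden in the parameter file to redirect the cache path (all names and paths here are illustrative):
[SALES.s_m_load_cust]
$PMCacheDir=/infa/cache/custom
$$LastLoadDate=12/31/2020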
On one day I load 10 rows into my target, and the next day I get 10 more rows to be added, out of which 5 are updated rows. How can I send them to the target? How can I insert and update the records?
How do I join tables when my source has 15 tables and the target is one?
What are the different caches used in informatica?
How many mapplets have you created, and what is the logic used?
What is the difference between stop and abort?
What do we require for D. modelling?
Generate unique sequence numbers for each partition in a session with an Unconnected Lookup? Hi all, please help me to resolve the issue below while applying partitioning to my session. This is a very simple mapping with a source, lookup, router, and target. I need to look up on the target and compare with the source data; if any piece of data is new, insert it, and if anything changed in the existing data, update it. While inserting the new records into the target table, I generate sequence numbers with an unconnected lookup by calling the maximum PK ID from the target table. This flow has been working fine for the last year.
Now I wish to apply partitioning to the above flow (session). At the source I used 4 pass-through partitions (for each partition a different filter condition to pull the data from the source), and at the target I used 4 pass-through partitions. It works fine for some data, but for some rows the insert operation throws unique key errors, because the same sequence key is generated twice.
In detail: the 1st row comes from the 1st partition and gets sequence number 1. The 2nd row comes from the 1st partition and gets sequence number 2. The 3rd row comes from the 2nd partition and again gets sequence number 2 (it should get 3). The issue is that the same sequence numbers are generated for different partitions. Can anyone please help me to resolve this? While applying partitions, how can I generate unique sequence numbers from an unconnected lookup for each partition's data? Regards, N Kiran.
Can a port in an Expression transformation be given the name DISTINCT?
Hi, I have two sources, A and B. My source A contains 10000 million records, from which I need 20 attributes. My source B also has 10000 million records, from which I need only 1 attribute. I have to load into the target using a Joiner. Is there any issue with this? If there is an issue, how do I tune this mapping in the best way?
What is primary and backup node?
What is workflow? What are the components of the workflow manager?
Hi, if I have a source with both unique and duplicate records, like 1,1,2,3,3,4, I want to load the unique records (2,4) into one target and the duplicate records (1,1,3,3) into another. Can anybody please send me the scenario? My mail is shek.inform@gmail.com