If the source file is:
CID,CCODE,CONNDATE,CREATEDBY
0000000224,1000,20060601,CURA
0000000224,2000,20050517,AFGA
0000000224,3000,20080601,TUNE
0000000225,1000,20020601,CURA
0000000225,2000,20050617,AFGA
0000000225,3000,20080601,TONE
and the target is Oracle, the following are the validations:
CID must be loaded with unique records
leading zeros must be removed while loading CID into the target
load only the record where the customer first connected to the company (earliest CONNDATE per customer)
CONNDATE should be loaded in Oracle DATE format
CID datatype is VARCHAR2 in the target
CONNDATE is DATE datatype
CCODE is VARCHAR2
Duplicate records such as the following must be collapsed to one:
0000000224,1000,20060601,CURA
0000000224,1000,20060601,CURA
Answer Posted / rajesh
1) First sort the data in ascending order by CONNDATE and remove duplicates, so only the earliest connection record per customer survives.
2) In the Transformer, write a derivation to strip the leading zeros with the Trim function, e.g. Trim(inlink.CID, "0", "L").
3) Convert the date string with the Transformer date functions, e.g. StringToDate(inlink.CONNDATE, "%yyyy%mm%dd").
4) Set the output column data types (VARCHAR2, DATE) to match the target definitions.
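The steps above can be sketched outside DataStage as a small Python script (an illustration only; the row data is taken from the question, and the `transform` function name is assumed):

```python
from datetime import datetime

# Source rows from the question; in DataStage these would arrive
# from a Sequential File stage.
rows = [
    ("0000000224", "1000", "20060601", "CURA"),
    ("0000000224", "2000", "20050517", "AFGA"),
    ("0000000224", "3000", "20080601", "TUNE"),
    ("0000000225", "1000", "20020601", "CURA"),
    ("0000000225", "2000", "20050617", "AFGA"),
    ("0000000225", "3000", "20080601", "TONE"),
]

def transform(rows):
    """Keep one record per CID, choosing the earliest CONNDATE (step 1),
    strip leading zeros from CID (step 2, like Trim(CID, "0", "L")),
    and convert the date string to a real date (step 3)."""
    earliest = {}
    for cid, ccode, conndate, createdby in rows:
        conn = datetime.strptime(conndate, "%Y%m%d").date()
        key = cid.lstrip("0")  # leading zeros removed
        # Duplicates and later connections are discarded here.
        if key not in earliest or conn < earliest[key][2]:
            earliest[key] = (key, ccode, conn, createdby)
    return sorted(earliest.values())

for rec in transform(rows):
    print(rec)
```

Under this reading of the question, only one row per customer survives: CID 224 keeps its 2005-05-17 AFGA record and CID 225 keeps its 2002-06-01 CURA record.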
What is the sortmerge collector?
I have a few records and I just want to load them into the targets in a cyclic way. How?
What are .ctl (control) files? How does the Dataset stage get better performance from these files?
Where does DataStage store its repository?
What is the roundrobin collector?
Hi all, in a PX job I have passed 4 parameters, and when I run the same job in a sequence I don't want to use those parameters. Is this possible, and if yes, how?
How do you run a Sequential File stage in parallel when the stage is used on the target side?
What is the use of Array size in DataStage?
What are the different options associated with dsjob command?
What stage is used for the following? Input columns: dept|mgr|employee|salary. Output columns: mgr | count of employees per mgr | avg salary per dept. Note: each dept has one mgr, and each mgr has many employees.
Can you implement SCD2 using join, transformer and funnel stage?
What is the difference between IBM DataStage 8.5 and DataStage 9.1?
Differentiate between operational datastage (ods) and data warehouse?
What are the types of hashed files in DataStage?
What is the flow of loading data into fact & dimensional tables?