How can you handle null values in the Transformer stage?
Answers were Sorted based on User's Feedback
Answer / prabhath.p
There are three functions in the Transformer stage for null handling (see the sketch below):
1. NullToValue(input column, value)
2. NullToEmpty(input column)
3. IsNull(input column), used inside an If...Then...Else derivation
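A minimal sketch of how these functions can appear in output-column derivations, assuming an input link named lnk_in with a nullable column CUST_NAME (the link and column names are only illustrative):

    Null to a default value:   NullToValue(lnk_in.CUST_NAME, 'UNKNOWN')
    Null to an empty string:   NullToEmpty(lnk_in.CUST_NAME)
    Conditional on a null:     If IsNull(lnk_in.CUST_NAME) Then 'MISSING' Else lnk_in.CUST_NAME

The first two simply substitute a fixed replacement; the third lets you choose any replacement logic you need.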
| Is This Answer Correct ? | 11 Yes | 1 No |
Answer / subhash
The Transformer stage has built-in null-handling functions; using those function options we can handle the null values in the data.
| Is This Answer Correct ? | 9 Yes | 1 No |
Answer / rajeshchunduri
The Transformer has null-handling functions; among them, NullToZero lets us replace a null with zero.
chunduri
| Is This Answer Correct ? | 4 Yes | 1 No |
Answer / prasad
Take one stage variable:
sv: If IsNull(column_name) Or column_name = '' Then 0 Else 1
and in the output-link constraints use:
sv = 0 (null records go to one target)
sv = 1 (non-null records go to another target)
So we can handle nulls by using a stage variable; a concrete sketch follows below. Please correct me if I am wrong.
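As a concrete sketch of this stage-variable approach, assuming an input link lnk_in with a column CUST_ID and two output links lnk_nulls and lnk_valid (all of these names are illustrative):

    Stage variable sv:        If IsNull(lnk_in.CUST_ID) Or lnk_in.CUST_ID = '' Then 0 Else 1
    Constraint on lnk_nulls:  sv = 0    (rows whose CUST_ID is null or empty)
    Constraint on lnk_valid:  sv = 1    (all other rows)

Evaluating the null test once in a stage variable keeps the two constraints simple and ensures every row goes to exactly one of the two links.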
| Is This Answer Correct ? | 2 Yes | 0 No |
Answer / subhash
Null handling in the Transformer involves two things:
1. Identifying the null values
2. Removing the rows that contain null values
1. Ans: identify nulls with the null-handling functions (such as IsNull).
2. Ans: remove the rows with null values by using a constraint in the Transformer, as sketched below.
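A minimal sketch of the constraint step, assuming an input link lnk_in with a nullable column ORDER_ID (the names are illustrative). Putting

    IsNotNull(lnk_in.ORDER_ID)

as the constraint on the output link keeps only the rows where ORDER_ID has a value; rows with a null are dropped, or they can be captured on a separate reject link instead of being lost.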
| Is This Answer Correct ? | 2 Yes | 1 No |
Answer / sailaja
Hi,
For null handling in the Transformer you can use the functions below, depending on your requirement (examples follow):
1. NullToEmpty
2. NullToZero
3. NullToValue
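For example, assuming an input link lnk_in with a numeric column QTY and a character column CITY (these names are only for illustration):

    NullToZero(lnk_in.QTY)            writes 0 when QTY is null
    NullToEmpty(lnk_in.CITY)          writes '' when CITY is null
    NullToValue(lnk_in.CITY, 'N/A')   writes 'N/A' when CITY is null

NullToValue is the most general of the three, since you choose the replacement value yourself.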
| Is This Answer Correct ? | 2 Yes | 1 No |
How do you exclude the first and last lines while reading data into a Sequential File stage (with around 1000 records)? I guess it is probably done with the Unix filter option, but I am not sure which command to use.
If we are using two sources having the same metadata, how do we check whether the data in the two sources is the same or not? And if the data is not the same, I want to abort the job. How can we do this?
How do you create a document in DataStage?
Describe the stream connector.
What is an audit table?
Can we use Round Robin partitioning for the Aggregator stage? Is there any underlying benefit?
What is a node map constraint?
If I have two tables,
    table1: 1 a, 1 b, 1 c, 1 d, 2 a, 2 b, 2 c, 2 d, 2 e
    table2: 1 a,b,c,d and 2 a,b,c,d,e
how can I get the data out the same as it is in the tables?
How can I implement SCD Type 1 and Type 2 in both server and parallel jobs?
Given field1, field2, field3 data such as
    suresh, 10,324, 355, 1234
    ram, 23,456, 450, 456
    balu, 40,346,23, 275, 5678
how do I remove the duplicate rows in the fields?
This is a UNIX question asked in a DataStage interview. Say I have n records in a text file. I want the first 3 records in the 1st file, the last 3 records in the 3rd file, and the remaining n-6 records in the 2nd file. (Note: we don't know how many records are in the file. I receive one file on a daily basis and I want three target files as described above.)
What is the process for killing a job in DataStage?
What is a lookup table?
To whom do you report?