Why do we use a Dataset?
Answer / srinivas
A Dataset is a file that can be used to store data as an intermediate result.
Compared to other file types, processing is faster with a Dataset because its internal format is DataStage's native format.
It is most often used as intermediate data storage.
An added advantage is that a Dataset can span multiple nodes, so the data is stored and processed in parallel.
The Dataset file has many uses: in the Server edition we have the hashed file, and in the same way we use the Dataset in the Parallel edition.
Hi, my source is Oracle (eno, ename, sal, commision, ...). My requirement is: if the commission column contains a NULL value, I want to keep it as NULL in the target; otherwise, I want only the first two characters of the value in my target. Please help me.
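In a DataStage Transformer stage this is typically written as a derivation that tests the column with IsNull() before taking a substring. The same logic, sketched in Python (the function name is illustrative, not from any DataStage API):

```python
def first_two_or_null(commission):
    """Keep NULL (None) commissions as NULL; otherwise take the first two characters."""
    if commission is None:  # NULL stays NULL in the target
        return None
    return str(commission)[:2]

rows = [None, "1500", "80", None]
print([first_two_or_null(c) for c in rows])  # [None, '15', '80', None]
```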
How do you remove duplicate values in DataStage?
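In DataStage this is usually done with the Remove Duplicates stage or a sorted input into an Aggregator. The underlying logic is an order-preserving first-occurrence filter, sketched in Python:

```python
def remove_duplicates(records, key=lambda r: r):
    """Keep the first occurrence of each key and drop later repeats, preserving order."""
    seen = set()
    out = []
    for r in records:
        k = key(r)
        if k not in seen:
            seen.add(k)
            out.append(r)
    return out

print(remove_duplicates(["A", "B", "B", "C", "A"]))  # ['A', 'B', 'C']
```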
What is the UNIX script to run a DataStage job? Please mention the commands we use most often.
Q1. Create a job to load all unique products into one table and the duplicate rows into another table.
The first table should contain the following output: A, D
The second table should contain the following output: B, B, B, C, C
Q2. Create a job to load each product once into one table and the remaining duplicated products into another table.
The first table should contain the following output: A, B, C, D
The second table should contain the following output: B, B, C
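In a DataStage job both splits are commonly built with an Aggregator (count per key) or a Sort stage with a key-change column feeding a Transformer or Filter. The splitting logic itself, for sample input A, B, B, B, C, C, D, can be sketched in Python:

```python
from collections import Counter

products = ["A", "B", "B", "B", "C", "C", "D"]
counts = Counter(products)

# Q1: keys that occur exactly once vs. every row of a duplicated key
unique_rows = [p for p in products if counts[p] == 1]    # ['A', 'D']
duplicate_rows = [p for p in products if counts[p] > 1]  # ['B', 'B', 'B', 'C', 'C']

# Q2: first occurrence of every key vs. the remaining repeats
seen = set()
first_occurrence, repeats = [], []
for p in products:
    (repeats if p in seen else first_occurrence).append(p)
    seen.add(p)
# first_occurrence == ['A', 'B', 'C', 'D'], repeats == ['B', 'B', 'C']
```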
Tell me the syntax of the configuration file.
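As a rough illustration, a minimal one-node parallel configuration file (the file pointed to by APT_CONFIG_FILE) looks like the sketch below; the node name, fastname, and paths are placeholders, not values from the source:

```
{
    node "node1"
    {
        fastname "etl_server"
        pools ""
        resource disk "/data/datasets" {pools ""}
        resource scratchdisk "/data/scratch" {pools ""}
    }
}
```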
1) Convert vertical PIVOTing without using the PIVOT stage in DataStage.
Example input:
DEPT_NO  EMPNAME
10       Subhash
10       Suresh
10       sravs
Output:
DEPT_NO  EMP1     EMP2    EMP3
10       subhash  suresh  sravs
2) How do you implement horizontal PIVOTing without using the PIVOT stage?
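Without a Pivot stage, the vertical-to-horizontal case is commonly solved by sorting on DEPT_NO and using Transformer stage variables to concatenate names until the key changes. The grouping logic can be sketched in Python:

```python
from collections import defaultdict

rows = [(10, "Subhash"), (10, "Suresh"), (10, "sravs")]

# Collect all employee names under each DEPT_NO (vertical pivot)
groups = defaultdict(list)
for dept_no, empname in rows:
    groups[dept_no].append(empname)

for dept_no, emps in groups.items():
    print(dept_no, *emps)  # 10 Subhash Suresh sravs
```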
How many stage variables/parameters can there be in a Transformer stage? 1, 2, 3, or 4?
What are the differences between DataStage 8.1, 8.5, and 8.7?
A source file has data like "aabbccc"; I want the target file to contain "a1a2b1b2c1c2c3".
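In DataStage this is typically done with a Transformer stage variable that keeps a running count per character and resets when the character changes (assuming sorted input). The numbering logic, sketched in Python:

```python
from collections import defaultdict

def number_occurrences(s):
    """Append a running occurrence count to each character: 'aabbccc' -> 'a1a2b1b2c1c2c3'."""
    counts = defaultdict(int)
    out = []
    for ch in s:
        counts[ch] += 1
        out.append(f"{ch}{counts[ch]}")
    return "".join(out)

print(number_occurrences("aabbccc"))  # a1a2b1b2c1c2c3
```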
What are the differences between the hash and modulus partitioning methods?
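In short: modulus partitioning divides an integer key by the number of partitions and uses the remainder, so it requires a numeric key; hash partitioning first applies a hash function to the key, so it works for any key type. A sketch of both ideas in Python (partition count and hash choice are illustrative assumptions):

```python
import hashlib

NUM_PARTITIONS = 4  # assumed number of nodes/partitions

def modulus_partition(key: int) -> int:
    """Modulus partitioning: requires an integer key; partition = key mod N."""
    return key % NUM_PARTITIONS

def hash_partition(key) -> int:
    """Hash partitioning: hash any key type, then reduce the hash mod N."""
    digest = hashlib.md5(str(key).encode()).digest()
    return int.from_bytes(digest[:4], "big") % NUM_PARTITIONS

print(modulus_partition(10))  # 2
```

Rows with the same key always land on the same partition under either method; the difference is only in how the key is mapped to a number first.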
How do you define stage variables in a Transformer stage?
A flat file contains 200 records. I want to load the first 50 records on the first run of the job, the second 50 records on the second run, and so on. How would you develop this job? Please give the steps.
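One common approach persists a run counter between job runs (for example in a small file or a job parameter) and constrains the rows read by row number. The batching logic, sketched in Python with a hypothetical counter file name:

```python
import os

BATCH_SIZE = 50
COUNTER_FILE = "run_counter.txt"  # hypothetical file tracking completed runs

def load_next_batch(records):
    """Each call (one "job run") returns the next BATCH_SIZE records."""
    run = 0
    if os.path.exists(COUNTER_FILE):
        with open(COUNTER_FILE) as f:
            run = int(f.read().strip() or 0)
    batch = records[run * BATCH_SIZE:(run + 1) * BATCH_SIZE]
    with open(COUNTER_FILE, "w") as f:
        f.write(str(run + 1))  # remember progress for the next run
    return batch

records = list(range(200))
print(load_next_batch(records)[:3])  # first run in a fresh directory -> [0, 1, 2]
```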