I want the job to abort after some records are loaded into the output,
using only a Sequential File stage and a Data Set stage.
Answer / guest
Hi Friend,
Use a constraint in the Transformer stage: tick the Abort After Rows
column and enter the number of rows to be processed. After that many
rows have been processed, the job will abort.
Thanks,
Hanumantha Rao.
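To make the behaviour concrete, here is a minimal Python sketch of the same idea, not DataStage code: rows are copied from a flat input file to an output file, and once the row count reaches the threshold the run is failed, similar to the Abort After Rows constraint firing. The file names, the 100-row threshold, and the helper name copy_with_abort are all illustrative assumptions, not anything from the original post.

```python
import sys

# Hypothetical threshold, playing the role of the "Abort After Rows" setting.
ABORT_AFTER_ROWS = 100


def copy_with_abort(in_path: str, out_path: str,
                    abort_after: int = ABORT_AFTER_ROWS) -> None:
    """Copy rows from a sequential (flat) file to an output file,
    aborting the run once `abort_after` rows have been processed."""
    rows_written = 0
    with open(in_path) as src, open(out_path, "w") as dst:
        for line in src:
            dst.write(line)
            rows_written += 1
            if rows_written >= abort_after:
                # Equivalent to the constraint firing: stop processing
                # and end the run with a failure after the threshold.
                sys.exit(f"Job aborted after {rows_written} rows")


if __name__ == "__main__":
    # "input.txt" and "output.txt" are placeholder paths for the sketch.
    copy_with_abort("input.txt", "output.txt")
```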
Can you explain the Kafka Connector?
I want an answer to this question: given the columns empno, ename, sal with rows 12,mmm_ww,200 / 13,nnn_xx,300 / 14,bbb_qq,400, which stages and what logic should be used? I don't want the "_" in the output, e.g. nnn_xx should become nnnxx. Please help.
I have a scenario with two columns (Empno, Ename) that contain duplicate records. How do I get the second duplicate record in DataStage?
How can we improve performance in DataStage?
How do you import and export data in DataStage?
Scenario: I have two jobs, Job A and Job B, with parameters x and y respectively. I need to create a sequence job. If we pass parameter x, Job A should run; if we pass parameter y, Job B should run; if we don't pass any parameter, both Job A and Job B should run.
If a column contains data like ram,rakesh,madhan,suraj,pradeep,bhaskar and I want to place the comma-separated names into separate columns, how can we do that?
I have 10 jobs: the first two jobs run on 2 nodes, the next 2 jobs run on 4 nodes, the next 4 jobs run on 6 nodes, and the remaining jobs run on 10 nodes. How do I change the node configuration?
Is the value of a stage variable stored temporarily or permanently?
What is the difference between the Sequential File and File Set stages?
Hi guys, design a job sequence: we have 3 sources, and only if the 1st source aborts should the remaining sources run. How would you design this job? Thanks.
What are normalization and denormalization?