How can we remove duplicates in a file using UNIX?
Answers were sorted based on users' feedback
Answer / j madhava rao
$ sort -u filename sorts the data and removes duplicate lines from the output.
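For example, a minimal run of sort -u, assuming the sample records are stored in a file called names.txt (a hypothetical name used here only for illustration):

$ cat names.txt
2 Ravi
1 vishwa
3 Naveen
1 vishwa

$ sort -u names.txt
1 vishwa
2 Ravi
3 Naveen

The duplicate "1 vishwa" line is collapsed to a single copy, and the output comes back sorted.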
Is This Answer Correct ? | 17 Yes | 1 No |
Answer / vishwababu
You can use uniq to remove duplicates from a file, but it only compares consecutive records.
Ex: if you have the records
1 vishwa
1 Shruti
2 Ravi
3 Naveen
1 vishwa
and you run the uniq command on them, you still get vishwa, Shruti, Ravi, Naveen and vishwa: the duplicate vishwa lines survive because they are not adjacent.
So sort the file before using uniq so that all duplicates are removed.
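As a sketch of the fix, assuming those five records are in a file called records.txt (a hypothetical name):

$ uniq records.txt
1 vishwa
1 Shruti
2 Ravi
3 Naveen
1 vishwa

$ sort records.txt | uniq
1 Shruti
1 vishwa
2 Ravi
3 Naveen

Plain uniq leaves both vishwa lines because they are not next to each other; sorting first makes them adjacent so uniq can drop one of them.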
Is This Answer Correct ? | 7 Yes | 7 No |
Answer / prasad
$ uniq -u file_name
Note that -u prints only the records that are not repeated, so adjacent duplicate records are dropped entirely rather than collapsed to one copy, and the file must be sorted first for uniq to see all duplicates.
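To see what -u actually does, assuming the same hypothetical records.txt used above:

$ sort records.txt | uniq -u
1 Shruti
2 Ravi
3 Naveen

Both vishwa records disappear entirely, because -u keeps only the lines that occur exactly once; use sort -u or sort | uniq if you want one copy of every record kept.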
Is This Answer Correct ? | 0 Yes | 2 No |