[PLUG] Correcting duplicate strings in files

david dafr+plug at dafr.us
Wed Jun 20 02:23:22 UTC 2018


On 06/19/2018 06:02 PM, Rich Shepard wrote:
> On Tue, 19 Jun 2018, david wrote:
> 
>> While I believe the answer has already been found, would the 'uniq' 
>> command have been useful as an alternative?
> 
> david,
> 
>    Good question. Can it find a difference in a specific field and change
> only one of them? Perhaps, but I've no idea.

Without a bigger sample size of data from you, I'm not sure.

I use the uniq command a lot: when I pull a list of things (usually IP 
addresses) out of log files with grep or other utilities, I pipe the 
results through uniq to get a count of how many times each entry 
appears (the -c flag).
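
As a sketch of that kind of pipeline (the log path and the pattern are 
only examples, not anything from your data):

# Pull every IPv4-looking string out of a log, then rank by frequency.
grep -oE '[0-9]{1,3}(\.[0-9]{1,3}){3}' /var/log/auth.log | sort | uniq -c | sort -rn | head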

Provided all of the data lines are unique except for your one duplicate 
line, then yes, you could use it. A crude, but effective approach to 
test would be:

uniq -u "$file" > "$outfile"

Keep in mind that uniq only compares adjacent lines, so the input 
generally needs to be sorted first, and that -u drops every copy of a 
duplicated line rather than keeping one (plain uniq with no flag keeps 
a single copy).
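
If line order doesn't matter in your file, a sorted pipeline makes the 
behaviour easier to see (the file names here are just placeholders):

# Sorting first lets uniq see non-adjacent duplicates.
sort "$file" | uniq > "$outfile"    # keep one copy of every line
sort "$file" | uniq -d              # print only the lines that repeat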

There are a lot of approaches, and I like the awk approach. This might 
just be another tool for you to use in the future to satisfy other needs.
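
On the field question upthread: GNU uniq can be told to ignore leading 
fields when comparing (-f N skips the first N fields), but it can't 
rewrite just one of the matching lines; something awk-ish handles that 
better. Purely as an illustration, with a made-up field number and tag, 
not the actual solution from the thread:

# Tag the third field on the second and later occurrences of a duplicate value.
awk 'seen[$3]++ { $3 = $3 "_dup" } { print }' datafile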

david


