Delete unique rows - optimize script


http://www.unix.com – Hi all,

I have the following input; the unique row key is the 1st column:

Code:
cat file.txt
[4] A response
[1] C request
[1] C response
[3] D request
[2] C request
[2] C response
[5] E request

The desired output should be:

Code:
[1] C request
[1] C response
[2] C request
[2] C response

I have implemented the loop below, which works, but when the input file is larger than 300 MB, removing the non-paired rows takes ages, since it rescans the whole file in a loop.

Code:
#!/bin/bash
req=$(mktemp)
res=$(mktemp)
new=$(mktemp)
tmp=$(mk
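One common way to avoid a per-key rescan of the file is a two-pass awk: the first pass counts how often each key appears, the second pass prints only the rows whose key occurred exactly twice (a request/response pair). This is a sketch based on the sample data in the post, not the poster's original script; it assumes each paired key appears exactly twice and that the key is whitespace-delimited in column 1.

```shell
#!/bin/sh
# Recreate the sample input from the post (assumed filename file.txt)
cat > file.txt <<'EOF'
[4] A response
[1] C request
[1] C response
[3] D request
[2] C request
[2] C response
[5] E request
EOF

# Pass 1 (NR==FNR): count occurrences of each key in column 1.
# Pass 2: print only rows whose key occurred exactly twice,
# i.e. rows that have both a request and a response.
awk 'NR==FNR { count[$1]++; next } count[$1] == 2' file.txt file.txt
```

awk reads the file twice but sequentially, so the cost is linear in the file size rather than quadratic in the number of keys, which should matter at 300 MB.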