Deleting specific lines and duplicates from an 11 GB wordlist text file
http://unix.stackexchange.com – I have an 11 GB wordlist file, already sorted with one word per line. I need to remove duplicates and all lines starting with 077. I guess I need to run sed and sort -u together, but I also want live output (display what's happening in the terminal) and, if possible, the time remaining. All of this in one command, and it must run at full performance under a LiveCD or possibly an installed BackTrack 5 RC3. Time is not critical, but if there is a way to calculate the ETA, I might be able to borrow my dad's i7-based CPU, which should obviously process it faster. (HowTos)
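A minimal sketch of one common approach: since the file is already sorted, `uniq` alone removes duplicates (no expensive re-sort needed), and `grep -v '^077'` drops the unwanted lines. The filenames below (`sample.txt`, `cleaned.txt`, `wordlist.txt`) are placeholders, not from the original question.

```shell
# Tiny sample standing in for the real 11 GB wordlist (already sorted).
printf '0771234\napple\napple\nbanana\n' > sample.txt

# Drop lines starting with 077, then collapse adjacent duplicates.
# uniq is sufficient here because the input is sorted.
grep -v '^077' sample.txt | uniq > cleaned.txt

cat cleaned.txt
```

For the real file, inserting `pv` at the front of the pipeline gives a live progress bar with throughput and an ETA, since `pv` knows the file's total size:

```shell
pv wordlist.txt | grep -v '^077' | uniq > cleaned.txt
```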