My requirement is to pull the bad records from an input file and move those records into a separate file.
That file then has to be sent via email.
Any suggestions, please?
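Something along these lines might serve as a starting point. It is only a sketch: it assumes "bad" means a line with the wrong number of pipe-delimited fields (input.dat, bad_records.dat, and the address are placeholders), and it mails the rejected records in the message body.

    # Split good and bad records; adjust the NF test to your own definition of "bad".
    awk -F'|' -v bad=bad_records.dat 'NF != 5 { print > bad; next } { print }' input.dat > good_records.dat

    # Email the bad-record file only if it is non-empty (use mailx -a or uuencode
    # if your site needs a true attachment instead of the message body).
    [ -s bad_records.dat ] && mail -s "Bad records from input.dat" someone@example.com < bad_records.dat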
I installed SARG from the sarg RPM, and I am facing an issue while generating SARG reports; this time I am getting a different error, shown below:
sarg -l /var/log/squid/access.log
SARG: Records in file: 242332, reading: 0.00%
SARG: Records in file: 242332, reading: 2.06%
SARG: Records in file: 242332, reading: 4.13%
SARG: Records in file: 242332, reading: 6.19%
New to Unix. I have a couple of files of 5 million records each, with a key field on those records. I have about 300 keys that I need to remove from the files, and I don't want to write a program to do it. I have used grep -v in the past, and that works great for a few records, but I can't see myself doing that 300 times per file.
Is there an easier way using grep, egrep, sed/awk, etc.?
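If you put the 300 keys in a file, one per line, a single awk pass can drop every matching record. A rough sketch, assuming the key is the first field of a pipe-delimited record (keys.txt and bigfile are placeholder names):

    # Pass 1 (keys.txt): remember each key to drop.
    # Pass 2 (bigfile):  print only records whose key is not in that list.
    awk -F'|' 'NR == FNR { drop[$1]; next } !($1 in drop)' keys.txt bigfile > bigfile.filtered

grep -v -F -f keys.txt bigfile would also work, but it matches a key anywhere on the line, so the awk version is safer when a key could also appear inside another field.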
I have a .csv file which I want to split into smaller files, keeping the .csv format. There are a few validations the split has to satisfy (see the sketch after this list):
The input file needs to be split into multiple files.
a. The first file should have, say, 100 records, the last file the remaining 33 or so, and the rest 1000 records each.
b. Each file should have a unique name.
c. Each file should be a CSV file.
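A possible awk sketch, assuming the input has no header row and that the file names and chunk sizes are as described above (records.csv, 100 records in the first part, 1000 in each later part, whatever remains in the last):

    awk -v first=100 -v rest=1000 '
        NR <= first { file = "part_001.csv" }                                     # first chunk: 100 records
        NR >  first { file = sprintf("part_%03d.csv", 2 + int((NR - first - 1) / rest)) }  # later chunks: 1000 each
        { print > file }
    ' records.csv

Each part gets a unique, numbered name and keeps the .csv extension; the last part simply receives whatever records are left. If the input is large enough to produce a great many parts, close the previous file whenever the name changes so awk does not run out of open file descriptors.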
I have a file that has multiple records.
I have to copy those records that have a code '06' at a specific position, let's say at positions 19 and 20, into another file. The records don't have any spaces in between.
How can I achieve this using a shell script?
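A minimal sketch, assuming fixed-width records and placeholder file names; substr pulls the two characters at positions 19-20 and keeps the record only when they equal 06:

    # Keep records whose characters at positions 19-20 are "06".
    awk 'substr($0, 19, 2) == "06"' input.dat > code06_records.dat

An equivalent grep would be: grep '^.\{18\}06' input.dat > code06_records.dat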
I have the following code for removing duplicate records based on fields in the input file: the first awk moves the duplicate records into a duplicates file, and the second awk fetches the non-duplicate entries from the input file into a tmp file, which I then move back over the original file.
Can both awk invocations be combined into a single call?
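Since the original commands are not shown, this is only a sketch of how the two passes could collapse into one, assuming a record counts as a duplicate when its first field repeats and that the file names (inputfile, duplicates, tmp) match the description above:

    awk -F'|' '
        seen[$1]++ { print > "duplicates"; next }   # second and later occurrences go to duplicates
                   { print > "tmp" }                # first occurrence is kept
    ' inputfile && mv tmp inputfile

A single read of inputfile routes each record to either duplicates or tmp, so the separate second awk is no longer needed.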