My requirement is to pull the bad records from an input file and move those records into a separate file.
That file then has to be sent via email.
Any suggestions, please?
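One way to sketch this, assuming "bad" means a wrong comma-separated field count (here, anything other than 3 fields) -- swap in whatever rule actually defines a bad record for you. The file names, sample data, and email address below are all hypothetical placeholders:

```shell
#!/bin/sh
# Sample input for illustration; the middle line is "bad" (only 2 fields).
printf 'a,b,c\nx,y\nq,r,s\n' > input.csv

# Route each record by field count: good lines to one file, rejects to another.
awk -F, 'NF == 3 { print > "good_records.csv"; next }
         { print > "bad_records.csv" }' input.csv

# Mail the rejects if any were found (requires a configured mailx on the
# host; the subject and address are placeholders):
# [ -s bad_records.csv ] && mailx -s "Bad records from input.csv" you@example.com < bad_records.csv
```

`[ -s file ]` is true only when the file exists and is non-empty, so no email goes out on a clean run.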
I installed SARG from the SARG RPM, and I am facing an issue while generating SARG reports. This time I am getting a different error, shown below:
sarg -l /var/log/squid/access.log
SARG: Records in file: 242332, reading: 0.00%
SARG: Records in file: 242332, reading: 2.06%
SARG: Records in file: 242332, reading: 4.13%
SARG: Records in file: 242332, reading: 6.19%
I am new to Unix. I have a couple of files of 5 million records each, with a key field on those records. I have about 300 keys that I need to remove from each file, and I don't want to write a program to do it. I have used grep -v in the past, and that works great for a few records, but I can't see myself having to do that 300 times per file.
Is there an easier way using grep, egrep, sed/awk, etc.?
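You don't need 300 grep passes: put the keys in a file, one per line, and let grep read the patterns from it in a single pass with `-f`. A sketch with tiny hypothetical data (the real files would be the 5-million-record ones):

```shell
#!/bin/sh
# Sample data for illustration; key is field 1 of a comma-separated record.
printf 'K001,alpha\nK002,beta\nK003,gamma\n' > records.csv
printf 'K002\n' > keys.txt       # one key per line; you'd have ~300 here

# One pass: -F treats keys as fixed strings (faster, no regex surprises),
# -f reads them from the file, -v keeps non-matching lines.
# Caveat: this drops any line containing a key ANYWHERE on it.
grep -vFf keys.txt records.csv > kept_grep.csv

# Safer when the key must match a specific field exactly (here field 1):
# load the keys into an array, then keep records whose field 1 isn't listed.
awk -F, 'NR == FNR { drop[$1]; next } !($1 in drop)' keys.txt records.csv > kept_awk.csv
```

The awk version reads the key file first (`NR == FNR`), so both forms are still a single pass over the big file.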
I have a .csv file which I want to split into smaller files, also in .csv format. There are a few validations I need to satisfy. The validations are as follows:
The input file needs to be split into multiple files.
a. The first file should have, say, 100 records and the last file some 33; the rest will have 1000 records each.
b. Each file should have a unique name.
c. Each file should be a CSV file.
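A sketch of the split with awk, assuming a 5133-record input so the sizes work out to 100 + 5 x 1000 + 33. The counter in the file name gives each chunk a unique `.csv` name; `input.csv` and the `part_NNN.csv` naming are placeholders:

```shell
#!/bin/sh
# Generate a 5133-line sample input for illustration.
awk 'BEGIN { for (i = 1; i <= 5133; i++) print "row," i }' > input.csv

awk -v first=100 -v rest=1000 '
{
    # Chunk 1 holds the first 100 records; after that, 1000 per chunk.
    n = (NR <= first) ? 1 : int((NR - first - 1) / rest) + 2
    out = sprintf("part_%03d.csv", n)
    if (out != prev) { if (prev) close(prev); prev = out }  # avoid fd leaks
    print > out
}' input.csv
```

The last chunk simply receives whatever records remain, so the 33-record tail falls out naturally with no special case.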
I have a file that has multiple records.
I have to copy the records that have the code '06' at a specific position, let's say at positions 19 and 20, into another file. The records don't have any spaces in between.
How can I achieve this using a shell script?
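For fixed-width records, awk's `substr` does this in one line: `substr($0, 19, 2)` takes the 2 characters starting at position 19. The sample records and file names below are made up for illustration:

```shell
#!/bin/sh
# Sample fixed-width records: 18 filler characters, then the 2-char code
# at positions 19-20, then the rest of the record.
printf 'AAAAAAAAAAAAAAAAAA06XYZ\nBBBBBBBBBBBBBBBBBB07XYZ\nCCCCCCCCCCCCCCCCCC06XYZ\n' > input.dat

# Keep only records whose positions 19-20 equal "06".
awk 'substr($0, 19, 2) == "06"' input.dat > code06.dat
```

An equivalent grep form is `grep '^.\{18\}06' input.dat`, which anchors 18 arbitrary characters before the code.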
I have basic knowledge of Unix shell scripting (I'm not an expert).
My requirement is to read a CSV file using the schema defined in a configuration file; if a record does not match the schema, move the unmatched record to an error file, and write the matched good records to another file.
Here I'm defining the schema of the input file in a normal text file, which I'm calling the configuration file.
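A sketch under an assumed configuration format: the schema file holds one regular expression per CSV field, in order. The patterns, file names, and sample records below are all hypothetical; adapt them to whatever your configuration file actually contains:

```shell
#!/bin/sh
# Hypothetical schema: field 1 must be digits, field 2 must be letters.
printf '^[0-9]+$\n^[A-Za-z]+$\n' > schema.cfg
# Sample input: line 1 is good; lines 2 and 3 each violate one field rule.
printf '1,alpha\ntwo,beta\n3,gamma4\n' > input.csv

awk -F, '
NR == FNR { pat[++nf] = $0; next }     # first file: load one pattern per field
{
    ok = (NF == nf)                    # wrong field count is also an error
    for (i = 1; ok && i <= nf; i++)
        if ($i !~ pat[i]) ok = 0      # field i must match its schema pattern
    print > (ok ? "good.csv" : "error.csv")
}' schema.cfg input.csv
```

Because the schema lives in its own file, you can change the validation rules without touching the script at all.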