I have a script that writes data to a CSV file and then gzips it.
The bizarre thing is that the gzipped file is truncated by a few lines (the total file is over 18 million lines).
I've manually gzipped the CSV file produced by the script and there is no truncation. However, when I use Python to gzip the file (I've tried gzip, os, and subprocess), the file is truncated. I can't figure out why this might be happening.
Code snippet below:
import csv
import os

# Remove quotes from file
with open(localFile, "r") as csvfile:
    csvreader = csv.reader(csvfile, skipinitialspace=True)
    # Skip the header row
    next(csvreader)
    writer = csv.writer(open(outputFile, "w"), quoting=csv.QUOTE_NONE)
    for row in csvreader:
        writer.writerow(row)

# Zip file
zipCommand = f"gzip {outputFile}"
exit_code = os.system(zipCommand)
total file size is over 18 million lines

I assume that holding all of this in RAM is not an option. You can hand csv.writer a gzip file handle to avoid that; it also sidesteps the likely cause of the truncation: the handle returned by open(outputFile, "w") is never closed, so its last buffer is never flushed to disk before gzip runs. Consider the following simple example:
import csv, gzip

with gzip.open("file.csv.gz", "wt") as gf:
    writer = csv.writer(gf, quoting=csv.QUOTE_NONE)
    writer.writerow([1, 2, 3])
    writer.writerow([4, 5, 6])
    writer.writerow([7, 8, 9])
This will create file.csv.gz; after running gunzip file.csv.gz you will get a file with the following content:
1,2,3
4,5,6
7,8,9
Note: use wt (write-text) mode with csv.writer, which emits text.
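Applied to the original script, a minimal sketch (assuming the same localFile and outputFile names; here outputFile should already carry the .gz suffix, and the separate gzip call goes away):

import csv
import gzip

localFile = "input.csv"       # assumed input path
outputFile = "output.csv.gz"  # gzip.open writes the archive directly

# Remove quotes from file, writing straight into the gzip archive
with open(localFile, "r", newline="") as csvfile, \
        gzip.open(outputFile, "wt", newline="") as gzfile:
    csvreader = csv.reader(csvfile, skipinitialspace=True)
    next(csvreader)  # skip the header row
    writer = csv.writer(gzfile, quoting=csv.QUOTE_NONE)
    for row in csvreader:
        writer.writerow(row)
# both handles are flushed and closed when the with block exits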
I am currently reading the csv file in "rb" mode and uploading the file to an s3 bucket.
with open(csv_file, 'rb') as DATA:
    s3_put_response = requests.put(s3_presigned_url, data=DATA, headers=headers)
All of this is working fine, but now I have to validate the headers in the csv file before making the put call.
When I try to run the code below, I get an error.
with open(csv_file, 'rb') as DATA:
    csvreader = csv.reader(DATA)
    columns = next(csvreader)
    # run-some-validations
    s3_put_response = requests.put(s3_presigned_url, data=DATA, headers=headers)
This throws
_csv.Error: iterator should return strings, not bytes (did you open the file in text mode?)
As a workaround, I have created a new function which opens the file in "r" mode and validates the csv headers, and this works OK.
def check_csv_headers():
    with open(csv_file, 'r') as file:
        csvreader = csv.reader(file)
        columns = next(csvreader)
I do not want to read the same file twice: once for header validation and once for uploading to s3. The upload part also doesn't work if I do it in "r" mode.
Is there a way I can achieve this while reading the file only once, in "rb" mode? I have to make this work using the csv module and not the pandas library.
Doing what you want is possible, but not very efficient. Simply opening a file isn't that expensive, and the CSV reader only reads one line at a time, not the entire file.
To do what you want you have to:
Read the first line as bytes
Decode it into a string (using the correct encoding)
Wrap that string in a list
Parse the list with csv.reader and finally
Seek back to the start of the stream.
Otherwise you'll end up uploading only the data, without the header:
with open(csv_file, 'rb') as DATA:
    header = DATA.readline()
    lines = [header.decode()]
    csvreader = csv.reader(lines)
    columns = next(csvreader)
    # run-some-validations
    DATA.seek(0)
    s3_put_response = requests.put(s3_presigned_url, data=DATA, headers=headers)
Opening the file as text is not only simpler, it also allows you to separate the validation logic from the upload code.
To ensure only one line is read at a time you can use buffering=1:
def check_csv_headers():
    with open(csv_file, 'r', buffering=1) as file:
        csvreader = csv.reader(file)
        columns = next(csvreader)
        # run-some-validations

with open(csv_file, 'rb') as DATA:
    s3_put_response = requests.put(s3_presigned_url, data=DATA, headers=headers)
Or
def check_csv_headers(filePath):
    with open(filePath, 'r', buffering=1) as file:
        csvreader = csv.reader(file)
        columns = next(csvreader)
        # run-some-validations
        # if successful
        return True

def upload_csv(filePath):
    if check_csv_headers(filePath):
        with open(filePath, 'rb') as DATA:
            s3_put_response = requests.put(s3_presigned_url, data=DATA, headers=headers)
I want to open a variable number of csv files and then iterate over them, uploading one row of each file at a time to my SQL database.
For example: loop through the files uploading the first row of each to the database, then loop again uploading the second row of each.
However, I'm stuck on getting the opened csv files into a single object ready for the upload.
The error happens at 'csv_data[i] = csv.reader...'
Each file is for a different table, so I cannot append them.
import csv
import sys

i = 0
for argv in sys.argv[1:]:
    csv_file = open(argv, newline='', encoding='utf-8-sig')
    csv_data[i] = csv.reader(csv_file, dialect='excel', delimiter=',', quotechar='|')
    csv_file.close()
    i += 1
After this code, I would need something to loop through each file, uploading a given row number from each.
Zip the readers together and iterate through them:
import csv
import sys

file_handles = [open(file, newline='', encoding='utf-8-sig') for file in sys.argv[1:]]
readers = (csv.reader(file, dialect='excel', delimiter=',', quotechar='|') for file in file_handles)

# zip here
for line_group in zip(*readers):
    # line_group is a tuple holding row i of each file
    ...

# don't forget to close your files
for file_handle in file_handles:
    try:
        file_handle.close()
    except OSError:
        print("Issue closing one of the files")
I must be missing something very simple here, but I've been hitting my head against the wall for a while and don't understand where the error is. I am trying to open a csv file and read the data. I am detecting the delimiter, then reading in the data with this code:
with open(filepath, 'r') as csvfile:
    dialect = csv.Sniffer().sniff(csvfile.read())
    delimiter = repr(dialect.delimiter)[1:-1]
    csvdata = [line.split(delimiter) for line in csvfile.readlines()]
However, my csvfile is being read as having no length. If I run:
print(sum(1 for line in csvfile))
The result is zero. If I run:
print(sum(1 for line in open(filepath, 'r')))
Then I get five lines, as expected. I've checked for name clashes by changing csvfile to other random names, but this does not change the result. Am I missing a step somewhere?
You need to move the file pointer back to the start of the file after sniffing it. You don't need to read the whole file in to do that, just enough to include a few rows:
import csv

with open(filepath, 'r') as f_input:
    dialect = csv.Sniffer().sniff(f_input.read(2048))
    f_input.seek(0)
    csv_input = csv.reader(f_input, dialect)
    csv_data = list(csv_input)
Also, the csv.reader() will do the splitting for you.
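For example, to check what the sniffer detected, you can feed it a small sample directly (a hypothetical semicolon-delimited one here):

import csv

sample = "name;age;city\nalice;30;london\nbob;25;paris\n"
dialect = csv.Sniffer().sniff(sample)
print(repr(dialect.delimiter))  # ';'

# the same dialect can then drive csv.reader
rows = list(csv.reader(sample.splitlines(), dialect))
print(rows[0])  # ['name', 'age', 'city']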
My program reads a csv file, but recently the input file was changed to be base64-encoded. So currently the read code is:
with open(uploadFile, 'rb') as csvfile:
    spreadSheet = csv.reader(csvfile, delimiter=',')
I know csvfile is a file object and this can't be done directly, but I want to do something like:
import base64

with open(uploadFile, 'rb') as csvfile:
    spreadSheet = csv.reader(base64.decode(csvfile), delimiter=',')
That is, the file input would be base64-decoded as though in a pipe and then parsed as a csv file.
I can read the file, decode it, write the result back into another file, and then read that file with the csv reader, but it seems as though there should be a way to do it as a pipe sequence.
Try the following:
import base64
import csv
with open(uploadFile, 'rb') as csvfile:
    decoded = base64.standard_b64decode(csvfile.read()).decode('utf-8')
    spreadSheet = csv.reader(decoded.splitlines(), delimiter=',')
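That reads and decodes the whole file in one go. If the file is large, a more pipe-like sketch (assuming UTF-8 content; it still buffers the decoded bytes, but avoids a giant intermediate string and a temporary file on disk) uses base64.decode, which decodes file-to-file:

import base64
import csv
import io

with open(uploadFile, 'rb') as csvfile:
    buffer = io.BytesIO()
    base64.decode(csvfile, buffer)  # file-to-file base64 decode
    buffer.seek(0)
    text = io.TextIOWrapper(buffer, encoding='utf-8', newline='')
    spreadSheet = csv.reader(text, delimiter=',')
    for row in spreadSheet:
        ...  # process each row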
I am trying to add extra columns to a csv file after processing an input csv file, but I am getting an extra blank line after each line in the output.
What's missing or wrong in my code below?
import csv

with open('test.csv', 'r') as infile:
    with open('test_out.csv', 'w') as outfile:
        reader = csv.reader(infile, delimiter=',')
        writer = csv.writer(outfile, delimiter=',')
        for row in reader:
            colad = row[5].rstrip('0123456789./ ')
            if colad == row[5]:
                col2ad = row[11]
            else:
                col2ad = row[5].split(' ')[-1]
            writer.writerow([row[0], colad, col2ad] + row[1:])
I am processing a huge csv file, so I would like to get rid of those extra lines.
I had the same problem on Windows (your OS as well, I presume?). The csv module combined with Windows produces \r\r\n at the end of each line (so: a doubled newline).
You need to open the output file in binary mode:
with open('test_out.csv', 'wb') as outfile:
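Note that the binary-mode fix applies to Python 2, where csv.writer works on byte streams. On Python 3 the csv module requires text mode, and the documented fix for the same doubled-newline symptom is to open the file with newline='':

import csv

# Python 3: text mode with newline='' lets csv.writer control the line endings
with open('test_out.csv', 'w', newline='') as outfile:
    writer = csv.writer(outfile, delimiter=',')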
For other answers:
Python's CSV writer produces wrong line terminator
CSV in Python adding an extra carriage return