I'm having problems reading from a gzipped csv file with the gzip and csv libs. Here's what I got:
import gzip
import csv
import json

f = gzip.open(filename)
csvobj = csv.reader(f, delimiter=',', quotechar="'")
for line in csvobj:
    ts = line[0]
    data_json = json.loads(line[1])
but this throws an exception:
File "C:\Users\yaronol\workspace\raw_data_from_s3\s3_data_parser.py", line 64, in download_from_S3
self.parse_dump_file(filename)
File "C:\Users\yaronol\workspace\raw_data_from_s3\s3_data_parser.py", line 30, in parse_dump_file
for line in csvobj:
_csv.Error: iterator should return strings, not bytes (did you open the file in text mode?)
gunzipping the file and opening that with csv works fine. I've also tried decoding the file text to convert from bytes to str...
What am I missing here?
The default mode for gzip.open is 'rb'; if you want to work with strs, you have to specify text mode explicitly:
f = gzip.open(filename, mode="rt")
On a side note: it is good practice to wrap I/O operations in a with block:
with gzip.open(filename, mode="rt") as f:
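Putting it together, a minimal sketch of the whole loop (reusing the question's filename variable and assuming, as in the question, that each row holds a timestamp and a JSON payload):
import gzip
import csv
import json

# Text mode ('rt') makes csv.reader receive str lines, not bytes
with gzip.open(filename, mode="rt") as f:
    csvobj = csv.reader(f, delimiter=',', quotechar="'")
    for line in csvobj:
        ts = line[0]                     # first column: timestamp
        data_json = json.loads(line[1])  # second column: JSON payload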
You are opening the file in binary mode (which is the default for gzip).
Try instead:
import gzip
import csv
f = gzip.open(filename, mode='rt')
csvobj = csv.reader(f, delimiter=',', quotechar="'")
Late to the party, but you can also use the datatable package in Python:
import datatable as dt
df = dt.fread(filename)
df.head()
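fread reads .csv.gz files directly, so no separate gzip step is needed. If you want a pandas DataFrame downstream, the Frame converts with to_pandas() (a usage sketch):
pdf = df.to_pandas()  # convert the datatable Frame to a pandas DataFrame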
Related
Right now my final output is in Excel format. I wanted to compress my Excel file using gzip. Is there a way to do it?
import pandas as pd
import gzip
import re

def renaming_ad_unit():
    with gzip.open('weekly_direct_house.xlsx.gz') as f:
        df = pd.read_excel(f)
    result = df['Ad unit'].to_list()
    for index, a_string in enumerate(result):
        modified_string = re.sub(r"\([^()]*\)", "", a_string)
        df.at[index, 'Ad unit'] = modified_string
    return df.to_excel('weekly_direct_house.xlsx', index=False)
Yes, this is possible.
To create a gzip file, you can open the file like this:
with gzip.open('filename.xlsx.gz', 'wb') as f:
...
Unfortunately, when I tried this, I got the error OSError: Negative seek in write mode. This is because the Pandas Excel writer moves backwards in the file, writing in multiple passes, which the gzip module does not allow.
To fix this, I created a temporary file and wrote the Excel data there. Then I read the file back and wrote it to the compressed archive.
I wrote a short program to demonstrate this. It reads an excel file from a gzip archive, prints it out, and writes it back to another gzip file.
import pandas as pd
import gzip
import tempfile

def main():
    with gzip.open('apportionment-2020-table02.xlsx.gz') as f:
        df = pd.read_excel(f)
    print(df)
    with tempfile.TemporaryFile() as excel_f:
        df.to_excel(excel_f, index=False)
        with gzip.open('output.xlsx.gz', 'wb') as gzip_f:
            excel_f.seek(0)
            gzip_f.write(excel_f.read())

if __name__ == '__main__':
    main()
Here's the file I'm using to demonstrate this: Link
You could also use io.BytesIO to create a file in memory, write the Excel data into it, and then write that buffer to disk as gzip.
I used the link to the Excel file from Nick ODell's answer.
import pandas as pd
import gzip
import io
df = pd.read_excel('https://www2.census.gov/programs-surveys/decennial/2020/data/apportionment/apportionment-2020-table02.xlsx')
buf = io.BytesIO()
df.to_excel(buf)
buf.seek(0) # move to the beginning of file
with gzip.open('output.xlsx.gz', 'wb') as f:
f.write(buf.read())
Similar to Nick ODell's answer.
import pandas as pd
import gzip
import io

df = pd.read_excel('https://www2.census.gov/programs-surveys/decennial/2020/data/apportionment/apportionment-2020-table02.xlsx')

with io.BytesIO() as buf:
    df.to_excel(buf)
    buf.seek(0)  # move to the beginning of the file
    with gzip.open('output.xlsx.gz', 'wb') as f:
        f.write(buf.read())
Tested on Linux
I have read the documentation and a few additional posts on SO and other various places, but I can't quite figure out this concept:
When you call csvFilename = gzip.open(filename, 'rb') and then reader = csv.reader(open(csvFilename)), is that reader not a valid csv file?
I am trying to solve the problem outlined below, and am getting a coercing to Unicode: need string or buffer, GzipFile found error on lines 41 and 7 (highlighted below), leading me to believe that gzip.open and csv.reader do not work as I had previously thought.
Problem I am trying to solve
I am trying to take a results.csv.gz and convert it to a results.csv so that I can turn the results.csv into a python dictionary and then combine it with another python dictionary.
File 1:
alertFile = payload.get('results_file')
alertDataCSV = rh.dataToDict(alertFile) # LINE 41
alertDataTotal = rh.mergeTwoDicts(splunkParams, alertDataCSV)
Calls File 2:
import gzip
import csv

def dataToDict(filename):
    csvFilename = gzip.open(filename, 'rb')
    reader = csv.reader(open(csvFilename))  # LINE 7
    alertData = {}
    for row in reader:
        alertData[row[0]] = row[1:]
    return alertData

def mergeTwoDicts(dictA, dictB):
    dictC = dictA.copy()
    dictC.update(dictB)
    return dictC
*edit: also forgive my non-PEP style of naming in Python
gzip.open returns a file-like object (the same kind of thing plain open returns), not the name of a decompressed file. Simply pass the result directly to csv.reader and it will work: the csv.reader will receive the decompressed lines. csv does expect text, though, so on Python 3 you need to open the file in text mode (on Python 2 'rb' is fine; the gzip module doesn't deal with encodings, but then, neither does the csv module). Simply change:
csvFilename = gzip.open(filename, 'rb')
reader = csv.reader(open(csvFilename))
to:
# Python 2
csvFile = gzip.open(filename, 'rb')
reader = csv.reader(csvFile) # No reopening involved
# Python 3
csvFile = gzip.open(filename, 'rt', newline='') # Open in text mode, not binary, no line ending translation
reader = csv.reader(csvFile) # No reopening involved
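Applied to the dataToDict function from the question, a Python 3 version might look like this (a sketch; it assumes, as the original does, that the first CSV column is the dictionary key):
import gzip
import csv

def dataToDict(filename):
    # Text mode so csv.reader receives str lines; newline='' per the csv docs
    with gzip.open(filename, 'rt', newline='') as csvFile:
        reader = csv.reader(csvFile)
        return {row[0]: row[1:] for row in reader}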
The following worked for me for python==3.7.9:
import gzip

my_filename = 'my_compressed_file.csv.gz'

with gzip.open(my_filename, 'rt') as gz_file:
    data = gz_file.read()  # read decompressed data

with open(my_filename[:-3], 'wt') as out_file:
    out_file.write(data)  # write decompressed data
my_filename[:-3] strips the .gz suffix, so the output keeps the original filename instead of getting a random one.
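If the file is large, a streamed copy avoids holding the whole decompressed payload in memory. A minimal sketch using shutil.copyfileobj:
import gzip
import shutil

with gzip.open(my_filename, 'rb') as gz_file, \
        open(my_filename[:-3], 'wb') as out_file:
    shutil.copyfileobj(gz_file, out_file)  # stream decompressed bytes to disk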
I need to append many binary files into one binary file. All my binary files are saved in one folder:
file1.bin
file2.bin
...
For that I tried using this code:
import numpy as np
import glob
import os

Power_Result_File_Path = "/home/Deep_Learning_Based_Attack/Test.bin"
Folder_path = r'/home/Deep_Learning_Based_Attack/Test_Folder/'
os.chdir(Folder_path)
npfiles = glob.glob("*.bin")
loadedFiles = [np.load(bf) for bf in npfiles]
PowerArray = np.concatenate(loadedFiles, axis=0)
np.save(Power_Result_File_Path, PowerArray)
It gives me this error:
"Failed to interpret file %s as a pickle" % repr(file))
OSError: Failed to interpret file 'file.bin' as a pickle
My problem is how to concatenate the binary files; it is not about analysing every file independently.
Taking your question literally: brute-force raw data concatenation.
files = ['my_file1', 'my_file2']

out_data = b''
for fn in files:
    with open(fn, 'rb') as fp:
        out_data += fp.read()

with open('the_concatenation_of_all', 'wb') as fp:
    fp.write(out_data)
Comment about your example
You seem to be interpreting the files as saved NumPy arrays (i.e. saved via np.save()). The error, however, tells me that you didn't save those files with NumPy, because np.load fails to decode them. np.load expects the .npy format (falling back to pickle for anything else), so opening an arbitrary non-NumPy file with np.load throws this error.
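If the .bin files are raw binary dumps rather than .npy files, np.fromfile with an explicit dtype is the usual way to read them. A sketch, where the float32 dtype is an assumption you must match to however the files were actually written:
import glob
import numpy as np

npfiles = sorted(glob.glob("*.bin"))
# The dtype is an assumption here; match it to how the files were written
arrays = [np.fromfile(fn, dtype=np.float32) for fn in npfiles]
PowerArray = np.concatenate(arrays, axis=0)
np.save("Test", PowerArray)  # np.save appends the .npy extension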
An async variant using aiofiles:
import aiofiles

async def concat_files(files, output_file):
    for file in files:
        async with aiofiles.open(file, mode='rb') as f:
            contents = await f.read()
        if file == files[0]:
            write_mode = 'wb'  # overwrite file
        else:
            write_mode = 'ab'  # append to end of file
        async with aiofiles.open(output_file, write_mode) as f:
            await f.write(contents)
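To run it (a usage sketch; the file names are hypothetical):
import asyncio

asyncio.run(concat_files(['file1.bin', 'file2.bin'], 'Test.bin'))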
I am trying to read CSV files from a directory that is not in the same directory as my Python script.
Additionally the CSV files are stored in ZIP folders that have the exact same names (the only difference being one ends with .zip and the other is a .csv).
Currently I am using Python's zipfile and csv libraries to open and get the data from the files, however I am getting the error:
Traceback (most recent call last): File "write_pricing_data.py", line 13, in <module>
for row in reader:
_csv.Error: iterator should return strings, not bytes (did you open the file in text mode?)
My code:
import os, csv
from zipfile import *

folder = r'D:/MarketData/forex'
localFiles = os.listdir(folder)

for file in localFiles:
    zipArchive = ZipFile(folder + '/' + file)
    with zipArchive.open(file[:-4] + '.csv') as csvFile:
        reader = csv.reader(csvFile, delimiter=',')
        for row in reader:
            print(row[0])
How can I resolve this error?
It's a bit of a kludge and I'm sure there's a better way (that just happens to elude me right now). If you don't have embedded newlines, then you can use:
import zipfile, csv

zf = zipfile.ZipFile('testing.csv.zip')
with zf.open('testing.csv', 'r') as fin:
    # Create a generator of decoded lines for input to csv.reader
    # (the csv module is only really happy with ASCII input anyway...)
    lines = (line.decode('ascii') for line in fin)
    for row in csv.reader(lines):
        print(row)
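A less kludgy alternative is io.TextIOWrapper, which layers text decoding over the binary member (a sketch; the utf-8 encoding is an assumption):
import csv
import io
import zipfile

zf = zipfile.ZipFile('testing.csv.zip')
with zf.open('testing.csv', 'r') as fin:
    # Wrap the binary stream in a text layer so csv.reader gets str lines
    text = io.TextIOWrapper(fin, encoding='utf-8', newline='')
    for row in csv.reader(text):
        print(row)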
I am failing to open this uploaded CSV file. When I use a file from the PC directory it works fine, but when I upload it from an HTML form I get this error:
TypeError: coercing to Unicode: need string or buffer, file found
This happens when trying to read from the uploaded CSV file:
domain_file = request.POST['csv'].file
file = open(domain_file, "r")
csv_file = csv.reader(file, delimiter=",", quotechar='"')
This works fine when I am using a file from the PC:
file = open('/Desktop/csv.csv', "r")
csv_file = csv.reader( file, delimiter=",", quotechar='"')
The file attribute contains a file object, not a path. Use the filename property instead: http://flask.pocoo.org/docs/0.10/patterns/fileuploads/
Maybe something like this:
domain_file = request.files['csv']
if domain_file and allowed_file(domain_file.filename):
    file = domain_file.stream  # FileStorage exposes a file-like stream to read from
    #...
Also see http://werkzeug.pocoo.org/docs/0.9/wrappers/#werkzeug.wrappers.BaseRequest.files
If you do this, you'll be able to iterate through the data in the CSV line by line, with each row presented as a dict.
import csv
csv_contents = request.POST['csv'].value.decode('utf-8')
file = csv_contents.splitlines()
data = csv.DictReader(file)
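For example, iterating over it (this assumes the CSV has a header row, since DictReader uses the first row for the field names):
for row in data:
    print(row)  # each row is a dict keyed by the header fields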