In a view, I create a Django HttpResponse object composed entirely of a CSV, using a simple csv.writer:
response = HttpResponse(content_type='text/csv')
response['Content-Disposition'] = 'attachment; filename="foobar.csv"'
writer = csv.writer(response)
table_headers = ['Foo', 'Bar']
writer.writerow(table_headers)
bunch_of_rows = [['foo', 'bar'], ['foo2', 'bar2']]
for row in bunch_of_rows:
    writer.writerow(row)
return response
In a unit test, I want to test some aspects of this csv, so I need to read it. I'm trying to do so like so:
response = views.myview(args)
reader = csv.reader(response.content)
headers = next(reader)
row_count = 1 + sum(1 for row in reader)
self.assertEqual(row_count, 3) # header + 1 row for each attempt
self.assertIn('Foo', headers)
But the test fails with the following on the headers = next(reader) line:
nose.proxy.Error: iterator should return strings, not int (did you open the file in text mode?)
I see in the HttpResponse source that response.content spits the body back out as a bytestring, but I'm not sure of the correct way to deal with that so csv.reader can read it. I thought I would be able to just replace response.content with response (since you write to the object itself, not its content), but that just resulted in a slight variation of the error:
_csv.Error: iterator should return strings, not bytes (did you open the file in text mode?)
Which seems closer but obviously still wrong. Reading the csv docs, I assume I am failing to open the file correctly. How do I "open" this file-like object so that csv.reader can parse it?
response.content provides bytes. You need to decode this to a string:
foo = response.content.decode('utf-8')
Then pass this string to the csv reader using io.StringIO:
import io
reader = csv.reader(io.StringIO(foo))
You can use io.TextIOWrapper to convert the provided bytestring to a text stream:
import io
reader = csv.reader(io.TextIOWrapper(io.BytesIO(response.content), encoding='utf-8'))
This will convert the bytes to strings as they're being read by the reader.
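Either way works; since csv.reader accepts any iterable of strings, decoding the body and splitting it into lines is another option. A minimal sketch, using a stand-in bytestring for response.content (the view above would produce a body along these lines):

```python
import csv

# Stand-in for response.content (assumed utf-8 CSV body)
content = b'Foo,Bar\r\nfoo,bar\r\nfoo2,bar2\r\n'

# csv.reader accepts any iterable of strings, so decoded lines work directly
reader = csv.reader(content.decode('utf-8').splitlines())
headers = next(reader)
row_count = 1 + sum(1 for row in reader)
print(headers)    # → ['Foo', 'Bar']
print(row_count)  # → 3
```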
Related
Here is my scenario: I have a zip file that I am downloading with requests into memory rather than writing a file. I am unzipping the data into an object called myzipfile. Inside the zip file is a csv file. I would like to convert each row of the csv data into a dictionary. Here is what I have so far.
import csv
from io import BytesIO
import requests
# other imports etc.
r = requests.get(url=fileurl, headers=headers, stream=True)
filebytes = BytesIO(r.content)
myzipfile = zipfile.ZipFile(filebytes)
for name in myzipfile.namelist():
    mycsv = myzipfile.open(name).read()
    for row in csv.DictReader(mycsv):  # it fails here.
        print(row)
errors:
Traceback (most recent call last):
File "/usr/lib64/python3.7/csv.py", line 98, in fieldnames
self._fieldnames = next(self.reader)
_csv.Error: iterator should return strings, not int (did you open the file in text mode?)
Looks like csv.DictReader(mycsv) expects a file object instead of raw data. How do I convert the rows in the mycsv object data (<class 'bytes'>) to a list of dictionaries? I'm trying to accomplish this without writing a file to disk and working directly from csv objects in memory.
The DictReader expects a file or file-like object: we can satisfy this expectation by loading the zipped file into an io.StringIO instance.
Note that StringIO expects its argument to be a str, but reading a file from the zipfile returns bytes, so the data must be decoded. This example assumes that the csv was originally encoded with the local system's default encoding. If that is not the case the correct encoding must be passed to decode().
for name in myzipfile.namelist():
    data = myzipfile.open(name).read().decode()
    mycsv = io.StringIO(data)
    reader = csv.DictReader(mycsv)
    for row in reader:
        print(row)
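If the zip members are large, an alternative is to wrap the member's file handle in io.TextIOWrapper so it is decoded lazily instead of being read fully into memory first. A sketch, using a hypothetical in-memory zip for illustration:

```python
import csv
import io
import zipfile

# Build a small in-memory zip purely for illustration
buf = io.BytesIO()
with zipfile.ZipFile(buf, 'w') as zf:
    zf.writestr('data.csv', 'City,Qty\r\nBucuresti,12\r\n')

myzipfile = zipfile.ZipFile(buf)
for name in myzipfile.namelist():
    with myzipfile.open(name) as member:
        # TextIOWrapper decodes the binary stream as DictReader consumes it
        reader = csv.DictReader(io.TextIOWrapper(member, encoding='utf-8'))
        rows = [dict(row) for row in reader]
print(rows)  # → [{'City': 'Bucuresti', 'Qty': '12'}]
```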
dict_list = []  # a list
# Open in text mode (newline=''); binary mode ('rb') raises the same error in Python 3
reader = csv.DictReader(open('yourfile.csv', newline=''))
for line in reader:  # since we used DictReader, each line will be saved as a dictionary
    dict_list.append(line)
The sample file object data contains the following:
b'QmFyY29kZSxRdHkKQTIzMjMsMTAKQTIzMjQsMTUKNjUxMDA1OTUzMjkyNSwxMgpBMjMyNCwxCkEyMzI0LDEKQTIzMjMsMTAK'
and the Python file contains the following code:
string_data = BytesIO(base64.decodestring(csv_rec))
read_file = csv.reader(string_data, quotechar='"', delimiter=',')
next(read_file)
When I run the above code, I get the following exception:
_csv.Error: iterator should return strings, not int (did you open the file in text mode?)
How can I open bytes data in text mode?
You are almost there. Indeed, csv.reader expects an iterator which returns strings (not bytes). Such an iterator is provided by the sibling of BytesIO, io.StringIO.
import base64
from io import StringIO

csv_rec = b'QmFyY29kZSxRdHkKQTIzMjMsMTAKQTIzMjQsMTUKNjUxMDA1OTUzMjkyNSwxMgpBMjMyNCwxCkEyMzI0LDEKQTIzMjMsMTAK'
bytes_data = base64.decodebytes(csv_rec)  # decodestring is deprecated (removed in Python 3.9)
# decode() converts the bytes to a string
string_data = StringIO(bytes_data.decode())
read_file = csv.reader(string_data, quotechar='"', delimiter=',')
next(read_file)
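For reference, the sample bytes above decode to a small Barcode/Qty table; a quick check of the full flow (again using base64.decodebytes, since decodestring was removed in Python 3.9):

```python
import base64
import csv
from io import StringIO

csv_rec = b'QmFyY29kZSxRdHkKQTIzMjMsMTAKQTIzMjQsMTUKNjUxMDA1OTUzMjkyNSwxMgpBMjMyNCwxCkEyMzI0LDEKQTIzMjMsMTAK'
rows = list(csv.reader(StringIO(base64.decodebytes(csv_rec).decode())))
print(rows[0])    # → ['Barcode', 'Qty']
print(len(rows))  # → 7 (header + 6 data rows)
```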
I'm trying to generate a CSV that opens correctly in Excel, but using StringIO instead of a file.
output = StringIO("\xef\xbb\xbf") # Tried just writing a BOM here, didn't work
fieldnames = ['id', 'value']
writer = csv.DictWriter(output, fieldnames, dialect='excel')
writer.writeheader()
for d in Data.objects.all():
    writer.writerow({
        'id': d.id,
        'value': d.value
    })
response = HttpResponse(output.getvalue(), content_type='text/csv')
response['Content-Disposition'] = 'attachment; filename=data.csv'
return response
This is part of a Django view, so I'd really rather not get into the business of dumping out temporary files for this.
I've also tried the following:
response = HttpResponse(output.getvalue().encode('utf-8').decode('utf-8-sig'), content_type='text/csv')
with no luck
What can I do to get the output file correctly encoded in utf-8-sig, with a BOM, so that Excel will open the file and show multi-byte unicode characters correctly?
HttpResponse accepts bytes:
output = StringIO()
...
response = HttpResponse(output.getvalue().encode('utf-8-sig'), content_type='text/csv')
or let Django do encoding:
response = HttpResponse(output.getvalue(), content_type='text/csv; charset=utf-8-sig')
An alternate way:
import codecs
csv: StringIO
# ..
resp = response.text(f'{codecs.BOM_UTF8.decode("utf-8")}{csv.getvalue()}',
                     content_type='text/csv; charset=utf-8',
                     headers={'Content-Disposition': f'Attachment; filename=(unknown)'})
I've been struggling with this simple problem for too long, so I thought I'd ask for help. I am trying to read a list of journal articles from National Library of Medicine ftp site into Python 3.3.2 (on Windows 7). The journal articles are in a .csv file.
I have tried the following code:
import csv
import urllib.request
url = "ftp://ftp.ncbi.nlm.nih.gov/pub/pmc/file_list.csv"
ftpstream = urllib.request.urlopen(url)
csvfile = csv.reader(ftpstream)
data = [row for row in csvfile]
It results in the following error:
Traceback (most recent call last):
File "<pyshell#4>", line 1, in <module>
data = [row for row in csvfile]
File "<pyshell#4>", line 1, in <listcomp>
data = [row for row in csvfile]
_csv.Error: iterator should return strings, not bytes (did you open the file in text mode?)
I presume I should be working with strings not bytes? Any help with the simple problem, and an explanation as to what is going wrong would be greatly appreciated.
The problem is that urllib returns bytes. As proof, you can download the csv file with your browser, open it as a regular file, and the problem is gone.
A similar problem was addressed here.
It can be solved decoding bytes to strings with the appropriate encoding. For example:
import csv
import urllib.request
url = "ftp://ftp.ncbi.nlm.nih.gov/pub/pmc/file_list.csv"
ftpstream = urllib.request.urlopen(url)
csvfile = csv.reader(ftpstream.read().decode('utf-8').splitlines())  # with the appropriate encoding; splitlines() yields lines rather than single characters
data = [row for row in csvfile]
The last line could also be data = list(csvfile), which may be easier to read.
By the way, since the csv file is very big, this approach can be slow and memory-consuming. Maybe it would be preferable to use a generator.
EDIT:
Using codecs.iterdecode, as proposed by Steven Rumbalski, so it's not necessary to read and decode the whole file at once. Memory consumption is reduced and speed increased.
import csv
import urllib.request
import codecs
url = "ftp://ftp.ncbi.nlm.nih.gov/pub/pmc/file_list.csv"
ftpstream = urllib.request.urlopen(url)
csvfile = csv.reader(codecs.iterdecode(ftpstream, 'utf-8'))
for line in csvfile:
    print(line)  # do something with line
Note that the list is not created either, for the same reason.
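The same pattern can be exercised without the FTP server by substituting an in-memory byte stream (a hypothetical two-line file) for ftpstream. codecs.iterdecode decodes each bytes line as the reader consumes it:

```python
import codecs
import csv
import io

# Stand-in for the urlopen stream (assumed utf-8 content)
ftpstream = io.BytesIO(b'File,Journal\r\na.tar.gz,"Some, Journal"\r\n')
rows = list(csv.reader(codecs.iterdecode(ftpstream, 'utf-8')))
print(rows)  # → [['File', 'Journal'], ['a.tar.gz', 'Some, Journal']]
```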
Even though there is already an accepted answer, I thought I'd add to the body of knowledge by showing how I achieved something similar using the requests package (which is sometimes seen as an alternative to urllib.request).
The basis of using codecs.iterdecode() to solve the original problem is the same as in the accepted answer.
import codecs
from contextlib import closing
import csv
import requests
url = "ftp://ftp.ncbi.nlm.nih.gov/pub/pmc/file_list.csv"
with closing(requests.get(url, stream=True)) as r:
    reader = csv.reader(codecs.iterdecode(r.iter_lines(), 'utf-8'))
    for row in reader:
        print(row)
Here we also see the use of streaming provided through the requests package in order to avoid having to load the entire file over the network into memory first (which could take long if the file is large).
I thought it might be useful since it helped me, as I was using requests rather than urllib.request in Python 3.6.
Some of the ideas (e.g using closing()) are picked from this similar post
I had a similar problem using the requests package and csv.
The response from the post request was type bytes.
In order to use the csv library, I first stored the content as a string file in memory (in my case the size was small), decoded as utf-8.
import io
import csv
import requests
response = requests.post(url, data)
# response.content is something like:
# b'"City","Awb","Total"\r\n"Bucuresti","6733338850003","32.57"\r\n'
csv_bytes = response.content
# write in-memory string file from bytes, decoded (utf-8)
str_file = io.StringIO(csv_bytes.decode('utf-8'), newline='\n')
reader = csv.reader(str_file)
for row_list in reader:
    print(row_list)
# Once the file is closed,
# any operation on the file (e.g. reading or writing) will raise a ValueError
str_file.close()
Printed something like:
['City', 'Awb', 'Total']
['Bucuresti', '6733338850003', '32.57']
urlopen will return a urllib.response.addinfourl instance for an ftp request.
For ftp, file, and data urls and requests explicitly handled by legacy
URLopener and FancyURLopener classes, this function returns a
urllib.response.addinfourl object which can work as context manager...
>>> urllib2.urlopen(url)
<addinfourl at 48868168L whose fp = <addclosehook at 48777416L whose fp = <socket._fileobject object at 0x0000000002E52B88>>>
At this point ftpstream is a file-like object; calling .read() would return the contents, but csv.reader requires an iterable of strings in this case:
Defining a generator like so:
def to_lines(f):
    line = f.readline()
    while line:
        yield line.decode('utf-8')  # decode each bytes line so csv.reader gets strings
        line = f.readline()
We can create our csv reader like so:
reader = csv.reader(to_lines(ftpstream))
And with a url
url = "http://pic.dhe.ibm.com/infocenter/tivihelp/v41r1/topic/com.ibm.ismsaas.doc/reference/CIsImportMinimumSample.csv"
The code:
for row in reader: print(row)
Prints
>>>
['simpleci']
['SCI.APPSERVER']
['SRM_SaaS_ES', 'MXCIImport', 'AddChange', 'EN']
['CI_CINUM']
['unique_identifier1']
['unique_identifier2']
I have a view that takes data from my site and then makes it into a zip compressed csv file. Here is my working code sans zip:
def backup_to_csv(request):
    response = HttpResponse(mimetype='text/csv')
    response['Content-Disposition'] = 'attachment; filename=backup.csv'
    writer = csv.writer(response, dialect='excel')
    # code for writing csv file goes here...
    return response
and it works great. Now I want that file to be compressed before it gets sent out. This is where I get stuck.
def backup_to_csv(request):
    output = StringIO.StringIO()  ## temp output file
    writer = csv.writer(output, dialect='excel')
    # code for writing csv file goes here...
    response = HttpResponse(mimetype='application/zip')
    response['Content-Disposition'] = 'attachment; filename=backup.csv.zip'
    z = zipfile.ZipFile(response, 'w')  ## write zip to response
    z.writestr("filename.csv", output)  ## write csv file to zip
    return response
But that's not it, and I have no idea how to do this.
OK I got it. Here is my new function:
def backup_to_csv(request):
    output = StringIO.StringIO()  ## temp output file
    writer = csv.writer(output, dialect='excel')
    # code for writing csv file goes here...
    response = HttpResponse(mimetype='application/zip')
    response['Content-Disposition'] = 'attachment; filename=backup.csv.zip'
    z = zipfile.ZipFile(response, 'w')  ## write zip to response
    z.writestr("filename.csv", output.getvalue())  ## write csv file to zip
    return response
Note how, in the working case, you return response... and in the NON-working case you return z, which is NOT an HttpResponse of course (while it should be!).
So: use your csv_writer NOT on response but on a temporary file; zip the temporary file; and write THAT zipped bytestream into the response!
zipfile.ZipFile(response,'w')
doesn't seem to work in Python 2.7.9. The response is a django.HttpResponse object (which is said to be file-like), but it gives the error "HttpResponse object does not have an attribute 'seek'". When the same code is run in Python 2.7.0 or 2.7.6 (I haven't tested other versions) it is OK... So you'd better test it with Python 2.7.9 and see if you get the same behaviour.
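One way around the missing seek() support is to build the zip in an io.BytesIO buffer, which is seekable, and hand the finished bytes to the response. A Django-free sketch of the buffering step (the final HttpResponse call is the assumed last line):

```python
import csv
import io
import zipfile

# Write the CSV into an in-memory text buffer
output = io.StringIO()
writer = csv.writer(output, dialect='excel')
writer.writerow(['id', 'name'])
writer.writerow([1, 'foo'])

# Zip it into an in-memory bytes buffer, which supports seek()
buf = io.BytesIO()
with zipfile.ZipFile(buf, 'w', zipfile.ZIP_DEFLATED) as z:
    z.writestr('backup.csv', output.getvalue())

zipped = buf.getvalue()
# Then: HttpResponse(zipped, content_type='application/zip')
print(zipfile.ZipFile(io.BytesIO(zipped)).namelist())  # → ['backup.csv']
```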