csv.Error: did you open the file in text mode? - python

The Python file contains the following code:
import csv
import urllib.request
url = "https://gist.githubusercontent.com/aparrish/cb1672e98057ea2ab7a1/raw/13166792e0e8436221ef85d2a655f1965c400f75/lebron_james.csv"
stats = list(csv.reader(urllib.request.urlopen(url)))
When I run the above code in Python, I get the following exception:
Error                                     Traceback (most recent call last)
in ()
      1 url = "https://gist.githubusercontent.com/aparrish/cb1672e98057ea2ab7a1/raw/13166792e0e8436221ef85d2a655f1965c400f75/lebron_james.csv"
----> 2 stats = list(csv.reader(urllib.request.urlopen(url)))

Error: iterator should return strings, not bytes (did you open the file in text mode?)
How can I fix this problem?

The documentation for urllib recommends using the requests module.
You must pay attention to two things:
- you must decode the data you receive from the internet (which is bytes) in order to have text; with requests, using the .text attribute takes care of the decoding.
- csv.reader expects an iterable of lines, not one block of text; here, we split the text with splitlines().
So, you can do it like this:
import csv
import requests
url = "https://gist.githubusercontent.com/aparrish/cb1672e98057ea2ab7a1/raw/13166792e0e8436221ef85d2a655f1965c400f75/lebron_james.csv"
text = requests.get(url).text
lines = text.splitlines()
stats = csv.reader(lines)
for row in stats:
    print(row)
# ['Rk', 'G', 'Date', 'Age', 'Tm', ...]
# ['1', '1', '2013-10-29', '28-303', 'MIA', ...]
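If you would rather address the fields by name, csv.DictReader accepts the same list of lines and uses the header row for the keys. A small sketch (the column names are taken from the header row shown above):

import csv
import requests

url = "https://gist.githubusercontent.com/aparrish/cb1672e98057ea2ab7a1/raw/13166792e0e8436221ef85d2a655f1965c400f75/lebron_james.csv"
lines = requests.get(url).text.splitlines()

# DictReader consumes the first line as the header and yields one dict per row
for record in csv.DictReader(lines):
    print(record["Date"], record["Tm"])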

I don't really know what that data is, but if you are interested in splitting it on commas, you can try something like this:
stats = list(csv.reader(urllib.request.urlopen(url).read().decode().splitlines()))
1. Read the response data.
2. Decode the bytes to a string.
3. Split the string into lines.
4. Feed the lines to csv.reader and cast the result to a list.
Let me know if you want the data in a different shape and I can edit my answer. Good luck.

You should read and decode the response of urllib.request.urlopen, then split it into lines:
stats = list(csv.reader(urllib.request.urlopen(url).read().decode("UTF-8").splitlines()))

Try the following code, which wraps the byte stream in io.TextIOWrapper so that csv.reader receives text:
import csv
import io
import urllib.request

url = "https://gist.githubusercontent.com/aparrish/cb1672e98057ea2ab7a1/raw/13166792e0e8436221ef85d2a655f1965c400f75/lebron_james.csv"
csv_response = urllib.request.urlopen(url)
lst = list(csv.reader(io.TextIOWrapper(csv_response)))
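Note that io.TextIOWrapper falls back to the locale's preferred encoding when none is given, so if you know the file's encoding it is safer to pass it explicitly. A sketch, assuming the file is UTF-8:

import csv
import io
import urllib.request

url = "https://gist.githubusercontent.com/aparrish/cb1672e98057ea2ab7a1/raw/13166792e0e8436221ef85d2a655f1965c400f75/lebron_james.csv"
csv_response = urllib.request.urlopen(url)

# explicit encoding instead of the locale default
lst = list(csv.reader(io.TextIOWrapper(csv_response, encoding="utf-8")))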

Related

Bio.SwissProt.parse on swiss formatted in memory data in chunks

I'd like to read in swiss data and get records in chunks instead of reading in the entire file.
So far, I've split the file into chunks as seen below
import gzip
from io import StringIO

sprot_io = StringIO()
footer = b'CC ---------------------------------------------------------------------------\n'
with gzip.open(response['Body'], "r") as f:
    for row in f:
        if row == footer:
            sprot_io.write(row.decode('utf-8'))
            # <now parse the record>
            sprot_io = StringIO()
        else:
            sprot_io.write(row.decode('utf-8'))
However, when I try to parse these chunks using Bio.SwissProt.parse, I get an unexpected end of file error
def parse_record(file):
    seq = next(SeqIO.parse(file, format='swiss'))
    return seq
I use next because the function is actually returning a generator, but I should only be getting one record anyway.
I'm assuming there is something wrong with the format I'm giving it, but I haven't been able to figure out what could be going wrong from looking at the source code:
https://github.com/biopython/biopython/blob/master/Bio/SwissProt/KeyWList.py
This is the file I'm trying to parse, but warning... it is roughly 3 gigs
ftp://ftp.uniprot.org/pub/databases/uniprot/current_release/knowledgebase/complete/uniprot_*.dat.gz
Any help would be greatly appreciated.
I overcomplicated it by a lot. I ended up using smart-open to stream the data from S3, and then I passed that to the parser, which takes a <class '_io.BytesIOWrapper'> just fine.
import boto3
from Bio import SeqIO
from smart_open import open

# url is the s3:// URI of the .dat.gz file
session = boto3.Session()
tp = {'client': session.client('s3')}
handle = open(url, 'rt', transport_params=tp)
seqs = SeqIO.parse(handle, format='swiss')
record = next(seqs)
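For reference, the original chunking idea can also be made to work: SwissProt flat-file records end with a line containing only //, so buffering lines up to that terminator and rewinding the buffer before handing it to the parser should yield one record per chunk. A sketch, assuming a local copy of the .dat.gz file:

import gzip
from io import StringIO
from Bio import SeqIO

with gzip.open("uniprot_sprot.dat.gz", "rt") as f:
    buf = StringIO()
    for line in f:
        buf.write(line)
        if line == "//\n":  # end-of-record marker in the SwissProt format
            buf.seek(0)  # rewind so the parser reads from the start
            record = next(SeqIO.parse(buf, format="swiss"))
            # ... process record ...
            buf = StringIO()  # fresh buffer for the next record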

JSONDecodeError when parsing funky JSON

Recently, I started working with JSON (with Python 3.7 under Debian 9). This is the first (probably of many) data sets in JSON which I've had the pleasure of working with.
I have used the Python built-in JSON module to interpret arbitrary strings and files. I now have a database with ~5570 rows containing information about a given list of servers. There are a lot of things in the pipeline, which I have devised a plan for, but I'm stuck on this particular sanitization step.
Here's the code I'm using to parse:
#!/usr/local/bin/python3.7
import json

def servers_from_json(file_name):
    with open(file_name, 'r') as f:
        data = json.loads(f.read())
    servers = [{'asn': item['data']['resource'], 'resource': item['data']['allocations'][0]['asn_name']} for item in data]
    return servers

servers = servers_from_json('working-things/working-format-for-parse')
print(servers)
My motive
I'm trying to match each one of these servers to its ASN_NAME (a field ripped straight from RIPE's API), which provides information about the physical DC each server is located in. Then, once that's done, I'll write them to an existing SQL table, next to a Boolean.
So, here's where it gets funky. If I run the whole dataset through this I get this error message:
Traceback (most recent call last):
  File "./parse-test.py", line 12, in <module>
    servers = servers_from_json('2servers.json')
  File "./parse-test.py", line 7, in servers_from_json
    data = json.loads(f.read())
  File "/usr/local/lib/python3.7/json/__init__.py", line 348, in loads
    return _default_decoder.decode(s)
  File "/usr/local/lib/python3.7/json/decoder.py", line 340, in decode
    raise JSONDecodeError("Extra data", s, end)
json.decoder.JSONDecodeError: Extra data: line 38 column 2 (char 1098)
I noticed that the problem with my initial data set was that each JSON object wasn't delimited by ,\n.
Did some cleaning, still no luck.
I then added the first 3(?) objects to a completely clean file and... success. I can get the script to read and interpret them the way I want.
Here's the data set with the comma delimiter:
http://db.farnworth.site/servers.json
Here's the working data set:
http://db.farnworth.site/working-format.json
Anyone got any ideas?
I am assuming here that | will not be present as part of the data. The idea is to insert | between the concatenated JSON objects, split on it, and load each piece with the json module. Hope it helps!
You can try:
import json
import re

with open("servers.json", 'r') as f:
    data = f.read()

pattern = re.compile(r'\}\{')
data = pattern.sub('}|{', data).split('|')

for item in data:
    server_info = json.loads(item)
    allocations = server_info['data']['allocations']
    for alloc in allocations:
        print(alloc['asn_name'])
I could read output.json like this:
import json

with open("output.json", 'r') as f:
    data = f.read()

server_info = json.loads(data)
for item in server_info:
    allocations = item['data']['allocations']
    for alloc in allocations:
        print(alloc['asn_name'])
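If you would rather not assume that | never occurs inside the data, json.JSONDecoder.raw_decode can walk through concatenated JSON objects without any separator tricks; it returns each decoded object together with the index where it ended. A sketch:

import json

def iter_json_objects(text):
    # Yield each JSON object from a string of concatenated objects
    decoder = json.JSONDecoder()
    idx = 0
    n = len(text)
    while idx < n:
        # skip any whitespace before or between objects
        while idx < n and text[idx].isspace():
            idx += 1
        if idx >= n:
            break
        obj, idx = decoder.raw_decode(text, idx)
        yield obj

with open("servers.json", "r") as f:
    for server_info in iter_json_objects(f.read()):
        for alloc in server_info['data']['allocations']:
            print(alloc['asn_name'])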

Python Error: iterator should return strings, not bytes (did you open the file in text mode?) [duplicate]

I've been struggling with this simple problem for too long, so I thought I'd ask for help. I am trying to read a list of journal articles from National Library of Medicine ftp site into Python 3.3.2 (on Windows 7). The journal articles are in a .csv file.
I have tried the following code:
import csv
import urllib.request
url = "ftp://ftp.ncbi.nlm.nih.gov/pub/pmc/file_list.csv"
ftpstream = urllib.request.urlopen(url)
csvfile = csv.reader(ftpstream)
data = [row for row in csvfile]
It results in the following error:
Traceback (most recent call last):
File "<pyshell#4>", line 1, in <module>
data = [row for row in csvfile]
File "<pyshell#4>", line 1, in <listcomp>
data = [row for row in csvfile]
_csv.Error: iterator should return strings, not bytes (did you open the file in text mode?)
I presume I should be working with strings not bytes? Any help with the simple problem, and an explanation as to what is going wrong would be greatly appreciated.
The problem is that urllib returns bytes. As proof, you can download the csv file with your browser and open it as a regular file: the problem is gone.
A similar problem was addressed here.
It can be solved by decoding the bytes to strings with the appropriate encoding and splitting the result into lines. For example:
import csv
import urllib.request

url = "ftp://ftp.ncbi.nlm.nih.gov/pub/pmc/file_list.csv"
ftpstream = urllib.request.urlopen(url)
csvfile = csv.reader(ftpstream.read().decode('utf-8').splitlines())  # with the appropriate encoding
data = [row for row in csvfile]
The last line could also be data = list(csvfile), which can be easier to read.
By the way, since the csv file is very big, it can be slow and memory-consuming. Maybe it would be preferable to use a generator.
EDIT:
Using codecs.iterdecode, as proposed by Steven Rumbalski, so it's not necessary to read the whole file to decode it. Memory consumption is reduced and speed increased.
import csv
import urllib.request
import codecs
url = "ftp://ftp.ncbi.nlm.nih.gov/pub/pmc/file_list.csv"
ftpstream = urllib.request.urlopen(url)
csvfile = csv.reader(codecs.iterdecode(ftpstream, 'utf-8'))
for line in csvfile:
    print(line)  # do something with line
Note that the list is not created either for the same reason.
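An alternative with the same streaming behaviour is io.TextIOWrapper, which decodes the byte stream incrementally as csv.reader pulls lines from it; a sketch:

import csv
import io
import urllib.request

url = "ftp://ftp.ncbi.nlm.nih.gov/pub/pmc/file_list.csv"
ftpstream = urllib.request.urlopen(url)

# the wrapper decodes each chunk on demand, like codecs.iterdecode
for line in csv.reader(io.TextIOWrapper(ftpstream, encoding='utf-8')):
    print(line)  # do something with line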
Even though there is already an accepted answer, I thought I'd add to the body of knowledge by showing how I achieved something similar using the requests package (which is sometimes seen as an alternative to urllib.request).
The basis of using codecs.iterdecode() to solve the original problem is still the same as in the accepted answer.
import codecs
from contextlib import closing
import csv
import requests

url = "ftp://ftp.ncbi.nlm.nih.gov/pub/pmc/file_list.csv"  # note: requests only speaks HTTP(S), so substitute an http(s) URL here
with closing(requests.get(url, stream=True)) as r:
    reader = csv.reader(codecs.iterdecode(r.iter_lines(), 'utf-8'))
    for row in reader:
        print(row)
Here we also see the use of streaming provided through the requests package in order to avoid having to load the entire file over the network into memory first (which could take a long time if the file is large).
I thought it might be useful since it helped me, as I was using requests rather than urllib.request in Python 3.6.
Some of the ideas (e.g. using closing()) are picked from this similar post.
I had a similar problem using the requests package and csv.
The response from a POST request was of type bytes.
In order to use the csv library, I first stored the data as an in-memory string file (in my case the size was small), decoded as utf-8.
import io
import csv
import requests

response = requests.post(url, data)
# response.content is something like:
# b'"City","Awb","Total"\r\n"Bucuresti","6733338850003","32.57"\r\n'
csv_bytes = response.content

# build an in-memory string file from the bytes, decoded (utf-8)
str_file = io.StringIO(csv_bytes.decode('utf-8'), newline='\n')

reader = csv.reader(str_file)
for row_list in reader:
    print(row_list)

# Once the file is closed,
# any operation on it (e.g. reading or writing) will raise a ValueError
str_file.close()
Printed something like:
['City', 'Awb', 'Total']
['Bucuresti', '6733338850003', '32.57']
urlopen will return a urllib.response.addinfourl instance for an ftp request.
For ftp, file, and data urls and requests explicitly handled by legacy
URLopener and FancyURLopener classes, this function returns a
urllib.response.addinfourl object which can work as context manager...
>>> urllib2.urlopen(url)
<addinfourl at 48868168L whose fp = <addclosehook at 48777416L whose fp = <socket._fileobject object at 0x0000000002E52B88>>>
At this point ftpstream is a file-like object; calling .read() would return the entire contents, but csv.reader requires an iterable of strings in this case.
Defining a generator like so (the stream yields bytes in Python 3, so we decode each line):
def to_lines(f):
    line = f.readline()
    while line:
        yield line.decode('utf-8')
        line = f.readline()
We can create our csv reader like so:
reader = csv.reader(to_lines(ftpstream))
And with a URL:
url = "http://pic.dhe.ibm.com/infocenter/tivihelp/v41r1/topic/com.ibm.ismsaas.doc/reference/CIsImportMinimumSample.csv"
The code:
for row in reader:
    print(row)
Prints
>>>
['simpleci']
['SCI.APPSERVER']
['SRM_SaaS_ES', 'MXCIImport', 'AddChange', 'EN']
['CI_CINUM']
['unique_identifier1']
['unique_identifier2']

Python reading CSV file using URL giving error

I am trying to read a CSV file using the requests library but I am having issues.
import requests
import csv
url = 'https://storage.googleapis.com/sentiment-analysis-dataset/training_data.csv'
r = requests.get(url)
text = r.iter_lines()
reader = csv.reader(text, delimiter=',')
I then tried:
for row in reader:
    print(row)
but it gave me this error:
Error: iterator should return strings, not bytes (did you open the file in text mode?)
How should I fix this?
What you probably want is:
text = r.iter_lines(decode_unicode=True)
This will return a strings-iterator instead of a bytes-iterator. (See here for documentation.)
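Putting it together, a minimal sketch with stream=True so the whole file is not pulled into memory before the loop starts:

import csv
import requests

url = 'https://storage.googleapis.com/sentiment-analysis-dataset/training_data.csv'
with requests.get(url, stream=True) as r:
    # decode_unicode=True makes iter_lines yield str instead of bytes
    reader = csv.reader(r.iter_lines(decode_unicode=True), delimiter=',')
    for row in reader:
        print(row)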

Working with a pdf from the web directly in Python?

I'm trying to use Python to read .pdf files from the web directly rather than save them all to my computer. All I need is the text from the .pdf and I'm going to be reading a lot (~60k) of them, so I'd prefer to not actually have to save them all.
I know how to save a .pdf from the internet using urllib and open it with PyPDF2. (example)
I want to skip the saving-to-file step.
import urllib, PyPDF2
urllib.urlopen('https://bitcoin.org/bitcoin.pdf')
wFile = urllib.urlopen('https://bitcoin.org/bitcoin.pdf')
lFile = PyPDF2.pdf.PdfFileReader(wFile.read())
I get an error that is fairly easy to understand:
Traceback (most recent call last):
  File "<pyshell#6>", line 1, in <module>
    fil = PyPDF2.pdf.PdfFileReader(wFile.read())
  File "C:\Python27\lib\PyPDF2\pdf.py", line 797, in __init__
    self.read(stream)
  File "C:\Python27\lib\PyPDF2\pdf.py", line 1245, in read
    stream.seek(-1, 2)
AttributeError: 'str' object has no attribute 'seek'
Obviously PyPDF2 doesn't like that I'm giving it the urllib.urlopen().read() (which appears to return a string). I know that this string is not the "text" of the .pdf but a string representation of the file. How can I resolve this?
EDIT: NorthCat's solution resolved my error, but when I try to actually extract the text, I get this:
>>> print lFile.getPage(0).extractText()
ˇˆ˘˘˙˘˘˝˘˛˘ˇ˘ˇ˚ˇˇˇ˘ˆ˘˘˘˚ˇˆ˘ˆ˘ˇ˜ˇ˝˚˘˛˘ˇ ˘˘˘ˇ˛˘˚˚ˆˇˇ!
˝˘˚ˇ˘˘˚"˘˘ˇ˘˚ˇ˘˘˚ˇ˘˘˘˙˘˘˘#˘˘˘ˆ˘˛˘˚˛˙ ˘˘˚˚˘˛˙#˘ˇ˘ˇˆ˘˘˛˛˘˘!˘˘˛˘˝˘˘˘˚ ˛˘˘ˇ˘ˇ˛$%&˘ˇ'ˆ˛
$%&˘ˇˇ˘˚ˆ˚˘˘˘˘ ˘ˆ(ˇˇ˘˘˘˘ˇ˘˚˘˘#˘˘˘ˇ˛!ˇ)˘˘˚˘˘˛ ˚˚˘ˇ˘˝˘˚'˘˘ˇˇ ˘˘ˇ˘˛˙˛˛˘˘˚ˇ˘˘ˆ˘˘ˆ˙
$˘˘˘*˘˘˘ˇˆ˘˘ˇˆ˛ˇ˘˝˚˚˘˘ˇ˘ˆ˘"˘ˆ˘ˇˇ˘˛ ˛˛˘˛˘˘˘˘˘˘˛˘˘˚˚˘$ˇ˘ˇˆ˙˘˝˘ˇ˘˘˘ˇˇˆˇ˘ ˘˛ˇ˝˘˚˚#˘˛˘˚˘˘
˘ˇ˘˚˛˛˘ˆ˛ˇˇˇ ˚˘˘˚˘˘ˇ˛˘˙˘˝˘ˇ˘ˆ˘˛˙˘˝˘ˇ˘˘˝˘"˘˛˘˝˘ˇ ˘˘˘˚˛˘˚)˘˘ˆ˛˘˘
˘˛˘˛˘ˆˇ˚˘˘˘˘˚˘˘˘˘˛˛˚˘˚˝˚ˇ˘#˘˘˚ˆ˘˘˘˝˘˚˘ˆˆˇ˘ˆ
˘˘˘ˆ˘˝˘˘˚"˘˘˚˘˚˘ˇ˘ˆ˘ˆ˘˚ˆ˛˚˛ˆ˚˘˘˘˘˘˘˚˛˚˚ˆ#˘ˇˇˆˇ˘˝˘˘ˇ˚˘ˇˇ˘˛˛˚ ˚˘˘˘ˇ˚˘˘ˇ˘˘˚ˆ˘*˘
˘˘ˇ˘˚ˇ˘˙˘˚ˇ˘˘˘˙˙˘˘˚˚˘˘˝˘˘˘˛˛˘ˇˇ˚˘˛#˘ˆ˘˘ˇ˘˚˘ˇˇ˘˘ˇˆˇ˘$%&˘ˆ˘˛˘˚˘,
Try this: PdfFileReader needs a seekable file-like object (the traceback shows it calling stream.seek()), so wrap the downloaded bytes in cStringIO:
import urllib, PyPDF2
import cStringIO
wFile = urllib.urlopen('https://bitcoin.org/bitcoin.pdf')
lFile = PyPDF2.pdf.PdfFileReader( cStringIO.StringIO(wFile.read()) )
If PyPDF2 does not work, there are a couple of other solutions; however, they require saving the file to disk.
Solution 1
You can use ps2ascii (if you are using Linux or Mac) or xpdf (on Windows). Example of using xpdf:
import os
os.system('C:\\xpdfbin-win-3.03\\bin32\\pdftotext.exe C:\\xpdfbin-win-3.03\\bin32\\bitcoin.pdf bitcoin1.txt')
or
import subprocess
subprocess.call(['C:\\xpdfbin-win-3.03\\bin32\\pdftotext.exe', 'C:\\xpdfbin-win-3.03\\bin32\\bitcoin.pdf', 'bitcoin2.txt'])
Solution 2
You can use one of the online pdf-to-txt converters. Example using pdf.my-addr.com:
import MultipartPostHandler
import urllib2

def pdf2text(absolute_path):
    url = 'http://pdf.my-addr.com/pdf-to-text-converter-tool.php'
    params = { 'file': open(absolute_path, 'rb'),
               'encoding': 'UTF-8',
             }
    opener = urllib2.build_opener(MultipartPostHandler.MultipartPostHandler)
    return opener.open(url, params).read()

print pdf2text('bitcoin.pdf')
You can find the code of MultipartPostHandler here. I tried to use cStringIO instead of open(), but it did not work.
Maybe it will be helpful for you.
I know this question is old, but I had the same issue and here is how I solved it.
In the newer docs of PyPDF2 there is a section about streaming data.
The example there looks like this:
from io import BytesIO

# Prepare example
with open("example.pdf", "rb") as fh:
    bytes_stream = BytesIO(fh.read())

# Read from bytes_stream
reader = PdfReader(bytes_stream)
Therefore, what I did instead was this:
import urllib.request
from io import BytesIO
from PyPDF2 import PdfReader

NEW_PATH = 'https://example.com/path/to/pdf/online?id=123456789&date=2022060'
wFile = urllib.request.urlopen(NEW_PATH)
bytes_stream = BytesIO(wFile.read())
reader = PdfReader(bytes_stream)
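From there the reader behaves as if it had been given a local file; for example, extracting the text of the first page with the current PyPDF2 API:

# continuing from the snippet above: reader is the PdfReader instance
first_page_text = reader.pages[0].extract_text()
print(first_page_text)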
