I'm fetching a plain-text resource over HTTP that is encoded in CP-1250 (I can't influence that) and would like to decode it, process it line by line, and eventually save it as UTF-8.
The first part is causing me problems. After I get the raw data using response.read(), I pass it to a reader created by getreader("cp1250") from the codecs library. I expect to get a StreamReader instance and simply call readlines to get a list of unicode strings.
import codecs
import httplib
# nothing unusual
conn = httplib.HTTPConnection('server')
conn.request('GET', '/')
response = conn.getresponse()
content = response.read()
# the painful part
sr = codecs.getreader("cp1250")(content)
lines = sr.readlines() # d'oh!
But after the call to readlines I only get yells echoing from somewhere deep inside codecs:
[...snip...]
File "./download", line 123, in _parse
lines = sr.readlines()
File "/usr/lib/python2.7/codecs.py", line 588, in readlines
data = self.read()
File "/usr/lib/python2.7/codecs.py", line 471, in read
newdata = self.stream.read()
AttributeError: 'str' object has no attribute 'read'
My prints confirm that sr is an instance of StreamReader; it confuses me that the object seemed to initialize fine but now fails to execute readlines ... what is missing here?
Or is the library trying to cryptically tell me that the data is corrupted (not CP-1250)?
Edit: As jorispilot suggests, unicode(content, encoding="cp1250") works, so I'll probably stick with that for my solution. However, I'd still like to know what was wrong with my usage of the codecs library.
utf8_lines = []
for line in content.split('\n'):
    line = line.strip().decode('cp1250')
    utf8_lines.append(line.encode('utf-8'))
According to http://docs.python.org/2/library/codecs.html, getreader() returns a StreamReader. This must be passed a stream that implements the read() function, not, as you are doing, a string.
To fix this, don't read the data from the response; pass the response object directly to the StreamReader, as below.
conn = httplib.HTTPConnection('server')
conn.request('GET', '/')
response = conn.getresponse()
reader = codecs.getreader("cp1250")(response)
lines = reader.readlines()
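Putting it together, here's a minimal sketch of the whole pipeline from the question (decode CP-1250, process per line, save as UTF-8); 'server' and 'output.txt' are placeholders:

import codecs
import httplib

conn = httplib.HTTPConnection('server')  # placeholder host
conn.request('GET', '/')
response = conn.getresponse()

# the response object has a read() method, so it is a valid stream
# for the StreamReader returned by codecs.getreader()
reader = codecs.getreader("cp1250")(response)

# iterating a StreamReader yields decoded unicode lines
with open('output.txt', 'wb') as out:  # placeholder output path
    for line in reader:
        out.write(line.strip().encode('utf-8') + '\n')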
I want to download the config file from my router via web scraping. The procedure I want to achieve is this:
Save the config file into disk
Send a factory reset
Load the config file previously downloaded.
So far, I have this code:
with requests.Session() as s:  # to log into the modem
    pagePostBackUp = 'https://192.168.1.1/goform/BackUp'
    s.post(urlLogin, data=loginCredentials, verify=False, timeout=5)
    dataBackUp = {'dir': 'admin/', 'file': 'cmconfig.cfg'}
    resultBackUp = s.post(pagePostBackUp, data=dataBackUp, verify=False, timeout=10)
    print(resultBackUp.text)
The last line is what I want to save into a file. But, when I try to do it with this code:
f = open('/Users/user/Desktop/file.cfg', 'w')
it throws an error saying the ascii codec can't encode a character. If I save the file with, for example, encoding='utf16', the result differs from what I originally downloaded manually.
So, the question is: how can I save this file with the same encoding the router gives me via the web (as unicode)? The content of the file looks like this:
�����g���m��� ������Z������ofpqJ
U\V,.o/����zf��v���~W3=,�D};y�tL�cJ
Change the last line of your code to the following:
with open('/Users/user/Desktop/file.cfg', 'wb') as f:
    f.write(resultBackUp.content)
This will treat the payload as data (bytes), not text: the file is opened in binary mode, and the content is taken as-is.
There's no encoding/decoding happening.
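If you want to check that the script saved exactly the bytes the router sent, one quick option is to compare checksums against a manually downloaded copy; a small sketch, where both paths are placeholders:

import hashlib

def sha256_of(path):
    # binary mode, so no decoding step can alter the bytes
    with open(path, 'rb') as f:
        return hashlib.sha256(f.read()).hexdigest()

# hypothetical paths: the scripted download vs. the manual one
print(sha256_of('/Users/user/Desktop/file.cfg'))
print(sha256_of('/Users/user/Desktop/manual.cfg'))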
I've been struggling with this simple problem for too long, so I thought I'd ask for help. I am trying to read a list of journal articles from the National Library of Medicine FTP site into Python 3.3.2 (on Windows 7). The journal articles are in a .csv file.
I have tried the following code:
import csv
import urllib.request
url = "ftp://ftp.ncbi.nlm.nih.gov/pub/pmc/file_list.csv"
ftpstream = urllib.request.urlopen(url)
csvfile = csv.reader(ftpstream)
data = [row for row in csvfile]
It results in the following error:
Traceback (most recent call last):
File "<pyshell#4>", line 1, in <module>
data = [row for row in csvfile]
File "<pyshell#4>", line 1, in <listcomp>
data = [row for row in csvfile]
_csv.Error: iterator should return strings, not bytes (did you open the file in text mode?)
I presume I should be working with strings not bytes? Any help with the simple problem, and an explanation as to what is going wrong would be greatly appreciated.
The problem is that urllib returns bytes. As proof, you can try downloading the csv file with your browser and opening it as a regular file; the problem goes away.
A similar problem was addressed here.
It can be solved by decoding the bytes into strings with the appropriate encoding. For example:
import csv
import urllib.request
url = "ftp://ftp.ncbi.nlm.nih.gov/pub/pmc/file_list.csv"
ftpstream = urllib.request.urlopen(url)
csvfile = csv.reader(ftpstream.read().decode('utf-8').splitlines())  # decode with the appropriate encoding; splitlines() yields rows instead of single characters
data = [row for row in csvfile]
The last line could also be data = list(csvfile), which can be easier to read.
By the way, since the csv file is very big, this can be slow and memory-consuming. Maybe it would be preferable to use a generator.
EDIT:
Using codecs.iterdecode, as proposed by Steven Rumbalski, so it's not necessary to read the whole file just to decode it. Memory consumption is reduced and speed is increased.
import csv
import urllib.request
import codecs
url = "ftp://ftp.ncbi.nlm.nih.gov/pub/pmc/file_list.csv"
ftpstream = urllib.request.urlopen(url)
csvfile = csv.reader(codecs.iterdecode(ftpstream, 'utf-8'))
for line in csvfile:
    print(line)  # do something with line
Note that the list is not created either for the same reason.
Even though there is already an accepted answer, I thought I'd add to the body of knowledge by showing how I achieved something similar using the requests package (which is sometimes seen as an alternative to urllib.request).
The basis of using codecs.iterdecode() to solve the original problem is still the same as in the accepted answer.
import codecs
from contextlib import closing
import csv
import requests
url = "ftp://ftp.ncbi.nlm.nih.gov/pub/pmc/file_list.csv"
with closing(requests.get(url, stream=True)) as r:
    reader = csv.reader(codecs.iterdecode(r.iter_lines(), 'utf-8'))
    for row in reader:
        print(row)
Here we also see the use of streaming provided through the requests package in order to avoid having to load the entire file over the network into memory first (which could take a long time if the file is large).
I thought it might be useful since it helped me, as I was using requests rather than urllib.request in Python 3.6.
Some of the ideas (e.g. using closing()) are picked up from this similar post.
I had a similar problem using the requests package and csv.
The response from the POST request was of type bytes.
In order to use the csv library, I first stored the content as an in-memory string file (in my case the size was small), decoded as utf-8.
import io
import csv
import requests
response = requests.post(url, data)
# response.content is something like:
# b'"City","Awb","Total"\r\n"Bucuresti","6733338850003","32.57"\r\n'
csv_bytes = response.content
# write in-memory string file from bytes, decoded (utf-8)
str_file = io.StringIO(csv_bytes.decode('utf-8'), newline='\n')
reader = csv.reader(str_file)
for row_list in reader:
    print(row_list)
# Once the file is closed,
# any operation on the file (e.g. reading or writing) will raise a ValueError
str_file.close()
Printed something like:
['City', 'Awb', 'Total']
['Bucuresti', '6733338850003', '32.57']
urlopen will return a urllib.response.addinfourl instance for an ftp request.
For ftp, file, and data urls and requests explicitly handled by legacy URLopener and FancyURLopener classes, this function returns a urllib.response.addinfourl object which can work as context manager...
>>> urllib2.urlopen(url)
<addinfourl at 48868168L whose fp = <addclosehook at 48777416L whose fp = <socket._fileobject object at 0x0000000002E52B88>>>
At this point ftpstream is a file-like object; calling .read() would return the contents. However, csv.reader requires an iterable of strings in this case:
Defining a generator like so:
def to_lines(f):
    line = f.readline()
    while line:
        # decode each bytes line so csv.reader receives strings (Python 3)
        yield line.decode('utf-8')
        line = f.readline()
We can create our csv reader like so:
reader = csv.reader(to_lines(ftpstream))
And with a URL:
url = "http://pic.dhe.ibm.com/infocenter/tivihelp/v41r1/topic/com.ibm.ismsaas.doc/reference/CIsImportMinimumSample.csv"
The code:
for row in reader:
    print(row)
Prints
>>>
['simpleci']
['SCI.APPSERVER']
['SRM_SaaS_ES', 'MXCIImport', 'AddChange', 'EN']
['CI_CINUM']
['unique_identifier1']
['unique_identifier2']
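For what it's worth, another way to turn a byte stream into the text lines csv.reader wants is io.TextIOWrapper, which decodes lazily as you iterate; a sketch, assuming the stream object supports the buffered-reader interface (an http(s) response does):

import csv
import io
import urllib.request

url = "ftp://ftp.ncbi.nlm.nih.gov/pub/pmc/file_list.csv"
ftpstream = urllib.request.urlopen(url)

# wrap the binary stream in a lazily-decoding text stream
for row in csv.reader(io.TextIOWrapper(ftpstream, encoding='utf-8')):
    print(row)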
I have to read a txt ini file from my browser. [this is required]
res = urllib2.urlopen(URL)
inifile = res.read()
Then I basically want to use this the same way as I would use any txt file I had read.
config = ConfigParser.SafeConfigParser()
config.read( inifile )
But now it looks like I can't use it that way, as what I have is actually a string.
Can anybody suggest a way around?
You want configparser.readfp -- Presumably, you might even be able to get away with:
res = urllib2.urlopen(URL)
config = ConfigParser.SafeConfigParser()
config.readfp(res)
assuming that urllib2.urlopen returns an object that is sufficiently file-like (i.e. it has a readline method). For easier debugging, you could do:
config.readfp(res, URL)
If you have to read the data from a string, you could pack the whole thing into an io.StringIO (or StringIO.StringIO) buffer and read from that:
import io
res = urllib2.urlopen(URL)
inifile_text = res.read().decode('utf-8')  # io.StringIO needs unicode text
inifile = io.StringIO(inifile_text)
inifile.seek(0)
config.readfp(inifile)
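For reference, on Python 3 the same thing works without the StringIO detour, since configparser has read_string; a minimal sketch, where URL is a hypothetical ini file location:

import urllib.request
import configparser

URL = 'http://example.com/settings.ini'  # hypothetical location

config = configparser.ConfigParser()
with urllib.request.urlopen(URL) as res:
    # decode the raw bytes before handing them to the parser
    config.read_string(res.read().decode('utf-8'))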
My file is like this, but I can't exec the content correctly. I've spent my whole afternoon on this and I'm still confused. The main reason is that I don't know what file_obj[0]['body'] looks like.
Here is part of my code:
# user_file content
"uid = 'h123456789'"
"data = [something]"
# end of user_file

# code piece
file_obj = req.request.files.get('user_file', None)
for i in file_obj[0]['body']:
    i.strip('\n')  # I tried commenting this line out; it still doesn't work
    exec(i)
# I failed
Can you tell me what the user_file content would look like in the file_obj body, so that maybe I can figure out the solution? I submitted it with an HTTP form to Tornado.
Thanks a lot.
Maybe this will help. Note that the body is a bytes object; iterating over it directly yields individual bytes (integers in Python 3), not lines, so it has to be split and decoded first.
# first file object in the request
file1 = self.request.files['file1'][0]
# where the file content is actually placed
content = file1['body']
# split content into lines; unix line terminators assumed
lines = content.split(b'\n')
for l in lines:
    # after decoding into strings, you're free to execute them
    try:
        exec(l.decode())
    except:
        pass
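If the uploaded file is a self-contained Python snippet, you can also hand the whole decoded body to exec() in one call; unlike the line-by-line loop, this won't break on multi-line statements. A sketch, assuming the same handler context and the sample file from the question:

# decode the entire upload and execute it in one call, collecting
# the defined names in a separate namespace rather than the handler's
body = self.request.files['user_file'][0]['body']
namespace = {}
exec(body.decode(), namespace)
print(namespace.get('uid'))  # 'h123456789' for the sample file above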
Currently, I'm just serving files like this:
# view callable
def export(request):
    response = Response(content_type='application/csv')
    # use datetime in filename to avoid collisions
    f = open('/temp/XML_Export_%s.xml' % datetime.now(), 'r')
    # this is where I usually put stuff in the file
    response.app_iter = f
    response.headers['Content-Disposition'] = ("attachment; filename=Export.xml")
    return response
The problem with this is that I can't close or, even better, delete the file after the response has been returned. The file gets orphaned. I can think of some hacky ways around this, but I'm hoping there's a standard way out there somewhere. Any help would be awesome.
You do not want to set a file pointer as the app_iter. This will cause the WSGI server to read the file line by line (same as for line in file), which is typically not the most efficient way to serve a file (imagine one character per line). Pyramid's supported way of serving files is via pyramid.response.FileResponse. You can create one of these by passing a file path.
response = FileResponse('/some/path/to/a/file.txt')
response.headers['Content-Disposition'] = ...
Another option is to pass a file pointer to app_iter but wrap it in the pyramid.response.FileIter object, which will use a sane block size to avoid just reading the file line by line.
The WSGI specification strictly requires that a response iterator with a close method have it invoked at the end of the response. Thus setting response.app_iter = open(...) should not cause any memory leaks. Both FileResponse and FileIter also support a close method and will thus be cleaned up as expected.
As a minor update to this answer, I thought I'd explain why FileResponse takes a file path and not a file pointer. The WSGI protocol gives servers the optional ability to provide an optimized mechanism for serving static files via environ['wsgi.file_wrapper']. FileResponse will automatically handle this if your WSGI server has provided that support. With this in mind, you may find it to be a win to save your data to a tmpfile on a ramdisk and provide FileResponse with the full path, instead of trying to pass a file pointer to FileIter.
http://docs.pylonsproject.org/projects/pyramid/en/1.4-branch/api/response.html#pyramid.response.FileResponse
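Building on the close() guarantee above, one way to get automatic deletion of a temp file is a small FileIter subclass whose close() also unlinks the file once the WSGI server is done with it. This is a sketch of the idea, not part of Pyramid itself:

import os
from pyramid.response import FileIter

class DeletingFileIter(FileIter):
    # FileIter keeps the open file object as self.file; after the
    # normal close we remove the file from disk as well
    def close(self):
        path = self.file.name
        super(DeletingFileIter, self).close()
        os.unlink(path)

# usage: response.app_iter = DeletingFileIter(open(path, 'rb'))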
Update:
Please see Michael Merickel's answer for a better solution and explanation.
If you want to have the file deleted once response is returned, you can try the following:
import os
from datetime import datetime
from tempfile import NamedTemporaryFile

# view callable
def export(request):
    response = Response(content_type='application/csv')
    with NamedTemporaryFile(prefix='XML_Export_%s' % datetime.now(),
                            suffix='.xml', delete=True) as f:
        # this is where I usually put stuff in the file
        response = FileResponse(os.path.abspath(f.name))
        response.headers['Content-Disposition'] = ("attachment; filename=Export.xml")
    return response
You can consider using NamedTemporaryFile:
NamedTemporaryFile(prefix='XML_Export_%s' % datetime.now(), suffix='.xml', delete=True)
Setting delete=True so that the file is deleted as soon as it is closed.
Now, with the help of with, you always have the guarantee that the file will be closed, and hence deleted:
from tempfile import NamedTemporaryFile
from datetime import datetime

# view callable
def export(request):
    response = Response(content_type='application/csv')
    with NamedTemporaryFile(prefix='XML_Export_%s' % datetime.now(),
                            suffix='.xml', delete=True) as f:
        # this is where I usually put stuff in the file
        response.app_iter = f
        response.headers['Content-Disposition'] = ("attachment; filename=Export.xml")
    return response
The combination of Michael's and Kay's answers works great under Linux/Mac but won't work under Windows (for auto-deletion). Windows doesn't like the fact that FileResponse tries to open the already-open file (see the description of NamedTemporaryFile).
I worked around this by creating a FileDescriptorResponse class, which is essentially a copy of FileResponse but takes the open NamedTemporaryFile itself. Just replace the open with a seek(0) and all the path-based calls (last_modified, content_length) with their fstat equivalents.
import mimetypes
from os import fstat

from pyramid.response import Response, FileIter, _BLOCK_SIZE


class FileDescriptorResponse(Response):
    """
    A Response object that can be used to serve a static file from an open
    file descriptor. This is essentially identical to Pyramid's FileResponse
    but takes a file object instead of a path, as a workaround for auto-delete
    not working with NamedTemporaryFile under Windows.

    ``file`` is an open file object.

    ``content_type``, if passed, is the content_type of the response.

    ``content_encoding``, if passed, is the content_encoding of the response.
    It's generally safe to leave this set to ``None`` if you're serving a
    binary file. This argument will be ignored if you don't also pass
    ``content_type``.
    """
    def __init__(self, file, content_type=None, content_encoding=None):
        super(FileDescriptorResponse, self).__init__(conditional_response=True)
        self.last_modified = fstat(file.fileno()).st_mtime
        if content_type is None:
            # there's no path to guess from, so fall back to the file's
            # name attribute if it has one
            content_type, content_encoding = mimetypes.guess_type(
                getattr(file, 'name', ''), strict=False)
        if content_type is None:
            content_type = 'application/octet-stream'
        self.content_type = content_type
        self.content_encoding = content_encoding
        content_length = fstat(file.fileno()).st_size
        file.seek(0)
        self.app_iter = FileIter(file, _BLOCK_SIZE)
        # assignment of content_length must come after assignment of app_iter
        self.content_length = content_length
Hope that's helpful.
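A usage sketch for the class above (payload and filename are placeholders); since the response reads from the already-open descriptor, delete=True works even on Windows:

from tempfile import NamedTemporaryFile

def export(request):
    f = NamedTemporaryFile(suffix='.xml', delete=True)
    f.write(b'<export/>')  # placeholder payload
    response = FileDescriptorResponse(f, content_type='application/xml')
    response.headers['Content-Disposition'] = 'attachment; filename=Export.xml'
    return response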
There is also repoze.filesafe which will take care of generating a temporary file for you, and delete it at the end. I use it for saving files uploaded to my server. Perhaps it can be useful to you too.
That's because your response object is holding a file handle for the file '/temp/XML_Export_%s.xml'. Use the del statement to delete the handle 'response.app_iter':
del response.app_iter
Both Michael Merickel's and Kay Zhu's answers are fine.
I found out that I also needed to reset the file position to the beginning of the NamedTemporaryFile before passing it to the response, as the response starts from the actual position in the file, not from the beginning (which is fine, you just need to know it).
With NamedTemporaryFile with deletion set, you can't close and reopen it, because closing would delete it (and you can't reopen it anyway), so you need to use something like this:
f = tempfile.NamedTemporaryFile()
# fill your file here, then flush it so the data is on disk
f.flush()
f.seek(0, 0)
response = FileResponse(
    f.name,  # FileResponse expects a path, not a file object
    request=request,
    content_type='application/csv'
)
hope it helps ;)