When I run this:
wget.download("http://downloads.dell.com/FOLDER06808437M/1/7760%20AIO-WIN10-A11-5VNTG.CAB")
it shows this error:
File "C:\Program Files\Python39\lib\urllib\request.py", line 641, in http_error_default
raise HTTPError(req.full_url, code, msg, hdrs, fp)
urllib.error.HTTPError: HTTP Error 404: Not Found
But when I run wget http://downloads.dell.com/FOLDER06808437M/1/7760%20AIO-WIN10-A11-5VNTG.CAB manually, it works perfectly fine.
I investigated the wget.download source code and there seems to be a bug. Here is the relevant piece of the source:
if PY3K:
    # Python 3 can not quote URL as needed
    binurl = list(urlparse.urlsplit(url))
    binurl[2] = urlparse.quote(binurl[2])
    binurl = urlparse.urlunsplit(binurl)
else:
    binurl = url
So it assumes the URL still needs to be quoted, i.e. that illegal characters such as spaces must be replaced by percent-encoded escape codes. But this has already been done, since your URL contains %20 rather than a space, so the URL is altered although it should not be:
import urllib.parse as urlparse
url = "http://downloads.dell.com/FOLDER06808437M/1/7760%20AIO-WIN10-A11-5VNTG.CAB"
binurl = list(urlparse.urlsplit(url))
binurl[2] = urlparse.quote(binurl[2])
binurl = urlparse.urlunsplit(binurl)
print(binurl) # http://downloads.dell.com/FOLDER06808437M/1/7760%2520AIO-WIN10-A11-5VNTG.CAB
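One way to make the quoting step idempotent (a sketch of my own, not the wget module's actual fix) is to unquote the path before quoting it again, so an already percent-encoded URL comes out unchanged; note this also normalizes sequences such as %2F back to /, which is usually fine for download links:
import urllib.parse as urlparse

def requote_path(url):
    # quote the path without double-encoding parts that are already quoted
    parts = list(urlparse.urlsplit(url))
    parts[2] = urlparse.quote(urlparse.unquote(parts[2]))
    return urlparse.urlunsplit(parts)

print(requote_path("http://downloads.dell.com/FOLDER06808437M/1/7760%20AIO-WIN10-A11-5VNTG.CAB"))
# http://downloads.dell.com/FOLDER06808437M/1/7760%20AIO-WIN10-A11-5VNTG.CAB
print(requote_path("http://downloads.dell.com/FOLDER06808437M/1/7760 AIO-WIN10-A11-5VNTG.CAB"))
# http://downloads.dell.com/FOLDER06808437M/1/7760%20AIO-WIN10-A11-5VNTG.CAB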
You can either work around this issue by providing the URL in a form that still needs escaping, in this case:
import wget
wget.download("http://downloads.dell.com/FOLDER06808437M/1/7760 AIO-WIN10-A11-5VNTG.CAB")
or use urllib.request.urlretrieve, whose most basic form is:
import urllib.request
urllib.request.urlretrieve("http://downloads.dell.com/FOLDER06808437M/1/7760%20AIO-WIN10-A11-5VNTG.CAB", "7760 AIO-WIN10-A11-5VNTG.CAB")
where the arguments are the URL and the target filename. Keep in mind that used this way there is no progress indicator (bar), so you need to wait until the download completes.
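If you do want some feedback during the download, urlretrieve accepts a reporthook callback that is called after each block; a minimal sketch (the hook below is my own, not part of the answer above):
import urllib.request

def progress(block_num, block_size, total_size):
    # called by urlretrieve after each block has been transferred
    downloaded = block_num * block_size
    if total_size > 0:
        percent = min(100, downloaded * 100 // total_size)
        print("\r%d%% (%d/%d bytes)" % (percent, downloaded, total_size), end="")

urllib.request.urlretrieve(
    "http://downloads.dell.com/FOLDER06808437M/1/7760%20AIO-WIN10-A11-5VNTG.CAB",
    "7760 AIO-WIN10-A11-5VNTG.CAB",
    reporthook=progress,
)
print()  # newline after the progress output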
Related
I'm trying to access an authenticated site using a cookies.txt file (generated with a Chrome extension) with Python Requests:
import requests, cookielib
cj = cookielib.MozillaCookieJar('cookies.txt')
cj.load()
r = requests.get(url, cookies=cj)
It doesn't throw any error or exception, but yields the login screen, incorrectly. However, I know that my cookie file is valid, because I can successfully retrieve my content using it with wget. Any idea what I'm doing wrong?
Edit:
I'm tracing cookielib.MozillaCookieJar._really_load and can verify that the cookies are correctly parsed (i.e. they have the correct values for the domain, path, secure, etc. tokens). But as the transaction is still resulting in the login form, it seems that wget must be doing something additional (as the exact same cookies.txt file works for it).
MozillaCookieJar inherits from FileCookieJar which has the following docstring in its constructor:
Cookies are NOT loaded from the named file until either the .load() or
.revert() method is called.
So you need to call the .load() method.
Also, as Jermaine Xu noted, the first line of the file needs to contain either the # Netscape HTTP Cookie File or the # HTTP Cookie File string. Files generated by the plugin you use do not contain such a string, so you have to insert it yourself. I raised the appropriate bug at http://code.google.com/p/cookie-txt-export/issues/detail?id=5
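A quick way to add the missing header line yourself (a sketch; the file name is just an example):
# prepend the magic header expected by MozillaCookieJar, assuming it is missing
with open('cookies.txt', 'r') as f:
    content = f.read()

if not content.startswith('# Netscape HTTP Cookie File'):
    with open('cookies.txt', 'w') as f:
        f.write('# Netscape HTTP Cookie File\n' + content)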
EDIT
Session cookies are saved with 0 in the 5th column. If you don't pass ignore_expires=True to the load() method, all such cookies are discarded when loading from a file.
File session_cookie.txt:
# Netscape HTTP Cookie File
.domain.com TRUE / FALSE 0 name value
Python script:
import cookielib
cj = cookielib.MozillaCookieJar('session_cookie.txt')
cj.load()
print len(cj)
Output:
0
EDIT 2
Although we managed to get cookies into the jar above, they are subsequently discarded by cookielib because they still have a 0 value in the expires attribute. To prevent this, we have to set the expiry time to some future time, like so:
import time

for cookie in cj:
    # set cookie expire date to 14 days from now
    cookie.expires = time.time() + 14 * 24 * 3600
EDIT 3
I checked both wget and curl, and both use a 0 expiry time to denote session cookies, which means it's the de facto standard. However, Python's implementation uses an empty string for the same purpose, hence the problem raised in the question. I think Python's behavior in this regard should be brought in line with what wget and curl do, which is why I raised the bug at http://bugs.python.org/issue17164
Note that replacing the 0s with empty strings in the 5th column of the input file and passing ignore_discard=True to load() is an alternative way of solving the problem (no need to change the expiry time in this case).
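Putting this together, here is a minimal sketch of loading such a file and handing the jar to requests, assuming the header line and expiry fixes above have been applied (http.cookiejar is the Python 3 name of cookielib):
import http.cookiejar  # cookielib in Python 2
import requests

cj = http.cookiejar.MozillaCookieJar('cookies.txt')
# keep session cookies and cookies whose expiry time lies in the past
cj.load(ignore_discard=True, ignore_expires=True)

r = requests.get('https://example.com/', cookies=cj)
print(r.status_code)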
I tried taking into account everything that Piotr Dobrogost had valiantly figured out about MozillaCookieJar, but to no avail. I got fed up, parsed the damn cookies.txt myself, and now all is well:
import re
import requests
def parseCookieFile(cookiefile):
    """Parse a cookies.txt file and return a dictionary of key value pairs
    compatible with requests."""
    cookies = {}
    with open(cookiefile, 'r') as fp:
        for line in fp:
            if not re.match(r'^\#', line):
                lineFields = line.strip().split('\t')
                cookies[lineFields[5]] = lineFields[6]
    return cookies
cookies = parseCookieFile('cookies.txt')
import pprint
pprint.pprint(cookies)
r = requests.get('https://example.com', cookies=cookies)
This worked for me:
from http.cookiejar import MozillaCookieJar
from pathlib import Path
import requests
cookies = Path('/Users/name/cookies.txt')
jar = MozillaCookieJar(cookies)
jar.load()
requests.get('https://path.to.site.com', cookies=jar)
<Response [200]>
I tried editing Tristan's answer to add some info to it, but it seems the SO edit queue is full, so I am writing this answer instead, since I have struggled quite a bit with using existing cookies with Python requests.
First, get the cookies from Chrome. The easiest way is to use an extension called 'Get cookies.txt':
https://chrome.google.com/webstore/detail/get-cookiestxt/bgaddhkoddajcdgocldbbfleckgcbcid/related
After downloading those cookies, use the code below to make sure that you are able to parse the file without any issues.
import re, requests, pprint
def parseCookieFile(cookiefile):
    """Parse a cookies.txt file and return a dictionary of key value pairs
    compatible with requests."""
    cookies = {}
    with open(cookiefile, 'r') as fp:
        for line in fp:
            if not re.match(r'^\#', line):
                lineFields = re.findall(r'[^\s]+', line)  # capturing anything but whitespace
                try:
                    cookies[lineFields[5]] = lineFields[6]
                except Exception as e:
                    print(e)
    return cookies
cookies = parseCookieFile('cookies.txt') #replace the filename
pprint.pprint(cookies)
Next, use those cookies with a Python requests call:
x = requests.get('your__url', verify=False, cookies=cookies)
print (x.content)
This should save you from going through different SO posts and trying the cookielib and other approaches, which never worked for me.
I finally found a way to make it work (I got the idea by looking at curl's verbose output): instead of loading my cookies from a file, I simply created a dict with the required name/value pairs:
cd = {'v1': 'n1', 'v2': 'n2'}
r = requests.get(url, cookies=cd)
and it worked (although it doesn't explain why the previous method didn't). Thanks for all the help, it's really appreciated.
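If you'd rather not type the pairs out by hand, one possible shortcut (a sketch, not part of the original answer) is to flatten a loaded cookie jar into such a dict with requests' own helper:
import http.cookiejar  # cookielib in Python 2
import requests
from requests.utils import dict_from_cookiejar

cj = http.cookiejar.MozillaCookieJar('cookies.txt')
cj.load(ignore_discard=True, ignore_expires=True)

# drop the domain/path metadata and keep plain {name: value} pairs
cd = dict_from_cookiejar(cj)
r = requests.get('https://example.com/', cookies=cd)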
I am unable to download an xls file from a URL. I have tried both urlopen and urlretrieve, but I receive a really long error message starting with:
Traceback (most recent call last):
File "C:/Users/Henrik/Documents/Development/Python/Projects/ImportFromWeb.py", line 6, in
f = ur.urlopen(dls)
File "C:\Users\Henrik\AppData\Local\Programs\Python\Python35\lib\urllib\request.py", line 163, in urlopen
return opener.open(url, data, timeout)
and ending with:
urllib.error.HTTPError: HTTP Error 302: The HTTP server returned a redirect error that would lead to an infinite loop.
The last 30x error message was:
Found
Unfortunately, I can't provide the URL I am using since the data is sensitive. However, I will give you the URL with some parts removed:
https://xxxx.xxxx.com/xxxxlogistics/w/functions/transportinvoicelist?0-8.IBehaviorListener.2-ListPageForm-table-TableForm-exportToolbar-xlsExport&antiCache=1477160491504
As you can see, the URL doesn't end with "/file.xls", for example. I don't know if that matters, but most of the threads regarding this issue have had those kinds of links.
If I enter the URL in my address bar, the file download window appears:
Image of download window
The code I have written looks like this:
import urllib.request as ur
import openpyxl as pyxl
dls = 'https://xxxx.xxxx.com/xxxxlogistics/w/functions/transportinvoicelist?0-8.IBehaviorListener.2-ListPageForm-table-TableForm-exportToolbar-xlsExport&antiCache=1477160491504'
f = ur.urlopen(dls)
I am grateful for any help you can provide!
Using the following code:
with open('newim', 'wb') as f:
    f.write(requests.get(repr(url)))
where the url is:
url = 'data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAFoAAAArCAYAAAD41p9mAAAAzUlEQVR42u3awQ4DIQhFUf7/p9tNt20nHQGl5yUuh4c36BglgoiIiIiIiGiVHq+RGfvdiGG+lxKonGiWd4vvKZNd5V/u2zXRO953c2jx3bGiMrewLt+PgbJA/xJ3RS5dvl9PEdXLduK3baeOrKrc1bcF9MnLP7WqgR4GOjtOl28L6AlHtLSqBhpooIEGGmiggQYaaKCBBhpodx3H3XW4vQN6HugILyztoL0Zhlfw9G4tfR0FfR0VnTw6lQoT0XtXmMxfdJPuALr0x5Pp+wT35KKWb6NaVgAAAABJRU5ErkJggg=='
I get the following error:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Python33\lib\site-packages\requests\api.py", line 69, in get
return request('get', url, params=params, **kwargs)
File "C:\Python33\lib\site-packages\requests\api.py", line 50, in request
response = session.request(method=method, url=url, **kwargs)
File "C:\Python33\lib\site-packages\requests\sessions.py", line 465, in request
resp = self.send(prep, **send_kwargs)
File "C:\Python33\lib\site-packages\requests\sessions.py", line 567, in send
adapter = self.get_adapter(url=request.url)
File "C:\Python33\lib\site-packages\requests\sessions.py", line 641, in get_adapter
raise InvalidSchema("No connection adapters were found for '%s'" % url)
I have seen other posts with what, at first glance, appears to be a similar problem, but I haven't had any luck just adding 'https://' or anything like that. I seriously want to avoid having to do this in webdriver+AutoIt or something, because I have to do a similar exercise for thousands of images.
There seems to be a problem with your understanding of the concept of embedded images. The URL you have posted is, actually, what your browser returns when you select 'View Image' or 'Copy Image Location' (or something similar, depending on the browser) from the context menu, and is formally called a data URI.
It is not an HTTP URL pointing to an image, and you cannot use it to retrieve actual images from any server: this is exactly what requests points out in the error message.
So, how do we get these images?
The following script will handle this task:
import requests
from lxml import html
import binascii as ba

i = 0
url = "<Page URL goes here>"  # Ex: http://server/dir/images.html

page = requests.get(url)
struct = html.fromstring(page.text)
images = struct.xpath('//img/@src')

for img in images:
    i += 1
    ext = img.partition('data:image/')[2].split(';')[0]
    with open('newim' + str(i) + '.' + ext, 'wb') as f:
        f.write(ba.a2b_base64(img.partition('base64,')[2]))

print("Done")
To run it you will need to install, along with requests, the lxml library.
Here follows a short description of how the script functions:
First it requests the url from the server and, after it gets the server's response, it stores it in a Response object (page).
Then it utilizes html.fromstring() from lxml to transform the "textified" content of page into a tree structure which can be processed by commands using XPath syntax, like this one: images = struct.xpath('//img/@src').
The result is a list containing the contents of the src attribute of every image in the page. In this case (embedded images) these are the data URIs.
Then, for every image in the list, it first gets the image type (which will be used as the extension of the newim file), using partition() and split(), and stores it in ext. Then it converts the base64-encoded data to binary (using a2b_base64() from the binascii module) and writes the output to the file.
As a small demo, save this HTML code (as, e.g., images.html) somewhere on your server
<h1>Images</h1>
<img src="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAEcAAAAmCAYAAACWGIbgAAACKElEQVR4nO2aPWsUQRiAnznSKYI/IbBJoTkCEhDuuhQnMY2VhdjYbAoLO9N4VfwBlyKFFlYKYqNN9iCFnQeCCHrJFdkFf4IgKcO9FnLJJtzOfs3OGJgHtpi5nZl3H+abUyIieObSch3A/4yXo8HL0eDlaPByNHg5GrwcDV6OhnJyPt0bK6XYGhqMYLiFUir1dNlNDNafpmz8kkskU1gHZPaEUX6pfGKZ9nkOSPe9xJfz6AxmmTWpHn+2nDe8S1doVk5KAqFcqC4Kz9uq05CB+OfL0VRsRM7H3s9MAfFAWnCtVluG4p8/5zyRR/JPHBIPaMH1gqO0AAny/eBt5s/BMqdwd5Z8/XKX0lOQofjtr1bJPgs77BV+f/SB/aYm6BzsyzmMxlM4KV5gxCRuLhwd9uX8PhiXLXL0p/zIMoFlOQnyix/pnO76Ru6Hf/k8DJqLKZursUM+PHbSdSzLiWGHb3bbrM7V6DmOsCxnCfqslS62soyLSceynAC1yGrZUkUm7SawP6xu9trpZJGV6PYNJx3HgZyV++1y2/kOt5aaC0eHfTnBJqd9nhZ+v/OQTSf9xslqFaDu9B6fJS/vYZJjFuDrLBm+eOZmTFFRTu3t/IO99rTPNgCjCReOTvGEs7NXGPFqo1ZLcykcf6W7ESO3dDk3gWauG2vFX+myK/2cf1hF0jd/INCRQV3zhuJXIv5fFln444MGL0eDl6PBy9Hg5WjwcjR4ORr+Aq7+02kTcdF1AAAAAElFTkSuQmCC" />
<br />
<img src="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAFoAAAArCAYAAAD41p9mAAAAzUlEQVR42u3awQ4DIQhFUf7/p9tNt20nHQGl5yUuh4c36BglgoiIiIiIiGiVHq+RGfvdiGG+lxKonGiWd4vvKZNd5V/u2zXRO953c2jx3bGiMrewLt+PgbJA/xJ3RS5dvl9PEdXLduK3baeOrKrc1bcF9MnLP7WqgR4GOjtOl28L6AlHtLSqBhpooIEGGmiggQYaaKCBBhpodx3H3XW4vQN6HugILyztoL0Zhlfw9G4tfR0FfR0VnTw6lQoT0XtXmMxfdJPuALr0x5Pp+wT35KKWb6NaVgAAAABJRU5ErkJggg=="></img>
<br />
<img src="data:image/jpg;base64,/9j/4AAQSkZJRgABAQAAAQABAAD/2wBDAAEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQH/2wBDAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQH/wAARCAAhADYDASIAAhEBAxEB/8QAHAABAQACAgMAAAAAAAAAAAAAAAoICQILBAYH/8QAKxAAAQQDAAICAQIGAwAAAAAABQMEBgcBAggACQoWFRQhERMXNjh2d7W2/8QAGQEAAgMBAAAAAAAAAAAAAAAAAAQBAgMF/8QAIxEAAgMAAgMBAQEAAwAAAAAAAwQBAgUSEwAGERQjIRUiJP/aAAwDAQACEQMRAD8Av48eYVd0Vd2Ba1RphuOuxQHFUyEvnUgktmP+aIn0vIC4IWPWXSi0cBT+dxiFRjUg5xtkwbKRyYEVG2rdENgEumu7dRu/DrvK7+hOs/YbYPQVx2helhf0dokEpOrcncosOV7hxU3sbI8TodlxUyUQEMN3LrYcJSd6jx+HC2jRBLVTbGYzZnR1W8uIle6aDejchJi0mXXzHnxlXoObVkZmESoTByrsCINhiVrgqrZ2X/iOYLRme6pn87PrQccajM9ohStQ9ycb1IEJqOUgAWAmpagbMANJahv38eSO/Kc9k9vcyRrk/irn2yZTU8z68myq9s2BX5JeO2GApUNIY5GtgcQlzTfUhE304PnnWjo0H/llPxcUICP1KQ0wRbPcQfe1LYt6C+lvWH0j67K/ifOsbk2LLg3RlMVIFZV/W/TFc1G5rDcQPuaMxpMeDn0xahLJnbQHYsoFlZuFMGUZC3PbEhzHdCiZhMyiYxKK5+l7I16sm/e0WH/yGejVh9lqs9cL5irjCmbdihGGO+mmSydAoAtoXMEtSsJrUs1qK+vz7KZCvEVvwXIzZYAy3njZ9rPzdbREAlBAkIs2n6pvp/8Akug8eaUfZnzb2d7Oqtq3n3lTpDbjPmC268Xsi+elAbNeWWVNwsgGsvodBwmFxycwM59XlDEgRO3AaezGKCyMe+vRRi5lwsvNourCrUlOdvfHg95vPfLkCujewgV2WdQgwkyiah2MQLpOjLbsQjWoxvP65fviY0VNguzyX6BkHD+Xb15L08FozJizVfLsiwgIrO2rhuDJmsaWkbHy5MO5CMaICHWKNpekQTPDVpZiKSfk20mAmkik0kRMjazRhDxj7KV66IUs8Oq/UX0VQIHqA47rmvE1dLZNgJJqCv5gOmDlutqO1con2rHjx48z8189dl/9pSj/AF03/wBY6869r4R/+Q/sP/40pr/3FgeXb9NPOpk4KsL5VragLCl58fIhBJboK8rFpeNxXDwOq3CG2Olc8736UnOUyi2NysbX2r3G7BDOGkp0cOf4tZZvRr6SPal6eLwuKwTK/Ad7wq+YtFojLwYro/oWv5BF8x2WbGm8rAvHnFMvZyBZiMKSJrrEXv11Aw7djt1pYHTZqbrxg/x9o0WzfxVL64xliOWOAyPHz/YBjDWZ/wBinZoJUszaKqUsaYuev52ug2f6euAVF/VmPYMvSsEf/a9Ekn0CHLaI/wA51GA96L1mzJKi/mG0lXg2APyzo0fjvtk9WlqmlVUq9KRuAx8c4/HqbItTdf8AR2ZBMt8O0ElF3i34abxNXZjphRRvonooglnd5vjb6F85HGdgfrX11xnO2S/VmMYxjOc5zlnz3jGMYx++c5z+2MY/fOfKgPb36oa69rtBxWvTM4IVDcdPTRCy6CuYYI0kf0iYJaIokB8gjKj8T9jiElaNmiZYe2LiSDQmMBG2T/fIlUWTw9v71M9Cez3pzje4fZk351gdO8Tj3Z4PRHNlpWdeDfom1ZK/jxOTF7FlVo0ZQGlaVsi8gMKwlAAsds03JhLiShCk9EaKs3+ySaNTI4uKe8rCzPfd72Bt21azScXeVh/9C44mvc0q+NnLslW9mC3ui5NaqsMFVeI9ZXXd9goOGpb9IWwBIhmKGrq5yj2CARL2jjRZpTVU05cmvWIaWsG/JgaYnNuPIo7FHcK8zC7SkI0FrUnKVOs7ClEjJIig4NODVLHkpSZOFyrhNsNFidBj50QIkXSSDRo2VculUk099tdFvG3Ku/sL9lkv983RAfMN5yraGYrr1swudMHkYMG6ph7U+ht2BPhp39FqBisv+wzmaU21PIIvXcflw2fumolnHYSZkGQvvi4X9ovsXqCPcscS2rzFSPPkgQSK36ZtKxrdi1lWY+YEN1AdbMR9f0fYAgVV4/DVhITzjMl/MTM1swCuhgWPR17pNtC8f+PP8jqTGo9Fb79uQ+waFMmgge4a8e96d5TEdMapckWjadRNaGyun28UkqBuK7ExeI9IVm4ErhxqwKLIMVllNOzV0uv7I/skrTNYY0XoyTtTa6uYbWIyDR2jCHxKe4Vmio514kP5kzabtRGdayWsjjVSHletoY8XK+IWcjTXElA4PoJ5gglXxRdswD48wuBt6liRS5AKZpr/AJb6YD3wMH7AqwZFBb1oSGEmjZ+OIsHKLxg/YPEdHDR6ydt91G7po6bqJrtnKCiiK6KmiqW+2m2u2XnNo0asGjViybpNGTJui0aNW+mqSDZq2T1Rbt0UtMY0TSRS00TT01xjXTTXXXXGMYx48XJw7L9XPq526+yYknD7PDnNYis34/OU1iI+/fkfPGBdnWPu4d3Cnb1TaR9nGOzrm8RaacvvCbRFuPz7H3755Hjx48p5fx48ePDw8ePHjw8PHjx48PDz/9k="/>
and point to it in the script: requests.get("http://yourserver/somedir/images.html").
When you run the script you will get 3 images, respectively named newim1.png, newim2.png and newim3.jpg.
As a reminder, do note that this script (in its current form) will only handle embedded images. If you want to also process ordinary linked images, you have to modify it accordingly (but this is not difficult; see the sketch below).
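For completeness, a possible modification (a sketch only, not part of the original script; the URL handling and file naming are my own assumptions) that decodes embedded images and downloads ordinary linked ones:
import binascii as ba
from urllib.parse import urljoin

import requests
from lxml import html

page_url = "http://server/dir/images.html"  # hypothetical page URL
page = requests.get(page_url)
struct = html.fromstring(page.text)

for i, src in enumerate(struct.xpath('//img/@src'), start=1):
    if src.startswith('data:image/'):
        # embedded image: decode the base64 payload
        ext = src.partition('data:image/')[2].split(';')[0]
        data = ba.a2b_base64(src.partition('base64,')[2])
    else:
        # ordinary linked image: resolve relative URLs and download the bytes
        ext = src.rsplit('.', 1)[-1] if '.' in src else 'bin'
        data = requests.get(urljoin(page_url, src)).content
    with open('newim' + str(i) + '.' + ext, 'wb') as f:
        f.write(data)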
This is an image encoded in base64. Quoting the article below: "base64 equals to text (string) representation of the image itself".
Read this for a detailed explanation:
http://www.stoimen.com/blog/2009/04/23/when-you-should-use-base64-for-images/
In order to use them you'll have to implement a base64 decoder. Luckily SO already provides you with the answer on how to do it:
Python base64 data decode
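For this particular case, a minimal sketch (my own, assuming the data URI from the question is stored in url) using the standard base64 module looks like this:
import base64

url = 'data:image/png;base64,iVBORw0KGgo...'  # truncated; use the full data URI here

header, _, encoded = url.partition('base64,')
ext = header.partition('data:image/')[2].rstrip(';')  # e.g. 'png'

with open('newim.' + ext, 'wb') as f:
    f.write(base64.b64decode(encoded))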
I have been having a persistent problem getting an RSS feed from a particular website. I wound up writing a rather ugly procedure to perform this function, but I am curious why this happens and whether any higher-level interfaces handle this problem properly. This problem isn't really a show stopper, since I don't need to retrieve the feed very often.
I have read about a solution that traps the exception and returns the partial content, yet since the incomplete reads differ in the number of bytes that are actually retrieved, I have no certainty that such a solution will actually work.
#!/usr/bin/env python
import os
import sys
import feedparser
from mechanize import Browser
import requests
import urllib2
from httplib import IncompleteRead

url = 'http://hattiesburg.legistar.com/Feed.ashx?M=Calendar&ID=543375&GUID=83d4a09c-6b40-4300-a04b-f88884048d49&Mode=2013&Title=City+of+Hattiesburg%2c+MS+-+Calendar+(2013)'

content = feedparser.parse(url)
if 'bozo_exception' in content:
    print content['bozo_exception']
else:
    print "Success!!"
    sys.exit(0)

print "If you see this, please tell me what happened."

# try using mechanize
b = Browser()
r = b.open(url)
try:
    r.read()
except IncompleteRead, e:
    print "IncompleteRead using mechanize", e

# try using urllib2
r = urllib2.urlopen(url)
try:
    r.read()
except IncompleteRead, e:
    print "IncompleteRead using urllib2", e

# try using requests
try:
    r = requests.request('GET', url)
except IncompleteRead, e:
    print "IncompleteRead using requests", e

# this function is old and I categorized it as ...
# "at least it works darnnit!", but I would really like to
# learn what's happening. Please help me put this function into
# eternal rest.
def get_rss_feed(url):
    response = urllib2.urlopen(url)
    read_it = True
    content = ''
    while read_it:
        try:
            content += response.read(1)
        except IncompleteRead:
            read_it = False
    return content, response.info()

content, info = get_rss_feed(url)

feed = feedparser.parse(content)
As already stated, this isn't a mission-critical problem, but a curiosity: even though I can expect urllib2 to have this problem, I am surprised that this error is encountered in mechanize and requests as well. The feedparser module doesn't even throw an error, so checking for errors depends on the presence of a 'bozo_exception' key.
Edit: I just wanted to mention that both wget and curl perform the function flawlessly, retrieving the full payload correctly every time. I have yet to find a pure Python method that works, except for my ugly hack, and I am very curious to know what is happening on the backend of httplib. On a lark, I decided to also try this with twill the other day and got the same httplib error.
P.S. There is one thing that also strikes me as very odd. The IncompleteRead happens consistently at one of two breakpoints in the payload. It seems that feedparser and requests fail after reading 926 bytes, yet mechanize and urllib2 fail after reading 1854 bytes. This behavior is consistent, and I am left without explanation or understanding.
At the end of the day, all of the other modules (feedparser, mechanize, and urllib2) call httplib which is where the exception is being thrown.
Now, first things first, I also downloaded this with wget and the resulting file was 1854 bytes. Next, I tried with urllib2:
>>> import urllib2
>>> url = 'http://hattiesburg.legistar.com/Feed.ashx?M=Calendar&ID=543375&GUID=83d4a09c-6b40-4300-a04b-f88884048d49&Mode=2013&Title=City+of+Hattiesburg%2c+MS+-+Calendar+(2013)'
>>> f = urllib2.urlopen(url)
>>> f.headers.headers
['Cache-Control: private\r\n',
'Content-Type: text/xml; charset=utf-8\r\n',
'Server: Microsoft-IIS/7.5\r\n',
'X-AspNet-Version: 4.0.30319\r\n',
'X-Powered-By: ASP.NET\r\n',
'Date: Mon, 07 Jan 2013 23:21:51 GMT\r\n',
'Via: 1.1 BC1-ACLD\r\n',
'Transfer-Encoding: chunked\r\n',
'Connection: close\r\n']
>>> f.read()
< Full traceback cut >
IncompleteRead: IncompleteRead(1854 bytes read)
So it is reading all 1854 bytes but then thinks there is more to come. If we explicitly tell it to read only 1854 bytes it works:
>>> f = urllib2.urlopen(url)
>>> f.read(1854)
'\xef\xbb\xbf<?xml version="1.0" encoding="utf-8"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">...snip...</rss>'
Obviously, this is only useful if we always know the exact length ahead of time. We can use the fact that the partial read is returned as an attribute on the exception to capture the entire contents:
>>> try:
...     contents = f.read()
... except httplib.IncompleteRead as e:
...     contents = e.partial
...
>>> print contents
'\xef\xbb\xbf<?xml version="1.0" encoding="utf-8"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">...snip...</rss>'
This blog post suggests this is a fault of the server, and describes how to monkey-patch the httplib.HTTPResponse.read() method with the try..except block above to handle things behind the scenes:
import httplib

def patch_http_response_read(func):
    def inner(*args):
        try:
            return func(*args)
        except httplib.IncompleteRead, e:
            return e.partial
    return inner

httplib.HTTPResponse.read = patch_http_response_read(httplib.HTTPResponse.read)
I applied the patch and then feedparser worked:
>>> import feedparser
>>> url = 'http://hattiesburg.legistar.com/Feed.ashx?M=Calendar&ID=543375&GUID=83d4a09c-6b40-4300-a04b-f88884048d49&Mode=2013&Title=City+of+Hattiesburg%2c+MS+-+Calendar+(2013)'
>>> feedparser.parse(url)
{'bozo': 0,
'encoding': 'utf-8',
'entries': ...
'status': 200,
'version': 'rss20'}
This isn't the nicest way of doing things, but it seems to work. I'm not expert enough in the HTTP protocols to say for sure whether the server is doing things wrong, or whether httplib is mis-handling an edge case.
I found out that in my case sending an HTTP/1.0 request fixes the problem, just by adding this to the code:
import httplib
httplib.HTTPConnection._http_vsn = 10
httplib.HTTPConnection._http_vsn_str = 'HTTP/1.0'
After that I do the request:
req = urllib2.Request(url, post, headers)
filedescriptor = urllib2.urlopen(req)
img = filedescriptor.read()
Afterwards I switch back to HTTP 1.1 with (for connections that support 1.1):
httplib.HTTPConnection._http_vsn = 11
httplib.HTTPConnection._http_vsn_str = 'HTTP/1.1'
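A way to keep that downgrade scoped to a single request (a sketch of my own using the same private httplib attributes as above, so the usual caveats about touching private state apply):
import contextlib
import httplib  # http.client in Python 3
import urllib2

@contextlib.contextmanager
def force_http_1_0():
    # temporarily switch to HTTP/1.0, then restore the previous protocol version
    old_vsn = httplib.HTTPConnection._http_vsn
    old_str = httplib.HTTPConnection._http_vsn_str
    httplib.HTTPConnection._http_vsn = 10
    httplib.HTTPConnection._http_vsn_str = 'HTTP/1.0'
    try:
        yield
    finally:
        httplib.HTTPConnection._http_vsn = old_vsn
        httplib.HTTPConnection._http_vsn_str = old_str

with force_http_1_0():
    img = urllib2.urlopen('http://example.com/image.jpg').read()  # hypothetical URL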
I fixed the issue by using HTTPS instead of HTTP and it's working fine. No code change was required.
I have to load some URLs with Cyrillic symbols. My script should work with this:
http://wincode.org/%D0%BF%D1%80%D0%BE%D0%B3%D1%80%D0%B0%D0%BC%D0%BC%D0%B8%D1%80%D0%BE%D0%B2%D0%B0%D0%BD%D0%B8%D0%B5/
If I use this in a browser it is displayed with normal symbols, but the urllib code fails with a 404 error. How do I decode this URL correctly?
When I use that URL directly in the code, like address = 'that address', it works perfectly. But I got this URL by parsing a page. I have a list of URLs which contain Cyrillic. Maybe they have incorrect encoding? Here is more code:
requestData = urllib2.Request( %SOME_ADDRESS%, None, {"User-Agent": user_agent})
requestHandler = pageHandler.open(requestData)
pageData = requestHandler.read().decode('utf-8')
soupHandler = BeautifulSoup(pageData)
topicLinks = []
for postBlock in soupHandler.findAll('a', href=re.compile('%SOME_REGEXP%')):
    topicLinks.append(postBlock['href'])
postAddress = choice(topicLinks)
postRequestData = urllib2.Request(postAddress, None, {"User-Agent": user_agent})
postHandler = pageHandler.open(postRequestData)
postData = postHandler.read()
File "/usr/lib/python2.6/urllib2.py", line 518, in http_error_default
raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
urllib2.HTTPError: HTTP Error 404: Not Found
I have a list of URLs which contain Cyrillic.
OK, if it contains raw (not %-encoded) Cyrillic characters, then it's not like the example, and in fact it isn't a URL at all.
An address with non-ASCII characters in it is known as an IRI. IRIs shouldn't be used in an HTML link, but browsers tend to fix up these mistakes.
To convert an IRI to a URI which you can then open with urllib, you have to:
encode non-ASCII characters in the hostname part using Punycode (IDNA).
encode non-ASCII characters in rest of the IRI to UTF-8 bytes and URL-encode them (resulting in %D0%BF... like in the example URL).
An example implementation is sketched below.
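Here is a minimal sketch of those two steps in Python 3 (my own illustration, not the implementation originally linked; the function name and the simplified handling of ports, credentials and query strings are assumptions):
from urllib.parse import urlsplit, urlunsplit, quote
import urllib.request

def iri_to_uri(iri):
    # convert an address with non-ASCII characters into an ASCII-only URI
    parts = urlsplit(iri)
    host = parts.hostname.encode('idna').decode('ascii') if parts.hostname else ''
    netloc = host + (':' + str(parts.port) if parts.port else '')
    # UTF-8 encode the path and query, then percent-encode the bytes
    path = quote(parts.path.encode('utf-8'), safe='/%')
    query = quote(parts.query.encode('utf-8'), safe='=&%')
    return urlunsplit((parts.scheme, netloc, path, query, parts.fragment))

uri = iri_to_uri('http://wincode.org/программирование/')
print(uri)  # http://wincode.org/%D0%BF%D1%80%D0%BE%D0%B3%D1%80%D0%B0%D0%BC%D0%BC%D0%B8%D1%80%D0%BE%D0%B2%D0%B0%D0%BD%D0%B8%D0%B5/
urllib.request.urlopen(uri)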
You can try to use the urllib.unquote method.
>>> import urllib
>>> string = urllib.unquote("http://wincode.org/%D0%BF%D1%80%D0%BE%D0%B3%D1%80%D0%B0%D0%BC%D0%BC%D0%B8%D1%80%D0%BE%D0%B2%D0%B0%D0%BD%D0%B8%D0%B5/")
>>> print string.decode("utf-8")
http://wincode.org/программирование/
The following code worked for me (modified from Arseny's code above):
import urllib.parse
string='http://wincode.org/%D0%BF%D1%80%D0%BE%D0%B3%D1%80%D0%B0%D0%BC%D0%BC%D0%B8%D1%80%D0%BE%D0%B2%D0%B0%D0%BD%D0%B8%D0%B5/'
string = urllib.parse.unquote(string,encoding='utf-8') # http://wincode.org/программирование/