My error message when running my Python scripts on a Raspberry Pi
Traceback (most recent call last):
File "test.py", line 6, in (module)
import appengineauth
File "/home/pi/Downloads/google_appengine/appengineauth.py", line 30, in (module)
auth_resp = urllib2.urlopen(auth_req)
File "/usr/lib/python2.7/urllib2.py", line 154, in urlopen
return opener.open(url, data, timeout)
File "/usr/lib/python2.7/urllib2.py", line 437, in open
response = meth(req, response)
File "/usr/lib/python2.7/urllib2.py", line 550, in http_response
'http', request, response, code, msg, hdrs)
File "/usr/lib/python2.7/urllib2.py", line 475, in error
return self._call_chain(*args)
File "/usr/lib/python2.7/urllib2.py", line 409, in _call_chain
result = func(*args)
File "/usr/lib/python2.7/urllib2.py", line 558, in http_error_default
raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
urllib2.HTTPError: HTTP Error 404: Not Found
I'm able to access the website in a browser, so I'm not sure what the actual problem is.
If you're using https://github.com/adafruit/Tweet-a-Watt/blob/master/appengineauth.py (you don't tell us where you got your appengineauth.py from, thus forcing us to guess), and its line
auth_uri = 'https://www.google.com/accounts/ClientLogin'
then you're likely running into the deprecation documented at https://developers.google.com/identity/protocols/AuthForInstalledApps , and I quote:
Important: ClientLogin has been officially deprecated since April 20, 2012 and is now no longer available. Requests to ClientLogin will fail with a HTTP 404 response. We encourage you to migrate to OAuth 2.0 as soon as possible.
I.e., the 404 you're getting would then be exactly the symptom the warning tells you about, now that ClientLogin has been removed, more than 3.5 years after the original deprecation warning.
Not sure how best to connect your Raspberry Pi to App Engine (or any other Google service requiring authentication) with OAuth 2.0 (since ClientLogin is not an option any more). http://guy.carpenter.id.au/gaugette/2012/11/06/using-google-oauth2-for-devices/ (written shortly after the deprecation but smartly avoiding reliance on the already-deprecated ClientLogin service) recommends an "OAuth2 for Devices" library and summarizes how to use it; I haven't tried that library myself (and I don't have a Raspberry Pi to try it on) but it does seem like a potentially fruitful avenue for you to explore.
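For concreteness, here's a minimal sketch of that device flow using the requests library. The endpoints, parameter names, and the client_id/client_secret placeholders are assumptions taken from Google's OAuth 2.0 for devices documentation (not from the Tweet-a-Watt code), so double-check them against the current docs before relying on this:
import time
import requests

# Hypothetical placeholders; create real credentials in the Google
# Developers Console. Endpoints and parameters follow Google's
# "OAuth 2.0 for devices" docs and may change over time.
CLIENT_ID = 'YOUR_CLIENT_ID'
CLIENT_SECRET = 'YOUR_CLIENT_SECRET'

# Step 1: ask Google for a device code and a short user code.
dev = requests.post('https://accounts.google.com/o/oauth2/device/code',
                    data={'client_id': CLIENT_ID, 'scope': 'email'}).json()
print('Visit %s and enter the code %s' % (dev['verification_url'],
                                          dev['user_code']))

# Step 2: poll the token endpoint until the user has approved the device.
while True:
    time.sleep(dev['interval'])
    tok = requests.post('https://www.googleapis.com/oauth2/v4/token',
                        data={'client_id': CLIENT_ID,
                              'client_secret': CLIENT_SECRET,
                              'code': dev['device_code'],
                              'grant_type': 'urn:ietf:params:oauth:grant-type:device_code'}).json()
    if 'access_token' in tok:
        print('Access token: ' + tok['access_token'])
        break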
So I'm trying to fetch a page by URL in Python 3...
If I do the following,
from urllib.request import urlopen
html = urlopen("http://google.com/")
html.read()
I get the html as desired.
However, if I were to choose a different url, as in the following,
from urllib.request import urlopen
html = urlopen("http://www.stackoverflow.com/")
html.read()
I get the following error after the second line:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/urllib/request.py", line 153, in urlopen
return opener.open(url, data, timeout)
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/urllib/request.py", line 461, in open
response = meth(req, response)
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/urllib/request.py", line 574, in http_response
'http', request, response, code, msg, hdrs)
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/urllib/request.py", line 499, in error
return self._call_chain(*args)
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/urllib/request.py", line 433, in _call_chain
result = func(*args)
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/urllib/request.py", line 582, in http_error_default
raise HTTPError(req.full_url, code, msg, hdrs, fp)
urllib.error.HTTPError: HTTP Error 403: Forbidden
Any ideas why this would be happening and how to fix it?
If you look closer at the error message, you'll see that it is an HTTP error, and a specific one:
HTTP Error 403: Forbidden
So you talked to the server and got your response back, but you don't know why you were denied.
You can usually get a more detailed message in the HTML returned by the server with something like this:
from urllib.request import urlopen
from urllib.error import HTTPError

try:
    html = urlopen("http://www.stackoverflow.com/")
except HTTPError as e:
    # an HTTPError can be read just like the response urlopen() returns
    print(e.read().decode('utf-8'))
else:
    html.read()
For me it says:
<h2 data-translate="what_happened">What happened?</h2>
<p>The owner of this website (www.stackoverflow.com) has banned your access based on your browser's signature (213702c58d2116a6-ua48).</p>
You can treat HTTPError as a file object (https://docs.python.org/3/library/urllib.error.html#urllib.error.HTTPError):
Though being an exception (a subclass of URLError), an HTTPError can also function as a non-exceptional file-like return value (the same thing that urlopen() returns). This is useful when handling exotic HTTP errors, such as requests for authentication.
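In this particular case the server's message says the ban is based on the client's signature, and a very common trigger is urllib's default User-Agent string. A minimal sketch, assuming the block is User-Agent based (the header value here is just an example):
from urllib.request import Request, urlopen

# Send a browser-like User-Agent; many sites reject Python's default one.
req = Request("http://www.stackoverflow.com/",
              headers={'User-Agent': 'Mozilla/5.0'})
html = urlopen(req).read()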
I tried to google and searched for similar questions on Stack Overflow, but I still can't solve my problem.
I need my Python script to perform HTTP connections via a proxy.
Below is my test script:
import urllib2

# placeholder address; the real proxy's IP and port go here
proxy = urllib2.ProxyHandler({'http': 'http://255.255.255.255:3128'})
opener = urllib2.build_opener(proxy, urllib2.HTTPHandler)
urllib2.install_opener(opener)

conn = urllib2.urlopen('http://www.whatismyip.com/')
return_str = conn.read()

webpage = open('webpage.html', 'w')
webpage.write(return_str)
webpage.close()
This script works absolutely fine on my local computer (Windows 7, Python 2.7.3), but when I try to run it on the server, it gives me the following error:
Traceback (most recent call last):
File "proxy_auth.py", line 18, in <module>
conn = urllib2.urlopen('http://www.whatismyip.com/')
File "/home/myusername/python/lib/python2.7/urllib2.py", line 126, in urlopen
return _opener.open(url, data, timeout)
File "/home/myusername/python/lib/python2.7/urllib2.py", line 400, in open
response = self._open(req, data)
File "/home/myusername/python/lib/python2.7/urllib2.py", line 418, in _open
'_open', req)
File "/home/myusername/python/lib/python2.7/urllib2.py", line 378, in _call_chai n
result = func(*args)
File "/home/myusername/python/lib/python2.7/urllib2.py", line 1207, in http_open
return self.do_open(httplib.HTTPConnection, req)
File "/home/myusername/python/lib/python2.7/urllib2.py", line 1177, in do_open
raise URLError(err)
urllib2.URLError: <urlopen error [Errno 110] Connection timed out>
I also tried to use the requests library and got the same error.
# testing request library
r = requests.get('http://www.whatismyip.com/', proxies={'http':'http://255.255.255.255:3128'})
If I don't set proxy, then the program works fine.
# this works fine
conn = urllib2.urlopen('http://www.whatismyip.com/')
I think the problem is that on my shared hosting account it is not possible to set an environment variable for the proxy ... or something like that.
Are there any workarounds or alternative approaches that would let me set proxies for http connections? How should I modify my test script?
The problem turned out to be closed ports.
I had to buy a dedicated IP before tech support could open the ports I needed.
Now my script works fine.
Conclusion: when you are on shared hosting, most ports are probably closed, and you will have to contact tech support to open the ones you need.
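If you suspect the same situation, a plain socket connect is a quick way to test whether the proxy port is reachable at all before contacting tech support (a sketch; substitute your proxy's real address for the placeholder):
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.settimeout(5)
try:
    # placeholder address, as in the question; use your proxy's IP
    s.connect(('255.255.255.255', 3128))
    print('port is reachable')
except socket.error as e:
    print('port is blocked or unreachable: %s' % e)
finally:
    s.close()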
I want to open and read https://yande.re/ with urllib.request, but I'm getting an SSL error. I can open and read the page just fine using http.client with this code:
import http.client
conn = http.client.HTTPSConnection('www.yande.re')
conn.request('GET', 'https://yande.re/')
resp = conn.getresponse()
data = resp.read()
However, the following code using urllib.request fails:
import urllib.request
opener = urllib.request.build_opener()
resp = opener.open('https://yande.re/')
data = resp.read()
It gives me the following error: ssl.SSLError: [Errno 1] _ssl.c:392: error:1411809D:SSL routines:SSL_CHECK_SERVERHELLO_TLSEXT:tls invalid ecpointformat list. Why can I open the page with HTTPSConnection but not opener.open?
Edit: Here's my OpenSSL version and the traceback from trying to open https://yande.re/
>>> import ssl; ssl.OPENSSL_VERSION
'OpenSSL 1.0.0a 1 Jun 2010'
>>> import urllib.request
>>> urllib.request.urlopen('https://yande.re/')
Traceback (most recent call last):
File "<pyshell#3>", line 1, in <module>
urllib.request.urlopen('https://yande.re/')
File "C:\Python32\lib\urllib\request.py", line 138, in urlopen
return opener.open(url, data, timeout)
File "C:\Python32\lib\urllib\request.py", line 369, in open
response = self._open(req, data)
File "C:\Python32\lib\urllib\request.py", line 387, in _open
'_open', req)
File "C:\Python32\lib\urllib\request.py", line 347, in _call_chain
result = func(*args)
File "C:\Python32\lib\urllib\request.py", line 1171, in https_open
context=self._context, check_hostname=self._check_hostname)
File "C:\Python32\lib\urllib\request.py", line 1138, in do_open
raise URLError(err)
urllib.error.URLError: <urlopen error [Errno 1] _ssl.c:392: error:1411809D:SSL routines:SSL_CHECK_SERVERHELLO_TLSEXT:tls invalid ecpointformat list>
>>>
What a coincidence! I'm having the same problem as you are, with an added complication: I'm behind a proxy. I found this bug report regarding https-not-working-with-urllib. Luckily, they posted a workaround.
import urllib.request
import ssl

# Force SSLv3 so the broken TLS extension negotiation is skipped
https_sslv3_handler = urllib.request.HTTPSHandler(
    context=ssl.SSLContext(ssl.PROTOCOL_SSLv3))

## Uncomment this block if you're behind a proxy.
## The https port is 443, but that didn't work for me; port 80 did.
##proxy_auth = '{0}://{1}:{2}@{3}'.format('https', 'username', 'password',
##                                        'proxy:80')
##proxies = {'https': proxy_auth}
##proxy = urllib.request.ProxyHandler(proxies)
##proxy_auth_handler = urllib.request.HTTPBasicAuthHandler()
##opener = urllib.request.build_opener(proxy, proxy_auth_handler,
##                                     https_sslv3_handler)

opener = urllib.request.build_opener(https_sslv3_handler)
urllib.request.install_opener(opener)

resp = opener.open('https://yande.re/')
data = resp.read().decode('utf-8')
print(data)
Btw, thanks for showing how to use http.client. I didn't know that there's another library that can be used to connect to the internet. ;)
This is due to a bug in the early 1.x OpenSSL implementation of elliptic curve cryptography. Take a closer look at the relevant part of the exception:
_ssl.c:392: error:1411809D:SSL routines:SSL_CHECK_SERVERHELLO_TLSEXT:tls invalid ecpointformat list
This is an error from the underlying OpenSSL library code, a result of mishandling the EC point format TLS extension. One workaround is to use the SSLv3 method instead of SSLv23; the other is to use a cipher suite specification that disables all ECC cipher suites (I had good results with ALL:-ECDH; use openssl ciphers for testing). The real fix is to update OpenSSL.
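For example, here is a minimal sketch of the cipher-list workaround, assuming a Python build (like the 3.2 one in the traceback above) whose HTTPSHandler accepts a context argument:
import ssl
import urllib.request

# Exclude all ECC suites so the buggy EC point format extension is
# never negotiated; 'ALL:-ECDH' is the specification mentioned above.
ctx = ssl.SSLContext(ssl.PROTOCOL_SSLv23)
ctx.set_ciphers('ALL:-ECDH')
opener = urllib.request.build_opener(urllib.request.HTTPSHandler(context=ctx))
data = opener.open('https://yande.re/').read()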
The problem is due to the hostnames that you're giving in the two examples:
import http.client
conn = http.client.HTTPSConnection('www.yande.re')
conn.request('GET', 'https://yande.re/')
and...
import urllib.request
urllib.request.urlopen('https://yande.re/')
Note that in the first example you're asking the client to make a connection to the host www.yande.re, while in the second example urllib will first parse the URL 'https://yande.re/' and then make its request to the host yande.re.
Although www.yande.re and yande.re may resolve to the same IP address, from the perspective of the web server these are different virtual hosts. My guess is that you had an SNI configuration problem on your web server's side. Seeing that the original question was posted on May 21 and the current cert at yande.re starts May 28, I'm thinking you already fixed this problem?
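An easy way to test that theory is to point urllib at the exact hostname that worked with http.client; if the virtual hosts really are the difference, this should succeed where 'https://yande.re/' fails:
import urllib.request

# Same virtual host as the working http.client example
data = urllib.request.urlopen('https://www.yande.re/').read()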
Try this:
import urllib.request
import urllib.error

url = 'http://www.google.com/'
try:
    webpage = urllib.request.urlopen(url)  # open the page
    print(webpage.read())                  # and receive its contents
except urllib.error.URLError:
    print('This webpage is not available!')
I currently get a Network is unreachable error for any request I make with Python, no matter if I'm using the urllib library or the requests library.
After some more research, it's likely being caused by an incorrectly set up IPv6 tunnel, which still seems to be active:
$ ip -6 addr show
$ ip -6 route
default dev wlan0 metric 1
Some context: I'm running Arch Linux and updated the system today, although there didn't seem to be any special updates related to Python. I'm also running this under a virtualenv, but other virtualenvs, as well as the Python outside the virtualenv, have the same problem.
I'm using a VPN, but I get the same error without the VPN too.
I also tried restarting the PC, which normally helps with any problem, but that didn't help either.
I've got a feeling it may be Arch Linux related, but I'm not sure.
This is what I tried earlier to set up an IPv6 tunnel:
sudo modprobe ipv6
sudo ip tunnel del sit1
sudo ip tunnel add sit1 mode sit remote 59.66.4.50 local $ipv4
sudo ifconfig sit1 down
sudo ifconfig sit1 up
sudo ifconfig sit1 add 2001:da8:200:900e:0:5efe:$ipv4/64
sudo ip route add ::/0 via 2001:da8:200:900e::1 metric 1
I also used this command:
ip -6 addr add 2001:0db8:0:f101::1/64 dev eth0
Update 3: after removing the IPv6 line from my /etc/sysctl.conf, some URLs started working:
>>> urllib2.urlopen('http://www.google.com')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python2.7/urllib2.py", line 126, in urlopen
return _opener.open(url, data, timeout)
File "/usr/lib/python2.7/urllib2.py", line 400, in open
response = self._open(req, data)
File "/usr/lib/python2.7/urllib2.py", line 418, in _open
'_open', req)
File "/usr/lib/python2.7/urllib2.py", line 378, in _call_chain
result = func(*args)
File "/usr/lib/python2.7/urllib2.py", line 1207, in http_open
return self.do_open(httplib.HTTPConnection, req)
File "/usr/lib/python2.7/urllib2.py", line 1177, in do_open
raise URLError(err)
urllib2.URLError: <urlopen error [Errno 101] Network is unreachable>
>>> urllib2.urlopen('http://baidu.com')
<addinfourl at 27000560 whose fp = <socket._fileobject object at 0x7f4d1fed5e50>>
This is the error log from IPython:
In [1]: import urllib2
In [2]: urllib2.urlopen('http://google.com')
---------------------------------------------------------------------------
URLError Traceback (most recent call last)
/home/samos/<ipython console> in <module>()
/usr/lib/python2.7/urllib2.py in urlopen(url, data, timeout)
124 if _opener is None:
125 _opener = build_opener()
--> 126 return _opener.open(url, data, timeout)
127
128 def install_opener(opener):
/usr/lib/python2.7/urllib2.py in open(self, fullurl, data, timeout)
398 req = meth(req)
399
--> 400 response = self._open(req, data)
401
402 # post-process response
/usr/lib/python2.7/urllib2.py in _open(self, req, data)
416 protocol = req.get_type()
417 result = self._call_chain(self.handle_open, protocol, protocol +
--> 418 '_open', req)
419 if result:
420 return result
/usr/lib/python2.7/urllib2.py in _call_chain(self, chain, kind, meth_name, *args)
376 func = getattr(handler, meth_name)
377
--> 378 result = func(*args)
379 if result is not None:
380 return result
/usr/lib/python2.7/urllib2.py in http_open(self, req)
1205
1206 def http_open(self, req):
-> 1207 return self.do_open(httplib.HTTPConnection, req)
1208
1209 http_request = AbstractHTTPHandler.do_request_
/usr/lib/python2.7/urllib2.py in do_open(self, http_class, req)
1175 except socket.error, err: # XXX what error?
1176 h.close()
-> 1177 raise URLError(err)
1178 else:
1179 try:
URLError: <urlopen error [Errno 101] Network is unreachable>
I can access google.com normally from a web browser, and I'm pretty sure the network is reachable.
Are you sure you are not using an HTTP proxy server to get to the internet?
Try changing the network settings in your browser to no-proxy and check whether it still connects to the internet.
If you are using a proxy (assume the proxy address is http://yourproxy.com), then try this to check whether it solves the issue:
import urllib2
proxy = urllib2.ProxyHandler({'http': 'yourproxy.com'})
opener = urllib2.build_opener(proxy)
urllib2.install_opener(opener)
urllib2.urlopen('http://www.google.com')
The big thing was my /etc/hosts file: after putting back a backup of it, everything started working again. I had edited it to bypass the Great Firewall by manually setting the IPv6 addresses of facebook etc., so it was still using those IPv6 addresses...
Lesson learned: don't work too much and don't do things in a hurry. Always try to understand what you're doing, and write down exactly what you did, so you have a way to fall back.
Removing the following line from /etc/sysctl.conf seemed to help a little bit:
net.ipv6.conf.all.forwarding = 1
It didn't help in the end; I'm still getting the Network is unreachable error for google.com, although it's accessible in my browser:
>>> urllib2.urlopen('http://baidu.com')
<addinfourl at 27000560 whose fp = <socket._fileobject object at 0x7f4d1fed5e50>>
>>> urllib2.urlopen('http://www.google.com')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python2.7/urllib2.py", line 126, in urlopen
return _opener.open(url, data, timeout)
File "/usr/lib/python2.7/urllib2.py", line 400, in open
response = self._open(req, data)
File "/usr/lib/python2.7/urllib2.py", line 418, in _open
'_open', req)
File "/usr/lib/python2.7/urllib2.py", line 378, in _call_chain
result = func(*args)
File "/usr/lib/python2.7/urllib2.py", line 1207, in http_open
return self.do_open(httplib.HTTPConnection, req)
File "/usr/lib/python2.7/urllib2.py", line 1177, in do_open
raise URLError(err)
urllib2.URLError: <urlopen error [Errno 101] Network is unreachable>
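If broken IPv6 routing is indeed the culprit, as the /etc/hosts discovery above suggests, one blunt, process-wide workaround is to filter IPv6 out of name resolution so urllib2 only ever tries IPv4. A sketch for Python 2 (untested on this particular setup):
import socket
import urllib2

_orig_getaddrinfo = socket.getaddrinfo

def _ipv4_only_getaddrinfo(*args, **kwargs):
    # Keep only IPv4 results so urllib2 never attempts an IPv6 route
    return [ai for ai in _orig_getaddrinfo(*args, **kwargs)
            if ai[0] == socket.AF_INET]

socket.getaddrinfo = _ipv4_only_getaddrinfo
print(urllib2.urlopen('http://www.google.com').read()[:100])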