I'm doing this:
import requests
r = requests.get("http://non-existent-domain.test")
And getting:
ConnectionError: HTTPConnectionPool(host='non-existent-domain.test', port=80): Max retries exceeded with url: / (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x10b0170f0>: Failed to establish a new connection: [Errno 8] nodename nor servname provided, or not known',))
However, if I try to catch it like this:
try:
    r = requests.get("http://non-existent-domain.test")
except ConnectionError:
    print("ConnectionError")
Nothing changes; the ConnectionError is still unhandled. How do I catch it properly?
That's a different ConnectionError. You are catching the built-in one, but requests has its own (requests.exceptions.ConnectionError, also exposed as requests.ConnectionError). So this should be:
try:
    r = requests.get("http://non-existent-domain.test")
except requests.ConnectionError:
    print("ConnectionError")
# Output: ConnectionError
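If you want a broader safety net, requests documents requests.exceptions.RequestException as the base class of everything the library raises, so you can catch the specific error first and fall back to the base class. A minimal sketch:
import requests

try:
    r = requests.get("http://non-existent-domain.test")
except requests.ConnectionError:
    # DNS failures, refused connections, etc.
    print("ConnectionError")
except requests.exceptions.RequestException as e:
    # base class of every exception requests raises (Timeout, HTTPError, ...)
    print("Other requests error:", e)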
My code:
import requests
url = 'https://hacker-news.firebaseio.com/v0/topstories.json'
r = requests.get(url)
print('Status code:', r.status_code)
and the error:
requests.exceptions.SSLError: HTTPSConnectionPool(host='hacker-news.firebaseio.com', port=443): Max retries exceeded with url: /v0/topstories.json (Caused by SSLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:1125)')))
What's the problem?
There might be a larger network problem at play here causing this SSL error, but to work around it quickly, do the following:
requests.get(url, verify=False)
This bypasses SSL certificate verification entirely, so treat it as a temporary workaround rather than a fix.
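If you do go this route, here is a minimal sketch that also silences the InsecureRequestWarning that urllib3 emits for every unverified request (assuming the standalone urllib3 package, which modern requests depends on):
import requests
import urllib3

# verify=False makes urllib3 warn on every call; silence the warning
# explicitly so the workaround stays deliberate and visible in the code.
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

url = 'https://hacker-news.firebaseio.com/v0/topstories.json'
r = requests.get(url, verify=False)  # certificate verification skipped
print('Status code:', r.status_code)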
I have a little Python script that runs in a loop.
Every 30 seconds it fetches a URL with requests to check whether the content of the page has changed.
But sometimes (about once a day) I get a script error:
ConnectionError: HTTPSConnectionPool(host='www.example.com', port=443): Max retries exceeded with url: /test/ (Caused by NewConnectionError(<urllib3.connection.VerifiedHTTPSConnection object at 0x.......>: [Errno -3] Temporary failure in name resolution))
What is the best way to intercept the exception and, if it occurs, wait another 30 seconds and continue the script instead of stopping it?
Is the exception to catch ConnectionError or NewConnectionError?
Put a try/except around your code, like this:
import requests
from urllib3.exceptions import NewConnectionError

try:
    ...  # your code here
except requests.exceptions.ConnectionError:
    # requests normally wraps NewConnectionError inside its own
    # ConnectionError, so in practice this is the clause that fires
    pass
except NewConnectionError:
    pass
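To answer the "wait another 30 seconds and continue" part directly, a minimal sketch of such a loop (the URL is a placeholder taken from the error message; catching requests' ConnectionError is enough, because requests wraps urllib3's NewConnectionError inside it, as your traceback shows):
import time
import requests

url = 'https://www.example.com/test/'  # placeholder for the page you poll

while True:
    try:
        r = requests.get(url, timeout=10)
        # ... compare r.text with the previously seen content here ...
    except requests.exceptions.ConnectionError:
        # transient DNS/network failure: skip this round, retry in 30 seconds
        pass
    time.sleep(30)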
requests.get("http://172.19.235.178", timeout=75)
is my piece of code.
It makes a GET request to the URL (which belongs to a phone) and is supposed to wait up to 75 seconds for it to return a 200 OK.
This request works perfectly on one Ubuntu machine but does not wait for 75 seconds on another machine.
According to the documentation at https://2.python-requests.org/en/master/user/advanced/#timeouts you can set a timeout on the requests connection, but the timeout you are encountering here is an OS-related socket timeout.
Notice that if you do:
requests.get("http://172.19.235.178", timeout=1)
you get:
ConnectTimeout: HTTPConnectionPool(host='172.19.235.178', port=80): Max retries exceeded with url: / (Caused by ConnectTimeoutError(<...>, 'Connection to 172.19.235.178 timed out. (connect timeout=1)'))
while when you do:
requests.get("http://172.19.235.178", timeout=75)
you get:
ConnectionError: HTTPConnectionPool(host='172.19.235.178', port=80): Max retries exceeded with url: / (Caused by NewConnectionError('<...>: Failed to establish a new connection: [Errno 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond',))
While you could change your OS behavior as described at http://willbryant.net/overriding_the_default_linux_kernel_20_second_tcp_socket_connect_timeout, in your case I would set a timeout of 10 and retry a few times with a try/except statement.
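A minimal sketch of that retry idea (the attempt count is an arbitrary choice to roughly match your original 75-second budget):
import requests

url = "http://172.19.235.178"

r = None
for attempt in range(8):  # 8 tries x 10 s roughly covers the 75 s budget
    try:
        r = requests.get(url, timeout=10)
        break  # got a response, stop retrying
    except requests.exceptions.ConnectionError:
        # also covers ConnectTimeout, which subclasses ConnectionError
        continue

if r is not None:
    print(r.status_code)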
Why would the proxy error not be caught by the first except clause? I am not quite understanding why it falls through to the second clause (or, if I remove the second clause, why it just throws an error).
from requests.exceptions import ProxyError

try:
    login(acc)
except ProxyError:
    pass
except Exception as e:
    print(e)
Output:
HTTPSConnectionPool(host='www.google.com', port=443): Max retries exceeded with url: /mail (Caused by ProxyError('Cannot connect to proxy.', error('Tunnel connection failed: 403 Forbidden',)))
You've hit a bit of an edge case here. The ProxyError exception is not actually the requests.exceptions exception; it is an exception with the same name from the embedded urllib3 library, and it is wrapped in a MaxRetryError exception.
This is really a bug, and was indeed filed as such a while ago; see issue #3050. It was fixed with this pull request, to raise the proper requests.exceptions.ProxyError exception instead. The fix has been released as part of requests 2.9.2.
Normally, requests unwraps the MaxRetryError exception for you, but not for this specific exception. If you can’t upgrade to 2.9.2 or newer you can catch it specifically (unwrapping two layers now):
from requests.exceptions import ConnectionError
from requests.packages.urllib3.exceptions import MaxRetryError
from requests.packages.urllib3.exceptions import ProxyError as urllib3_ProxyError

try:
    ...  # your proxied request here
except ConnectionError as ce:
    if (isinstance(ce.args[0], MaxRetryError) and
            isinstance(ce.args[0].reason, urllib3_ProxyError)):
        # oops, requests should have handled this, but didn't.
        # see https://github.com/kennethreitz/requests/issues/3050
        pass
or apply the change from the pull request to your local install of requests.
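Once you are on requests 2.9.2 or newer, the original catch from the question just works, because the library re-raises its own exception (login and acc are the names from your snippet):
from requests.exceptions import ProxyError

try:
    login(acc)
except ProxyError:
    # on requests >= 2.9.2 the wrapped urllib3 error is re-raised as
    # requests.exceptions.ProxyError, so this clause now fires
    pass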
I am trying to establish an Elasticsearch connection and create an index, but I get the following error:
elasticsearch.exceptions.ConnectionError: ConnectionError(HTTPConnectionPool(host='localhost', port=9200): Max retries exceeded with url: /test-index (Caused by <class 'socket.error'>: [Errno 111] Connection refused)) caused by: MaxRetryError(HTTPConnectionPool(host='localhost', port=9200): Max retries exceeded with url: /test-index (Caused by <class 'socket.error'>: [Errno 111] Connection refused))
My code is as follows:
self.es = Elasticsearch(hosts=[{"host": "http://192.168.0.5:9200", "port": 9200}], timeout=10)
self.es.indices.create(index='test-index', ignore=400)
You are not configuring the client correctly, so it still tries to connect to localhost:9200. The "host" entry should be a bare hostname or IP address, not a full URL, and since 9200 is the default port you can omit it.
Try this instead:
self.es = Elasticsearch(hosts=[{"host": "192.168.0.5"}], timeout=10)
You can find more info in the documentation.
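For completeness, a minimal sketch with a reachability check first (es.ping() is part of the elasticsearch-py client and returns True when the cluster answers; the IP is the one from your question):
from elasticsearch import Elasticsearch

es = Elasticsearch(hosts=[{"host": "192.168.0.5"}], timeout=10)

if es.ping():
    # ignore=400 suppresses the "index already exists" error
    es.indices.create(index='test-index', ignore=400)
else:
    print("Elasticsearch is not reachable")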