Python: catch exception and continue - python

I have a little script in Python that works in a loop.
Every 30 seconds it fetches a URL with requests to check whether the content of the page has changed.
But sometimes (about once a day) I get this error:
ConnectionError: HTTPSConnectionPool(host='www.example.com', port=443): Max retries exceeded with url: /test/ (Caused by NewConnectionError(<urllib3.connection.VerifiedHTTPSConnection object at 0x.......>: [Errno -3] Temporary failure in name resolution))
What is the best way to intercept the exception and, if it occurs, wait another 30 seconds and continue the script instead of stopping it?
Is the exception to catch ConnectionError or NewConnectionError?

Put a try/except around your code. Note that requests raises its own requests.exceptions.ConnectionError, not the built-in ConnectionError, and the urllib3 NewConnectionError is already wrapped inside it, so a single except clause covers both:
import requests

try:
    # your code here
    ...
except requests.exceptions.ConnectionError:
    pass  # swallow the error; the loop can retry on its next pass
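Putting it together for the question's 30-second loop, a minimal sketch (the URL and the change check are placeholders for your own):
import time
import requests

previous = None
while True:
    try:
        page = requests.get("https://www.example.com/test/", timeout=10).text
        if page != previous:
            print("content changed")
            previous = page
    except requests.exceptions.ConnectionError:
        pass  # transient DNS/network failure; skip this round and retry
    time.sleep(30)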

Related

How to increase PSAW Max Retries

I keep receiving the following error when trying to collect a large amount of data from pushshift.io using PSAW.
Exception: Unable to connect to pushshift.io. Max retries exceeded.
How can I increase the "max retries" so that this won't happen?
my_reddit_submissions = api.search_submissions(
    before=int(end_epoch.timestamp()),
    post_hint='image',
    filter=['id', 'full_link', 'title', 'url', 'subreddit', 'author_fullname'],
    limit=frequency,
)

for submission_x in my_reddit_submissions:
    data_new = data_new.append(submission_x.d_, ignore_index=True)
By the way, my code works fine up to a point...
You should take a look at this question [might help]: Max retries exceeded with URL in requests
This exception is raised when the server actively refuses to communicate with you. This may happen if you send too many requests to the server in a short period of time.
To overcome this, you should wait for a few seconds before retrying.
Here is an example :
import time
import requests

with requests.Session() as session:
    while True:  # poll the server indefinitely
        try:
            response = session.get("https://www.example.com")
            print(response.status_code)
        except requests.exceptions.ConnectionError:
            print("Connection refused by the server")
            time.sleep(2)  # back off before retrying so the server can recover
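If you want to literally raise the number of retries rather than loop by hand, requests can mount urllib3's Retry on a Session. This is a general requests-level sketch, not a PSAW API (PSAW does not expose this knob directly, as far as I know), and the retry numbers are illustrative:
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

session = requests.Session()
retries = Retry(
    total=10,                                    # allow up to 10 retries per request
    backoff_factor=1,                            # exponential backoff between attempts
    status_forcelist=[429, 500, 502, 503, 504],  # also retry on these status codes
)
session.mount("https://", HTTPAdapter(max_retries=retries))

response = session.get("https://api.pushshift.io/reddit/search/submission")
print(response.status_code)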

How to solve the Python requests error: "Max retries exceeded with url"

I have the following code:
res = requests.get(url)
I use a multi-threaded method, and I get the following error:
ConnectionError: HTTPConnectionPool(host='bjtest.com', port=80): Max retries exceeded with url: /rest/data?method=check&test=123 (Caused by : [Errno 104] Connection reset by peer)
I have tried the following approaches, but the error still occurs:
s = requests.session()
s.keep_alive = False
OR
res = requests.get(url, headers={'Connection': 'close'})
So, how should I fix it?
By the way, the URL is fine, but it can only be visited internally, so the URL itself is not the problem. Thanks!
Do you run your script on a Mac? I hit a similar problem; you can execute ulimit -n to check how many file descriptors you can have open at a time.
You can use the following to raise the limit:
import resource

# raise the soft limit on open file descriptors; 4096 is just an example value
resource.setrlimit(resource.RLIMIT_NOFILE, (4096, resource.RLIM_INFINITY))
Hope this helps.
My blog post is related to this problem.
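Before raising the limit, it may help to see where you stand; a quick standard-library check:
import resource

# current (soft, hard) limits on open file descriptors for this process
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print("soft =", soft, "hard =", hard)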
I hit a similar case; hopefully this saves you some time:
requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8001): Max retries exceeded with url: /enroll/ (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x10f96ecc0>: Failed to establish a new connection: [Errno 61] Connection refused'))
The problem was actually silly... nothing was listening on localhost port 8001! Restarting the server solved it.
The error message (which is admittedly a little confusing) actually means that requests failed to connect to your requested URL at all.
In this case that's because your url is http://bjtest.com/rest/data?method=check&test=123, which isn't a real website.
It has nothing to do with the format of your request. Fix your URL and it should (presumably) work for you.
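None of the answers above mention it, but if the URL really is reachable, "[Errno 104] Connection reset by peer" under multi-threading often comes from threads sharing connections or overwhelming the server. One common mitigation, sketched here on the assumption that shared connections are the culprit, is to give each thread its own Session via threading.local:
import threading
import requests

thread_local = threading.local()

def get_session():
    # lazily create one Session per thread so threads never share a connection
    if not hasattr(thread_local, "session"):
        thread_local.session = requests.Session()
    return thread_local.session

def fetch(url):
    try:
        return get_session().get(url, timeout=10)
    except requests.exceptions.ConnectionError as exc:
        print("connection dropped:", exc)
        return None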

Cannot catch ConnectionError with requests

I'm doing this:
import requests
r = requests.get("http://non-existent-domain.test")
And getting
ConnectionError: HTTPConnectionPool(host='non-existent-domain.test', port=80): Max retries exceeded with url: / (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x10b0170f0>: Failed to establish a new connection: [Errno 8] nodename nor servname provided, or not known',))
However, if I try to catch it like this:
try:
    r = requests.get("http://non-existent-domain.test")
except ConnectionError:
    print("ConnectionError")
Nothing changes; the ConnectionError is still unhandled. How do I catch it properly?
That's a different ConnectionError. You are catching the built-in one, but requests has its own. So this should be
try:
    r = requests.get("http://non-existent-domain.test")
except requests.ConnectionError:
    print("ConnectionError")
# Output: ConnectionError
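To make the distinction concrete, a quick check shows the two classes are unrelated siblings under OSError:
import builtins
import requests

# requests' ConnectionError is its own class, not the built-in one
print(requests.exceptions.ConnectionError is builtins.ConnectionError)            # False
print(issubclass(requests.exceptions.ConnectionError, builtins.ConnectionError))  # False
# it descends from OSError via requests.exceptions.RequestException
print(issubclass(requests.exceptions.ConnectionError, OSError))                   # True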

Python requests.get(url, timeout=75) does not wait for specified timeout

requests.get("http://172.19.235.178", timeout=75)
is my piece of code.
It performs a GET request on the URL, which is a phone, and is supposed to wait up to 75 seconds for it to return a 200 OK.
The request works perfectly on one Ubuntu machine but does not wait the full 75 seconds on another machine.
According to the documentation at https://2.python-requests.org/en/master/user/advanced/#timeouts you can set a timeout for the connection part of the request, but the timeout you are encountering is an OS-level socket timeout.
Notice that if you do:
requests.get("http://172.19.235.178", timeout=1)
you get:
ConnectTimeout: HTTPConnectionPool(host='172.19.235.178', port=80): Max retries exceeded with url: / (Caused by ConnectTimeoutError(<urllib3.connection.HTTPConnection object at 0x...>, 'Connection to 172.19.235.178 timed out. (connect timeout=1)'))
while when you do
requests.get("http://172.19.235.178", timeout=75)
you get:
ConnectionError: HTTPConnectionPool(host='172.19.235.178', port=80): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x...>: Failed to establish a new connection: [Errno 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond',))
You could change your OS behavior as described here: http://willbryant.net/overriding_the_default_linux_kernel_20_second_tcp_socket_connect_timeout
In your case, though, I would set a timeout of 10 seconds and retry a few times in a try/except loop, as sketched below.
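A minimal version of that loop (the attempt count and timeout are illustrative; 8 tries of up to 10 seconds roughly covers the original 75-second budget):
import requests

url = "http://172.19.235.178"
response = None
for attempt in range(8):
    try:
        response = requests.get(url, timeout=10)
        break  # got an answer, stop retrying
    except (requests.exceptions.ConnectionError, requests.exceptions.Timeout):
        continue  # the connect attempt was cut short; try again

if response is not None:
    print(response.status_code)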

Python Requests ProxyError not caught

Why is the proxy error not caught by the first except clause? I do not quite understand why it falls through to the second clause (or, if I remove the second clause, why it just raises the error).
from requests.exceptions import ProxyError

try:
    login(acc)
except ProxyError:
    pass
except Exception as e:
    print e
Output:
HTTPSConnectionPool(host='www.google.com', port=443): Max retries exceeded with url: /mail (Caused by ProxyError('Cannot connect to proxy.', error('Tunnel connection failed: 403 Forbidden',)))
You've hit a bit of an edge case here. The ProxyError exception is not actually the requests.exceptions exception; it is an exception with the same name from the embedded urllib3 library, and it is wrapped in a MaxRetryError exception.
This is really a bug, and was indeed filed as such a while ago, see issue #3050. It was fixed with this pull request, to raise the proper requests.exceptions.ProxyError exception instead. This fix has been released as part of requests 2.9.2.
Normally, requests unwraps the MaxRetryError exception for you, but not for this specific exception. If you can’t upgrade to 2.9.2 or newer you can catch it specifically (unwrapping two layers now):
from requests.exceptions import ConnectionError
from requests.packages.urllib3.exceptions import MaxRetryError
from requests.packages.urllib3.exceptions import ProxyError as urllib3_ProxyError

try:
    ...  # your code making the request
except ConnectionError as ce:
    if (isinstance(ce.args[0], MaxRetryError) and
            isinstance(ce.args[0].reason, urllib3_ProxyError)):
        # oops, requests should have handled this, but didn't.
        # see https://github.com/kennethreitz/requests/issues/3050
        pass
or apply the change from the pull request to your local install of requests.
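On requests 2.9.2 or newer the straightforward handler works; a quick sketch (the proxy address is a placeholder assumed to be unreachable, so the request fails with ProxyError):
import requests
from requests.exceptions import ProxyError

try:
    requests.get("https://www.google.com/mail",
                 proxies={"https": "http://127.0.0.1:3128"},  # placeholder proxy
                 timeout=5)
except ProxyError as exc:
    print("caught:", exc)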
