Set max retries on requests.post - python

I want to set a max retry limit on my script to eliminate these errors:
requests.exceptions.ConnectionError: HTTPConnectionPool(host='173.180.119.132', port=8080): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x03F9E2E0>: Failed to establish a new connection: [WinError 10061] No connection could be made because the target machine actively refused it'))
I can't find a way to send a post request with max retries.
This is my code:
import requests
from requests.adapters import HTTPAdapter

requests.adapters.DEFAULT_RETRIES = 2

f = open("hosts.txt", "r")
payload = {
    'inUserName': 'ADMIN',
    'inUserPassword': '1234'
}
i = 0
for line in f:
    i += 1
    print(i)
    r = requests.post("http://" + line, data=payload)
    if "401 - Unauthorized" in r:
        pass
    else:
        if r.status_code != 200:
            pass
        else:
            with open("output.txt", "a+") as output_file:
                output_file.write(line)

This error
requests.exceptions.ConnectionError: HTTPConnectionPool(host='173.180.119.132', port=8080): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x03F9E2E0>: Failed to establish a new connection: [WinError 10061] No connection could be made because the target machine actively refused it'))
is caused by sending too many requests to the server, and the only way to detect it is from the server's response; i.e., there is no way of knowing on the client side when this error will be thrown.
There are a few ways in which you can get around this error.
You can catch the error and break out of the loop:
try:
    page1 = requests.get(ap)
except requests.exceptions.ConnectionError:
    # r.status_code = "Connection refused"
    break
You can also simply add a sleep(seconds) call in your code to put a gap between each request made to the server. This often gets around the max-retries error.
from time import sleep
sleep(5)  # sleep for 5 seconds
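To answer the original question more directly, you can also cap retries per request by mounting an HTTPAdapter with a urllib3 Retry policy on a Session, and then catch ConnectionError once the retries are exhausted. This is only a minimal sketch, assuming the same hosts.txt and payload as above; the retry counts, backoff_factor and timeout are illustrative values:
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

session = requests.Session()
# Retry failed connection attempts up to 2 times, with a short backoff between tries.
retries = Retry(total=2, connect=2, backoff_factor=0.5)
session.mount("http://", HTTPAdapter(max_retries=retries))

payload = {'inUserName': 'ADMIN', 'inUserPassword': '1234'}

with open("hosts.txt") as f:
    for line in f:
        host = line.strip()
        try:
            r = session.post("http://" + host, data=payload, timeout=5)
        except requests.exceptions.ConnectionError:
            # All retries failed; skip this host instead of crashing the loop.
            continue
        if r.status_code == 200 and "401 - Unauthorized" not in r.text:
            with open("output.txt", "a+") as output_file:
                output_file.write(host + "\n")
As far as I understand urllib3's Retry, connection failures are retried regardless of the HTTP method because the request never reached the server, while read errors on POST are not retried by default.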

Related

How to redo a try statement within a certain amount of tries

Currently I have a program with a list of proxies; for each one it makes a request to get my IP address through that proxy and returns the result as JSON.
An example request back is this:
Got Back: {'ip': '91.67.240.45', 'country': 'Germany', 'cc': 'DE'}
I want my program to try to make a request to the URL, and if the request fails because the proxy is down, to try again 5 times before moving on to the next IP address.
I thought this except block would work, but it is not breaking out of the loop when the 5 iterations are over, and I am not sure why.
My program does, however, work when the proxy is up on the first try, as it breaks after the first attempt and then moves on to the next IP address.
Here is what I currently have:
import requests
import time

proxies = [
    "95.87.220.19:15600",
    "91.67.240.45:3128",
    "85.175.216.32:53281",
    "91.236.251.131:8118",
    "91.236.251.131:8118",
    "88.99.10.249:1080",
]

def sol(ip):
    max_tries = 5
    for i in range(1, max_tries + 1):
        try:
            print(f"Using Proxy: {ip}")
            r = requests.get('https://api.myip.com', proxies={"https": ip})
            print(f"Got Back: {r.json()}")
            break
        except OSError:
            time.sleep(5)
            print(f"Retrying...: {i}")
            break

for i in proxies:
    sol(i)
How can I make it so my loop has 5 tries before moving on to the next IP address?
My program does however work when the proxy is up for the first try as it breaks after the first attempt and then moves onto the next ip address.
It does this unconditionally, because you have an unconditional break after the except block. Code keeps going past a try/except when the except is entered, assuming it doesn't have an abnormal exit of its own (another exception, return etc.).
So,
I thought this except block would work but it is not breaking out of the loop when the 5 iterations are over and I am not sure why.
It doesn't break out "after the 5 iterations are over" because it breaks out after the first iteration, whether or not that attempt was successful.
If I understand correctly, you can just remove the break from the last line. With an unconditional break in a loop, the loop will only ever run one iteration.
def sol(ip):
    max_tries = 5
    for i in range(1, max_tries + 1):
        try:
            print(f"Using Proxy: {ip}")
            r = requests.get('https://api.myip.com', proxies={"https": ip})
            print(f"Got Back: {r.json()}")
            break
        except OSError:
            time.sleep(5)
            print(f"Retrying...: {i}")
            # break  <---- Remove this line
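If you also want to report when all five attempts have failed, Python's for/else fits here; the else block runs only when the loop finishes without hitting break. A small sketch on top of the same function:
def sol(ip):
    max_tries = 5
    for i in range(1, max_tries + 1):
        try:
            print(f"Using Proxy: {ip}")
            r = requests.get('https://api.myip.com', proxies={"https": ip})
            print(f"Got Back: {r.json()}")
            break  # success, stop retrying
        except OSError:
            time.sleep(5)
            print(f"Retrying...: {i}")
    else:
        # Reached only if the loop was never broken out of, i.e. all 5 tries failed.
        print(f"Giving up on proxy: {ip}")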
Using the retrying library would look something like the following:
from retrying import retry
import requests

@retry(stop_max_attempt_number=5)
def bad_host():
    print('trying get from bad host')
    return requests.get('https://bad.host.asdqwerqweo79ooo/')

try:
    bad_host()
except IOError as ex:
    print(f"Couldn't connect because: {str(ex)}")
...which gives the following output:
trying get from bad host
trying get from bad host
trying get from bad host
trying get from bad host
trying get from bad host
Couldn't connect to bad host because: HTTPSConnectionPool(host='bad.host.asdqwerqweo79ooo', port=443): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x10b6bd910>: Failed to establish a new connection: [Errno 8] nodename nor servname provided, or not known'))
Getting fancy
If you want to get fancy, you could also add things like exponential backoff and selectively retrying certain exceptions.
Here's an example:
import random
import time
from retrying import retry

def retry_ioerror(exception):
    return isinstance(exception, IOError)

@retry(
    wait_exponential_multiplier=100,
    wait_exponential_max=1000,
    retry_on_exception=retry_ioerror,
    stop_max_attempt_number=10)
def do_something():
    t = time.time()
    print(f'trying {t}')
    r = random.random()
    if r > 0.9:
        return 'yay!'
    if r > 0.8:
        raise RuntimeError('Boom!')
    else:
        raise IOError('Bang!')

try:
    result = do_something()
    print(f'Success! {result}')
except RuntimeError as ex:
    print(f"Failed: {str(ex)}")

read text file with ip address list and make a curl connection

I'm trying to create a Python script that reads a text file with a list of IP addresses, then sends a request to port 80 on each one, supplying a username and password to see if I can log into the web interface, and returns whatever the web page displays. Any help is most appreciated.
import sys
import requests

f = open('APs.txt', 'r')
c = f.read()
for i in c:
    r = requests.get('http://' + i, auth=('user', 'pass'))
    print(i, r.status_code)
f.close()
I think this is working as expected. Does anyone have a better way?
import sys
import requests

with open(r'APs.txt', 'r') as ips:
    for line in ips:
        ip = line.strip()
        r = requests.get('http://' + ip, auth=('user', 'pass'))
        print(ip, ',', r.status_code)
This is what I have now, but I am getting an error; I am trying to handle it with try/except but having issues.
import sys
import requests

with open(r'APs.txt', 'r') as ips:
    for line in ips:
        try:
            ip = line.strip()
            r = requests.get('http://' + ip, auth=('user', 'pass'), timeout=2)
            print(ip, ',', r.status_code)
        except ConnectTimeout:
            print(ip, ',', 'error')
Here is the error:
requests.exceptions.ConnectTimeout: HTTPConnectionPool(host='1.1.1.1', port=80): Max retries exceeded with url: / (Caused by ConnectTimeoutError(, 'Connection to 1.1.1.1 timed out. (connect timeout=1)'))
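One likely cause, assuming the snippet above is the complete script, is that ConnectTimeout is never imported, so the except clause itself blows up when the timeout occurs. Importing it from requests.exceptions (or catching the broader RequestException) avoids that. A minimal sketch along those lines, keeping the 2-second timeout:
import requests
from requests.exceptions import ConnectTimeout, ConnectionError

with open(r'APs.txt', 'r') as ips:
    for line in ips:
        ip = line.strip()
        try:
            r = requests.get('http://' + ip, auth=('user', 'pass'), timeout=2)
            print(ip, ',', r.status_code)
        except (ConnectTimeout, ConnectionError):
            # Host timed out or refused the connection.
            print(ip, ',', 'error')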

Why requests throw ConnectionError instead of ReadTimeout in HTTPAdapter?

I'm using requests.adapters.HTTPAdapter to retry requests in my Python 3 scripts, and I found that when a request times out, it throws a ConnectionError instead of a ReadTimeout.
I'm using python3.7.4 and requests==2.22.0.
requests#2392 may be relevant, but I'm not sure whether it is the same issue.
import requests
from requests.adapters import HTTPAdapter

# request1
try:
    requests.get('http://httpbin.org/delay/2', timeout=1)
except requests.ReadTimeout as e:
    print('request1', e)

s = requests.Session()
s.mount('http://', HTTPAdapter(max_retries=1))

# request2
try:
    s.get('http://httpbin.org/delay/2', timeout=1)
except requests.ReadTimeout as e:
    print('this line will not be printed')
except requests.ConnectionError as e:
    print('request2', e)

# request3
try:
    s.get('http://github.com:88', timeout=1)
except requests.ConnectTimeout as e:
    print('request3', e)

s.close()
Here is the output:
request1 HTTPConnectionPool(host='httpbin.org', port=80): Read timed out. (read timeout=1)
request2 HTTPConnectionPool(host='httpbin.org', port=80): Max retries exceeded with url: /delay/2 (Caused by ReadTimeoutError("HTTPConnectionPool(host='httpbin.org', port=80): Read timed out. (read timeout=1)"))
request3 HTTPConnectionPool(host='github.com', port=88): Max retries exceeded with url: / (Caused by ConnectTimeoutError(<urllib3.connection.HTTPConnection object at 0x10d1a6c10>, 'Connection to github.com timed out. (connect timeout=1)'))
In request2, I expect ReadTimeout can catch the exception, not ConnectionError.
So could anyone tell me why?
Is 'port=88' correct in request 3? All others are port=80.
# request3
try:
    s.get('http://github.com:88', timeout=1)
except requests.ConnectTimeout as e:
    print('request3', e)
I did a 'verbose' curl to that url and I get a timeout.
curl --url "http://github.com:88" --verbose
Rebuilt URL to: http://github.com:88/
Trying 140.82.113.3...
TCP_NODELAY set
connect to 140.82.113.3 port 88 failed: Timed out
Failed to connect to github.com port 88: Timed out
Closing connection 0
curl: (7) Failed to connect to github.com port 88: Timed out
Same call but port 80 is quick and connects
curl --url "http://github.com:80" --verbose
Rebuilt URL to: http://github.com:80/
Trying 192.30.253.113...
TCP_NODELAY set
Connected to github.com (192.30.253.113) port 80 (#0)
GET / HTTP/1.1
Host: github.com
User-Agent: curl/7.55.1
Accept: */*
>
< HTTP/1.1 301 Moved Permanently
< Content-length: 0
< Location: https://github.com/
<
Connection #0 to host github.com left intact
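If the goal is simply to handle the timeout either way, a defensive pattern is to catch the more specific exceptions first; note that requests.ConnectTimeout is a subclass of both requests.ConnectionError and requests.Timeout, and that with max_retries configured the exhausted read timeout surfaces wrapped in a ConnectionError (as the output for request2 above shows). A sketch, reusing the session setup from the question:
import requests
from requests.adapters import HTTPAdapter

s = requests.Session()
s.mount('http://', HTTPAdapter(max_retries=1))

try:
    s.get('http://httpbin.org/delay/2', timeout=1)
except requests.ConnectTimeout as e:
    # The TCP connection itself could not be established in time.
    print('connect timeout:', e)
except requests.ReadTimeout as e:
    # Raised when the server accepts the connection but is slow to respond
    # and no retries swallow it first.
    print('read timeout:', e)
except requests.ConnectionError as e:
    # With max_retries set, the retried read timeout ends up here.
    print('connection error (retries exhausted):', e)
finally:
    s.close()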

except ConnectionError or TimeoutError not working

In case of a connection error, I want Python to wait and re-try. Here's the relevant code, where "link" is some link:
import requests
import urllib.request
import urllib.parse
from random import randint

try:
    r = requests.get(link)
except ConnectionError or TimeoutError:
    print("Will retry again in a little bit")
    time.sleep(randint(2500, 3000))
    r = requests.get(link)
However, I still periodically get a connection error, and I never see the text "Will retry again in a little bit", so I know the code is not retrying. What am I doing wrong? I'm pasting parts of the error output below in case I'm misreading it. TIA!
TimeoutError: [WinError 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond
During handling of the above exception, another exception occurred:
requests.packages.urllib3.exceptions.ProtocolError: ('Connection aborted.', TimeoutError(10060, 'A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond', None, 10060, None))
During handling of the above exception, another exception occurred:
requests.exceptions.ConnectionError: ('Connection aborted.', TimeoutError(10060, 'A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond', None, 10060, None))
For me, using a custom User-Agent in the request fixes this issue; it makes the request look like it comes from a regular browser.
Works:
url = "https://www.nasdaq.com/market-activity/stocks/amd"
headers = {'User-Agent': 'Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.8.1.6) Gecko/20070802 SeaMonkey/1.1.4'}
response = requests.get(url, headers=headers)
Doesn't work:
url = "https://www.nasdaq.com/market-activity/stocks/amd"
response = requests.get(url)
The second request is not inside a try block, so exceptions from it are not caught. Also, in the try/except block you're not catching other exceptions that may occur.
You could use a loop to attempt a connection two times, and break if the request is successful.
for _ in range(2):
    try:
        r = requests.get(link)
        break
    except (ConnectionError, TimeoutError):
        print("Will retry again in a little bit")
    except Exception as e:
        print(e)
    time.sleep(randint(2500, 3000))
I think you should use
except (ConnectionError, TimeoutError) as e:
    print("Will retry again in a little bit")
    time.sleep(randint(2500, 3000))
    r = requests.get(link)
See this similar question, or check the docs.
I had the same problem. It turns out that urllib3 relies on socket.py, which raises an OSError. So, you need to catch that:
try:
    r = requests.get(link)
except OSError as e:
    print("There was an error: {}".format(e))

Python Requests ignoring time outs

I'm trying to make some kind of a scanner with Python (just for fun).
It will send a GET request to a random IP and see if there is any answer.
The problem is that every time the connection fails, the program stops running.
This is the code:
import time
import requests

ips = open("ip.txt", "r")
for ip in ips:
    r = requests.get(url="http://" + ip + "/", verify=False)
    print(r.status_code)
    time.sleep(0.5)
This is what I get when trying just a random IP:
requests.exceptions.ConnectionError: HTTPConnectionPool(host='177.0.0.0', port=80): Max retries exceeded with url:
This is throwing an error. To protect against this, use a try/except statement:
for ip in ips:
    try:
        r = requests.get(url="http://" + ip + "/", verify=False)
        print(r.status_code)
    except requests.exceptions.RequestException as e:
        print('Connecting to ip ' + ip + ' failed.', e)
    time.sleep(0.5)
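Since the title mentions timeouts being ignored, it's worth noting that requests has no default timeout, so a dead host can also hang the scan indefinitely rather than raise. Passing a timeout keeps the loop moving; this is a sketch on top of the answer above, with the 3-second timeout being an assumed value:
import time
import requests

with open("ip.txt") as ips:
    for line in ips:
        ip = line.strip()
        try:
            # timeout covers both connecting and waiting for the first response data
            r = requests.get("http://" + ip + "/", verify=False, timeout=3)
            print(ip, r.status_code)
        except requests.exceptions.RequestException as e:
            print('Connecting to ip ' + ip + ' failed.', e)
        time.sleep(0.5)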
