I'm using requests.adapters.HTTPAdapter to retry requests in my Python 3 scripts, and I found that when a request times out, it throws a ConnectionError instead of a ReadTimeout.
I'm using Python 3.7.4 and requests==2.22.0.
requests#2392 may be relevant, but I'm not sure whether it describes the same issue.
import requests
from requests.adapters import HTTPAdapter

# request1
try:
    requests.get('http://httpbin.org/delay/2', timeout=1)
except requests.ReadTimeout as e:
    print('request1', e)

s = requests.Session()
s.mount('http://', HTTPAdapter(max_retries=1))

# request2
try:
    s.get('http://httpbin.org/delay/2', timeout=1)
except requests.ReadTimeout as e:
    print('this line will not be printed')
except requests.ConnectionError as e:
    print('request2', e)

# request3
try:
    s.get('http://github.com:88', timeout=1)
except requests.ConnectTimeout as e:
    print('request3', e)

s.close()
Here is the output:
request1 HTTPConnectionPool(host='httpbin.org', port=80): Read timed out. (read timeout=1)
request2 HTTPConnectionPool(host='httpbin.org', port=80): Max retries exceeded with url: /delay/2 (Caused by ReadTimeoutError("HTTPConnectionPool(host='httpbin.org', port=80): Read timed out. (read timeout=1)"))
request3 HTTPConnectionPool(host='github.com', port=88): Max retries exceeded with url: / (Caused by ConnectTimeoutError(<urllib3.connection.HTTPConnection object at 0x10d1a6c10>, 'Connection to github.com timed out. (connect timeout=1)'))
In request2, I expected ReadTimeout to catch the exception, not ConnectionError.
Could anyone tell me why?
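For what it's worth, the output above points at the mechanics: with max_retries mounted, urllib3 wraps the final ReadTimeoutError in a MaxRetryError, and requests surfaces that as a ConnectionError. If you just want to treat both situations as a timeout, a workaround sketch (my code, not part of the original question) is to catch both exception types:

try:
    s.get('http://httpbin.org/delay/2', timeout=1)
except (requests.ReadTimeout, requests.ConnectionError) as e:
    # Without retries this is a ReadTimeout; with retries it arrives wrapped in a ConnectionError.
    print('request timed out:', e)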
Is 'port=88' correct in request 3? All others are port=80.
# request3
try:
    s.get('http://github.com:88', timeout=1)
except requests.ConnectTimeout as e:
    print('request3', e)
I did a verbose curl to that URL and got a timeout.
curl --url "http://github.com:88" --verbose
Rebuilt URL to: http://github.com:88/
Trying 140.82.113.3...
TCP_NODELAY set
connect to 140.82.113.3 port 88 failed: Timed out
Failed to connect to github.com port 88: Timed out
Closing connection 0
curl: (7) Failed to connect to github.com port 88: Timed out
The same call to port 80 is quick and connects:
curl --url "http://github.com:80" --verbose
Rebuilt URL to: http://github.com:80/
Trying 192.30.253.113...
TCP_NODELAY set
Connected to github.com (192.30.253.113) port 80 (#0)
> GET / HTTP/1.1
> Host: github.com
> User-Agent: curl/7.55.1
> Accept: */*
>
< HTTP/1.1 301 Moved Permanently
< Content-length: 0
< Location: https://github.com/
<
Connection #0 to host github.com left intact
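To rule out requests itself, the same connect-level check can be reproduced from Python with nothing but the standard library (a quick sketch mirroring the curl call above):

import socket

try:
    # Can we even open a TCP connection to github.com:88 within 1 second?
    socket.create_connection(("github.com", 88), timeout=1)
    print("connected")
except OSError as e:
    print("connect failed:", e)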
I want to set a max retry limit on my script to eliminate these errors:
requests.exceptions.ConnectionError: HTTPConnectionPool(host='173.180.119.132', port=8080): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x03F9E2E0>: Failed to establish a new connection: [WinError 10061] No connection could be made because the target machine actively refused it'))
I can't find a way to send a POST request with a maximum number of retries.
This is my code:
import requests
from requests.adapters import HTTPAdapter

requests.adapters.DEFAULT_RETRIES = 2

f = open("hosts.txt", "r")
payload = {
    'inUserName': 'ADMIN',
    'inUserPassword': '1234'
}

i = 0
for line in f:
    i += 1
    print(i)
    r = requests.post("http://" + line, data=payload)
    if "401 - Unauthorized" in r:
        pass
    else:
        if r.status_code != 200:
            pass
        else:
            with open("output.txt", "a+") as output_file:
                output_file.write(line)
This error
requests.exceptions.ConnectionError: HTTPConnectionPool(host='173.180.119.132', port=8080): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x03F9E2E0>: Failed to establish a new connection: [WinError 10061] No connection could be made because the target machine actively refused it'))
is caused by sending too many requests to the server, and the only way to detect it is through the server's response, i.e. there is no way of knowing on the client side when this error will be thrown.
There are a few ways to get around this error.
You can catch the error and break out of the loop:
try:
    page1 = requests.get(ap)
except requests.exceptions.ConnectionError:
    # r.status_code = "Connection refused"
    break
You can also simply add a sleep(seconds) call to your code to put a gap between each request made to the server. This often gets around the max-retries error.
from time import sleep
sleep(5) # 5 seconds sleep cmd
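If you want requests itself to retry the POST a bounded number of times instead of (or in addition to) sleeping, another option is to mount an HTTPAdapter with a urllib3 Retry object on a Session. This is only a sketch against the code in the question (host and payload stand in for the loop variables above), not a tested drop-in:

import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

session = requests.Session()
# Retry failed connection attempts up to 2 times with a short back-off;
# connect-level retries happen before the request body is sent.
retries = Retry(total=2, connect=2, backoff_factor=0.5)
session.mount('http://', HTTPAdapter(max_retries=retries))

try:
    r = session.post("http://" + host, data=payload, timeout=5)
except requests.exceptions.ConnectionError as e:
    print("Giving up on", host, e)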
In case of a connection error, I want Python to wait and re-try. Here's the relevant code, where "link" is some link:
import requests
import urllib.request
import urllib.parse
from random import randint

try:
    r = requests.get(link)
except ConnectionError or TimeoutError:
    print("Will retry again in a little bit")
    time.sleep(randint(2500,3000))
    r = requests.get(link)
However, I still periodically get a connection error, and I never see the text "Will retry again in a little bit", so I know the code is not retrying. What am I doing wrong? I'm pasting parts of the error output below in case I'm misreading it. TIA!
TimeoutError: [WinError 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond
During handling of the above exception, another exception occurred:
requests.packages.urllib3.exceptions.ProtocolError: ('Connection aborted.', TimeoutError(10060, 'A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond', None, 10060, None))
During handling of the above exception, another exception occurred:
requests.exceptions.ConnectionError: ('Connection aborted.', TimeoutError(10060, 'A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond', None, 10060, None))
For me, using a custom User-Agent header in the request fixes this issue. With this method you make the request look like it comes from a regular browser.
Works:
url = "https://www.nasdaq.com/market-activity/stocks/amd"
headers = {'User-Agent': 'Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.8.1.6) Gecko/20070802 SeaMonkey/1.1.4'}
response = requests.get(url, headers=headers)
Doesn't work:
url = "https://www.nasdaq.com/market-activity/stocks/amd"
response = requests.get(url)
The retry request inside the except block is not itself wrapped in a try block, so any exception it raises is not caught. Also, the try-except block doesn't catch other exceptions that may occur.
You could use a loop to attempt a connection two times, and break if the request is successful.
for _ in range(2):
    try:
        r = requests.get(link)
        break
    except (ConnectionError, TimeoutError):
        print("Will retry again in a little bit")
    except Exception as e:
        print(e)
    time.sleep(randint(2500,3000))
I think you should use
except (ConnectionError, TimeoutError) as e:
    print("Will retry again in a little bit")
    time.sleep(randint(2500,3000))
    r = requests.get(link)
See this similar question, or check the docs.
I had the same problem. It turns out that urllib3 relies on socket.py, which raises an OSError. So, you need to catch that:
try:
    r = requests.get(link)
except OSError as e:
    print("There was an error: {}".format(e))
I'm trying to make some kind of a scanner with Python (just for fun). It will send a GET request to a random IP and see if there is any answer. The problem is that every time the connection fails, the program stops running.
This is the code:
import time
import requests

ips = open("ip.txt", "r")

for ip in ips:
    r = requests.get(url="http://"+ip+"/", verify=False)
    print(r.status_code)
    time.sleep(0.5)
This is what I get when trying a random IP:
requests.exceptions.ConnectionError: HTTPConnectionPool(host='177.0.0.0', port=80): Max retries exceeded with url:
This is throwing an error. To protect against this, use a try/except statement:
for ip in ips:
    try:
        r = requests.get(url="http://"+ip+"/", verify=False)
        print(r.status_code)
    except requests.exceptions.RequestException as e:
        print('Connecting to ip ' + ip + ' failed.', e)
    time.sleep(0.5)
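Two extra details worth noting (my additions, not part of the original answer): each line read from ip.txt keeps its trailing newline, and without a timeout a host that accepts the connection but never answers can hang the scan. A slightly more defensive variant:

for ip in ips:
    ip = ip.strip()  # drop the trailing newline from the file
    try:
        # Bound each attempt so unresponsive hosts don't hang the scan.
        r = requests.get(url="http://" + ip + "/", verify=False, timeout=3)
        print(ip, r.status_code)
    except requests.exceptions.RequestException as e:
        print('Connecting to ip ' + ip + ' failed.', e)
    time.sleep(0.5)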
I am trying to fetch a list of server certificates, and I am using the Python standard ssl library to accomplish this. This is how I am doing it:
import ssl
from socket import *

urls = [i.strip().lower() for i in open("urls.txt")]
for urls in url:
    try:
        print ssl.get_server_certificate((url, 443))
    except error:
        print "No connection"
However, for some URLs there are connectivity issues and the connection just times out. It waits for the default SSL timeout value (which is quite long) before timing out. How do I specify a timeout in the ssl.get_server_certificate method? I have specified timeouts for sockets before, but I am clueless as to how to do it for this method.
From the docs:
SSL sockets provide the following methods of Socket Objects:
gettimeout(), settimeout(), setblocking()
So it should be as simple as:
import ssl
from socket import *

settimeout(10)

urls = [i.strip().lower() for i in open("urls.txt")]
for urls in url:
    try:
        print ssl.get_server_certificate((url, 443))
    except (error, timeout) as err:
        print "No connection: {0}".format(err)
This version runs for me using Python 3.9.12 (hat tip #bchurchill):
import ssl
import socket

socket.setdefaulttimeout(2)

urls = [i.strip().lower() for i in open("urls.txt")]
for url in urls:
    try:
        certificate = ssl.get_server_certificate((url, 443))
        print(certificate)
    except Exception as err:
        print(f"No connection to {url} due to: {err}")
I'm trying to make an HTTP proxy in Python. So far I've got everything except HTTPS working, so the next step is to implement the CONNECT method.
I'm slightly confused about the chain of events that needs to occur when doing HTTPS tunnelling.
From my understanding I should have this when connecting to google:
Browser -> Proxy
CONNECT www.google.co.uk:443 HTTP/1.1\r\n\r\n
Then the proxy should establish a secure connection to google.co.uk, and confirm it by sending:
Proxy -> Browser
HTTP/1.1 200 Connection established\r\n\r\n
At this point I'd expect the browser to now go ahead with whatever it was going to do in the first place, however, I either get nothing, or get a string of bytes that I can't decode(). I've been reading anything and everything to do with ssl tunnelling, and I think I'm supposed to be forwarding any and all bytes from browser to server, as well as the other way around. However, when doing this, I get a:
HTTP/1.0 400 Bad Request\r\n...\r\n
Once I've sent the 200 code, what should I be doing next?
My code snippet for the connect method:
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

if headers["Method"] == "CONNECT":
    client = ssl.wrap_socket(client)
    try:
        client.connect(( headers["Host"], headers["Port"] ))
        reply = "HTTP/1.0 200 Connection established\r\n"
        reply += "Proxy-agent: Pyx\r\n"
        reply += "\r\n"
        browser.sendall( reply.encode() )
    except socket.error as err:
        print(err)
        break

    while True:
        # now not sure
Help is much appreciated!
After finding this answer to a related question: HTTPS Proxy Implementation (SSLStream)
I realised that the initial connection on port 443 of the target server (in this case google.co.uk) should NOT be encrypted. I therefore removed the
client = ssl.wrap_socket(client)
line to continue with a plain text tunnel rather than ssl. Once the
HTTP/1.1 200 Connection established\r\n\r\n
message is sent, the browser and end server will then form their own ssl connection through the proxy, and so the proxy doesn't need to do anything related to the actual https connection.
The modified code (includes byte forwarding):
# If we receive a CONNECT request
if headers["Method"] == "CONNECT":
    # Connect to port 443
    try:
        # If successful, send 200 code response
        client.connect(( headers["Host"], headers["Port"] ))
        reply = "HTTP/1.0 200 Connection established\r\n"
        reply += "Proxy-agent: Pyx\r\n"
        reply += "\r\n"
        browser.sendall( reply.encode() )
    except socket.error as err:
        # If the connection could not be established, exit
        # Should properly handle the exit with http error code here
        print(err)
        break

    # Indiscriminately forward bytes
    browser.setblocking(0)
    client.setblocking(0)
    while True:
        try:
            request = browser.recv(1024)
            client.sendall( request )
        except socket.error as err:
            pass

        try:
            reply = client.recv(1024)
            browser.sendall( reply )
        except socket.error as err:
            pass
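One note on that forwarding loop (my observation, not part of the original answer): with both sockets non-blocking, the while True loop spins at full speed even when no data is flowing, and it never notices when either side closes. A variant I'd sketch with select(), leaving the sockets in blocking mode, waits until one of them is readable and tears the tunnel down on EOF:

import select

# Forward bytes between browser and client until either side closes.
sockets = [browser, client]
while True:
    readable, _, _ = select.select(sockets, [], [], 60)
    if not readable:
        break  # no traffic for 60 seconds, give up on the tunnel
    done = False
    for sock in readable:
        data = sock.recv(4096)
        if not data:
            done = True  # one side hung up
            break
        # Relay the data out of the *other* socket.
        other = client if sock is browser else browser
        other.sendall(data)
    if done:
        break
browser.close()
client.close()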
References:
HTTPS Proxy Implementation (SSLStream)
https://datatracker.ietf.org/doc/html/draft-luotonen-ssl-tunneling-03
http://www.ietf.org/rfc/rfc2817.txt