Adding a timeout while fetching server certs via Python

I am trying to fetch a list of server certificates using the Python standard ssl library. This is how I am doing it:
import ssl
from socket import *

urls = [i.strip().lower() for i in open("urls.txt")]
for url in urls:
    try:
        print ssl.get_server_certificate((url, 443))
    except error:
        print "No connection"
However, for some URLs there are connectivity issues and the connection just times out. It waits for the default SSL timeout (which is quite long) before giving up. How do I specify a timeout in the ssl.get_server_certificate method? I have specified timeouts for sockets before, but I am clueless as to how to do it for this method.

From the docs:
SSL sockets provide the following methods of Socket Objects:
gettimeout(), settimeout(), setblocking()
So it should just be as simple as:
import ssl
from socket import *

setdefaulttimeout(10)
urls = [i.strip().lower() for i in open("urls.txt")]
for url in urls:
    try:
        print ssl.get_server_certificate((url, 443))
    except (error, timeout) as err:
        print "No connection: {0}".format(err)

This version runs for me with Python 3.9.12 (hat tip to bchurchill):
import ssl
import socket

socket.setdefaulttimeout(2)
urls = [i.strip().lower() for i in open("urls.txt")]
for url in urls:
    try:
        certificate = ssl.get_server_certificate((url, 443))
        print(certificate)
    except Exception as err:
        print(f"No connection to {url} due to: {err}")
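If changing the process-wide default feels too blunt, a per-connection timeout also works: open the socket yourself, do the handshake, and convert the peer certificate to PEM. This is only a sketch (the fetch_cert helper below is my own, not part of the ssl module); note that on Python 3.10+ ssl.get_server_certificate also accepts a timeout argument directly.
import socket
import ssl

def fetch_cert(host, port=443, timeout=5):
    # per-connection timeout, independent of socket.setdefaulttimeout
    context = ssl.create_default_context()
    # disable verification so certs from hosts with broken chains are still returned
    context.check_hostname = False
    context.verify_mode = ssl.CERT_NONE
    with socket.create_connection((host, port), timeout=timeout) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            der_cert = tls.getpeercert(binary_form=True)
    return ssl.DER_cert_to_PEM_cert(der_cert)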

Related

Python: how to print whether a website supports my defined TLS protocol

How can I print an error when the TLS protocol I've chosen isn't supported by the website I'm targeting, and print a confirmation when it is supported?
I have no idea how to add an exception handler. Every time an error happens it says "Connection has been terminated by the external host", which means the TLS protocol isn't supported, and when the protocol is supported nothing happens.
import socket
import ssl
context = ssl.SSLContext(ssl.PROTOCOL_TLSv1_1)
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
ssl_sock = context.wrap_socket(s, server_hostname='www.verisign.com')
ssl_sock.connect(('www.verisign.com', 443))
This is a generic exception handler:
try:
    ssl_sock.connect(('www.verisign.com', 443))
except Exception as e:
    print("error: %s" % e)
That said, it is unusual to request a URL over a low-level socket connection; you can use a library such as requests or curl to fetch the URL instead.
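If the goal is specifically to report whether the server accepts the TLS version you configured, one option is to catch ssl.SSLError (raised when the handshake fails) separately from other connection errors. A rough sketch along those lines, reusing the host and protocol choice from the question:
import socket
import ssl

host = 'www.verisign.com'
context = ssl.SSLContext(ssl.PROTOCOL_TLSv1_1)

try:
    with socket.create_connection((host, 443), timeout=10) as sock:
        # the handshake runs inside wrap_socket because the socket is already connected
        with context.wrap_socket(sock, server_hostname=host) as ssl_sock:
            print("%s supports %s" % (host, ssl_sock.version()))
except ssl.SSLError as e:
    print("%s rejected this TLS version: %s" % (host, e))
except OSError as e:
    print("connection problem: %s" % e)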

Can't reach IP using Python httplib

I can't connect to anything on my network using the IP address of the host. I can open a browser and connect and I can ping the host just fine. Here is my code:
from httplib import HTTPConnection

addr = '192.168.14.203'
conn = HTTPConnection(addr)
conn.request('HEAD', '/')
res = conn.getresponse()
if res.status == 200:
    print "ok"
else:
    print "problem : the query returned %s because %s" % (res.status, res.reason)
The following error gets returned:
socket.error: [Errno 51] Network is unreachable
If I change the addr var to google.com I get a 200 response. What am I doing wrong?
You should check the address and your proxy settings.
For making HTTP requests I recommend the requests library. It's much more high-level and user-friendly than httplib, and it makes it easy to set proxies:
import requests
addr = "http://192.168.14.203"
response = requests.get(addr)
# if you need to set a proxy:
response = requests.get(addr, proxies={"http": "...proxy address..."})
# to avoid using any proxy if your system sets one by default
response = requests.get(addr, proxies={"http": None})

Python program to find active port on a website?

My college website is exposed on several ports, something like this:
http://www.college.in:913
I want a program to find the active ones, i.e., the port numbers on which the website responds.
Here is my code, but it takes a lot of time:
from urllib.request import Request, urlopen
from urllib.error import URLError, HTTPError

for i in range(1, 10000):
    req = Request("http://college.edu.in:" + str(i))
    try:
        response = urlopen(req)
    except URLError as e:
        print("Error at port " + str(i))
    else:
        print("Website is working fine on port " + str(i))
It might be faster to try to open a socket connection to each port in the range and only make an HTTP request if the socket is actually open. But iterating through that many ports is inherently slow: if each attempt takes 0.5 seconds and you're scanning 10,000 ports, that's over 80 minutes of waiting.
import socket

# create an INET, STREAMing socket
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# now connect to the web server on port 80 - the normal http port
s.connect(("www.python.org", 80))
s.close()
from https://docs.python.org/3/howto/sockets.html
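Putting that together, a rough sketch of the scan loop might look like this (hostname and port range taken from the question; the 0.5 second timeout is an arbitrary value to tune):
import socket

host = "www.college.in"
open_ports = []

for port in range(1, 10000):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(0.5)  # fail fast instead of waiting for the OS default
    try:
        if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
            open_ports.append(port)
    finally:
        s.close()

print(open_ports)
Most of that time is still spent waiting, so spreading the connection attempts over a thread pool (for example concurrent.futures.ThreadPoolExecutor) cuts the wall-clock time considerably.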
You might also consider profiling the code and finding out where the slow parts are.
You can also use python-nmap, a Python wrapper around the nmap port scanner.
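If you go the python-nmap route, usage is roughly as below; treat the exact API details as an assumption on my part, and note that the nmap binary itself must be installed as well:
import nmap  # pip install python-nmap

nm = nmap.PortScanner()
nm.scan('www.college.in', '1-10000')  # host from the question, range is illustrative
for host in nm.all_hosts():
    for port in nm[host].all_tcp():
        if nm[host]['tcp'][port]['state'] == 'open':
            print("%s:%d is open" % (host, port))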

urlopen with timeout fails behind proxy

Python 2.7.3 under Linux: I get strange behaviour when trying to use the timeout parameter:
from urllib2 import urlopen, Request, HTTPError, URLError

url = "http://speedtest.website-solution.net/speedtest/random350x350.jpg"

try:
    #f = urlopen(url, timeout=30)  # never works - always times out
    f = urlopen(url)  # always works fine, returns after < 2 secs
    print("opened")
    f.close()
    print("closed")
except IOError as e:
    print(e)
EDIT:
Digging into this more, it seems to be lower-level; the following code has the same issue:
import socket

s = socket.socket()
s.settimeout(30)
s.connect(("speedtest.website-solution.net", 80))  # times out
print("opened socket")
s.close()
It's running behind a SOCKS proxy, launched with tsocks python test.py. Could that be interfering with the socket timeout for some reason? It seems strange that timeout=None works fine, though.
OK, figured it out. This is indeed related to the proxy. No idea why, but the following code seems to fix it:
Source: https://code.google.com/p/socksipy-branch/
Put this at the start of the code:
import urllib2
from urllib2 import urlopen, Request, HTTPError, URLError
import httplib
import socks
import socket
socks.setdefaultproxy(socks.PROXY_TYPE_SOCKS5, "192.168.56.1", 101)
socks.wrapmodule(urllib2)
Now everything works fine.

How to handle timeouts with httplib (python 2.6)?

I'm using httplib to access an API over HTTPS and need to build in exception handling in the event that the API is down.
Here's an example connection:
connection = httplib.HTTPSConnection('non-existent-api.com', timeout=1)
connection.request('POST', '/request.api', xml, headers={'Content-Type': 'text/xml'})
response = connection.getresponse()
This should time out, so I was expecting an exception to be raised, but response.read() just returns an empty string.
How can I know if there was a timeout? Even better, what's the best way to gracefully handle the problem of a 3rd-party api being down?
Even better, what's the best way to gracefully handle the problem of a 3rd-party api being down?
What do you mean by the API being "down": the API returning HTTP 404, 500, and so on, or the API not being reachable at all?
First of all, I don't think you can know whether a web service is down without trying to access it, so for the first case I would do something like this:
import httplib

conn = httplib.HTTPConnection('www.google.com')  # HTTP rather than HTTPS here, for simplicity
conn.request('HEAD', '/')  # just send an HTTP HEAD request
res = conn.getresponse()
if res.status == 200:
    print "ok"
else:
    print "problem : the query returned %s because %s" % (res.status, res.reason)
And for checking whether the API is unreachable, I think you are better off using a try/except:
import httplib
import socket

try:
    # the timeout is only needed if you also want to measure the response time
    conn = httplib.HTTPSConnection('www.google.com')
    conn.connect()
except (httplib.HTTPException, socket.error) as ex:
    print "Error: %s" % ex
You can mix the two approaches if you want something more general. Hope this helps.
urllib and httplib don't always expose a timeout. You can import socket and set a default timeout for all new sockets instead:
import socket
socket.setdefaulttimeout(10)  # or whatever timeout you want
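To answer the "how can I know if there was a timeout" part: once a timeout is in effect (per connection or via setdefaulttimeout), the failure surfaces as socket.timeout, which you can catch before the broader socket.error. A minimal sketch, with a placeholder payload standing in for the question's xml variable:
import httplib
import socket

body = '<request/>'  # placeholder payload

try:
    conn = httplib.HTTPSConnection('non-existent-api.com', timeout=1)
    conn.request('POST', '/request.api', body, headers={'Content-Type': 'text/xml'})
    response = conn.getresponse()
    print(response.status)
except socket.timeout:
    # the connect or the read exceeded the 1-second timeout
    print("request timed out")
except (httplib.HTTPException, socket.error) as ex:
    print("request failed: %s" % ex)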
This is what I found to work correctly with httplib2. Posting it as it might still help someone:
import httplib2, socket

def check_url(url):
    h = httplib2.Http(timeout=0.1)  # 100 ms timeout
    try:
        resp = h.request(url, 'HEAD')
    except (httplib2.HttpLib2Error, socket.error) as ex:
        print "Request timed out for ", url
        return False
    return int(resp[0]['status']) < 400
