I'm trying to figure out why Python code is throwing an SSLCertVerificationError for valid Let's Encrypt certificates on a virtual host serving multiple domains and certificates from the same IP. If I delete all certificates except one, everything is fine, but with more than one certificate, requests ignores the domain the request was sent to and pulls the most recent Let's Encrypt certificate, which is the wrong one and causes the SSLCertVerificationError.
My understanding was that under SNI (Server Name Indication), requests should only pull the certificate for the domain the request is being made to, not simply the most recent one. I have checked, and I'm running Python 3.8 and requests 2.5 against a version of Nginx that has been compiled with SNI support. I can suppress the error by turning off SSL validation, but that seems a poor workaround.
Any idea what is going on?
Why does SNI work fine when a browser requests a page from Nginx, pulling the proper certificate, but fail when the same request is made with Python's requests package?
I have read everything I can find, and the docs say it should just work under current builds of nginx, requests, OpenSSL, etc., but it clearly isn't working here.
To replicate: I can run requests.get('https://kedrosky.org') error-free from a local machine. But in scripts run on that server -- a hosted domain -- a newer certificate for the wrong domain is returned, causing an SSLCertVerificationError.
The problem is likely that the server configuration is only done properly for IPv4, even though the domain also resolves to an IPv6 address. With IPv4 it returns the correct certificate:
$ openssl s_client -connect kedrosky.org:443 -4
...
subject=CN = kedrosky.com
But with IPv6 it returns a different certificate (this needs IPv6 connectivity to the internet on your local machine):
$ openssl s_client -connect kedrosky.org:443 -6
...
subject=CN = paulandhoward.com
Likely this is because there is only a listen 443 directive but no listen [::]:443, the latter being needed for IPv6. In that case the virtual hosts only work properly over IPv4; over IPv6 the server just returns the default, which is usually the first certificate configured.
And the reason that you are seeing different results from different hosts is that one has only IPv4 connectivity while the other can do IPv6 too.
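If you want to reproduce the same check from Python rather than openssl (a minimal sketch, assuming the host name from the question; it just mirrors the -4/-6 openssl calls above), you can resolve the name separately per address family and attempt a fully verified handshake over each. On a setup like the one described, the IPv4 path should verify while the IPv6 path fails with a hostname mismatch:

import socket
import ssl

HOST, PORT = "kedrosky.org", 443  # host taken from the question

def check(family, label):
    # Resolve the name for one address family only, then try a verified TLS handshake.
    addr = socket.getaddrinfo(HOST, PORT, family, socket.SOCK_STREAM)[0][4]
    ctx = ssl.create_default_context()
    try:
        with socket.create_connection(addr[:2], timeout=10) as sock:
            with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
                subject = dict(item[0] for item in tls.getpeercert()["subject"])
                print("%s: verified, certificate CN = %s" % (label, subject.get("commonName")))
    except ssl.SSLCertVerificationError as exc:
        print("%s: verification failed: %s" % (label, exc.verify_message))

check(socket.AF_INET, "IPv4")
check(socket.AF_INET6, "IPv6")  # needs IPv6 connectivity from the local machine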
Related
So I have this web application running with Python's Flask, and I use gevent.pywsgi.WSGIServer to make my application ready for production. My website is accessible from the Internet on all my devices, and even from others on different networks.
However, I tried to add HTTPS support by running the certbot/letsencrypt process. I passed the checks and obtained the certfile and keyfile, but when I pass them as arguments in the following call:
app_server = gevent.pywsgi.WSGIServer(
    (CONFIG['Flask']['host'], int(CONFIG['Flask']['port'])),
    app,
    certfile="fullchain.pem",
    keyfile="privkey_rsa.pem"
)
Well, I get this error: ssl.SSLError: [SSL] PEM lib.
PS: I opened my port for the HTTPS server.
That's why I wonder if the problem comes from:
the domain name passed for the letsencrypt test
domain name registrar
...
Or something else?
Thank you in advance.
I know this question is old and I came here looking for an answer to something else, but I have gone through your exact situation and couldn't help answering it.
I actually resolved the situation the way a production website is normally set up. I registered with a free DNS service and routed all traffic hitting my router on ports 80/443 to a virtual machine on the LAN running NGINX, where I had already set up Let's Encrypt certs. With this setup I don't have to enable SSL on any other machine in the network. Off topic, but to enable SSL at the local network level you would need certificates on every machine in the network; instead, you can simply forward requests from NGINX to any machine on your local network over plain HTTP, while to the outside world all the traffic happens over SSL.
The NGINX configuration for a LAN like this is simple, and you can put one together yourself with a little googling; the basic structure is one server block with several child location blocks, where each location block corresponds to one web application on the LAN.
Hope this helps a bit. I can put a more detailed answer with specific steps if you are still looking.
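As for the original ssl.SSLError: [SSL] PEM lib error itself, one quick check (a sketch only, reusing the file names from the question) is to load the certificate/key pair directly with Python's ssl module before handing it to WSGIServer; load_cert_chain raises the same error when either file does not parse as PEM or the key does not match the certificate:

import ssl

CERT, KEY = "fullchain.pem", "privkey_rsa.pem"  # file names from the question

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
try:
    ctx.load_cert_chain(certfile=CERT, keyfile=KEY)
    print("certificate and key load cleanly and match")
except ssl.SSLError as exc:
    # This is the same failure WSGIServer would surface at startup.
    print("problem with the PEM files:", exc)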
I'm attempting to use requests to access a remote server over SSL. Unfortunately it's misconfigured such that it responds with the error TLSV1_UNRECOGNIZED_NAME during the SNI handshake, which is ignored by browsers but raises an exception in requests.
This appears to be the same issue as this question, but in Python rather than Java: SSL handshake alert: unrecognized_name error since upgrade to Java 1.7.0
The connection works perfectly in Python 2, which doesn't support SNI. How can I disable SNI in Python 3 so that the connection works?
I couldn't find a way to disable SNI on the requests level, but I found a hack that will trick it into thinking SNI isn't supported. Add this code before importing requests (not after, or it will be too late):
import ssl
ssl.HAS_SNI = False
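To make the ordering explicit, here is a minimal usage sketch (the URL is just a placeholder for a server that rejects SNI with TLSV1_UNRECOGNIZED_NAME); the patch has to run before requests, and therefore urllib3, is imported:

import ssl
ssl.HAS_SNI = False  # must be set before importing requests/urllib3

import requests

# Placeholder URL: substitute the misconfigured server from your own setup.
response = requests.get("https://misconfigured.example.com/")
print(response.status_code)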
I'm having a problem developing a "provider" for APNS. My server is trying to send messages using apns-client, and it seems there are no problems occurring while sending messages, but the device isn't receiving any messages at all.
Recently I changed the *.pem file to a new one. Messages were received properly while using the previous *.pem file, so I'm sure there are no problems with the server connection or the sending script (written in Python). The most likely reason is that the old *.pem file is valid but the new one is not.
I would really like the APNS server to return an "error" response if the *.pem file is invalid, but it seems that neither the APNS server nor the apns-client library returns any error signal even when the *.pem file is invalid. I confirmed this by adding one hundred 'a's to the line before -----END RSA PRIVATE KEY----- in the *.pem file and running the same Python script: it still produced no error messages.
Since the APNS server returns no error messages, it's nearly impossible to tell whether the *.pem file is valid. Is there any way to check whether a *.pem file is valid?
Here's some troubleshooting info suggested by Apple:
Problems Connecting to the Push Service
One possibility is that your server is unable to connect to the push service. This can mean that you don't have the certificate chain needed for TLS/SSL to validate the connection to the service. In addition to the SSL identity (certificate and associated private key) created by Member Center, you should also install the Entrust CA (2048) root certificate on your provider. This allows TLS/SSL to verify the full APNs server cert chain. If you need to get this root certificate, you can download it from Entrust's site. Also verify that these identities are installed in the correct location for your provider and that your provider has permission to read them.
You can test the TLS/SSL handshake using the OpenSSL s_client command, like this:
$ openssl s_client -connect gateway.sandbox.push.apple.com:2195 -cert YourSSLCertAndPrivateKey.pem -debug -showcerts -CAfile server-ca-cert.pem
where server-ca-cert.pem is the Entrust CA (2048) root certificate.
Be sure the SSL identity and the hostname are the correct ones for the push environment you're testing. You can configure your App ID in Member Center separately for the sandbox and production environment, and you will be issued a separate identity for each environment. Using the sandbox SSL identity to try to connect to the production environment will return an error like this:
CRITICAL | 14:48:40.304061 | Exception creating ssl connection to Apple: [Errno 1] _ssl.c:480: error:14094414:SSL routines:SSL3_READ_BYTES:sslv3 alert certificate revoked
To test your PRODUCTION cert, open Terminal and run:
openssl s_client -connect gateway.push.apple.com:2195 -cert PushProdCer.pem -key PushProdKey.pem
I am not familiar with the Python client you are using, but surely there is a way to simply attempt opening a connection with Apple's push notification servers and detect whether that connection fails. If the connection fails, then something is wrong with the PEM file - either the format or the certificate values themselves.
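For example (a rough sketch in Python 3 syntax, not tied to apns-client; it uses the legacy binary gateway host and the placeholder PEM file name from the openssl example above, and assumes the PEM contains both the certificate and the private key), you can attempt the TLS handshake yourself and treat an SSLError as a sign that the PEM file is bad:

import socket
import ssl

GATEWAY = ("gateway.push.apple.com", 2195)  # sandbox: gateway.sandbox.push.apple.com
PEM_FILE = "YourSSLCertAndPrivateKey.pem"   # placeholder name from the openssl example above

ctx = ssl.create_default_context()  # may need cafile=... for the Entrust root, as Apple's note says
try:
    ctx.load_cert_chain(PEM_FILE)   # fails here if the PEM itself is malformed
    with socket.create_connection(GATEWAY, timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=GATEWAY[0]):
            print("TLS handshake with APNs succeeded; the PEM file looks usable")
except ssl.SSLCertVerificationError as exc:
    print("could not verify Apple's server certificate (check the CA root):", exc)
except ssl.SSLError as exc:
    print("handshake failed; the PEM file is probably invalid:", exc)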
If you want an error message that's a little more informative than "pass or fail," I recommend looking into third-party shell scripts that can return some basic information about the PEM file. This thread contains a few sample scripts.
Of course, you can also check for some basic format validations that are widely available. I provided one such example here but there are others.
The code below:
import urllib2
file = urllib2.urlopen("http://foo.bar.com:82")
works just fine on my Mac (OS X 10.8.4 running Python 2.7.1). It opens the URL and I can parse the file with no problems.
When I try the EXACT same code (these two lines) on GoDaddy (Python 2.7.3 or 2.4), I receive an error:
urllib2.URLError: <urlopen error (111, 'Connection refused')>
The problem has something to do with port :82, which is an essential part of the address. I have tried using a forwarding address with masking, etc., and nothing works.
Any idea why it would work in one environment and not in the other (ostensibly similar) environment? Any ideas how to get around this? I also tried Mechanize to no avail. Previous posts have suggested focusing on urllib2.HTTPBasicAuthHandler, but it works fine on my OS X environment without anything special.
Ideas are welcome.
Connection refused means that your operating system tried to contact the remote host, but got a "closed port" message.
Most likely this is because of a firewall between GoDaddy and foo.bar.com: either foo.bar.com is only reachable from your computer or your local network, or GoDaddy is blocking outgoing access to unusual ports.
From a quick look at the GoDaddy support forums, it looks like they only support outgoing requests to ports 80 (HTTP) and 443 (HTTPS) on their shared hosts. See e.g.
http://support.godaddy.com/groups/web-hosting/forum/topic/curl-to-ports-other-than-80/
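To see the difference from Python directly (a small sketch; foo.bar.com and port 82 are the placeholders from the question), you can probe the ports with a raw TCP connect. An actively rejected or closed port shows up as errno 111 (connection refused), while a silently filtering firewall usually shows up as a timeout:

import errno
import socket

HOST = "foo.bar.com"  # placeholder host from the question

def probe(port):
    try:
        # Raw TCP connect only; no HTTP involved, so this tests pure reachability.
        socket.create_connection((HOST, port), timeout=10).close()
        print("port %d: reachable" % port)
    except socket.timeout:
        print("port %d: timed out (likely filtered by a firewall)" % port)
    except socket.error as exc:
        if exc.errno == errno.ECONNREFUSED:
            print("port %d: connection refused (errno 111)" % port)
        else:
            print("port %d: %s" % (port, exc))

for port in (80, 443, 82):
    probe(port)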
I've got a python script that simply grabs a page with urllib2, and then proceeds to use BeautifulSoup to parse that stuff. Code is:
import sys
import urllib2

class Foo(Bar):
    def fetch(self):
        try:
            self.mypage = urllib2.urlopen(self.url + 'MainPage.htm', timeout=30).read()
        except urllib2.URLError:
            sys.stderr.write("Error: system at %s not responding\n" % self.url)
            sys.exit(1)
The system I'm trying to access is remote and behind a Linux router that does port forwarding between the public static IP and the LAN IP of the actual system.
I was getting failures on some systems, and at first I suspected a bug in urllib2/Python, or some weird TCP behaviour (the HTTP server is actually an embedded card in some industrial system). But then I tried other systems: urllib2 works as expected, and I can also access the HTTP server correctly using links2 or wget even on systems where urllib2 fails.
Ubuntu 10.04 LTS 32-bit behind Apple AirPort NAT on remote ADSL: everything works
Mac OS X 10.6, both on the LAN with the server and remote behind NAT, etc.: everything works
Ubuntu 10.04 LTS 64-bit with a public IP: urllib2 times out, links and wget work
Gentoo Linux with a public IP: urllib2 times out, links and wget work
I have verified with tcpdump on the Linux router (the HTTP server end) that urllib2 always completes the TCP handshake, even from the problematic systems, but then it just seems to hang. I tried toggling syncookies and ECN on and off, but that didn't change anything.
How could I debug and possibly solve this issue?
You could also switch to using httplib2.
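Another thing worth trying (a sketch in the spirit of the question's Python 2 code, not part of the original answer): turn on urllib2's wire-level debug output and set a short timeout, which at least shows whether the transfer stalls before or after the request line goes out. The URL below is a placeholder for the embedded system's address:

import urllib2

# debuglevel=1 makes the underlying httplib print each request and response
# line as it crosses the wire, showing where the transfer stalls.
opener = urllib2.build_opener(urllib2.HTTPHandler(debuglevel=1))
urllib2.install_opener(opener)

try:
    page = urllib2.urlopen("http://192.0.2.1/MainPage.htm", timeout=10).read()
    print("got %d bytes" % len(page))
except urllib2.URLError as exc:
    print("failed: %s" % exc)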
After nearly 17 months I don't have access to that specific system anymore, so I won't be able to accept any real answer to this question.
At least I can tell future readers what answers are not good:
changing to httplib2
no, we're not getting ICMP redirects
no, we don't even drop ICMP fragmentation packets
cheers.