I intend to connect to the remote host example.com over TLS but I have to connect through a proxy IP address with DNS name example-proxy.com.
I don't have control over the SSL certificate and I cannot ask the admin at example.com to add example-proxy.com to its certificate's SAN.
Using example-proxy.com would cause OpenSSL to error out because the host name does not match the name in the certificate. How can I split the host parameter into two: (1) the domain name for the network connection and (2) the domain name for the certificate verification?
I don't have the resources to modify the OpenSSL library but I can make changes to the Python libraries. According to this doc, I could have modified the match_hostname method to implement this feature but it is no longer available as of Python 3.7+.
Asks
How can I use Python 3.7+ to specify both a host name and a certificate name?
From a security standpoint, how could my implementation go wrong?
Just give different hostnames for the TCP connection and the TLS handshake, i.e. set server_hostname in wrap_socket. Modifying the example from the official documentation:
import socket
import ssl
tls_hostname = 'www.example.com'
context = ssl.create_default_context()
with socket.create_connection(('127.0.0.1', 8443)) as sock:
    with context.wrap_socket(sock, server_hostname=tls_hostname) as ssock:
        print(ssock.version())
This will connect to ('127.0.0.1',8443) but do the TLS handshake with www.example.com.
Note that this will use tls_hostname both for the SNI extension in the TLS handshake and for validating the certificate. But that seems to be exactly what you need based on your question: connect to IP:port but do the TLS handshake and validation against a specific hostname.
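Applied to your proxy scenario, a minimal sketch (assuming example-proxy.com is where the TCP connection must go, port 443, and the server certificate is issued for example.com) could look like this:
import socket
import ssl

proxy_host = 'example-proxy.com'   # where the TCP connection actually goes
cert_hostname = 'example.com'      # name expected in the server's certificate

context = ssl.create_default_context()

# Connect to the proxy address, but send SNI for and verify the certificate of example.com.
with socket.create_connection((proxy_host, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=cert_hostname) as ssock:
        print(ssock.version())
As for the security question: as long as certificate verification and check_hostname stay enabled on the context (they are with create_default_context()), the main pitfall would be disabling them to work around a name mismatch, which would open the connection to man-in-the-middle attacks.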
Related
I am trying to create a connection to a TLS (TLSv1) secured MQTT broker (RabbitMQ with the MQTT plugin enabled) with the Python implementation of the Eclipse Paho client. The same works fine with the MQTTFX application, which is based on the Java implementation of Paho. For this I am using self-signed certificates.
Java version uses:
CA-File: ca_certificate.crt
Client Certificate client_cert.crt
Client Key File: client_key.key
Python Version should use:
CA-File: ca_certificate.pem
Client Certificate: client_cert.pem
Client key file: client_key.key
I tried to establish a connection like this:
import ssl
import paho.mqtt.client as paho
# Locations of CA Authority, client certificate and client key file
ca_cert = "ca_certificate.pem"
client_cert = "client_certificate.pem"
client_key = "client_key.pem"
# Create ssl context with TLSv1
context = ssl.SSLContext(ssl.PROTOCOL_TLSv1)
context.load_verify_locations(ca_cert)
context.load_cert_chain(client_cert, client_key)
# Alternative to using ssl context but throws the exact same error
# client.tls_set(ca_certs=ca_cert, certfile=client_cert, keyfile=client_key, tls_version=ssl.PROTOCOL_TLSv1)
client = paho.Client()
client.username_pw_set(username="USER", password="PASSWORD")
client.tls_set_context(context)
client.tls_insecure_set(False)
client.connect_async(host="HOSTNAME", port="PORT")
client.loop_forever()
Which results in the following error:
ssl.SSLError: [SSL: NO_CIPHERS_AVAILABLE] no ciphers available (_ssl.c:997)
Could it be that I need to explicitly pass a cipher that the broker supports, or could it be due to an older OpenSSL version? I am a little bit lost right now; maybe someone has a clue on how to solve this.
Edit: I got it to work by myself but still not sure why exactly it works now.
Changed context = ssl.SSLContext(ssl.PROTOCOL_TLSv1)
to context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
Changed client.tls_insecure_set(False)
to client.tls_insecure_set(True)
PROTOCOL_TLSv1 forces the client to only use TLS v1.0, which is old, and unless you have explicitly forced your broker to only accept that same version it is unlikely to match.
Using PROTOCOL_TLS_CLIENT allows Python to negotiate across the full range from TLS v1.0 to TLS v1.3 until it finds a version that both the client and the broker support.
Why you have to set client.tls_insecure_set(True) is hard to answer without knowing more about the certificates you are using with the broker. Does the broker certificate contain a CN/SAN entry that matches the HOSTNAME you are using to connect? The documentation says PROTOCOL_TLS_CLIENT explicitly enforces the hostname check:
ssl.PROTOCOL_TLS_CLIENT
Auto-negotiate the highest protocol version that both the client and
server support, and configure the context for client-side connections. The
protocol enables CERT_REQUIRED and check_hostname by default.
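For reference, a minimal sketch of the verifying setup (the certificate file names are the placeholders from the question; the broker hostname and port are assumptions and must match a name in the broker certificate):
import ssl
import paho.mqtt.client as paho

context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)   # negotiates the highest common TLS version
context.load_verify_locations("ca_certificate.pem")
context.load_cert_chain("client_certificate.pem", "client_key.pem")
# PROTOCOL_TLS_CLIENT already sets verify_mode=CERT_REQUIRED and check_hostname=True

client = paho.Client()
client.username_pw_set(username="USER", password="PASSWORD")
client.tls_set_context(context)
client.tls_insecure_set(False)                       # keep hostname verification enabled
client.connect("broker.example.org", 8883)           # placeholder host/port; name must match the cert's CN/SAN
client.loop_forever()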
I'm writing some Python code that needs to communicate with a remote host via a TLS connection. I set up an SSL context like this:
ctx = ssl.SSLContext(ssl.PROTOCOL_TLSv1_2)
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE
Then, I connected to domain d over port p like this:
s = ctx.wrap_socket(socket.create_connection((d, p)))
I was met with a protocol violation on an unexpected EOF. The fix was to create the socket like this:
s = ctx.wrap_socket(socket.create_connection((d, p)), server_hostname=d)
As I know next to nothing about TLS, this is pretty confusing. Why would the server hostname be required for a successful connection?
If it matters, I tested a connection to domain d = 'drewdevault.com' on port p = 1965; I'm writing a Gemini client. This was not reproducible with all remote hosts.
The server_hostname argument is used in the TLS handshake to provide the server with the expected hostname (the SNI extension). It is not strictly required by TLS, but it is needed on servers which host multiple certificates for different domains on the same IP address. Without this information the server does not know which certificate to send to the client.
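A minimal sketch of the working variant from the question (certificate verification deliberately stays disabled, as in the original code, so only SNI is affected):
import socket
import ssl

d, p = 'drewdevault.com', 1965     # host and port from the question

ctx = ssl.SSLContext(ssl.PROTOCOL_TLSv1_2)
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE    # verification stays off, as in the question

# server_hostname is sent as the SNI extension so the server can pick the right certificate
with ctx.wrap_socket(socket.create_connection((d, p)), server_hostname=d) as s:
    print(s.version())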
I'm trying to implement mutual authentication on a ftps connection using ftplib module.
Here is my code:
import ftplib
import ssl

Context = ssl.SSLContext(ssl.PROTOCOL_TLSv1_2)
Context.load_verify_locations(cafile="trusted.txt", capath=path)
Context.load_cert_chain(certfile="mycert.txt", keyfile="mikey.txt", password="xxxx")
Context.verify_mode = ssl.CERT_REQUIRED
Ftp = ftplib.FTP_TLS(context=Context)
Ftp.connect(host, port)
Ftp.auth()
Ftp.prot_p()
Ftp.set_pasv(True)
Ftp.cwd(dest_dir)
Ftp.storlines(xx, xx)
Ftp.close()
However, the above works fine only when client authentication is set to "no" on the FTPS server side. When we try with client auth set to "yes",
the error is as below:
Ssl.SSLError: [SSL:SSLV3_ALERT_CERTIFICATE_UNKNOWN] sslv3 alert certificate unknown (_ssl.c:777)
I have the server's certificate chain defined in the CA file.
My client certificate is defined as trusted on the server side.
Still the connection doesn't work, even though it works fine if client auth is disabled on the server side.
Any suggestions on what could be wrong? Could it be ciphers?
I tried setting up ciphers but don't know how the exchange happens in real time. Or could it be that ftplib does not fully support mutual authentication at all?
Ssl.SSLError: [SSL:SSLV3_ALERT_CERTIFICATE_UNKNOWN] sslv3 alert certificate unknown (_ssl.c:777)
If you get this error in the client then the server failed to validate the client certificate, i.e. your mycert.txt and mikey.txt.
Since validation of the client certificate is done by the server, you have to look at the server configuration and logs for more information about why your client certificate was not accepted. Typical problems are that the client certificate is self-signed, that the CA which issued the client certificate is not trusted by the server, or that intermediate certificates are required to verify the certificate but the client is not sending them.
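If missing intermediates turn out to be the issue, one common fix (a sketch; "client_chain.pem" and the host/port are placeholders, the other file names are from the question) is to concatenate the client certificate and its intermediate CA certificates into the file passed as certfile, so load_cert_chain sends the whole chain:
import ftplib
import ssl

context = ssl.SSLContext(ssl.PROTOCOL_TLSv1_2)
context.load_verify_locations(cafile="trusted.txt")

# "client_chain.pem" is assumed to contain the client certificate followed by any
# intermediate CA certificates (PEM, in that order); the private key stays separate.
context.load_cert_chain(certfile="client_chain.pem", keyfile="mikey.txt", password="xxxx")

ftp = ftplib.FTP_TLS(context=context)
ftp.connect("ftp.example.org", 21)   # placeholder host/port
ftp.auth()
ftp.prot_p()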
Has anyone found a way to specify a DNS server for OpenSSL connections on a Linux OS? We have internal and external DNS servers and I am building a monitor for SSL certificate usage. I need the ability to specify a DNS server to be used on hostname connections. It works just fine against the internal DNS, but I am having difficulty finding a way to tie in a DNS server. I am fairly new to changing networks through Python and am not sure where to begin. Is it possible to do this through the dns.resolver module's nameservers function?
This looks like a viable solution for Windows, but I am hoping to find something similar for Linux.
How to Change DNS Servers Programmatically in Windows?
Below is my code that works against the default DNS host.
# Assumes "import socket" and "from OpenSSL import SSL" (pyOpenSSL) at module level.
def readCerts(self, host, port, cast):
    """readCerts connects to host:port over SSL and returns the peer certificate attributes.
    Attributes:
        host: Host or IP of SSL connection
        port: Port of SSL connection
        cast: Format of returned results (JSON currently only structure supported)
    Response:
        Returns certificate attributes in specified format
    """
    sslContext = SSL.Context(SSL.SSLv23_METHOD)
    apiSocket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sslConnection = SSL.Connection(sslContext, apiSocket)
    try:
        sslConnection.connect((host, port))
    except Exception as e:
        raise e
    else:
        # Block the socket
        sslConnection.setblocking(1)
        # Set the hostname field for servers that support SNI. Format must be a bytestring.
        sslConnection.set_tlsext_host_name(host.encode('utf-8'))
        try:
            sslConnection.do_handshake()
        except:
            pass
        else:
            # print("handshake succeeded")
            sslConnection.close()
        if cast.upper() == 'JSON':
            attributes = self._FormatJSON(sslConnection.get_peer_cert_chain())
            return attributes
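One approach that might work (a sketch using dnspython, untested against this setup; the nameserver IP and hostname are placeholders) is to resolve the hostname against a specific nameserver, connect to the returned IP, and keep the original hostname for SNI:
import socket
import dns.resolver           # dnspython
from OpenSSL import SSL       # pyOpenSSL

def resolve_with(nameserver, hostname):
    """Resolve hostname using a specific DNS server instead of the system default."""
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = [nameserver]
    answer = resolver.resolve(hostname, 'A')   # use resolver.query() on older dnspython
    return answer[0].address

ip = resolve_with('10.0.0.53', 'internal.example.com')   # placeholder DNS server and host

ctx = SSL.Context(SSL.SSLv23_METHOD)
conn = SSL.Connection(ctx, socket.socket(socket.AF_INET, socket.SOCK_STREAM))
conn.connect((ip, 443))                                   # connect to the resolved IP
conn.set_tlsext_host_name(b'internal.example.com')        # keep SNI as the original hostname
conn.do_handshake()
print(conn.get_peer_cert_chain())
conn.close()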
I try to connect to a FTP Server which only supports TLS 1.2
Using Python 3.4.1
My Code:
import ftplib
import ssl
ftps = ftplib.FTP_TLS()
ftps.ssl_version = ssl.PROTOCOL_TLSv1_2
print (ftps.connect('108.61.166.122',31000))
print(ftps.login('test','test123'))
ftps.prot_p()
print (ftps.retrlines('LIST'))
Error on client side:
ssl.SSLEOFError: EOF occurred in violation of protocol (_ssl.c:598)
Error on server side:
Failed TLS negotiation on control channel, disconnected. (SSL_accept():
error:140760FC:SSL routines:SSL23_GET_CLIENT_HELLO:unknown protocol)
The credentials in the example are working for testing.
See the end of this post for the final solution. The rest are the steps needed to debug the problem.
I try to connect to a FTP Server which only supports TLS 1.2 Using Python 3.4.1
How do you know?
ssl.SSLEOFError: EOF occurred in violation of protocol (_ssl.c:598)
I would suspect one of the many possible SSL problems between client and server, like the server not supporting TLS 1.2, no common ciphers, etc. These problems are hard to debug because you either get only some SSL alert or the server simply closes the connection without any obvious reason. If you have access to the server, look for error messages on the server side.
You may also try not to enforce an SSL version but use the default instead, so that client and server agree on the best SSL version both support. If this still does not work, try with a client which is known to work with this server, make a packet capture of the good and the bad connection, and compare them. If you need help with that, post the packet captures to cloudshark.org.
Edit#1: just tried it with python 3.4.0 and 3.4.2 against a test server:
python 3.4.0 does a TLS 1.0 handshake, i.e. ignores the setting
python 3.4.2 does a successful TLS 1.2 handshake
In both versions ftplib has the minor bug that it sends AUTH SSL instead of AUTH TLS if ftps.ssl_version is something other than TLS 1.0, e.g. SSLv3 or TLS 1.1+. While I doubt that this is the origin of the problem, it might actually be if the FTP server handles AUTH TLS and AUTH SSL differently.
Edit#2 and Solution:
A packet capture shows that setting ftps.ssl_version has no effect and the SSL handshake will still be done with TLS 1.0 only. Looking at the source code of ftplib in 3.4.0 gives:
ssl_version = ssl.PROTOCOL_TLSv1

def __init__(self, host='', user='', passwd='', acct='', keyfile=None,
             certfile=None, context=None,
             timeout=_GLOBAL_DEFAULT_TIMEOUT, source_address=None):
    ....
    if context is None:
        context = ssl._create_stdlib_context(self.ssl_version,
                                             certfile=certfile,
                                             keyfile=keyfile)
    self.context = context
Since __init__ is called when ftplib.FTP_TLS() is executed, the SSL context is created with the default ssl_version used by ftplib (ssl.PROTOCOL_TLSv1) and not with your own version. To enforce another SSL version you must provide your own context with the needed SSL version. The following works for me:
import ftplib
import ssl
ctx = ssl._create_stdlib_context(ssl.PROTOCOL_TLSv1_2)
ftps = ftplib.FTP_TLS(context=ctx)
print (ftps.connect('108.61.166.122',31000))
print(ftps.login('test','test123'))
ftps.prot_p()
print (ftps.retrlines('LIST'))
Alternatively you could set the protocol version globally instead of only for this FTP_TLS object:
ftplib.FTP_TLS.ssl_version = ssl.PROTOCOL_TLSv1_2
ftps = ftplib.FTP_TLS()
And just a small but important observation: it looks like ftplib does not do any kind of certificate validation, since it accepts this self-signed certificate, which does not match the name, without complaining. This makes an active man-in-the-middle attack possible. Hopefully they will fix this insecure behavior in the future, in which case the code here will fail because of an invalid certificate.
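If you also want certificate validation on top of the version fix, a sketch (the hostname and the CA file "ca.pem" are placeholders; this relies on the hostname/SNI support ftplib gained in Python 3.4) would be:
import ftplib
import ssl

# "ca.pem" is a placeholder for the CA (or the self-signed server certificate) to trust.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLSv1_2)
ctx.verify_mode = ssl.CERT_REQUIRED
ctx.check_hostname = True
ctx.load_verify_locations(cafile="ca.pem")

ftps = ftplib.FTP_TLS(context=ctx)
ftps.connect('ftp.example.org', 31000)   # placeholder hostname; must match a name in the certificate
ftps.login('test', 'test123')
ftps.prot_p()
print(ftps.retrlines('LIST'))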
Firstly, AFAIK no plain FTP server supports SSL directly, which is why FTPS was introduced. Also, SFTP and FTPS are two different concepts: http://en.wikipedia.org/wiki/FTPS. Now, your problem is about the programming and is not related to SSL, FTPS or any such client-server communication:
import ftplib
import ssl
ftps = ftplib.FTP_TLS()
#ftps.ssl_version = ssl.PROTOCOL_TLSv1_2
print (ftps.connect('108.61.166.122',31000))
print(ftps.login('test','test123'))
ftps.prot_p()
print (ftps.retrlines('LIST'))
as in my environment ssl has no attribute PROTOCOL_TLSv1_2; apart from that it works fine, and well, your host is not responding!
Hopefully this helps!