Cannot connect to aiohttp server serving over HTTPS - python

Background: I'm writing a web server using aiohttp with a websocket endpoint at /connect. The app was originally served via HTTP (and clients would connect to ws://host/connect). This worked locally using localhost, but when I deployed to Heroku, the app was served via HTTPS and it didn't allow clients to connect to an insecure websocket. Therefore, I tried to change my server so that it would use HTTPS locally. Now the client can't even complete the TLS handshake with the server. Here is my setup:
server.py
from aiohttp import web
import ssl

app = web.Application()
app.router.add_get('/', handle)            # 'handle' and 'wshandler' are defined
app.router.add_get('/connect', wshandler)  # elsewhere in the script

ssl_context = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
ssl_context.load_default_certs()

web.run_app(app, port=8443, ssl_context=ssl_context)
# web.run_app(app, port=8443)  # original HTTP version
When I run the server and try to navigate to https://localhost:8443/ (using Chrome 80), I get the following traceback:
Traceback (most recent call last):
File "/Users/peterwang/anaconda3/lib/python3.7/asyncio/sslproto.py", line 625, in _on_handshake_complete
raise handshake_exc
File "/Users/peterwang/anaconda3/lib/python3.7/asyncio/sslproto.py", line 189, in feed_ssldata
self._sslobj.do_handshake()
File "/Users/peterwang/anaconda3/lib/python3.7/ssl.py", line 763, in do_handshake
self._sslobj.do_handshake()
ssl.SSLError: [SSL: NO_SHARED_CIPHER] no shared cipher (_ssl.c:1056)
I looked at ssl_context.get_ciphers() and found that it does include the cipher suites Chrome 80 offers, including the TLS 1.3 ones. I also used Wireshark to trace the communication between the client and my server. I can see the TLS Client Hello, which advertises TLS 1.0 through TLS 1.3 and a multitude of ciphers that overlap with ssl_context.get_ciphers(). There is no response from the server.
Does anyone have any advice? (I am using Python 3.7, OpenSSL 1.1.1d, and aiohttp 3.6.2)

An SSL server has to be configured with a certificate matching the server's domain and with the associated private key, typically via load_cert_chain. Your server is not configured with a server certificate and key, so it cannot offer any cipher suites that require one - which rules out essentially every cipher suite a client will ask for. With no cipher suites in common, the handshake fails with exactly this error.
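As a rough sketch of that fix (assuming a self-signed certificate generated for localhost; cert.pem and key.pem are placeholder file names):
# e.g. openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
#          -keyout key.pem -out cert.pem -subj /CN=localhost
from aiohttp import web
import ssl

async def handle(request):
    return web.Response(text="ok")   # stand-in for the question's handlers

app = web.Application()
app.router.add_get('/', handle)

# Server-side context: load_cert_chain supplies the certificate/key pair
# that the original setup never loaded.
ssl_context = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
ssl_context.load_cert_chain('cert.pem', 'key.pem')

web.run_app(app, port=8443, ssl_context=ssl_context)
Chrome will still warn about the self-signed certificate, but the handshake completes and the websocket can then be reached over wss://localhost:8443/connect.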

Related

Deploy to FTP via python script with GitLab CI

I'm new to GitLab. I am building my first pipeline to deploy the contents of my GitLab project to an FTP server with TLS encryption. I've written a Python script using ftplib to upload the files to the FTP server that works perfectly when I run it on my local Windows machine. The script uploads the full contents of the project to a folder on the FTP server. Now I'm trying to get it to work on GitLab by calling the script in the project's .gitlab-ci.yml file. Both the script and the yml file are in the top level of my GitLab project. The setup is extremely simple for the moment:
image: python:latest

deploy:
  stage: deploy
  script:
    - python ftpupload.py
  only:
    - main
However, the upload always times out with the following error message:
File "/usr/local/lib/python3.9/ftplib.py", line 156, in connect
self.sock = socket.create_connection((self.host, self.port), self.timeout,
File "/usr/local/lib/python3.9/socket.py", line 843, in create_connection
raise err
File "/usr/local/lib/python3.9/socket.py", line 831, in create_connection
sock.connect(sa)
TimeoutError: [Errno 110] Connection timed out
Cleaning up file based variables
ERROR: Job failed: exit code 1
Here's the basic setup for establishing the connection in the Python script that works fine locally but fails on GitLab:
import ftplib
import ssl


class ReusedSslSocket(ssl.SSLSocket):
    def unwrap(self):
        pass


class MyFTP_TLS(ftplib.FTP_TLS):
    """Explicit FTPS, with shared TLS session"""
    def ntransfercmd(self, cmd, rest=None):
        conn, size = ftplib.FTP.ntransfercmd(self, cmd, rest)
        if self._prot_p:
            conn = self.context.wrap_socket(conn,
                                            server_hostname=self.host,
                                            session=self.sock.session)  # reuses TLS session
            conn.__class__ = ReusedSslSocket  # do not close the reused SSL socket when transfers finish
        return conn, size


session = MyFTP_TLS(server, username, password, timeout=None)
session.prot_p()
I know there are other tools like lftp and git-ftp that I could use in GitLab CI, but I've built a lot of custom functionality into the Python script and would like to use it. How can I successfully deploy the script within GitLab CI? Thanks in advance for your help!
This requires that the GitLab Runner (which executes the pipeline) is able to open an outbound connection to your FTP server.
Shared runners are likely locked down so that they can only reach the GitLab server itself (to limit the attack surface).
To work around this, install your own runner and register it with your GitLab instance.
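If you want to confirm where it fails, a minimal probe like the one below (not part of the original answer; ftp.example.com and port 21 are placeholders) can be run as an extra script line before ftpupload.py to show whether the runner can reach the FTP server at all:
# probe_ftp.py - check whether the CI runner can open a TCP connection
# to the FTP control port before attempting the real upload.
import socket

try:
    socket.create_connection(("ftp.example.com", 21), timeout=10).close()
    print("Runner can reach the FTP server on port 21")
except OSError as exc:
    print("Runner cannot reach the FTP server:", exc)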

Node.js serving over https

I'm testing a Node.js application over an HTTPS connection, for which I created certificates for localhost.
Certificate creation:
$ openssl genrsa -out localhost.key 2048
$ openssl req -new -x509 -key localhost.key -out localhost.cert -days 3650 -subj /CN=localhost
Use this in the server:
var fs = require('fs');
var express = require('express');
var http2 = require('http2');

var options = {
    key: fs.readFileSync('./localhost.key'),
    cert: fs.readFileSync('./localhost.cert'),
};

var app = express();
const server = http2.createSecureServer(options, app);
server.listen({ host: app_host, port: port });
Start the Node.js server:
$ node server.js
Tested using a simple curl command:
$ curl -k https://localhost:9000/getcpuinfo
{"hw": ...}
"-k" option is to ignote certificate validation step.
But if I try to use pythons 'requests' module as shown below the request fails,
$ python
import requests
requests.get("https://localhost:9000/getcpuinfo")
requests.exceptions.SSLError: ("bad handshake: Error([('SSL routines', 'tls_process_server_certificate', 'certificate verify failed')],)",)
So I used the 'verify' option to make the request, but it still fails:
requests.get("https://localhost:9000/getcpuinfo", verify=False)
requests.exceptions.SSLError: ("bad handshake: SysCallError(-1, 'Unexpected EOF')",)
What am I doing wrong? How do I work around this issue using the 'requests' module? Shouldn't 'verify' prevent the check?
You can't get a publicly trusted HTTPS certificate for localhost; clients will always reject a self-signed one like this unless verification is handled explicitly.
The Python requests module does not connect to HTTP/2 servers, it only supports up to HTTP/1.1:
Requests allows you to send organic, grass-fed HTTP/1.1 requests, without the need for manual labor. There's no need to manually add query strings to your URLs, or to form-encode your POST data. Keep-alive and HTTP connection pooling are 100% automatic, thanks to urllib3.
If you compile curl with HTTP/2 support, then it will work. The curl packages pre-installed on most Linux distros and on macOS aren't compiled with it and probably won't work.
Since HTTP/2 support in Node is experimental and client support is pretty bad outside of modern web browsers, I would not suggest you use it at this time unless you're specifically targeting web browsers or want to use an HTTP/2-capable web server that can serve both HTTP/2 and HTTPS.
If you do need to connect to HTTP/2 servers from Python, there is the (also unstable) hyper module, which does connect to a Node.js HTTP/2 server. It currently doesn't allow you to disable certificate verification, so it will not be a drop-in replacement for requests.
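For example, a minimal hyper-based request against the Node server above might look roughly like this (a sketch only; it assumes the server's certificate is trusted, since hyper offers no easy switch to skip verification):
# Sketch of an HTTP/2 request with the hyper library (pip install hyper).
from hyper import HTTPConnection

conn = HTTPConnection('localhost:9000')
conn.request('GET', '/getcpuinfo')
resp = conn.get_response()
print(resp.status)
print(resp.read())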
It turns out there is a utility shipped with nghttp2 called 'h2load' that works out of the box for both protocols (HTTP/1.1 and HTTP/2). Thanks for all the answers/hints.
https://nghttp2.org/documentation/h2load-howto.html#basic-usage

Python FTP TLS not working

I'm trying to set up an FTP-over-TLS transfer. I have scripts for plain FTP and for SFTP, but this is my first exposure to TLS. My basic script:
import ftplib
import ssl
ctx = ssl._create_stdlib_context(ssl.PROTOCOL_TLSv1_2)
ftps = ftplib.FTP_TLS(context=ctx)

print(ftps.connect(myhost, 21))
print(ftps.login(myusername, mypwd))
print("1")
ftps.prot_p()
print("2")
print(ftps.retrlines('LIST'))
print("3")
Error:
[WinError 10054] An existing connection was forcibly closed by the remote host
This error occurs at the retrlines line. The traceback points into ssl.py, at self._sslobj.do_handshake() inside do_handshake().
I've already verified the connection with WinSCP, and confirmed that the protocol is TLS 1.2.
Any ideas?
Well, the issue turned out to be that the vendor was only allowing access from a specific machine. Once I tried the script on the correct machine, it worked.

Python DNS server address already in use

I'm doing a lab in malware analysis.
The task is to investigate the CVE-2015-7547 glibc vulnerability.
Google has already published proof-of-concept code, consisting of a client in C and a fake DNS server in Python. When I try to run the server, it throws an exception:
turbolab#sandbox:~/Desktop$ sudo python CVE-2015-7547-poc.py
Traceback (most recent call last):
File "CVE-2015-7547-poc.py", line 176, in <module>
tcp_thread()
File "CVE-2015-7547-poc.py", line 101, in tcp_thread
sock_tcp.bind((IP, 53))
File "/usr/lib/python2.7/socket.py", line 224, in meth
return getattr(self._sock,name)(*args)
socket.error: [Errno 98] Address already in use
IP was set to 127.0.0.1.
How do I run the server and connect the client to it?
You could run netstat -lpn to list all listening sockets with their owning processes (-l listening only, -p show PID/program, -n don't resolve names). Whatever process is shown holding 127.0.0.1:53 (often a local DNS resolver) has to be stopped before the PoC server can bind to that port.
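If you want a quick check from Python instead (not part of the original answer), attempting the same bind shows whether the port is already taken; note that binding to port 53 requires root:
# Try to bind 127.0.0.1:53 the same way the PoC does; a socket.error here
# means some other process (often a local resolver) already owns the port.
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    s.bind(("127.0.0.1", 53))
    print("port 53 is free")
except socket.error as exc:
    print("port 53 is already in use:", exc)
finally:
    s.close()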
To test for this vulnerability:
Clone the PoC code: git clone https://github.com/fjserna/CVE-2015-7547.git
Set your DNS server to localhost (127.0.0.1) by editing /etc/resolv.conf
Run the PoC DNS server:
sudo python CVE-2015-7547-poc.py
Compile the client:
make
Run the client:
./CVE-2015-7547-client
CVE-2015-7547-client segfaults when the system is vulnerable.
CVE-2015-7547-client reports "CVE-2015-7547-client: getaddrinfo: Name or service not known" when it is not vulnerable.
See this Ubuntu Security Notice for more information, as well as the original Google blog post.

unable to use IP address with ftplib (Python)

I have created an FTP client using ftplib. I am running the server on one of my Ubuntu virtual machines and the client on another. I want to connect to the server using ftplib and I'm doing it the following way:
host = "IP address of the server"
port = "Port number of the server"
ftpc = FTP()
ftpc.connect(host, port)
I'm getting the following error!
Traceback (most recent call last):
File "./client.py", line 54, in <module>
ftpc.connect(host, port)
File "/usr/lib/python2.7/ftplib.py", line 132, in connect
self.sock = socket.create_connection((self.host, self.port), self.timeout)
File "/usr/lib/python2.7/socket.py", line 571, in create_connection
raise err
socket.error: [Errno 111] Connection refused
When I went through the Python docs, I only saw ftplib used with domain names, as in FTP("domain name"). Can I use an IP address instead of a domain name? I can't make sense of the error in my case. It would be great if anyone could help me out.
Also, if I use port 21 on my server, I get socket.error: Connection refused. How do I use port 21 for my FTP server?
Thank you.
It seems like you are trying to connect to an SFTP server using ftplib, which is what gives you the Connection refused error. Try using pysftp instead of ftplib and see if it works.
On the virtual machine, test by running the ftp and sftp commands on the console. That will tell you which service the machine is actually running, i.e. FTP or SFTP.
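If the machine does turn out to be running SSH/SFTP rather than FTP, a minimal pysftp connection looks roughly like this (host and credentials are placeholders; depending on your known_hosts setup you may also need host-key handling via pysftp.CnOpts):
# Sketch of connecting with pysftp (pip install pysftp) instead of ftplib.
import pysftp

with pysftp.Connection("IP address of the server",
                       username="user", password="password") as sftp:
    print(sftp.listdir("."))  # list the remote working directory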
To solve the problem, install and configure vsftpd:
sudo apt install vsftpd (if not already installed)
sudo vim /etc/vsftpd.conf
set "listen=YES"
