I read that HTTP/3 uses UDP instead of TCP to send requests, which makes it faster. I really need the speed of HTTP/3, so what can I do to implement it in Python?
I wrote this code based on my understanding of the protocol: it's a hypertext protocol, you use UDP instead of TCP, change the HTTP/1.1 in the packet to HTTP/3, and send it.
I think that understanding is wrong, though.
Here's the code I wrote:
import socket
from OpenSSL import SSL # for DTLS
connection = 'close' # or keep-alive
protocol = 'HTTP/3' # or HTTP/1.1
packet = f'GET / {protocol}\r\nHost: i.instagram.com\r\nConnection: {connection}\r\n\r\n'
def callback(conn, cert, errnum, depth, ok): cert.get_subject(); return ok
# Initialize context
ctx = SSL.Context(SSL.TLSv1_2_METHOD)
ctx.set_verify(SSL.VERIFY_PEER, callback) # Demand a certificate
# Set up client
client = SSL.Connection(ctx, socket.socket(socket.AF_INET, socket.SOCK_DGRAM))
addr = ('i.instagram.com', 443) #using DTLS
client.connect(addr)
buffer = packet.encode()
client.sendall(buffer) # it gets stuck here
print(client.recv(4096))
One can most certainly implement HTTP/3 in Python. It has already been done: check out aioquic.
Also, please have a look at the latest set of QUIC and HTTP/3 Internet Drafts. Your naive implementation is based on wrong assumptions.
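For a taste of what the real thing involves, here is a minimal sketch using aioquic; it only opens a QUIC connection that negotiates the HTTP/3 ALPN and sends a ping. The host name is just an example, and a full request/response flow (see aioquic's examples/http3_client.py) needs considerably more code.
import asyncio
import ssl
from aioquic.asyncio import connect
from aioquic.h3.connection import H3_ALPN
from aioquic.quic.configuration import QuicConfiguration
async def main():
    # negotiate the "h3" ALPN over QUIC (UDP), not TLS over TCP
    configuration = QuicConfiguration(is_client=True, alpn_protocols=H3_ALPN)
    configuration.verify_mode = ssl.CERT_NONE  # for experimenting only
    async with connect("quic.aiortc.org", 443, configuration=configuration) as protocol:
        await protocol.ping()  # succeeds only if the QUIC handshake completed
asyncio.run(main())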
A simpler approach might be to use a Python ASGI web server that supports HTTP/3.
I've created an example project here using the hypercorn ASGI web server.
There appear to be two ways a server can advertise HTTP/3:
via ALPN
via the alt-svc header.
The hypercorn server uses the header approach.
I'm using Ubuntu 20.04 LTS, and as of 2020-12-05 the only browser I can find that supports HTTP/3 via the alt-svc header is the Firefox nightly build.
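As a rough illustration of the server side (not the linked project itself), a minimal ASGI app served by hypercorn with a QUIC bind might look like the sketch below; the certificate paths and the port are placeholders, and TLS is required for HTTP/3.
# app.py - minimal ASGI application
async def app(scope, receive, send):
    assert scope["type"] == "http"
    await send({"type": "http.response.start", "status": 200,
                "headers": [(b"content-type", b"text/plain")]})
    await send({"type": "http.response.body", "body": b"hello\n"})
# run with (--quic-bind enables HTTP/3 alongside HTTP/1.1 and HTTP/2):
#   hypercorn --certfile cert.pem --keyfile key.pem --bind localhost:4433 --quic-bind localhost:4433 app:app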
I am trying to connect to an FTP server which only supports TLS 1.2, using Python 3.4.1.
My Code:
import ftplib
import ssl
ftps = ftplib.FTP_TLS()
ftps.ssl_version = ssl.PROTOCOL_TLSv1_2
print (ftps.connect('108.61.166.122',31000))
print(ftps.login('test','test123'))
ftps.prot_p()
print (ftps.retrlines('LIST'))
Error on client side:
ssl.SSLEOFError: EOF occurred in violation of protocol (_ssl.c:598)
Error on server side:
Failed TLS negotiation on control channel, disconnected. (SSL_accept():
error:140760FC:SSL routines:SSL23_GET_CLIENT_HELLO:unknown protocol)
The credentials in the example work and can be used for testing.
See the end of this post for the final solution; the rest documents the steps needed to debug the problem.
I am trying to connect to an FTP server which only supports TLS 1.2, using Python 3.4.1.
How do you know?
ssl.SSLEOFError: EOF occurred in violation of protocol (_ssl.c:598)
I would suggest this is one of the many possible SSL problems between client and server, like the server not supporting TLS 1.2, no common ciphers, etc. These problems are hard to debug because you either get only some SSL alert or the server simply closes the connection without any obvious reason. If you have access to the server, look for error messages on the server side.
You may also try not to enforce an SSL version but use the default instead, so that client and server will agree on the best SSL version both support. If this still does not work, try with a client which is known to work with this server, make a packet capture of the good and bad connections, and compare. If you need help with that, post the packet captures to cloudshark.org.
Edit#1: just tried it with python 3.4.0 and 3.4.2 against a test server:
python 3.4.0 does a TLS 1.0 handshake, i.e. ignores the setting
python 3.4.2 does a successful TLS 1.2 handshake
In both versions ftplib has the minor bug that it sends AUTH SSL instead of AUTH TLS if ftps.ssl_version is something other than TLS 1.0, e.g. SSLv3 or TLS 1.1+. While I doubt that this is the origin of the problem, it might actually be if the FTP server handles AUTH TLS and AUTH SSL differently.
Edit#2 and Solution:
A packet capture shows that setting ftps.ssl_version has no effect and the SSL handshake will still be done with TLS 1.0 only. Looking at the source code of ftplib in 3.4.0 gives:
ssl_version = ssl.PROTOCOL_TLSv1

def __init__(self, host='', user='', passwd='', acct='', keyfile=None,
             certfile=None, context=None,
             timeout=_GLOBAL_DEFAULT_TIMEOUT, source_address=None):
    ...
    if context is None:
        context = ssl._create_stdlib_context(self.ssl_version,
                                             certfile=certfile,
                                             keyfile=keyfile)
    self.context = context
Since __init__ is called when ftplib.FTP_TLS() is called, the SSL context is created with the default ssl_version used by ftplib (ssl.PROTOCOL_TLSv1) and not with your own version, because you only assign ftps.ssl_version after the object has been created. To enforce another SSL version you must provide your own context with the needed SSL version. The following works for me:
import ftplib
import ssl
ctx = ssl._create_stdlib_context(ssl.PROTOCOL_TLSv1_2)
ftps = ftplib.FTP_TLS(context=ctx)
print (ftps.connect('108.61.166.122',31000))
print(ftps.login('test','test123'))
ftps.prot_p()
print (ftps.retrlines('LIST'))
Alternatively you could set the protocol version globally instead of only for this FTP_TLS object:
ftplib.FTP_TLS.ssl_version = ssl.PROTOCOL_TLSv1_2
ftps = ftplib.FTP_TLS()
And just a small but important observation: it looks like ftplib does not do any kind of certificate validation, since it accepts this self-signed certificate, which does not match the host name, without complaining. This makes an active man-in-the-middle attack possible. Hopefully this insecure behavior will be fixed in the future, in which case the code here will fail because of an invalid certificate.
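If you do want validation, a minimal sketch for a modern Python (3.7+) is to hand FTP_TLS a verifying context yourself; with the self-signed certificate from this example the connection is then expected to fail.
import ftplib
import ssl
ctx = ssl.create_default_context()            # verifies the certificate chain and the host name
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # also enforces TLS 1.2+ (Python 3.7+)
ftps = ftplib.FTP_TLS(context=ctx)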
Firstly, AFAIK no FTP server supports SSL directly, which is why FTPS was introduced. Also, SFTP and FTPS are two different concepts: http://en.wikipedia.org/wiki/FTPS. Now, your problem is about the programming and not related to SSL, FTPS, or any such client-server communication:
import ftplib
import ssl
ftps = ftplib.FTP_TLS()
#ftps.ssl_version = ssl.PROTOCOL_TLSv1_2
print (ftps.connect('108.61.166.122',31000))
print(ftps.login('test','test123'))
ftps.prot_p()
print (ftps.retrlines('LIST'))
as ftplib has no attribute PROTOCOL_TLSv1_2; apart from that line it works fine. And, well, your host is not responding!
Hope it helps!
Can I create an HTTP server without using
python -m http.server [port number]
and instead use an old-school style with sockets and such?
Latest code and errors...
import socketserver
response = """HTTP/1.0 500 Internal Server Error
Content-type: text/html
Invalid Server Error"""
class MyTCPHandler(socketserver.BaseRequestHandler):
    """
    The RequestHandler class for our server.
    It is instantiated once per connection to the server, and must
    override the handle() method to implement communication to the
    client.
    """
    def handle(self):
        # self.request is the TCP socket connected to the client
        self.data = self.request.recv(1024).strip()
        self.request.sendall(response)  # sendall() expects bytes; passing a str raises the TypeError below

if __name__ == "__main__":
    HOST, PORT = "localhost", 8000
    server = socketserver.TCPServer((HOST, PORT), MyTCPHandler)
    server.serve_forever()
TypeError: 'str' does not support the buffer interface
Yes, you can, but it's a terrible idea -- in fact, even http.server is at best a toy implementation.
You're better off writing whatever webapp you want as a standard WSGI application (most Python web frameworks do that -- Django, Pyramid, Flask...), and serving it with one of the dozens of production-grade HTTP servers that exist for Python.
uWSGI (https://uwsgi-docs.readthedocs.org/en/latest/) is my personal favorite, with Gevent a close second.
If you want more info about how it's done, I recommend that you read the source code to the CherryPy server (http://www.cherrypy.org/). While not as powerful as the aforementioned uWSGI, it's a good reference implementation written in pure Python, that serves WSGI apps through a thread pool.
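For illustration, a WSGI application is just a callable like the sketch below; any of the servers mentioned above can serve it, and the stdlib's wsgiref reference server is enough for local testing (the names here are placeholders, not tied to any particular framework).
def application(environ, start_response):
    body = b"Hello from a plain WSGI app\n"
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]
if __name__ == "__main__":
    # quick local test only; use uWSGI, Gevent or CherryPy's server in production
    from wsgiref.simple_server import make_server
    make_server("localhost", 8000, application).serve_forever()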
Sure you can, and servers like Tornado already do it this way.
For simple test servers which can do only HTTP/1.0 GET requests and handle only a single request at a time, it should not be that hard once you understand the basics of the HTTP protocol; see the sketch below. But if you care even a bit about performance, it gets complex fast.
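As a hedged sketch of what such a minimal, single-request-at-a-time server looks like when built directly on sockets (it reads and discards the request, always answers 200, and shows that the response must be bytes):
import socket
def serve(host="localhost", port=8000):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen(1)
        while True:
            conn, _ = srv.accept()
            with conn:
                conn.recv(4096)  # read (and ignore) the request
                body = b"hello\n"
                conn.sendall(b"HTTP/1.0 200 OK\r\n"
                             b"Content-Type: text/plain\r\n"
                             b"Content-Length: " + str(len(body)).encode() + b"\r\n\r\n" + body)
if __name__ == "__main__":
    serve()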
I have an assignment to create a secure socket server using TLS version 1.1 or 1.2. I'm using python 3.4 (as that's the only version with native TLS 1.1/1.2 support). I've made a self-signed CA and signed both the client and the server. A snippet of the code is as follows:
In my server:
tls_server = ssl.wrap_socket(server, ssl_version=ssl.PROTOCOL_TLSv1_2,
                             cert_reqs=ssl.CERT_NONE, server_side=True,
                             keyfile='./server.key', certfile='./server.crt',
                             ca_certs='./SigningCA/signing-ca.crt')
and in the client:
tls_client = ssl.wrap_socket(client, keyfile="./client.key",
                             certfile="./client.crt", ssl_version=ssl.PROTOCOL_TLSv1_2,
                             cert_reqs=ssl.CERT_REQUIRED, ca_certs='./SigningCA/signing-ca.crt')
The connection works fine, I get a request and response. But when I print out the results of the client or server cipher() method, I get the following:
('AES256-SHA', 'TLSv1/SSLv3', 256)
which seems to indicate I'm still running TLSv1/SSLv3. Does anyone have some insight into this? Any help would be appreciated.
I am facing the following scenario:
I am forced to use an HTTP proxy to connect to an HTTPS server. For several reasons I need access to the raw data (before encryption) so I am using the socket library instead of one of the HTTP specific libraries.
I thus first connect a TCP socket to the HTTP proxy and issue the CONNECT command.
At this point, the HTTP proxy accepts the connection and seemingly forwards all further data to the target server.
However, if I now try to switch to SSL, I receive
error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol
indicating that the socket attempted the handshake with the HTTP proxy and not with the HTTPS target.
Here's the code I have so far:
import socket
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect(('proxy',9502))
s.send("""CONNECT en.wikipedia.org:443 HTTP/1.1
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7; rv:15.0) Gecko/20100101 Firefox/15.0.1
Proxy-Connection: keep-alive
Host: en.wikipedia.org
""")
print s.recv(1000)
ssl = socket.ssl(s, None, None)
ssl.connect(("en.wikipedia.org",443))
What would be the correct way to open an SSL socket to the target server after connecting to the HTTP proxy?
(Note that, in general, it would be easier to use an existing HTTPS library such as PyCurl instead of implementing it all yourself.)
Firstly, don't call your variable ssl. This name is already used by the ssl module, so you don't want to hide it.
Secondly, don't use connect a second time. You're already connected, what you need is to wrap the socket. Since Python doesn't do any certificate verification by default, you'll need to verify the remote certificate and verify the host name too.
Here are the steps involved:
Establish your plain-text connection and use CONNECT like you're doing in the first few lines.
Read the HTTP response you get, and make sure you get a 200 status code. (You'll need to read the header line by line).
Use ssl_s = ssl.wrap_socket(s, cert_reqs=ssl.CERT_REQUIRED, ssl_version=ssl.PROTOCOL_TLSv1, ca_certs='/path/to/cabundle.pem') to wrap the socket. Then, verify the host name. It's worth reading this answer about the connect method and what it does after wrapping the socket.
Then, use ssl_s as if it was your normal socket. Don't call connect again.
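Putting those steps together, here is a hedged sketch using the modern ssl.SSLContext API (Python 3) instead of ssl.wrap_socket; the proxy address and target host are taken from the question, and the default context verifies both the certificate and the host name.
import socket
import ssl
def https_via_proxy(proxy, host, port=443):
    # step 1: plain TCP connection to the proxy, then CONNECT
    s = socket.create_connection(proxy)
    s.sendall(("CONNECT %s:%d HTTP/1.1\r\nHost: %s:%d\r\n\r\n" % (host, port, host, port)).encode())
    # step 2: read the proxy's reply headers and require a 200 status line
    reply = b""
    while b"\r\n\r\n" not in reply:
        chunk = s.recv(4096)
        if not chunk:
            raise OSError("proxy closed the connection")
        reply += chunk
    if b" 200 " not in reply.split(b"\r\n", 1)[0]:
        raise OSError("CONNECT failed: %r" % reply.split(b"\r\n", 1)[0])
    # step 3: wrap the tunnelled socket; certificate and host name are checked by the default context
    ctx = ssl.create_default_context()
    return ctx.wrap_socket(s, server_hostname=host)
# step 4: use the wrapped socket as a normal socket
tls = https_via_proxy(("proxy", 9502), "en.wikipedia.org")
tls.sendall(b"GET / HTTP/1.1\r\nHost: en.wikipedia.org\r\nConnection: close\r\n\r\n")
print(tls.recv(4096))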
Works with Python 3.
<proxy> is an IP or domain name
<port> is 443 or 80 or whatever port your proxy is listening on
<endpoint> is the final server you want to connect to via the proxy
<cn> is an optional SNI field your final server could be expecting
import socket, ssl

def getcert_sni_proxy(cn, endpoint, PROXY_ADDR=("<proxy>", <port>)):
    # prepare the CONNECT request
    CONNECT = "CONNECT %s:%s HTTP/1.0\r\nConnection: close\r\n\r\n" % (endpoint, 443)
    # connect to the actual proxy
    conn = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    conn.connect(PROXY_ADDR)
    conn.send(str.encode(CONNECT))
    conn.recv(4096)
    # set up the TLS context for the wrapped connection
    context = ssl.SSLContext(ssl.PROTOCOL_SSLv23)
    # connect to the final endpoint via the proxy, passing the optional server name (SNI) [cn here]
    sock = context.wrap_socket(conn, server_hostname=cn)
    # retrieve the certificate from the server
    certificate = ssl.DER_cert_to_PEM_cert(sock.getpeercert(True))
    return certificate
I am adapting a Python script to be OS independent so it can run on Windows. I have changed its ssh system calls to calls to paramiko functions. I am stuck on the issue of HTTP proxy authentication. In a Unix (actually Cygwin) environment I would use ~/.ssh/config:
Host *
ProxyCommand corkscrew http-proxy.example.com 8080 %h %p
Is there a way to obtain the same behavior using paramiko (or the Python ssh module), either with or without corkscrew? This post seems to suggest so, but I don't know how.
Note: I am behind a firewall that allows me to use only port 80. I need to control Amazon ec2 instances so I configured the sshd server on those machines to listen to port 80. Everything is working fine in my cygwin+corkscrew prototype, but I would like to have a Python script that works without Cygwin.
You can pass any pre-established session to paramiko via the sock parameter of SSHClient.connect(hostname, username, password, ..., sock=sock).
Below is a code snippet that tunnels SSH via an HTTP proxy tunnel (HTTP CONNECT). First the connection to the proxy is established and the proxy is instructed to connect to localhost:22. The result is a TCP tunnel over the established session that is usually used to tunnel SSL, but it can be used for any TCP-based protocol.
This scenario works with a default installation of tinyproxy with Allow <yourIP> and ConnectPort 22 set in /etc/tinyproxy.conf. The proxy and the sshd are running on the same host in my example, but all you need is any proxy that allows you to CONNECT to your ssh port. Usually this is restricted to port 443 (hint: if you make your sshd listen on 443, this will work with most public proxies, even though I do not recommend doing this for interop and security reasons). Whether this ultimately allows you to bypass your firewall depends on what kind of firewall is employed. If there are no DPI/SSL-interception features involved, you should be fine. If there is SSL interception involved, you could still try to tunnel it via SSL or as part of the HTTP payload :)
import paramiko
import socket
import logging
logging.basicConfig(level=logging.DEBUG)
LOG = logging.getLogger("xxx")
def http_proxy_tunnel_connect(proxy, target, timeout=None):
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.settimeout(timeout)
    sock.connect(proxy)
    LOG.debug("connected")
    cmd_connect = "CONNECT %s:%d HTTP/1.1\r\n\r\n" % target
    LOG.debug("--> %s" % repr(cmd_connect))
    sock.sendall(cmd_connect)
    response = []
    sock.settimeout(2)  # quick hack - replace this with something better performing.
    try:
        # in the worst case this loop will take 2 seconds if no response was received (sock.timeout)
        while True:
            chunk = sock.recv(1024)
            if not chunk:  # if something goes wrong
                break
            response.append(chunk)
            if "\r\n\r\n" in chunk:  # we do not want to read too far ;)
                break
    except socket.error, se:
        if "timed out" not in se:
            response = [se]
    response = ''.join(response)
    LOG.debug("<-- %s" % repr(response))
    if "200 connection established" not in response.lower():
        raise Exception("Unable to establish HTTP-Tunnel: %s" % repr(response))
    return sock
if __name__=="__main__":
LOG.setLevel(logging.DEBUG)
LOG.debug("--start--")
sock = http_proxy_tunnel_connect(proxy=("192.168.139.128",8888),
target=("192.168.139.128",22),
timeout=50)
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(hostname="192.168.139.128",sock=sock, username="xxxx", password="xxxxx")
print "#> whoami \n%s"% ssh.exec_command("whoami")[1].read()
output:
DEBUG:xxx:--start--
DEBUG:xxx:connected
DEBUG:xxx:--> 'CONNECT 192.168.139.128:22 HTTP/1.1\r\n\r\n'
DEBUG:xxx:<-- 'HTTP/1.0 200 Connection established\r\nProxy-agent: tinyproxy/1.8.3\r\n\r\n'
#> whoami
root
Here are some other resources on how to tunnel through proxies. Just do whatever is needed to establish your tunnel and pass the socket to SSHClient.connect(..., sock=sock); see the sketch after this paragraph for a corkscrew-based variant.
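For example, here is a hedged sketch of the corkscrew setup from the question, using paramiko's ProxyCommand wrapper as the sock argument; the host, user name, and key file are placeholders.
import paramiko
host, port = "ec2-xx-xx-xx-xx.compute-1.amazonaws.com", 80  # sshd listening on port 80, as in the question
sock = paramiko.ProxyCommand("corkscrew http-proxy.example.com 8080 %s %d" % (host, port))
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(hostname=host, port=port, username="ubuntu", key_filename="mykey.pem", sock=sock)
print(ssh.exec_command("whoami")[1].read())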
There's paraproxy, which implements proxy support for Paramiko.
The post you linked to suggests that Paramiko can operate over an arbitrary socket, but that doesn't appear to be the case. In fact, paraproxy works by completely replacing specific methods inside paramiko, since the existing code simply calls socket.socket() to obtain a socket and does not offer any way of hooking in a proxy.