Short version: Is there any easy API for encoding an HTTP request (and decoding the response) without actually transmitting and receiving the encoded bytes as part of the process?
Long version: I'm writing some embedded software which uses paramiko to open an SSH session with a server. I then need to make an HTTP request across an SSH channel opened with transport.open_channel('direct-tcpip', <remote address>, <source address>).
requests has transport adapters, which let you substitute your own transport. But the send interface provided by BaseAdapter just accepts a PreparedRequest object, which (a) doesn't provide the remote address in any useful way (you have to parse the URL to find the host and port) and (b) doesn't provide an encoded version of the request, only a dictionary of headers and the encoded body (if any). It also gives you no help in decoding the response. HTTPAdapter defers the whole lot (encoding the request, making the network connection, sending the bytes, receiving the response bytes and decoding the response) to urllib3.
urllib3 likewise defers to http.client and http.client's HTTPConnection class has encoding and network operations all jumbled up together.
Is there a simple way to say, "Give me a bunch of bytes to send to an HTTP server," and "Here's a bunch of bytes from an HTTP server; turn them into a useful Python object"?
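For the narrow "just give me the bytes" part, one sketch that leans on http.client implementation details (all outgoing bytes pass through `send()`, and `HTTPResponse` only needs an object with a `makefile()` method), so treat it as an assumption about CPython's implementation rather than a documented API:

```python
import io
from http.client import HTTPConnection, HTTPResponse

class RecordingHTTPConnection(HTTPConnection):
    """Capture the encoded request bytes instead of transmitting them."""
    def __init__(self, host, port=80):
        super().__init__(host, port)
        self.buffer = io.BytesIO()

    def send(self, data):
        # http.client funnels every outgoing byte through send(),
        # so overriding it keeps the encoder but skips the network.
        self.buffer.write(data)

class BytesSocket:
    """Just enough of the socket interface for HTTPResponse to parse from."""
    def __init__(self, data):
        self._data = data

    def makefile(self, *args, **kwargs):
        return io.BytesIO(self._data)

# Encode a request without any connection...
conn = RecordingHTTPConnection('example.com')
conn.request('GET', '/index.html')
raw_request = conn.buffer.getvalue()

# ...and decode a response from plain bytes.
resp = HTTPResponse(BytesSocket(b'HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nhi'))
resp.begin()
```

This stays entirely in memory, which is why the adapter approach below can hand the same machinery an SSH channel instead of a socket.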
This is the simplest implementation of this that I can come up with:
from http.client import HTTPConnection
import requests
from requests.structures import CaseInsensitiveDict
from urllib.parse import urlparse
from argparse import ArgumentParser
class TunneledHTTPConnection(HTTPConnection):
def __init__(self, transport, *args, **kwargs):
self.ssh_transport = transport
HTTPConnection.__init__(self, *args, **kwargs)
def connect(self):
self.sock = self.ssh_transport.open_channel(
'direct-tcpip', (self.host, self.port), ('localhost', 0)
)
class TunneledHTTPAdapter(requests.adapters.BaseAdapter):
def __init__(self, transport):
self.transport = transport
def close(self):
pass
def send(self, request, **kwargs):
scheme, location, path, params, query, anchor = urlparse(request.url)
if ':' in location:
host, port = location.split(':')
port = int(port)
else:
host = location
port = 80
connection = TunneledHTTPConnection(self.transport, host, port)
connection.request(method=request.method,
url=request.url,
body=request.body,
headers=request.headers)
r = connection.getresponse()
resp = requests.Response()
resp.status_code = r.status
resp.headers = CaseInsensitiveDict(r.headers)
resp.raw = r
resp.reason = r.reason
resp.url = request.url
resp.request = request
resp.connection = connection
        resp.encoding = requests.utils.get_encoding_from_headers(r.headers)
requests.cookies.extract_cookies_to_jar(resp.cookies, request, r)
return resp
if __name__ == '__main__':
import paramiko
parser = ArgumentParser()
    parser.add_argument('-p', type=int, help='Port the SSH server listens on', default=22)
parser.add_argument('host', help='SSH server to tunnel through')
parser.add_argument('username', help='Username on SSH server')
parser.add_argument('url', help='URL to perform HTTP GET on')
args = parser.parse_args()
client = paramiko.SSHClient()
client.load_system_host_keys()
client.connect(args.host, args.p, username=args.username)
transport = client.get_transport()
s = requests.Session()
    s.mount(args.url, TunneledHTTPAdapter(transport))
    response = s.get(args.url)
print(response.text)
There are various options to BaseAdapter.send that it doesn't handle, and it completely ignores issues like connection pooling and so on, but it gets the job done.
You could write your own SOCKS4 proxy, run it on localhost, then point your HTTP requests at it. For example, https://urllib3.readthedocs.io/en/latest/advanced-usage.html describes how to use a SOCKS proxy with urllib3.
SOCKS4 is basically a simple handshake followed by raw HTTP/TCP traffic. The handshake conveys the target IP address and port. So your proxy can do the handshake to satisfy the client that it is a SOCKS server, then the proxy can send the "real" traffic straight to the SSH session (and proxy the responses in the reverse direction).
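Concretely, the SOCKS4 handshake is just a fixed-layout request followed by an 8-byte reply; a sketch of both sides (the address and empty user id below are placeholders):

```python
import socket
import struct

def socks4_connect(dest_ip, dest_port, user_id=b""):
    """Build a SOCKS4 CONNECT request: VN=4, CD=1, port, IPv4, user id, NUL."""
    return (struct.pack("!BBH", 4, 1, dest_port)
            + socket.inet_aton(dest_ip)
            + user_id + b"\x00")

def socks4_granted(reply):
    """A SOCKS4 reply is 8 bytes; VN=0 and CD=90 mean 'request granted'."""
    return len(reply) == 8 and reply[0] == 0 and reply[1] == 90

request = socks4_connect("203.0.113.7", 443)
```

After the proxy sends the granted reply, everything on the connection is raw TCP traffic to the requested destination, which is exactly what you'd shovel into the SSH channel.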
The cool thing about this approach is that it will work with tons of clients--SOCKS has been widespread for a long time.
I am trying to understand the resolving process in dnslib. Specifically, I am using the proxy.py example to implement a local DNS proxy which will send a request to specific servers based on the query.
(copy of proxy.py):
# -*- coding: utf-8 -*-
from __future__ import print_function
import binascii,socket,struct
from dnslib import DNSRecord,RCODE
from dnslib.server import DNSServer,DNSHandler,BaseResolver,DNSLogger
class ProxyResolver(BaseResolver):
"""
Proxy resolver - passes all requests to upstream DNS server and
returns response
Note that the request/response will be each be decoded/re-encoded
twice:
a) Request packet received by DNSHandler and parsed into DNSRecord
b) DNSRecord passed to ProxyResolver, serialised back into packet
and sent to upstream DNS server
c) Upstream DNS server returns response packet which is parsed into
DNSRecord
d) ProxyResolver returns DNSRecord to DNSHandler which re-serialises
this into packet and returns to client
In practice this is actually fairly useful for testing but for a
'real' transparent proxy option the DNSHandler logic needs to be
modified (see PassthroughDNSHandler)
"""
def __init__(self,address,port,timeout=0):
self.address = address
self.port = port
self.timeout = timeout
def resolve(self,request,handler):
try:
if handler.protocol == 'udp':
proxy_r = request.send(self.address,self.port,
timeout=self.timeout)
else:
proxy_r = request.send(self.address,self.port,
tcp=True,timeout=self.timeout)
reply = DNSRecord.parse(proxy_r)
except socket.timeout:
reply = request.reply()
reply.header.rcode = getattr(RCODE,'NXDOMAIN')
return reply
class PassthroughDNSHandler(DNSHandler):
"""
Modify DNSHandler logic (get_reply method) to send directly to
    upstream DNS server rather than decoding/encoding packet and
passing to Resolver (The request/response packets are still
parsed and logged but this is not inline)
"""
def get_reply(self,data):
host,port = self.server.resolver.address,self.server.resolver.port
request = DNSRecord.parse(data)
self.server.logger.log_request(self,request)
if self.protocol == 'tcp':
data = struct.pack("!H",len(data)) + data
response = send_tcp(data,host,port)
response = response[2:]
else:
response = send_udp(data,host,port)
reply = DNSRecord.parse(response)
self.server.logger.log_reply(self,reply)
return response
def send_tcp(data,host,port):
"""
Helper function to send/receive DNS TCP request
(in/out packets will have prepended TCP length header)
"""
sock = socket.socket(socket.AF_INET,socket.SOCK_STREAM)
sock.connect((host,port))
sock.sendall(data)
response = sock.recv(8192)
length = struct.unpack("!H",bytes(response[:2]))[0]
while len(response) - 2 < length:
response += sock.recv(8192)
sock.close()
return response
def send_udp(data,host,port):
"""
Helper function to send/receive DNS UDP request
"""
sock = socket.socket(socket.AF_INET,socket.SOCK_DGRAM)
sock.sendto(data,(host,port))
response,server = sock.recvfrom(8192)
sock.close()
return response
if __name__ == '__main__':
import argparse,sys,time
p = argparse.ArgumentParser(description="DNS Proxy")
p.add_argument("--port","-p",type=int,default=53,
metavar="<port>",
help="Local proxy port (default:53)")
p.add_argument("--address","-a",default="",
metavar="<address>",
help="Local proxy listen address (default:all)")
p.add_argument("--upstream","-u",default="8.8.8.8:53",
metavar="<dns server:port>",
help="Upstream DNS server:port (default:8.8.8.8:53)")
p.add_argument("--tcp",action='store_true',default=False,
help="TCP proxy (default: UDP only)")
p.add_argument("--timeout","-o",type=float,default=5,
metavar="<timeout>",
help="Upstream timeout (default: 5s)")
p.add_argument("--passthrough",action='store_true',default=False,
help="Dont decode/re-encode request/response (default: off)")
p.add_argument("--log",default="request,reply,truncated,error",
help="Log hooks to enable (default: +request,+reply,+truncated,+error,-recv,-send,-data)")
p.add_argument("--log-prefix",action='store_true',default=False,
help="Log prefix (timestamp/handler/resolver) (default: False)")
args = p.parse_args()
args.dns,_,args.dns_port = args.upstream.partition(':')
args.dns_port = int(args.dns_port or 53)
print("Starting Proxy Resolver (%s:%d -> %s:%d) [%s]" % (
args.address or "*",args.port,
args.dns,args.dns_port,
"UDP/TCP" if args.tcp else "UDP"))
resolver = ProxyResolver(args.dns,args.dns_port,args.timeout)
handler = PassthroughDNSHandler if args.passthrough else DNSHandler
logger = DNSLogger(args.log,args.log_prefix)
udp_server = DNSServer(resolver,
port=args.port,
address=args.address,
logger=logger,
handler=handler)
udp_server.start_thread()
if args.tcp:
tcp_server = DNSServer(resolver,
port=args.port,
address=args.address,
tcp=True,
logger=logger,
handler=handler)
tcp_server.start_thread()
while udp_server.isAlive():
time.sleep(1)
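As a rough illustration of what DNSRecord.parse is decoding in steps (a) and (c) of the docstring above, a DNS query is a 12-byte header followed by length-prefixed name labels and a trailing type/class; a stdlib-only sketch (the query ID and name here are arbitrary):

```python
import struct

def build_query(qname, qtype=1, qclass=1, qid=0x1234):
    """Minimal DNS query packet: 12-byte header plus one question."""
    # Header: ID, flags (RD=1), QDCOUNT=1, ANCOUNT/NSCOUNT/ARCOUNT=0
    header = struct.pack("!6H", qid, 0x0100, 1, 0, 0, 0)
    labels = b"".join(
        bytes([len(part)]) + part.encode("ascii")
        for part in qname.rstrip(".").split(".")
    )
    return header + labels + b"\x00" + struct.pack("!HH", qtype, qclass)

def parse_qname(packet):
    """Read the QNAME labels back out of a query packet."""
    parts, i = [], 12  # question section starts right after the header
    while packet[i] != 0:
        length = packet[i]
        parts.append(packet[i + 1:i + 1 + length].decode("ascii"))
        i += 1 + length
    return ".".join(parts)

pkt = build_query("example.com")
```

The TCP variant handled by send_tcp above is the same packet with a 2-byte length prefix, which is what the `struct.pack("!H", len(data)) + data` line adds.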
I have successfully injected my business logic into the get_reply method of PassthroughDNSHandler:
def get_reply(self, data):
host, port = self.server.resolver.address, self.server.resolver.port
request = DNSRecord.parse(data)
query = str(request.questions[0].qname)
if query.endswith('.example.info.'):
server = "192.168.10.1"
elif any(query.endswith(x) for x in ["example.net.", "example.com."]):
server = "10.24.131.10"
else:
server = "1.1.1.1"
log.debug(f"{query} redirected to {server}")
response = send_udp(data, server, port)
    reply = DNSRecord.parse(response)
    return response
This works as expected: the right DNS server is queried depending on the request.
The part which I do not understand is the involvement of ProxyResolver in the initialization of the server.
resolver = ProxyResolver(args.dns, args.dns_port, args.timeout)
udp_server = DNSServer(resolver, port=53, address="127.0.0.1", handler=PassthroughDNSHandler)
What is resolver needed for?
As far as I understand, the packet received on 127.0.0.1:53 is passed, via handler, to PassthroughDNSHandler and actually processed in get_reply().
It is then further sent to the relevant upstream server via send_udp() and the response is forwarded back to the requesting client.
At what point does resolver get into the picture, and what is its role?
I put a breakpoint in the resolve() method of ProxyResolver and it is never hit.
I'm writing a program to download a given webpage. Due to some restrictions, I need to use only raw Python sockets for all connections. So I make a socket connection to a given domain (the Host field in the response header of an object) and then send the GET request over it. Now, when the URL is an https URL, I think I need to do the SSL handshake first (because otherwise I get non-200 OK responses from the server, along with error responses mentioning P3P policies). I inspected curl's behaviour to check how it downloads successfully while I can't; it turns out curl first does the SSL handshake, and that is always the only difference.
So I'm wondering: how do I do the SSL handshake with raw Python sockets? Basically, I want the easiest solution that requires the minimum beyond using raw sockets.
Here is an example of a TCP client with SSL.
It may not be the best way to download a web page, but it should answer your question about the "SSL handshake in raw Python sockets".
You will probably have to adapt the struct.pack/unpack but you get the general idea:
import socket
import ssl
import struct
import binascii
import sys
class NotConnectedException(Exception):
def __init__(self, message=None, node=None):
self.message = message
self.node = node
class DisconnectedException(Exception):
def __init__(self, message=None, node=None):
self.message = message
self.node = node
class Connector:
def __init__(self):
pass
def is_connected(self):
return (self.sock and self.ssl_sock)
def open(self, hostname, port, cacert):
self.sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
self.ssl_sock = ssl.wrap_socket(self.sock, ca_certs=cacert, cert_reqs=ssl.CERT_REQUIRED)
if hostname == socket.gethostname():
ipaddress = socket.gethostbyname_ex(hostname)[2][0]
self.ssl_sock.connect((ipaddress, port))
else:
self.ssl_sock.connect((hostname, port))
self.sock.setsockopt(socket.SOL_TCP, socket.TCP_NODELAY, 1)
def close(self):
if self.sock: self.sock.close()
self.sock = None
self.ssl_sock = None
def send(self, buffer):
if not self.ssl_sock: raise NotConnectedException("Not connected (SSL Socket is null)")
        self.ssl_sock.sendall(struct.pack('!I', len(buffer)))  # 4-byte big-endian length prefix
        self.ssl_sock.sendall(buffer)
    def receive(self):
        if not self.ssl_sock: raise NotConnectedException("Not connected (SSL Socket is null)")
        data_size_buffer = self.ssl_sock.recv(4)
        if len(data_size_buffer) <= 0:
            raise DisconnectedException()
        data_size = struct.unpack('!I', data_size_buffer)[0]
received_size = 0
data_buffer = ""
while received_size < data_size:
chunk = self.ssl_sock.recv(1024)
data_buffer += chunk
received_size += len(chunk)
return data_buffer
Then you use the class like this:
connector = Connector.Connector()
connector.open(server_ip, server_port, path_to_the_CA_cert.pem)
connector.send(your_data)
response = connector.receive()
connector.close()
You can use the wrap_socket method of the Python ssl module to turn your socket into one that talks SSL. Once you've done this, you can use it as normal, but internally the data will be encrypted and decrypted for you. Here are the docs for the method:
https://docs.python.org/2/library/ssl.html#ssl.wrap_socket
I think the easiest way to do this would be to use an SSL context and wrap the TCP socket.
The Python ssl module's documentation gives a very thorough explanation with examples. I recommend reading the relevant sections of the Python 2 or Python 3 ssl module documentation; it should make it very easy to achieve what you want.
Hope this helps!
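A sketch of that context-based approach on Python 3 (host, port and path below are placeholders; the SSL handshake happens inside wrap_socket()):

```python
import socket
import ssl

def build_get_request(host, path="/"):
    """Encode a minimal HTTP/1.1 GET request."""
    return ("GET %s HTTP/1.1\r\nHost: %s\r\n"
            "Connection: close\r\n\r\n" % (path, host)).encode("ascii")

def https_get(host, port=443, path="/"):
    """Open a raw TCP socket, wrap it with an SSL context, send the GET."""
    context = ssl.create_default_context()  # verifies against the system CA store
    with socket.create_connection((host, port)) as sock:
        # server_hostname enables SNI and hostname checking
        with context.wrap_socket(sock, server_hostname=host) as ssock:
            ssock.sendall(build_get_request(host, path))
            chunks = []
            while True:
                chunk = ssock.recv(4096)
                if not chunk:
                    break
                chunks.append(chunk)
    return b"".join(chunks)
```

Because Connection: close is sent, reading until recv() returns empty collects the whole response; parsing the HTTP reply is still up to you.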
I'm planning on incorporating a server into an application I'm developing (none of the data being transferred will be sensitive). I've set up port forwarding on my router that points to the server on the network. Here is a snippet of the server side code:
import time
import threading
import socketserver
import ssl
class ThreadedTCPRequestHandler(socketserver.StreamRequestHandler):
def handle(self):
# Each new request is handled by this function.
data = str(self.request.recv(4096), 'utf-8')
print('Request received on {}'.format(time.ctime()))
print('{} wrote: {}'.format(self.client_address[0], data))
cur_thread = threading.current_thread()
response = bytes("{}: {}".format(cur_thread.name, data), 'utf-8')
self.request.sendall(response)
class TLSTCPServer(socketserver.TCPServer):
def __init__(self, server_address, request_handler_class, certfile, keyfile, ssl_version=ssl.PROTOCOL_TLSv1_2,
bind_and_activate=True):
socketserver.TCPServer.__init__(self, server_address, request_handler_class, bind_and_activate)
self.certfile = certfile
self.keyfile = keyfile
self.ssl_version = ssl_version
def get_request(self):
newsocket, fromaddr = self.socket.accept()
connstream = ssl.wrap_socket(newsocket,
server_side=True,
certfile=self.certfile,
keyfile=self.keyfile,
ssl_version=self.ssl_version)
return connstream, fromaddr
class ThreadedTCPServer(socketserver.ThreadingMixIn, TLSTCPServer):
pass
if __name__ == "__main__":
HOST, PORT = "0.0.0.0", 6001
# Creates a server that handles each request on a separate thread. "cert.pem" is the TLS certificate and "key.pem"
# is the TLS private key (kept only on the server).
server = ThreadedTCPServer((HOST, PORT), ThreadedTCPRequestHandler, "cert.pem", "key.pem")
ip, port = server.server_address
print('Started server\n')
server.serve_forever()
And here is the client code:
import socket
import time
import ssl
HOST = 'localhost' # This should be the server's public IP when used in production code
PORT = 6001
data = 'Hello!'
start_time = time.time()
try:
# Connect to server and send data
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
ssl_sock = ssl.wrap_socket(sock,
ca_certs="cert.pem",
cert_reqs=ssl.CERT_REQUIRED,
ssl_version=ssl.PROTOCOL_TLSv1_2)
ssl_sock.connect((HOST, PORT))
ssl_sock.sendall(data.encode())
# Receive data from the server and shut down
received = ssl_sock.recv(4096)
    elapsed_time = round(time.time() - start_time, 2)
    print("Sent: {}".format(data))
    print("Received: {}".format(received.decode('utf-8')))
    print("Elapsed: {}s \n".format(elapsed_time))
ssl_sock.close()
except Exception as e:
print(format(e))
Note that cert.pem and key.pem are generated with this command in a Mac or Linux terminal: openssl req -newkey rsa:4096 -nodes -sha512 -x509 -days 3650 -out cert.pem -keyout key.pem
The server uses TLS to secure the data, and requests are handled on separate threads. The amount of computation done for each request will be relatively small, as it would mainly consist of reading and writing small amounts of data to a database with each request.
My main concern is that somebody acting maliciously could figure out what the server's public IP address is and perform a DDOS attack. One way I can think to mitigate this is to deny requests made too frequently from the same client address. Are there any other ways to mitigate such attacks? Also, is running a secure server in Python a good idea or should I be looking elsewhere? Thank you in advance.
--- EDIT ---
I was thinking of checking whether the same user makes too many requests in a certain amount of time. Since the requests are on a timer (say, every 5 seconds), any requests made more frequently are deemed suspicious. As long as the incoming requests don't saturate the router's bandwidth, I should, in theory, be able to deny some requests. However, if multiple machines make requests from the same network, I can't just look at the incoming requests' public IP addresses, since I could be denying perfectly valid requests. Is there any identifier tied to the machine making the request?
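The throttling idea described above can be sketched as a sliding-window counter keyed on whatever identifier is available; note that at the TCP level the peer's IP address and port are essentially all you get, so there is no reliable per-machine ID without adding something at the application layer. The class and thresholds below are illustrative assumptions:

```python
import time
from collections import defaultdict, deque

class RateLimiter:
    """Allow at most max_requests per window_seconds for each client key."""

    def __init__(self, max_requests=5, window_seconds=60):
        self.max_requests = max_requests
        self.window = window_seconds
        self.history = defaultdict(deque)  # client key -> recent request times

    def allow(self, client_key, now=None):
        """Record a request and report whether it is within the allowance."""
        now = time.monotonic() if now is None else now
        timestamps = self.history[client_key]
        # Drop requests that have slid out of the window
        while timestamps and now - timestamps[0] > self.window:
            timestamps.popleft()
        if len(timestamps) >= self.max_requests:
            return False
        timestamps.append(now)
        return True
```

In the handler above, `self.client_address[0]` would be the natural key; as the answer below this question points out, this only protects the application layer, not the pipe itself.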
When a DDoS attack gets to you, it is too late. The packets have arrived at your server and are filling up your pipe. No matter what you do, they are already there, and many of them; you can discard them, but others won't be able to reach you anyway.
DDoS protection must be done uplink, by someone who will have the capacity to decide whether a packet is malicious or not. This is a magical operation which companies such as Cloudflare or Akamai make you pay a lot for.
Another possibility is to change your DNS entry to point somewhere else during the attack. This is really just a nice-to-have, so that your customers know that your site is "under maintenance, back soon".
I have written this HTTP web server in Python, which simply sends the reply "Website Coming Soon!" to the browser/client, but I want the web server to send back the URL given by the client. For example, if I write
http://localhost:13555/ChessBoard_x16_y16.bmp
then the server should reply with the same URL instead of the "Website Coming Soon!" message.
How can I do this?
Server Code:
import sys
import http.server
from http.server import HTTPServer
from http.server import SimpleHTTPRequestHandler
#import usb.core
class MyHandler(SimpleHTTPRequestHandler): #handles client requests (by me)
#def init(self,req,client_addr,server):
# SimpleHTTPRequestHandler.__init__(self,req,client_addr,server)
def do_GET(self):
response="Website Coming Soon!"
self.send_response(200)
self.send_header("Content-type", "application/json;charset=utf-8")
self.send_header("Content-length", len(response))
self.end_headers()
self.wfile.write(response.encode("utf-8"))
self.wfile.flush()
print(response)
HandlerClass = MyHandler
Protocol = "HTTP/1.1"
port = 13555
server_address = ('localhost', port)
HandlerClass.protocol_version = Protocol
try:
httpd = HTTPServer(server_address, MyHandler)
print ("Server Started")
httpd.serve_forever()
except:
print('Shutting down server due to some problems!')
httpd.socket.close()
You can do what you're asking, sort of, but it's a little complicated.
When a client (e.g., a web browser) connects to your web server, it sends a request that looks like this:
GET /ChessBoard_x16_y16.bmp HTTP/1.1
Host: localhost:13555
This assumes your client is using HTTP/1.1, which is likely true of anything you'll find these days. If you expect HTTP/1.0 or earlier clients, life is much more difficult because there is no Host: header.
Using the value of the Host header and the path passed as an argument to the GET request, you can construct a URL that in many cases will match the URL the client was using.
But it won't necessarily match in all cases:
There may be a proxy in between the client and your server, in which case both the path and hostname/port seen by your code may be different from that used by the client.
There may be packet manipulation rules in place that modify the destination IP address and/or port, so that the connection seen by your code does not match the parameters used by the client.
In your do_GET method, you can access request headers via the
self.headers attribute and the request path via self.path. For example:
def do_GET(self):
    # self.path already starts with '/', so no separator is needed
    response = 'http://%s%s' % (self.headers['Host'], self.path)
    self.send_response(200)
    self.send_header("Content-length", str(len(response)))
    self.end_headers()
    self.wfile.write(response.encode("utf-8"))
I'd like to manually (using the socket and ssl modules) make an HTTPS request through a proxy which itself uses HTTPS.
I can perform the initial CONNECT exchange just fine:
import ssl, socket
PROXY_ADDR = ("proxy-addr", 443)
CONNECT = "CONNECT example.com:443 HTTP/1.1\r\n\r\n"
sock = socket.create_connection(PROXY_ADDR)
sock = ssl.wrap_socket(sock)
sock.sendall(CONNECT)
s = ""
while s[-4:] != "\r\n\r\n":
s += sock.recv(1)
print repr(s)
The above code prints HTTP/1.1 200 Connection established plus some headers, which is what I expect. So now I should be ready to make the request, e.g.
sock.sendall("GET / HTTP/1.1\r\n\r\n")
but the above code returns
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>400 Bad Request</title>
</head><body>
<h1>Bad Request</h1>
<p>Your browser sent a request that this server could not understand.<br />
Reason: You're speaking plain HTTP to an SSL-enabled server port.<br />
Instead use the HTTPS scheme to access this URL, please.<br />
</body></html>
This makes sense too, since I still need to do an SSL handshake with the example.com server to which I'm tunneling. However, if instead of immediately sending the GET request I say
sock = ssl.wrap_socket(sock)
to do the handshake with the remote server, then I get an exception:
Traceback (most recent call last):
File "so_test.py", line 18, in <module>
ssl.wrap_socket(sock)
File "/usr/lib/python2.6/ssl.py", line 350, in wrap_socket
suppress_ragged_eofs=suppress_ragged_eofs)
File "/usr/lib/python2.6/ssl.py", line 118, in __init__
self.do_handshake()
File "/usr/lib/python2.6/ssl.py", line 293, in do_handshake
self._sslobj.do_handshake()
ssl.SSLError: [Errno 1] _ssl.c:480: error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol
So how can I do the SSL handshake with the remote example.com server?
EDIT: I'm pretty sure that no additional data is available before my second call to wrap_socket because calling sock.recv(1) blocks indefinitely.
This should work if the CONNECT string is rewritten as follows:
CONNECT = "CONNECT %s:%s HTTP/1.0\r\nConnection: close\r\n\r\n" % (server, port)
Not sure why this works; maybe it has something to do with the proxy I'm using. Here's some example code:
from OpenSSL import SSL
import socket
def verify_cb(conn, cert, errun, depth, ok):
return True
server = 'mail.google.com'
port = 443
PROXY_ADDR = ("proxy.example.com", 3128)
CONNECT = "CONNECT %s:%s HTTP/1.0\r\nConnection: close\r\n\r\n" % (server, port)
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect(PROXY_ADDR)
s.send(CONNECT)
print s.recv(4096)
ctx = SSL.Context(SSL.SSLv23_METHOD)
ctx.set_verify(SSL.VERIFY_PEER, verify_cb)
ss = SSL.Connection(ctx, s)
ss.set_connect_state()
ss.do_handshake()
cert = ss.get_peer_certificate()
print cert.get_subject()
ss.shutdown()
ss.close()
Note how the socket is first opened and the open socket is then placed in the SSL context. Then I manually initiate the SSL handshake. And the output:
HTTP/1.1 200 Connection established
<X509Name object '/C=US/ST=California/L=Mountain View/O=Google Inc/CN=mail.google.com'>
It's based on pyOpenSSL because I needed to fetch invalid certificates too, and Python's built-in ssl module will always try to verify the certificate it receives.
Judging from the APIs of the OpenSSL and GnuTLS libraries, stacking an SSLSocket onto an SSLSocket is actually not straightforwardly possible: they provide special read/write functions to implement the encryption, and they are not able to use those functions themselves when wrapping a pre-existing SSLSocket.
The error is therefore caused by the inner SSLSocket reading directly from the system socket rather than from the outer SSLSocket. This results in sending data that doesn't belong to the outer SSL session, which ends badly and certainly never returns a valid ServerHello.
Concluding from that, I would say there is no simple way to implement what you (and actually myself) would like to accomplish.
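For what it's worth, newer Python versions do offer a way around this: ssl.MemoryBIO and SSLContext.wrap_bio let an inner TLS session read and write its ciphertext through in-memory buffers, which you can then relay over the outer SSLSocket yourself. A minimal sketch of the first handshake step (the hostname is a placeholder):

```python
import ssl

# The inner TLS session reads/writes ciphertext through memory buffers
# instead of a real socket, so its bytes can be relayed over the outer
# SSLSocket by hand.
incoming = ssl.MemoryBIO()   # ciphertext arriving from the outer channel
outgoing = ssl.MemoryBIO()   # ciphertext to forward over the outer channel
context = ssl.create_default_context()
inner = context.wrap_bio(incoming, outgoing, server_hostname="example.com")

try:
    inner.do_handshake()     # can't finish yet: no server reply available
except ssl.SSLWantReadError:
    pass

# The ClientHello is now waiting in `outgoing`; send it over the outer
# SSLSocket, feed the server's reply into `incoming`, and repeat until
# the handshake completes.
client_hello = outgoing.read()
```

The relaying loop (pump `outgoing` to the outer socket, pump received bytes into `incoming`, retry) is what the socketpair-based answer below implements with an extra thread.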
Finally I got somewhere, expanding on the answers from @kravietz and @02strich.
Here's the code
import threading
import select
import socket
import ssl
server = 'mail.google.com'
port = 443
PROXY = ("localhost", 4433)
CONNECT = "CONNECT %s:%s HTTP/1.0\r\nConnection: close\r\n\r\n" % (server, port)
class ForwardedSocket(threading.Thread):
def __init__(self, s, **kwargs):
threading.Thread.__init__(self)
self.dest = s
self.oursraw, self.theirsraw = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)
self.theirs = socket.socket(_sock=self.theirsraw)
self.start()
self.ours = ssl.wrap_socket(socket.socket(_sock=self.oursraw), **kwargs)
    def run(self):
        while True:
            rl, wl, xl = select.select([self.dest, self.theirs], [], [], 1)
            print rl, wl, xl
            # FIXME write may block
            if self.theirs in rl:
                self.dest.send(self.theirs.recv(4096))
            if self.dest in rl:
                self.theirs.send(self.dest.recv(4096))
def recv(self, *args):
return self.ours.recv(*args)
    def send(self, *args):
        return self.ours.send(*args)
def test():
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect(PROXY)
s = ssl.wrap_socket(s, ciphers="ALL:aNULL:eNULL")
s.send(CONNECT)
resp = s.read(4096)
print (resp, )
fs = ForwardedSocket(s, ciphers="ALL:aNULL:eNULL")
fs.send("foobar")
Don't mind the custom ciphers=; that's only because I didn't want to deal with certificates.
And here's the depth-1 SSL output, showing the CONNECT, my typed response to it, and the depth-2 SSL negotiation as binary rubbish:
[dima#bmg ~]$ openssl s_server -nocert -cipher "ALL:aNULL:eNULL"
Using default temp DH parameters
Using default temp ECDH parameters
ACCEPT
-----BEGIN SSL SESSION PARAMETERS-----
MHUCAQECAgMDBALAGQQgmn6XfJt8ru+edj6BXljltJf43Sz6AmacYM/dSmrhgl4E
MOztEauhPoixCwS84DL29MD/OxuxuvG5tnkN59ikoqtfrnCKsk8Y9JtUU9zuaDFV
ZaEGAgRSnJ81ogQCAgEspAYEBAEAAAA=
-----END SSL SESSION PARAMETERS-----
Shared ciphers: [snipped]
CIPHER is AECDH-AES256-SHA
Secure Renegotiation IS supported
CONNECT mail.google.com:443 HTTP/1.0
Connection: close
sagq
�u\�0�,�(�$��
�"�!��kj98���� �m:��2�.�*�&���=5�����
��/�+�'�#�� ����g#32��ED���l4�F�1�-�)�%���</�A������
�� ������
�;��A��q�J&O��y�l
It doesn't sound like there's anything wrong with what you're doing; it's certainly possible to call wrap_socket() on an existing SSLSocket.
The 'unknown protocol' error can occur (amongst other reasons) if there's extra data waiting to be read on the socket at the point you call wrap_socket(), for instance an extra \r\n or an HTTP error (due to a missing cert on the server end, for instance). Are you certain you've read everything available at that point?
If you can force the first SSL channel to use a "plain" RSA cipher (i.e. non-Diffie-Hellman) then you may be able to use Wireshark to decrypt the stream to see what's going on.
Building on @kravietz's answer, here is a version that works in Python 3 through a Squid proxy:
from OpenSSL import SSL
import socket
def verify_cb(conn, cert, errun, depth, ok):
return True
server = 'mail.google.com'
port = 443
PROXY_ADDR = ("<proxy_server>", 3128)
CONNECT = "CONNECT %s:%s HTTP/1.0\r\nConnection: close\r\n\r\n" % (server, port)
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect(PROXY_ADDR)
s.send(str.encode(CONNECT))
s.recv(4096)
ctx = SSL.Context(SSL.SSLv23_METHOD)
ctx.set_verify(SSL.VERIFY_PEER, verify_cb)
ss = SSL.Connection(ctx, s)
ss.set_connect_state()
ss.do_handshake()
cert = ss.get_peer_certificate()
print(cert.get_subject())
ss.shutdown()
ss.close()
This works in Python 2 also.