An asyncore-based XMPP client opens a normal TCP connection to an XMPP server. The server indicates it requires an encrypted connection. The client is now expected to start a TLS handshake so that subsequent requests can be encrypted.
tlslite integrates with asyncore, but the sample code is for a server (?) and I don't understand what it's doing.
I'm on Python 2.5. How can I get the TLS magic working?
Here's what ended up working for me:
from tlslite.api import *

def handshakeTls(self):
    """
    Encrypt the socket using the tlslite module
    """
    self.logger.info("activating TLS encryption")
    self.socket = TLSConnection(self.socket)
    self.socket.handshakeClientCert()
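For context, this call typically belongs right after the server answers the STARTTLS request with <proceed/>; here is a hedged sketch of where it might sit in an asyncore-based XMPP stream handler (handle_stanza, send_raw and send_stream_header are illustrative names, not from any particular library):

def handle_stanza(self, stanza):
    # Illustrative only: react to the server's STARTTLS negotiation.
    if stanza.name == 'features' and stanza.has_child('starttls'):
        self.send_raw("<starttls xmlns='urn:ietf:params:xml:ns:xmpp-tls'/>")
    elif stanza.name == 'proceed':
        self.handshakeTls()           # wrap the existing socket in TLS
        self.send_stream_header()     # XMPP requires a fresh stream after TLS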
Definitely check out Twisted and wokkel. I've been building tons of XMPP bots and components with them, and it's a dream.
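If you go that route, a minimal sketch of a wokkel client might look like the following (assuming wokkel's XMPPClient service API; the JID, password and resource are placeholders, and wokkel/Twisted handle the STARTTLS step for you):

from twisted.internet import reactor
from twisted.words.protocols.jabber.jid import JID
from wokkel.client import XMPPClient

# Placeholders: replace with your own JID and password.
client = XMPPClient(JID("user@example.com/bot"), "secret")
client.logTraffic = True          # print the XML stream for debugging
client.startService()             # connects and negotiates TLS/SASL
reactor.run()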
I've followed what I believe are all the steps tlslite documents to make an asyncore client work, but I can't actually get it to work. The only asyncore client I have at hand to tweak for the purpose is the example in the Python docs, which is an HTTP 1.0 client, and because of this I believe I'm trying to set up an HTTPS connection in a very half-baked way. And I have no asyncore XMPP client, nor any XMPP server requesting TLS, to get anywhere close to your situation. Nevertheless I decided to share the fruits of my work anyway because (even though some step may be missing) it does seem to be a bit better than what you previously had -- I think I'm showing all the needed steps in the __init__. BTW, I copied the pem files from the tlslite/test directory.
import asyncore, socket
from tlslite.api import *

# Client certificate and key (PEM files copied from the tlslite/test directory)
s = open("./clientX509Cert.pem").read()
x509 = X509()
x509.parse(s)
certChain = X509CertChain([x509])

s = open("./clientX509Key.pem").read()
privateKey = parsePEMKey(s, private=True)

class http_client(TLSAsyncDispatcherMixIn, asyncore.dispatcher):

    ac_in_buffer_size = 16384

    def __init__(self, host, path):
        asyncore.dispatcher.__init__(self)
        self.create_socket(socket.AF_INET, socket.SOCK_STREAM)
        self.connect((host, 80))
        # Layer TLS over the plain socket and start an asynchronous handshake
        TLSAsyncDispatcherMixIn.__init__(self, self.socket)
        self.tlsConnection.ignoreAbruptClose = True
        handshaker = self.tlsConnection.handshakeClientCert(
            certChain=certChain,
            privateKey=privateKey,
            async=True)
        self.setHandshakeOp(handshaker)
        self.buffer = 'GET %s HTTP/1.0\r\n\r\n' % path

    def handle_connect(self):
        pass

    def handle_close(self):
        self.close()

    def handle_read(self):
        print self.recv(8192)

    def writable(self):
        return (len(self.buffer) > 0)

    def handle_write(self):
        sent = self.send(self.buffer)
        self.buffer = self.buffer[sent:]

c = http_client('www.readyhosting.com', '/')
asyncore.loop()
This is a mix of the asyncore example http client in the Python docs, plus what I've gleaned from the tlslite docs and have been able to reverse engineer from their sources. Hope this (even though incomplete/not working) can at least advance you in your quest...
Personally, in your shoes, I'd consider switching from asyncore to Twisted: asyncore is old and rusty, while Twisted already integrates a lot of juicy, useful bits (the URL I gave points to a part of the docs that already integrates TLS and XMPP for you...).
Related
I'm writing a program to download a given webpage. Due to some restrictions I need to use only raw python sockets for all the connections. So I make a socket connection to a given domain (the Host field of the request) and then send the GET request over it. Now when the url is an https url, I think I need to do the SSL handshake first, because otherwise I'm getting non-200 OK responses from the server and other error responses mentioning P3P policies. I inspected how curl manages to download successfully while I can't: the only difference is that curl does the SSL handshake first.
So I'm wondering how to do the SSL handshake with raw python sockets? Basically I want the easiest solution that requires the minimum beyond raw sockets.
Here is an example of a TCP client with SSL. Not sure if it's the best way to download a web page, but it should answer your question about the "SSL handshake in raw python socket". You will probably have to adapt the struct.pack/unpack to your own framing, but you get the general idea:
import socket
import ssl
import struct

class NotConnectedException(Exception):
    def __init__(self, message=None, node=None):
        self.message = message
        self.node = node

class DisconnectedException(Exception):
    def __init__(self, message=None, node=None):
        self.message = message
        self.node = node

class Connector:
    def __init__(self):
        self.sock = None
        self.ssl_sock = None

    def is_connected(self):
        return (self.sock and self.ssl_sock)

    def open(self, hostname, port, cacert):
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        # Wrap the plain socket before connecting; the TLS handshake happens on connect()
        self.ssl_sock = ssl.wrap_socket(self.sock, ca_certs=cacert, cert_reqs=ssl.CERT_REQUIRED)
        if hostname == socket.gethostname():
            ipaddress = socket.gethostbyname_ex(hostname)[2][0]
            self.ssl_sock.connect((ipaddress, port))
        else:
            self.ssl_sock.connect((hostname, port))
        self.sock.setsockopt(socket.SOL_TCP, socket.TCP_NODELAY, 1)

    def close(self):
        if self.sock: self.sock.close()
        self.sock = None
        self.ssl_sock = None

    def send(self, buffer):
        if not self.ssl_sock: raise NotConnectedException("Not connected (SSL Socket is null)")
        # Length-prefix framing: 4-byte big-endian size, then the payload
        self.ssl_sock.sendall(struct.pack('!I', len(buffer)))
        self.ssl_sock.sendall(buffer)

    def receive(self):
        if not self.ssl_sock: raise NotConnectedException("Not connected (SSL Socket is null)")
        data_size_buffer = self.ssl_sock.recv(4)
        if len(data_size_buffer) <= 0:
            raise DisconnectedException()
        data_size = struct.unpack('!I', data_size_buffer)[0]
        received_size = 0
        data_buffer = ""
        while received_size < data_size:
            chunk = self.ssl_sock.recv(min(1024, data_size - received_size))
            if not chunk:
                raise DisconnectedException()
            data_buffer += chunk
            received_size += len(chunk)
        return data_buffer
Then you use the class like this:
connector = Connector()
connector.open(server_ip, server_port, "path_to_the_CA_cert.pem")
connector.send(your_data)
response = connector.receive()
connector.close()
You can use the wrap_socket function of the Python ssl module to turn your socket into one that talks SSL. Once you've done this you can use it as normal, but internally the data will be encrypted and decrypted for you. These are the docs for the function:
https://docs.python.org/2/library/ssl.html#ssl.wrap_socket
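For instance, a minimal sketch of wrapping a raw socket and sending an HTTPS GET might look like this (Python 2; the host is a placeholder, and wrap_socket without arguments does no certificate verification, so pass ca_certs/cert_reqs if you need that):

import socket
import ssl

host = "example.com"               # placeholder
sock = socket.create_connection((host, 443))
ssl_sock = ssl.wrap_socket(sock)   # TLS handshake happens here

ssl_sock.sendall("GET / HTTP/1.0\r\nHost: %s\r\n\r\n" % host)
response = ""
while True:
    chunk = ssl_sock.recv(4096)
    if not chunk:
        break
    response += chunk
ssl_sock.close()
print response[:200]               # status line and the first headers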
I think the easier way to do that would be to use an SSL context and wrap the TCP socket.
The Python ssl module's documentation gives a very thorough explanation with examples. I recommend reading the relevant sections of the Python 2 or Python 3 ssl module documentation; it should be very easy to achieve what you want.
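As a rough sketch of the context-based approach (Python 3 style; the hostname is a placeholder):

import socket
import ssl

hostname = "example.com"                  # placeholder
context = ssl.create_default_context()    # loads system CAs, enables hostname checks

with socket.create_connection((hostname, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as ssl_sock:
        ssl_sock.sendall(b"GET / HTTP/1.1\r\nHost: " + hostname.encode() +
                         b"\r\nConnection: close\r\n\r\n")
        data = b""
        while True:
            chunk = ssl_sock.recv(4096)
            if not chunk:
                break
            data += chunk
print(data.split(b"\r\n")[0])             # the status line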
Hope this helps!
I'm planning on incorporating a server into an application I'm developing (none of the data being transferred will be sensitive). I've set up port forwarding on my router that points to the server on the network. Here is a snippet of the server side code:
import time
import threading
import socketserver
import ssl

class ThreadedTCPRequestHandler(socketserver.StreamRequestHandler):

    def handle(self):
        # Each new request is handled by this function.
        data = str(self.request.recv(4096), 'utf-8')
        print('Request received on {}'.format(time.ctime()))
        print('{} wrote: {}'.format(self.client_address[0], data))
        cur_thread = threading.current_thread()
        response = bytes("{}: {}".format(cur_thread.name, data), 'utf-8')
        self.request.sendall(response)

class TLSTCPServer(socketserver.TCPServer):

    def __init__(self, server_address, request_handler_class, certfile, keyfile,
                 ssl_version=ssl.PROTOCOL_TLSv1_2, bind_and_activate=True):
        socketserver.TCPServer.__init__(self, server_address, request_handler_class, bind_and_activate)
        self.certfile = certfile
        self.keyfile = keyfile
        self.ssl_version = ssl_version

    def get_request(self):
        newsocket, fromaddr = self.socket.accept()
        connstream = ssl.wrap_socket(newsocket,
                                     server_side=True,
                                     certfile=self.certfile,
                                     keyfile=self.keyfile,
                                     ssl_version=self.ssl_version)
        return connstream, fromaddr

class ThreadedTCPServer(socketserver.ThreadingMixIn, TLSTCPServer):
    pass

if __name__ == "__main__":
    HOST, PORT = "0.0.0.0", 6001
    # Creates a server that handles each request on a separate thread. "cert.pem" is the TLS
    # certificate and "key.pem" is the TLS private key (kept only on the server).
    server = ThreadedTCPServer((HOST, PORT), ThreadedTCPRequestHandler, "cert.pem", "key.pem")
    ip, port = server.server_address
    print('Started server\n')
    server.serve_forever()
And here is the client code:
import socket
import time
import ssl

HOST = 'localhost'  # This should be the server's public IP when used in production code
PORT = 6001

data = 'Hello!'
start_time = time.time()

try:
    # Connect to server and send data
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    ssl_sock = ssl.wrap_socket(sock,
                               ca_certs="cert.pem",
                               cert_reqs=ssl.CERT_REQUIRED,
                               ssl_version=ssl.PROTOCOL_TLSv1_2)
    ssl_sock.connect((HOST, PORT))
    ssl_sock.sendall(data.encode())

    # Receive data from the server and shut down
    received = ssl_sock.recv(4096)
    elapsed_time = round(time.time() - start_time, 2)

    print("Sent: {}".format(data))
    print("Received: {}".format(received.decode('utf-8')))
    print("Elapsed: {}s \n".format(elapsed_time))

    ssl_sock.close()
except Exception as e:
    print(format(e))
Note that cert.pem and key.pem are generated with this command in a Mac or Linux terminal: openssl req -newkey rsa:4096 -nodes -sha512 -x509 -days 3650 -out cert.pem -keyout key.pem
The server uses TLS to secure the data, and requests are handled on separate threads. The amount of computation done for each request will be relatively small, mainly reading and writing small amounts of data to a database.
My main concern is that somebody acting maliciously could figure out the server's public IP address and perform a DDoS attack. One way I can think of to mitigate this is to deny requests made too frequently from the same client address. Are there any other ways to mitigate such attacks? Also, is running a secure server in Python a good idea, or should I be looking elsewhere? Thank you in advance.
--- EDIT ---
I was thinking of checking whether the same user makes too many requests in a certain amount of time. Since the requests are on a timer (say, 5 seconds) any requests made more frequently are deemed suspicious. As long as the incoming requests don't saturate the router's bandwidth, I should, in theory, be able to deny some requests. However, if multiple machines make requests from the same network, I can't just look at the incoming requests' public IP addresses, since I could be denying perfectly valid requests. Is there any ID identifiable to the machine making the request?
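To illustrate the per-address throttling idea (only a mitigation against abusive clients, not real DDoS protection), here's a rough sketch of how the handler above could reject requests arriving more often than every 5 seconds from the same address; MIN_INTERVAL and the dict-based bookkeeping are my own choices, not from the original code:

import threading
import time

MIN_INTERVAL = 5.0               # seconds allowed between requests per client IP
_last_seen = {}                  # client IP -> timestamp of last accepted request
_last_seen_lock = threading.Lock()

def allow_request(client_ip):
    """Return True if this client hasn't made a request in the last MIN_INTERVAL seconds."""
    now = time.time()
    with _last_seen_lock:
        last = _last_seen.get(client_ip, 0)
        if now - last < MIN_INTERVAL:
            return False
        _last_seen[client_ip] = now
        return True

# Inside ThreadedTCPRequestHandler.handle(), before doing any work:
#     if not allow_request(self.client_address[0]):
#         self.request.sendall(b'Too many requests')
#         return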
When a DDoS attack gets to you, it is too late. The packets have arrived at your server and are filling up your pipe. No matter what you do, they are already there, many of them. You can discard them, but others won't be able to reach you anyway.
DDoS protection must be done uplink, by someone who has the capacity to decide whether a packet is malicious or not. This is a magical operation which companies such as Cloudflare or Akamai make you pay a lot for.
Another possibility is to change your DNS entry to point somewhere else during the attack. This is really a nice-to-have, so that your customers know that your site is "under maintenance, back soon".
I am trying to understand the examples given here: https://github.com/tavendo/AutobahnPython/tree/master/examples/twisted/wamp/basic/pubsub/basic
I built this script, which is supposed to handle multiple pub/sub websocket connections and also open a TCP port (8123) for incoming control messages. When a message comes in on port 8123, the application should broadcast the message received on port 8123 to all connected subscribers. How do I make NotificationProtocol or NotificationFactory talk to the websocket and make the websocket server broadcast a message?
Another thing that I do not understand is the URL. The client javascript connects to the url http://:8080/ws. Where does the "ws" come from?
Also can someone explain the purpose of RouterFactory, RouterSessionFactory and this bit:
from autobahn.wamp import types
session_factory.add( WsNotificationComponent(types.ComponentConfig(realm = "realm1" )))
my code is below:
import sys, time
from twisted.internet import reactor
from twisted.internet.protocol import Protocol, Factory
from twisted.internet.defer import inlineCallbacks
from autobahn.twisted.wamp import ApplicationSession
from autobahn.twisted.util import sleep

class NotificationProtocol(Protocol):
    def __init__(self, factory):
        self.factory = factory

    def dataReceived(self, data):
        print "received new data"

class NotificationFactory(Factory):
    protocol = NotificationProtocol

class WsNotificationComponent(ApplicationSession):

    @inlineCallbacks
    def onJoin(self, details):
        counter = 0
        while True:
            self.publish("com.myapp.topic1", "test %d" % counter)
            counter += 1
            yield sleep(1)

## we use an Autobahn utility to install the "best" available Twisted reactor
##
from autobahn.twisted.choosereactor import install_reactor
reactor = install_reactor()

## create a WAMP router factory
##
from autobahn.wamp.router import RouterFactory
router_factory = RouterFactory()

## create a WAMP router session factory
##
from autobahn.twisted.wamp import RouterSessionFactory
session_factory = RouterSessionFactory(router_factory)

from autobahn.wamp import types
session_factory.add(WsNotificationComponent(types.ComponentConfig(realm="realm1")))

from autobahn.twisted.websocket import WampWebSocketServerFactory
transport_factory = WampWebSocketServerFactory(session_factory)
transport_factory.setProtocolOptions(failByDrop=False)

from twisted.internet.endpoints import serverFromString

## start the server from an endpoint
##
server = serverFromString(reactor, "tcp:8080")
server.listen(transport_factory)

notificationFactory = NotificationFactory()
reactor.listenTCP(8123, notificationFactory)

reactor.run()
"How do i make NotificationProtocol or NotificationFactory talk to the websocket and make the websocket server broadcast a message":
Check out one of my other answers on SO: Persistent connection in twisted. Jump down to the example code and model your websocket logic like the "IO" logic and you'll have a good fit (You might also want to see the follow-on answer about the newer endpoint calls from one of the twisted core-team too)
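As a rough illustration of that wiring in terms of the question's own classes (untested against current Autobahn versions; the wamp_session attribute is my own addition, and Twisted's Factory.buildProtocol() already sets .factory on the protocol, so the custom __init__ from the question isn't needed):

from twisted.internet.protocol import Protocol, Factory
from autobahn.twisted.wamp import ApplicationSession

class NotificationProtocol(Protocol):
    def dataReceived(self, data):
        # self.factory is set by Twisted's Factory.buildProtocol()
        session = self.factory.wamp_session
        if session is not None:
            # broadcast the control message to every subscriber of the topic
            session.publish("com.myapp.topic1", data.strip())

class NotificationFactory(Factory):
    protocol = NotificationProtocol
    wamp_session = None            # filled in once the WAMP component joins

class WsNotificationComponent(ApplicationSession):
    def onJoin(self, details):
        # make the joined session available to the TCP factory
        NotificationFactory.wamp_session = self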
"Where does the "ws" come from ?"
Websockets are implemented by retasking http connections, which by their nature have to have a specific path on the request. That "ws" path typically would map to a special http handler that autobahn is building for you to process websockets (or at least that's what your javascript is expecting...). Assuming thing are setup right you can actually point your web-browswer at that url and it should print back an error about the websocket handshake (Expected WebSocket Headers in my case, but I'm using cyclones websockets not autobahn).
P.S. one of the cool side-effects from "websockets must have a specific path" is that you can actually mix websockets and normal http content on the same handler/listen/port, this gets really handy when your trying to run them all on the same SSL port because your trying to avoid the requirement of a proxy front-ending your code.
I am writing a tool in python (platform is linux). One of the tasks is to capture a live tcp stream and apply a function to each line. Currently I'm using:
import subprocess

proc = subprocess.Popen(['sudo', 'tcpflow', '-C', '-i', interface, '-p', 'src', 'host', ip],
                        stdout=subprocess.PIPE)
for line in iter(proc.stdout.readline, ''):
    do_something(line)
This works quite well (with the appropriate entry in /etc/sudoers), but I would like to avoid calling an external program.
So far I have looked into the following possibilities:
flowgrep: a python tool which looks just like what I need, BUT: it uses pynids internally, which is 7 years old and seems pretty much abandoned. There is no pynids package for my gentoo system and it ships with a patched version of libnids which I couldn't compile without further tweaking.
scapy: this is a packet manipulation program/library for python; I'm not sure if tcp stream reassembly is supported (a rough sniffing sketch follows this list).
pypcap or pylibpcap as wrappers for libpcap. Again, libpcap is for packet capturing, where I need stream reassembly, which is not possible according to this question.
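For reference, a minimal scapy sketch of the sniffing part looks like the following. It delivers raw per-packet payloads only (no TCP reassembly, so lines split across segments are not handled), the interface and host are placeholders, it needs root, and do_something is the same per-line callback as in the snippet above:

from scapy.all import sniff, TCP, Raw

def handle_packet(pkt):
    # called for every captured packet; the payload may be a partial line
    if pkt.haslayer(TCP) and pkt.haslayer(Raw):
        for line in pkt[Raw].load.splitlines():
            do_something(line)

sniff(iface="eth0", filter="tcp and src host 192.0.2.1", prn=handle_packet, store=0)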
Before I dive deeper into any of these libraries I would like to know if maybe someone has a working code snippet (this seems like a rather common problem). I'm also grateful if someone can give advice about the right way to go.
Thanks
Jon Oberheide has led efforts to maintain pynids, which is fairly up to date at:
http://jon.oberheide.org/pynids/
So, this might permit you to further explore flowgrep. Pynids itself handles stream reconstruction rather elegantly. See http://monkey.org/~jose/presentations/pysniff04.d/ for some good examples.
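If pynids does work out for you, the stream-level callback looks roughly like this (a hedged sketch from memory of the pynids/libnids API, so parameter names and half-stream attributes may need checking against the version you build; the interface is a placeholder, only the client-to-server half is handled, and live capture needs root):

import nids

def handle_tcp_stream(tcp):
    # libnids calls this on every TCP connection state change
    if tcp.nids_state == nids.NIDS_JUST_EST:
        tcp.server.collect = 1                    # collect client -> server data
    elif tcp.nids_state == nids.NIDS_DATA:
        # newly reassembled client -> server bytes; libnids discards them
        # after the callback by default
        for line in tcp.server.data[:tcp.server.count_new].splitlines():
            do_something(line)

nids.param("device", "eth0")                      # placeholder capture interface
nids.param("pcap_filter", "tcp")
nids.init()
nids.register_tcp(handle_tcp_stream)
nids.run()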
Just as a follow-up: I abandoned the idea of monitoring the stream at the tcp layer. Instead I wrote a proxy in python and let the connection I want to monitor (an http session) connect through this proxy. The result is more stable and does not need root privileges to run. This solution depends on pymiproxy.
This goes into a standalone program, e.g. helper_proxy.py
from multiprocessing.connection import Listener
import StringIO
from httplib import HTTPResponse
import threading
import time
from miproxy.proxy import RequestInterceptorPlugin, ResponseInterceptorPlugin, AsyncMitmProxy

class FakeSocket(StringIO.StringIO):
    def makefile(self, *args, **kw):
        return self

class Interceptor(RequestInterceptorPlugin, ResponseInterceptorPlugin):
    conn = None

    def do_request(self, data):
        # do whatever you need to send data here, I'm only interested in responses
        return data

    def do_response(self, data):
        if Interceptor.conn:  # if the listener is connected, send the response to it
            response = HTTPResponse(FakeSocket(data))
            response.begin()
            Interceptor.conn.send(response.read())
        return data

def main():
    proxy = AsyncMitmProxy()
    proxy.register_interceptor(Interceptor)
    ProxyThread = threading.Thread(target=proxy.serve_forever)
    ProxyThread.daemon = True
    ProxyThread.start()
    print "Proxy started."
    address = ('localhost', 6000)  # family is deduced to be 'AF_INET'
    listener = Listener(address, authkey='some_secret_password')
    while True:
        Interceptor.conn = listener.accept()
        print "Accepted Connection from", listener.last_accepted
        try:
            Interceptor.conn.recv()
        except:
            time.sleep(1)
        finally:
            Interceptor.conn.close()

if __name__ == '__main__':
    main()
Start it with python helper_proxy.py. This will create a proxy listening for http connections on port 8080 and for another python program on port 6000. Once the other python program has connected on that port, the helper proxy will send all http replies to it. This way the helper proxy can continue to run, keeping the http connection up, while the listener can be restarted for debugging.
Here is how the listener works, e.g. listener.py:
from multiprocessing.connection import Client

def main():
    address = ('localhost', 6000)
    conn = Client(address, authkey='some_secret_password')
    while True:
        print conn.recv()

if __name__ == '__main__':
    main()
This will just print all the replies. Now point your browser to the proxy running on port 8080 and establish the http connection you want to monitor.
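If you'd rather drive a test request from another script instead of a browser, something like this should work (Python 2, matching the code above; the target URL is a placeholder):

import urllib2

# Route a request through the helper proxy on port 8080
proxy = urllib2.ProxyHandler({'http': 'http://localhost:8080'})
opener = urllib2.build_opener(proxy)
print opener.open('http://example.com/').read()[:200]   # first bytes of the reply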