I'm planning on incorporating a server into an application I'm developing (none of the data being transferred will be sensitive). I've set up port forwarding on my router that points to the server on the network. Here is a snippet of the server side code:
import time
import threading
import socketserver
import ssl
class ThreadedTCPRequestHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # Each new request is handled by this function.
        data = str(self.request.recv(4096), 'utf-8')
        print('Request received on {}'.format(time.ctime()))
        print('{} wrote: {}'.format(self.client_address[0], data))
        cur_thread = threading.current_thread()
        response = bytes("{}: {}".format(cur_thread.name, data), 'utf-8')
        self.request.sendall(response)

class TLSTCPServer(socketserver.TCPServer):
    def __init__(self, server_address, request_handler_class, certfile, keyfile,
                 ssl_version=ssl.PROTOCOL_TLSv1_2, bind_and_activate=True):
        socketserver.TCPServer.__init__(self, server_address, request_handler_class, bind_and_activate)
        self.certfile = certfile
        self.keyfile = keyfile
        self.ssl_version = ssl_version

    def get_request(self):
        newsocket, fromaddr = self.socket.accept()
        connstream = ssl.wrap_socket(newsocket,
                                     server_side=True,
                                     certfile=self.certfile,
                                     keyfile=self.keyfile,
                                     ssl_version=self.ssl_version)
        return connstream, fromaddr

class ThreadedTCPServer(socketserver.ThreadingMixIn, TLSTCPServer):
    pass

if __name__ == "__main__":
    HOST, PORT = "0.0.0.0", 6001
    # Creates a server that handles each request on a separate thread. "cert.pem" is the TLS certificate
    # and "key.pem" is the TLS private key (kept only on the server).
    server = ThreadedTCPServer((HOST, PORT), ThreadedTCPRequestHandler, "cert.pem", "key.pem")
    ip, port = server.server_address
    print('Started server\n')
    server.serve_forever()
And here is the client code:
import socket
import time
import ssl

HOST = 'localhost'  # This should be the server's public IP when used in production code
PORT = 6001
data = 'Hello!'

start_time = time.time()
try:
    # Connect to the server and send data
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    ssl_sock = ssl.wrap_socket(sock,
                               ca_certs="cert.pem",
                               cert_reqs=ssl.CERT_REQUIRED,
                               ssl_version=ssl.PROTOCOL_TLSv1_2)
    ssl_sock.connect((HOST, PORT))
    ssl_sock.sendall(data.encode())

    # Receive data from the server and shut down
    received = ssl_sock.recv(4096)
    elapsed_time = round(time.time() - start_time, 2)

    print("Sent: {}".format(data))
    print("Received: {}".format(received.decode('utf-8')))
    print("Elapsed: {}s \n".format(elapsed_time))
    ssl_sock.close()
except Exception as e:
    print(format(e))
Note that cert.pem and key.pem are generated with this command in a Mac or Linux terminal: openssl req -newkey rsa:4096 -nodes -sha512 -x509 -days 3650 -out cert.pem -keyout key.pem
The server uses TLS to secure the data, and requests are handled on separate threads. The amount of computation done for each request will be relatively small, as it would mainly consist of reading and writing small amounts of data to a database with each request.
My main concern is that somebody acting maliciously could figure out what the server's public IP address is and perform a DDoS attack. One way I can think of to mitigate this is to deny requests made too frequently from the same client address. Are there any other ways to mitigate such attacks? Also, is running a secure server in Python a good idea, or should I be looking elsewhere? Thank you in advance.
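A rough sketch of that rate-limiting idea, as a minimal in-memory limiter keyed on the client IP (the 5-second window, the RateLimiter name and the way it plugs into handle() are assumptions for illustration, not part of the code above):

import threading
import time

class RateLimiter:
    """Remembers the last request time per client IP and rejects
    requests that arrive faster than min_interval seconds apart."""
    def __init__(self, min_interval=5.0):
        self.min_interval = min_interval
        self.last_seen = {}
        self.lock = threading.Lock()

    def allow(self, client_ip):
        now = time.monotonic()
        with self.lock:
            last = self.last_seen.get(client_ip)
            self.last_seen[client_ip] = now
            return last is None or (now - last) >= self.min_interval

limiter = RateLimiter(min_interval=5.0)

# Inside ThreadedTCPRequestHandler.handle(), before doing any other work:
#     if not limiter.allow(self.client_address[0]):
#         self.request.close()
#         return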
--- EDIT ---
I was thinking of checking whether the same user makes too many requests in a certain amount of time. Since the requests are on a timer (say, 5 seconds), any requests made more frequently are deemed suspicious. As long as the incoming requests don't saturate the router's bandwidth, I should, in theory, be able to deny some requests. However, if multiple machines make requests from the same network, I can't just look at the incoming requests' public IP addresses, since I could be denying perfectly valid requests. Is there any identifier tied to the machine making the request?
When a DDoS attack gets to you, it is too late. The packets have arrived at your server and are filling up your pipe. No matter what you do, they are already there - many of them. You can discard them, but legitimate packets won't be able to reach you anyway.
DDoS protection must be done uplink, by someone who has the capacity to decide whether a packet is malicious or not. This is a magical operation that companies such as Cloudflare or Akamai make you pay a lot for.
Another possibility is to change your DNS entry to point somewhere else during the attack. This is really a nice-to-have, so that your customers know that your site is "under maintenance, back soon".
Related
I'm relatively new to Python, so apologies for what may be a simple question, I just cannot find the solution. First off, I am not looking for the client's hostname. My situation is that I have a simple socket server (basically this https://docs.python.org/3/library/socketserver.html#socketserver-tcpserver-example) which clients connect to. The exact server code is:
import socketserver

class MyTCPHandler(socketserver.BaseRequestHandler):
    """
    The request handler class for our server.

    It is instantiated once per connection to the server, and must
    override the handle() method to implement communication to the
    client.
    """

    def handle(self):
        # self.request is the TCP socket connected to the client
        self.data = self.request.recv(1024).strip()
        print("{} wrote:".format(self.client_address[0]))
        print(self.data)
        # just send back the same data, but upper-cased
        self.request.sendall(self.data.upper())

if __name__ == "__main__":
    HOST, PORT = "0.0.0.0", 8080

    # Create the server, binding to 0.0.0.0 on port 8080
    with socketserver.TCPServer((HOST, PORT), MyTCPHandler) as server:
        # Activate the server; this will keep running until you
        # interrupt the program with Ctrl-C
        server.serve_forever()
The clients connect successfully and are sending data which the server is receiving. My issue is that I need to know the hostname that the client used to connect. The architecture will be like:
client1 will connect to client1.mydomain.net:8080
client2 will connect to client2.mydomain.net:8080
client3 will connect to client3.mydomain.net:8080
The DNS entry for client1.mydomain.net, client2.mydomain.net and client3.mydomain.net will all map to 123.123.123.123 so behind the scenes there is only one server.
The 3 clients will connect to their respective server and send their data. I have no control over the payload and I cannot augment it with a string or param like "client=1".
So my question is: is there a way in Python sockets (on the server) to know the hostname that a client connected to, so that, for example, I know when I'm processing data from client1?
Thanks!
Nothing at the TCP level reveals which hostname the client connected to. This means there is no way for a generic TCP server to get this information.
Various protocols on top of TCP contain such information, though. For example, HTTP has a Host header so that different domains with different contents can be served on the same IP and port. TLS has the server_name extension (SNI) in the TLS handshake so that a certificate matching the hostname used by the client can be presented.
Thus, if you need this information you need to define your application protocol so that the client will include this information.
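For illustration, if the clients could be made to speak TLS, the server could recover the name through SNI. A rough sketch with Python's ssl module (the certificate file names and the way the hostname is stashed on the socket are assumptions, not part of the question's code):

import socketserver
import ssl

class SNITCPServer(socketserver.TCPServer):
    """Wraps each accepted connection in TLS and records the SNI hostname
    the client used (e.g. client1.mydomain.net)."""
    def __init__(self, server_address, handler_class):
        super().__init__(server_address, handler_class)
        self.context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
        self.context.load_cert_chain('cert.pem', 'key.pem')  # assumed file names
        self.context.set_servername_callback(self._remember_name)

    @staticmethod
    def _remember_name(ssl_sock, server_name, context):
        # Called during the handshake; stash the requested hostname on the
        # SSL socket so the request handler can read it later.
        ssl_sock.requested_hostname = server_name

    def get_request(self):
        newsock, addr = self.socket.accept()
        return self.context.wrap_socket(newsock, server_side=True), addr

class MyTLSHandler(socketserver.BaseRequestHandler):
    def handle(self):
        data = self.request.recv(1024).strip()
        # self.request is the SSL socket returned by get_request()
        print("client connected to:", getattr(self.request, 'requested_hostname', None))
        self.request.sendall(data.upper())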
I am trying to modify a socket server I wrote with the python socket library to use encryption using python's SSL library.
I am now able to successfully open a connection to the server, wrap it with an SSL context and send data to the server, but data sent back to the client is not what it should be.
My suspicion is that the server responses are not being decrypted on the client side, but I don't know why. I'm pretty new to SSL/TLS, and networking in general so... what am I missing?
The client is also written in python (for now, to facilitate testing)
Code:
Relevant Server stuff:
def sslServerLoop():
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind((host, port))
    s.listen(5)
    context = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
    context.load_cert_chain('cert.pem')
    while True:
        conn, addr = s.accept()
        sslConn = context.wrap_socket(conn, server_side=True)
        data = sslConn.recv(1024)
        sslConn.sendall(response)
        sslConn.close()
Relevant Client stuff:
context = ssl.create_default_context(cafile='cert.pem')
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s = context.wrap_socket(s, server_hostname=server_addr)
s.connect((address, port))
s.sendall(msg)
s.shutdown(socket.SHUT_WR)
response = s.recv(1024)
Sending from client to server works fine, but data sent back to the client is wrong. For example if I set response = bytes([1]) on the server side, I receive b'\x17\x03\x03\x00\x19\xac\xb6\x7f#\xc0\xd3\xce%\x13G\x01\xbd\x88y\xf0\xda..\x02\xf9\xe4o\xdd\x1a\xdb' on the client side. Most of that changes every time I try to run it, but the first 5 bytes are always the same (which is partly why I suspect it isn't being decrypted).
cert.pem is a self signed certificate generated using openssl as described in the python 3 SSL module documentation
It is not legal to shut down a socket that is being used for SSL: it is a protocol violation. You must close the connection via the SSL/TLS API you are using.
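For reference, a sketch of the question's client with the offending shutdown removed (server_addr, address, port and msg are the placeholders already used in the question; a single recv per side is assumed to be enough here):

import socket
import ssl

context = ssl.create_default_context(cafile='cert.pem')
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s = context.wrap_socket(s, server_hostname=server_addr)
s.connect((address, port))
s.sendall(msg)

# Do not call s.shutdown(socket.SHUT_WR) here: it bypasses the TLS layer,
# which is why the reply showed up as raw TLS records. Read the response
# through the SSL socket and close it (optionally calling s.unwrap() first
# to perform the TLS closure handshake).
response = s.recv(1024)
s.close()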
Short version: Is there any easy API for encoding an HTTP request (and decoding the response) without actually transmitting and receiving the encoded bytes as part of the process?
Long version: I'm writing some embedded software which uses paramiko to open an SSH session with a server. I then need to make an HTTP request across an SSH channel opened with transport.open_channel('direct-tcpip', <remote address>, <source address>).
requests has transport adapters, which let you substitute your own transport. But the send interface provided by BaseAdapter just accepts a PreparedRequest object which (a) doesn't provide the remote address in any useful way; you need to parse the URL to find out the host and port, and (b) doesn't provide an encoded version of the request, only a dictionary of headers and the encoded body (if any). It also gives you no help in decoding the response. HTTPAdapter defers the whole lot, including encoding the request, making the network connection, sending the bytes, receiving the response bytes and decoding the response, to urllib3.
urllib3 likewise defers to http.client, and http.client's HTTPConnection class has the encoding and network operations all jumbled up together.
Is there a simple way to say, "Give me a bunch of bytes to send to an HTTP server," and "Here's a bunch of bytes from an HTTP server; turn them into a useful Python object"?
This is the simplest implementation of this that I can come up with:
from http.client import HTTPConnection
import requests
from requests.structures import CaseInsensitiveDict
from urllib.parse import urlparse
from argparse import ArgumentParser

class TunneledHTTPConnection(HTTPConnection):
    def __init__(self, transport, *args, **kwargs):
        self.ssh_transport = transport
        HTTPConnection.__init__(self, *args, **kwargs)

    def connect(self):
        self.sock = self.ssh_transport.open_channel(
            'direct-tcpip', (self.host, self.port), ('localhost', 0)
        )

class TunneledHTTPAdapter(requests.adapters.BaseAdapter):
    def __init__(self, transport):
        self.transport = transport

    def close(self):
        pass

    def send(self, request, **kwargs):
        scheme, location, path, params, query, anchor = urlparse(request.url)
        if ':' in location:
            host, port = location.split(':')
            port = int(port)
        else:
            host = location
            port = 80
        connection = TunneledHTTPConnection(self.transport, host, port)
        connection.request(method=request.method,
                           url=request.url,
                           body=request.body,
                           headers=request.headers)
        r = connection.getresponse()
        resp = requests.Response()
        resp.status_code = r.status
        resp.headers = CaseInsensitiveDict(r.headers)
        resp.raw = r
        resp.reason = r.reason
        resp.url = request.url
        resp.request = request
        resp.connection = connection
        resp.encoding = requests.utils.get_encoding_from_headers(resp.headers)
        requests.cookies.extract_cookies_to_jar(resp.cookies, request, r)
        return resp

if __name__ == '__main__':
    import paramiko

    parser = ArgumentParser()
    parser.add_argument('-p', type=int, help='Port the SSH server listens on', default=22)
    parser.add_argument('host', help='SSH server to tunnel through')
    parser.add_argument('username', help='Username on SSH server')
    parser.add_argument('url', help='URL to perform HTTP GET on')
    args = parser.parse_args()

    client = paramiko.SSHClient()
    client.load_system_host_keys()
    client.connect(args.host, args.p, username=args.username)
    transport = client.get_transport()

    s = requests.Session()
    s.mount(args.url, TunneledHTTPAdapter(transport))
    response = s.get(args.url)
    print(response.text)
There are various options to BaseAdapter.send that it doesn't handle, and it completely ignores issues like connection pooling and so on, but it gets the job done.
You could write your own SOCKS4 proxy, run it on localhost, then point your HTTP requests at it. For example, https://urllib3.readthedocs.io/en/latest/advanced-usage.html describes how to use a SOCKS proxy with urllib3.
SOCKS4 is basically a simple handshake followed by raw HTTP/TCP traffic. The handshake conveys the target IP address and port. So your proxy can do the handshake to satisfy the client that it is a SOCKS server, then the proxy can send the "real" traffic straight to the SSH session (and proxy the responses in the reverse direction).
The cool thing about this approach is that it will work with tons of clients--SOCKS has been widespread for a long time.
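A rough sketch of what that SOCKS4 handshake looks like in code, reusing the paramiko transport idea from the earlier answer (the function name, the one-thread-per-direction pumping and the lack of error handling are all simplifications for illustration):

import socket
import struct
import threading

def handle_socks4_client(client_sock, ssh_transport):
    """Perform the SOCKS4 CONNECT handshake, then pump bytes between the
    client and an SSH 'direct-tcpip' channel to the requested target."""
    # Request: VN(1)=4, CD(1)=1 for CONNECT, DSTPORT(2), DSTIP(4), USERID..., NUL
    header = client_sock.recv(8)   # assumes the fixed-size header arrives in one read
    version, command, port = struct.unpack('>BBH', header[:4])
    ip = socket.inet_ntoa(header[4:8])
    while True:                    # discard the USERID field up to the NUL byte
        b = client_sock.recv(1)
        if b in (b'', b'\x00'):
            break
    if version != 4 or command != 1:
        client_sock.sendall(struct.pack('>BBHI', 0, 0x5B, 0, 0))   # request rejected
        client_sock.close()
        return
    channel = ssh_transport.open_channel('direct-tcpip', (ip, port), ('localhost', 0))
    client_sock.sendall(struct.pack('>BBHI', 0, 0x5A, 0, 0))       # request granted
    # From here on it is plain TCP traffic in both directions.
    def pump(src, dst):
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
    threading.Thread(target=pump, args=(client_sock, channel), daemon=True).start()
    pump(channel, client_sock)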
I have written this HTTP web server in Python which simply sends the reply "Website Coming Soon!" to the browser/client, but I want the web server to send back the URL given by the client. For example, if I browse to
http://localhost:13555/ChessBoard_x16_y16.bmp
then the server should reply with that same URL instead of the "Website Coming Soon!" message.
How can I do this?
Server Code:
import sys
import http.server
from http.server import HTTPServer
from http.server import SimpleHTTPRequestHandler
#import usb.core

class MyHandler(SimpleHTTPRequestHandler):  # handles client requests (by me)
    #def init(self, req, client_addr, server):
    #    SimpleHTTPRequestHandler.__init__(self, req, client_addr, server)
    def do_GET(self):
        response = "Website Coming Soon!"
        self.send_response(200)
        self.send_header("Content-type", "application/json;charset=utf-8")
        self.send_header("Content-length", len(response))
        self.end_headers()
        self.wfile.write(response.encode("utf-8"))
        self.wfile.flush()
        print(response)

HandlerClass = MyHandler
Protocol = "HTTP/1.1"
port = 13555
server_address = ('localhost', port)

HandlerClass.protocol_version = Protocol

try:
    httpd = HTTPServer(server_address, MyHandler)
    print("Server Started")
    httpd.serve_forever()
except:
    print('Shutting down server due to some problems!')
    httpd.socket.close()
You can do what you're asking, sort of, but it's a little complicated.
When a client (e.g., a web browser) connects to your web server, it sends a request that looks like this:
GET /ChessBoard_x16_y16.bmp HTTP/1.1
Host: localhost:13555
This assumes your client is using HTTP/1.1, which is likely true of anything you'll find these days. If you expect HTTP/1.0 or earlier clients, life is much more difficult because there is no Host: header.
Using the value of the Host header and the path passed as an argument to the GET request, you can construct a URL that in many cases will match the URL the client was using.
But it won't necessarily match in all cases:
There may be a proxy in between the client and your server, in which case both the path and hostname/port seen by your code may be different from that used by the client.
There may be packet manipulation rules in place that modify the destination IP address and/or port, so that the connection seen by your code does not match the parameters used by the client.
In your do_GET method, you can access request headers via the
self.headers attribute and the request path via self.path. For example:
def do_GET(self):
    response = 'http://%s%s' % (self.headers['host'],
                                self.path)
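Combining that with the response-writing code from the question, a do_GET that echoes the requested URL back could look like the sketch below (untested against the asker's exact setup, and it ignores the proxy caveats above):

def do_GET(self):
    # self.path already starts with '/', so no extra slash is needed
    url = 'http://%s%s' % (self.headers['host'], self.path)
    body = url.encode("utf-8")
    self.send_response(200)
    self.send_header("Content-type", "text/plain;charset=utf-8")
    self.send_header("Content-length", str(len(body)))
    self.end_headers()
    self.wfile.write(body)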
An asyncore-based XMPP client opens a normal TCP connection to an XMPP server. The server indicates it requires an encrypted connection. The client is now expected to start a TLS handshake so that subsequent requests can be encrypted.
tlslite integrates with asyncore, but the sample code is for a server (?) and I don't understand what it's doing.
I'm on Python 2.5. How can I get the TLS magic working?
Here's what ended up working for me:
from tlslite.api import *

def handshakeTls(self):
    """
    Encrypt the socket using the tlslite module
    """
    self.logger.info("activating TLS encryption")
    self.socket = TLSConnection(self.socket)
    self.socket.handshakeClientCert()
Definitely check out twisted and wokkel. I've been building tons of xmpp bots and components with it and it's a dream.
I've followed what I believe are all the steps tlslite documents to make an asyncore client work -- I can't actually get it to work since the only asyncore client I have at hand to tweak for the purpose is the example in the Python docs, which is an HTTP 1.0 client, and I believe that because of this I'm trying to set up an HTTPS connection in a very half-baked way. And I have no asyncore XMPP client, nor any XMPP server requesting TLS, to get anywhere close to your situation. Nevertheless I decided to share the fruits of my work anyway because (even though some step may be missing) it does seem to be a bit better than what you previously had -- I think I'm showing all the needed steps in the __init__. BTW, I copied the pem files from the tlslite/test directory.
import asyncore, socket
from tlslite.api import *

s = open("./clientX509Cert.pem").read()
x509 = X509()
x509.parse(s)
certChain = X509CertChain([x509])
s = open("./clientX509Key.pem").read()
privateKey = parsePEMKey(s, private=True)

class http_client(TLSAsyncDispatcherMixIn, asyncore.dispatcher):

    ac_in_buffer_size = 16384

    def __init__(self, host, path):
        asyncore.dispatcher.__init__(self)
        self.create_socket(socket.AF_INET, socket.SOCK_STREAM)
        self.connect( (host, 80) )
        TLSAsyncDispatcherMixIn.__init__(self, self.socket)
        self.tlsConnection.ignoreAbruptClose = True
        handshaker = self.tlsConnection.handshakeClientCert(
            certChain=certChain,
            privateKey=privateKey,
            async=True)
        self.setHandshakeOp(handshaker)
        self.buffer = 'GET %s HTTP/1.0\r\n\r\n' % path

    def handle_connect(self):
        pass

    def handle_close(self):
        self.close()

    def handle_read(self):
        print self.recv(8192)

    def writable(self):
        return (len(self.buffer) > 0)

    def handle_write(self):
        sent = self.send(self.buffer)
        self.buffer = self.buffer[sent:]

c = http_client('www.readyhosting.com', '/')
asyncore.loop()
This is a mix of the asyncore example http client in the Python docs, plus what I've gleaned from the tlslite docs and have been able to reverse engineer from their sources. Hope this (even though incomplete/not working) can at least advance you in your quest...
Personally, in your shoes, I'd consider switching from asyncore to twisted -- asyncore is old and rusty, Twisted already integrates a lot of juicy, useful bits (the URL I gave is to a bit in the docs that already does integrate TLS and XMPP for you...).