SimpleHTTPServer - How to close/terminate it? - python

I recently learned I could run a server with this command:
sudo python -m SimpleHTTPServer
My question: how do I terminate this server when done with it?

Type Control-C. Simple as that.
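Control-C works because it raises KeyboardInterrupt in the serving process. If you run the server from your own script instead of with -m, you can also stop it programmatically. A minimal sketch, assuming Python 3's http.server (the port number is arbitrary):

import threading
from http.server import HTTPServer, SimpleHTTPRequestHandler

# Serve the current directory on port 8000 from a background thread.
httpd = HTTPServer(('', 8000), SimpleHTTPRequestHandler)
thread = threading.Thread(target=httpd.serve_forever, daemon=True)
thread.start()

# ... later, when you are done with it ...
httpd.shutdown()      # asks serve_forever() to exit its loop
httpd.server_close()  # releases the listening socket
thread.join()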

You might want to check the HttpServer class in this servlet module for a modification that allows the server to be shut down cleanly. If the handler raises a SystemExit exception, the server will break out of its serving loop.
import socket
import sys
import webbrowser
import socketserver
import http.server


class HttpServer(socketserver.ThreadingMixIn, http.server.HTTPServer):

    """Create a server with specified address and handler.

    A generic web server can be instantiated with this class. It will listen
    on the address given to its constructor and will use the handler class
    to process all incoming traffic. Running a server is greatly simplified."""

    # We should not be binding to an
    # address that is already in use.
    allow_reuse_address = False

    @classmethod
    def main(cls, RequestHandlerClass, port=80):
        """Start server with handler on given port.

        This class method provides an easy way to start, run, and exit
        an HttpServer instance. The server will be executed if possible,
        and the computer's web browser will be directed to the address."""
        try:
            server = cls(('', port), RequestHandlerClass)
            active = True
        except socket.error:
            active = False
        else:
            addr, port = server.socket.getsockname()
            print('Serving HTTP on', addr, 'port', port, '...')
        finally:
            port = '' if port == 80 else ':' + str(port)
            addr = 'http://localhost' + port + '/'
            webbrowser.open(addr)
        if active:
            try:
                server.serve_forever()
            except KeyboardInterrupt:
                print('Keyboard interrupt received: EXITING')
            finally:
                server.server_close()

    def handle_error(self, request, client_address):
        """Process exceptions raised by the RequestHandlerClass.

        Overriding this method is necessary for two different reasons:
        (1) SystemExit exceptions are incorrectly caught otherwise and
        (2) Socket errors should be silently passed in the server code"""
        klass, value = sys.exc_info()[:2]
        if klass is SystemExit:
            self.__exit = value
            # Relies on a private BaseServer attribute; newer socketserver
            # versions may not use this name internally.
            self._BaseServer__serving = None
        elif issubclass(klass, socket.error):
            pass
        else:
            super().handle_error(request, client_address)

    def serve_forever(self, poll_interval=0.5):
        """Handle all incoming client requests forever.

        This method has been overridden so that SystemExit exceptions
        raised in the RequestHandlerClass can be re-raised after being
        caught in the handle_error method above. This allows servlet
        code to terminate server execution if so desired or required."""
        super().serve_forever(poll_interval)
        if self._BaseServer__serving is None:
            raise self.__exit
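As a rough illustration of the SystemExit mechanism described above, a handler can end the server from inside a request. QuitHandler and the /quit path below are made up for this sketch, not part of the original module:

class QuitHandler(http.server.SimpleHTTPRequestHandler):
    """Hypothetical handler: requesting /quit shuts the server down."""
    def do_GET(self):
        if self.path == '/quit':
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b'Shutting down...')
            raise SystemExit(0)  # recorded by handle_error(), re-raised by serve_forever()
        super().do_GET()

if __name__ == '__main__':
    HttpServer.main(QuitHandler, port=8000)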

Related

BaseHTTPServer still writing although client lost network connection

I've implemented a server which accepts requests, and after some processing the client connects to my server.
The server continuously sends data to the client, but if the client loses the network connection (e.g. on my mobile I disabled internet access without exiting the client program), the server keeps writing to nothing.
I've attached a shortened version of my code logic. Monitoring the input data could be a good idea, but in some cases I don't have to wait for any input.
import select
import socketserver
from http.server import BaseHTTPRequestHandler, HTTPServer

# HOST and PORT are defined elsewhere in the original (shortened) code.

class CustomRequestHandler(BaseHTTPRequestHandler):

    def __init__(self, request, client_address, server):
        BaseHTTPRequestHandler.__init__(self, request, client_address, server)

    def do_GET(self):
        try:
            readable, writable, exceptional = select.select(
                [self.rfile], [self.wfile], [self.rfile, self.wfile], 0)
            for s in readable:
                print(s.readline())
            for s in writable:
                s.write(b"Data")
        except Exception as e:
            print(e)

    def finish(self, *args, **kw):
        print("Do finish")


class CustomServer(socketserver.ThreadingMixIn, HTTPServer):
    pass


def start_server():
    httpd = CustomServer((HOST, PORT), CustomRequestHandler)
    try:
        httpd.allow_reuse_address = True
        httpd.serve_forever()
    except KeyboardInterrupt:
        pass
    httpd.server_close()


if __name__ == '__main__':
    start_server()
After a while writable becomes an empty list, but how can I detect that a network loss occurred on the client side? How can I catch the network error?
Your socket is not closed when you cut the network connection. The sender will only get informed when the OS decides that the socket has timed out, which usually takes 30s+.
If, on the other hand, the receiver program is closed properly, the sender will get notified within milliseconds.
Such connections that are left open but actually lost are a major problem in network programming. There are mitigations, but there is no universal solution.
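One common mitigation, shown here only as a sketch and not taken from the answer above, is to enable TCP keepalives so the OS notices a dead peer sooner than the default timeout; the tuning options are Linux-specific:

import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
# Linux-only tuning; omit or adapt on other platforms.
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 10)   # start probing after 10 s idle
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 5)   # probe every 5 s
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 3)     # give up after 3 failed probes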

Multithreading sockets with a central relay-like server

I have previously managed to implement a client-server socket script which relays messages between a single client and the server, and I'm now trying to implement a multiple-client system.
More specifically, I would like to use the server as a medium between two clients, retrieving information from one client and relaying it to the other. I had tried to attach the port number of the receiving client to the message and extract it on the server side, then send the message to whatever socket has that port number, but I ran into some trouble (as port numbers are determined at the point of sending, I believe?), so now I am simply trying to relay the sent message back to all clients. However, the problem is that the message is only being sent to the server and not relayed to the desired client.
I had previously tried to implement a peer-to-peer system but I ran into trouble so I decided to take a step back and do this instead.
Server.py:
import socket, _thread, threading
import tkinter as tk

SERVERPORT = 8600
HOST = 'localhost'

class Server():
    def __init__(self):
        self.Connected = True
        self.ServerSocket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        self.ServerSocket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        self.ServerSocket.bind((HOST, SERVERPORT))
        self.ServerSocket.listen(2)
        self.Clients = []

    def Listen(self):
        print('Server is now running')
        while self.Connected:
            ClientSocket, Address = self.ServerSocket.accept()
            self.Clients.append(Address)
            print('\nNew user connected', Address)
            t = threading.Thread(target=self.NewClient, args=(ClientSocket, Address))
            t.daemon = True
            t.start()
        self.Socket.close()

    def NewClient(self, ClientSocket, Address):
        while self.Connected:
            if ClientSocket:
                try:
                    ReceivedMsg = ClientSocket.recv(4096)
                    print('Message received from', Address, ':', ReceivedMsg)
                    self.Acknowledge(ClientSocket, Address)
                    if ReceivedMsg.decode('utf8').split()[-1] != 'message':
                        ReceiverPort = self.GetSendPort(ReceivedMsg)
                        self.SendToClient(ClientSocket, ReceivedMsg, ReceiverPort)
                except:
                    print('Connection closed')
                    raise Exception
        ClientSocket.close()

    def Acknowledge(self, Socket, Address):
        Socket.sendto(b'The server received your message', Address)

    def GetSendPort(self, Msg):
        MsgDigest = Msg.decode('utf8').split()
        return int(MsgDigest[-1])

    def SendToClient(self, Socket, Msg, Port):
        Addr = (HOST, Msg)
        for Client in self.Clients:
            Socket.sendto(Msg, Client)

def NewThread(Func, *args):
    if len(args) == 1:
        t = threading.Thread(target=Func, args=(args,))
    elif len(args) > 1:
        t = threading.Thread(target=Func, args=args)
    else:
        t = threading.Thread(target=Func)
    t.daemon = True
    t.start()
    t.join()

Host = Server()
NewThread(Host.Listen)
And the Client(.py):
import socket, threading
import tkinter as tk

Username = 'Ernest'
PORT = 8601
OtherPORT = 8602
SERVERPORT = 8600
HOST = '127.0.0.1'

class Client():
    def __init__(self, Username):
        self.Connected, self.Username = False, Username
        self.Socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

    def Connect(self):
        print('Trying to connect')
        try:
            self.Socket.connect((HOST, SERVERPORT))
            self.Connected = True
            print(self.Username, 'connected to server')
            Msg = MsgUI(self.Username)
            Msg.Display()
        except Exception:
            print('Could not connect to server')
            raise Exception

    def SendMsg(self):
        if self.Connected:
            Msg = '{} sent you a message {}'.format(self.Username, OtherPORT)
            self.Socket.sendall(bytes(Msg, encoding='utf8'))
            self.GetResponse()

    def GetResponse(self, *args):
        AckMsg = '\n{} received the message'.format(self.Username)
        NMsg = '\n{} did not receive the message'.format(self.Username)
        if self.Connected:
            Msg = self.Socket.recv(4096)
            print(Msg)
            if Msg:
                self.Socket.sendall(bytes(AckMsg, encoding='utf8'))
            else:
                self.Socket.sendall(bytes(NMsg, encoding='utf8'))

class MsgUI():
    def __init__(self, Username):
        self.Username = Username
        self.entry = tk.Entry(win)
        self.sendbtn = tk.Button(win, text='send', command=Peer.SendMsg)

    def Display(self):
        self.entry.grid()
        self.sendbtn.grid()
        win.mainloop()

win = tk.Tk()
Peer = Client(Username)
Peer.Connect()
I want a message to be sent whenever the user presses the send button in the tkinter window, while at the same time the client is continually 'listening' for incoming messages.
I also previously tried to run the client's GetResponse method in another thread, using while self.Connected instead of if self.Connected, and it still didn't work.
UPDATE
After some helpful comments, I have edited the two files as such:
The server, which is run first, now holds the two sockets for each client. The server file is imported into the client file as a module. Each client file is then run, and each client calls a function in the server file requesting to use a socket. If the request is allowed (i.e. no error was thrown), the socket is connected, added to a set of clients stored in the server file, and then returned to the client file. The client then uses this socket to send and receive messages.
Server.py
import socket, _thread, threading
import tkinter as tk

SERVERPORT = 8600
HOST = 'localhost'

class Server():
    def __init__(self):
        self.Connected = True
        self.ServerSocket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        self.ServerSocket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        self.ServerSocket.bind((HOST, SERVERPORT))
        self.ServerSocket.listen(2)
        self.Clients = {}

    def ConnectClient(self, Username, Port):
        Socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        self.Clients[Username] = [Socket, Port, False]
        try:
            self.Clients[Username][0].connect((HOST, SERVERPORT))
            self.Clients[Username][2] = True
            print('Opened port for user', Username)
            return Socket
        except Exception:
            print('Could not open port for user', Username)
            raise Exception

    def Listen(self):
        print('Server is now running')
        while self.Connected:
            ClientSocket, Address = self.ServerSocket.accept()
            print('\nNew user connected', Address)
            t = threading.Thread(target=self.NewClient, args=(ClientSocket, Address))
            t.daemon = True
            t.start()
        self.Socket.close()

    def NewClient(self, ClientSocket, Address):
        while self.Connected:
            if ClientSocket:
                try:
                    ReceivedMsg = ClientSocket.recv(4096)
                    if b'attempting to connect to the server' in ReceivedMsg:
                        ClientSocket.send(b'You are now connected to the server')
                    else:
                        print('Message received from', Address, ':', ReceivedMsg)
                        #self.Acknowledge(ClientSocket, Address)
                        ReceiverPort = self.GetSendPort(ReceivedMsg)
                        if ReceiverPort != None:
                            self.SendToClient(ClientSocket, ReceivedMsg, ReceiverPort)
                except:
                    print('Connection closed')
                    raise Exception
        ClientSocket.close()

    def Acknowledge(self, Socket, Address):
        Socket.sendto(b'The server received your message', Address)

    def GetSendPort(self, Msg):
        MsgDigest = Msg.decode('utf8').split()
        try:
            Port = int(MsgDigest[-1])
        except ValueError:
            Port = None
        return Port

    def SendToClient(self, Socket, Msg, Port):
        Addr = (HOST, Port)
        Receiver = None
        for Client, Vars in self.Clients.items():
            if Vars[1] == Port:
                Receiver = Client
        self.Clients[Receiver][0].sendto(Msg, Addr)

def NewThread(Func, *args):
    if len(args) == 1:
        t = threading.Thread(target=Func, args=(args,))
    elif len(args) > 1:
        t = threading.Thread(target=Func, args=args)
    else:
        t = threading.Thread(target=Func)
    t.daemon = True
    t.start()
    t.join()

Host = Server()
if __name__ == '__main__':
    NewThread(Host.Listen)
And Client.py
import socket, threading, Server
import tkinter as tk

Username = 'Ernest'
PORT = 8601
OtherPORT = 8602
SERVERPORT = 8600
HOST = '127.0.0.1'

class Client():
    def __init__(self, Username):
        self.Connected, self.Username = False, Username

    def Connect(self):
        print('Requesting to connect to server')
        try:
            self.Socket = Server.Host.ConnectClient(self.Username, PORT)
            self.Connected = Server.Host.Clients[self.Username][2]
            Msg = '{} is attempting to connect to the server'.format(self.Username)
            self.Socket.sendall(bytes(Msg, encoding='utf8'))
            ReceivedMsg = self.Socket.recv(4096)
            print(ReceivedMsg)
            Msg = MsgUI(self.Username)
            Msg.Display()
        except Exception:
            print('Could not connect to server')
            raise Exception

    def SendMsg(self):
        try:
            if self.Connected:
                Msg = '{} sent you a message {}'.format(self.Username, OtherPORT)
                self.Socket.sendall(bytes(Msg, encoding='utf8'))
                self.GetResponse()
        except Exception:
            print('Connection closed')
            raise Exception

    def GetResponse(self, *args):
        AckMsg = '\n{} received the message'.format(self.Username)
        NMsg = '\n{} did not receive the message'.format(self.Username)
        if self.Connected:
            Msg = self.Socket.recv(4096)
            print(Msg)
            if Msg:
                self.Socket.sendall(bytes(AckMsg, encoding='utf8'))
            else:
                self.Socket.sendall(bytes(NMsg, encoding='utf8'))

class MsgUI():
    def __init__(self, Username):
        self.Username = Username
        self.entry = tk.Entry(win)
        self.sendbtn = tk.Button(win, text='send', command=Peer.SendMsg)

    def Display(self):
        self.entry.grid()
        self.sendbtn.grid()
        win.mainloop()

win = tk.Tk()
Peer = Client(Username)
Peer.Connect()
Now the problem is more of a Python scope problem. When trying to relay the message back to the client, I was getting a KeyError because the Clients dictionary was still empty. When making the function call to the server from the client file, it's clear that the update to the dictionary happens in the client file's copy of the module rather than in the running server, which is a different instance. I need a way of changing the contents of the Clients dictionary that is triggered by the client file but takes effect in the server file.
Are you committed to multithreading? Threads don't run concurrently in Python (due to the GIL), and while they are one way to handle concurrent operations, they aren't the only way and usually not the best one. Consider this code, which doesn't handle failure cases well, but seems to work as a starting point.
# NOTE: this example is written for Python 2 (Queue module, dict.iteritems);
# adapt the names for Python 3 if needed.
import socket, select, Queue

svrsock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
svrsock.setblocking(0)
svrsock.bind(('', 17654))
svrsock.listen(16)

client_queues = {}
write_ready = []  # we'll update this only for clients that have things in the queue

while client_queues.keys() + [svrsock]:
    readable, writable, exceptional = select.select(client_queues.keys() + [svrsock], write_ready, [])
    for rd in readable:
        if rd is svrsock:  # reading the listening socket == accepting a connection
            conn, addr = svrsock.accept()
            print("Connection from {}".format(addr))
            conn.setblocking(0)
            client_queues[conn] = Queue.Queue()
        else:
            data = rd.recv(1024)
            if data:
                # TODO: send to all queues
                print("Message from {}".format(rd.getpeername()))
                for sock, q in client_queues.iteritems():
                    q.put("From {}: {}".format(rd.getpeername(), data))
                    if sock not in write_ready:
                        write_ready.append(sock)
    for rw in writable:
        try:
            data = client_queues[rw].get_nowait()
            rw.send(data)
        except Queue.Empty:
            write_ready.remove(rw)
            continue
The concept is pretty simple. The server accepts connections; each connection (socket) is associated with a queue of pending messages. Each socket that's ready for reading is read from, and its message is added to each client's queue. The recipient client is added into the write_ready list of clients with data pending, if it's not already in there. Then each socket that's ready for writing has its next queued message written to it. If there are no more messages, the recipient is removed from the write_ready list.
This is very easy to orchestrate if you don't use multithreading, because all coordination is inherent in the order of the application. With threads it would be more difficult and a lot more code, but probably not more performant, due to the GIL.
The secret to handling multiple I/O streams concurrently without multithreading is select. In principle it's pretty easy: we pass select() a list of possible sockets for reading, another list of possible sockets for writing, and a final list that this simplified demo completely ignores. The result of the select call includes the sockets that are actually ready for reading or writing, which lets me block until one or more sockets are ready for activity. I then process all the sockets ready for activity on every pass (but they've already been filtered down to just those which wouldn't block).
There's a ton still to be done here. I don't clean up after myself, don't track closed connections, don't handle any exceptions, and so on, but without having to worry about threading and concurrency guarantees, it's pretty easy to start addressing these deficiencies.
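For example, a hedged sketch of one of those missing pieces, handling a client disconnect in the read branch above (an empty recv() means the peer closed the connection; this reuses the client_queues and write_ready structures from the code above):

data = rd.recv(1024)
if not data:
    # Peer closed the connection: drop its queue and stop selecting on it.
    rd.close()
    del client_queues[rd]
    if rd in write_ready:
        write_ready.remove(rd)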
Here it is "in action". Here for the client side I use netcat, which is perfect for layer 3 testing without layer 4+ protocols ( in other words, raw tcp so to speak). It simply opens a socket to the given destination and port and sends its stdin through the socket and sends its socket data to stdout, which makes it perfect for demoing this server app!
I also wanted to point out that coupling code between server and client is inadvisable, because you won't be able to roll out changes to either without breaking the other. It's ideal to have a "contract", so to speak, between server and client and maintain it. Even if you implement the behavior of server and client in the same code base, you should use the TCP communications contract to drive your implementation, not code sharing. Just my 2 cents, but once you start sharing code you often end up coupling server/client versions in ways you didn't anticipate.
the server:
$ python ./svr.py
Connection from ('127.0.0.1', 52059)
Connection from ('127.0.0.1', 52061)
Message from ('127.0.0.1', 52061)
Message from ('127.0.0.1', 52059)
Message from ('127.0.0.1', 52059)
First client ( 52059):
$ nc localhost 17654
hello
From ('127.0.0.1', 52061): hello
From ('127.0.0.1', 52059): hello
From ('127.0.0.1', 52059): hello
Second client:
$ nc localhost 17654
From ('127.0.0.1', 52061): hello
hello
From ('127.0.0.1', 52059): hello
hello
From ('127.0.0.1', 52059): hello
If you need more convincing on why select is more compelling than concurrent execution, consider this: Apache is based on a threading model, in other words each connection gets a worker thread, while nginx is based on a select model, so you can see how much faster that can potentially be. That's not to say nginx is inherently better: Apache benefits from the threading model because of its heavy use of modules to extend capabilities (mod_php for example), whereas nginx doesn't have that limitation and can handle all requests from any thread. But the raw performance of nginx is typically considered far higher and far more efficient, and a big reason for this is that it avoids almost all of the CPU context switches inherent in Apache. It's a valid approach!
A word on scaling. Obviously this wouldn't scale forever, but neither would a threading model; eventually you run out of threads. A more distributed, higher-throughput system would likely use a pub/sub mechanism of some kind, offloading the client connection tracking and message queueing from the server to a pub/sub data tier, allowing connections to be restored and queued data to be sent, as well as adding multiple servers behind a load balancer. Just throwing it out there. You might be pleasantly surprised how well select can scale (the CPU is so much faster than the network that it's likely not the bottleneck).

Python long-lived socket connection weirdness

I've implemented some code that allows a client to connect to a socket server and introduce itself; the server then goes into an infinite loop which sends "commands" (strings) to the client from a Redis list. The server uses the Redis blpop method to block until a string arrives, which is then sent to the client and the response awaited.
However, in testing (with a Python client socket script on another local workstation) I find that if I break the client connection (Ctrl+C) to simulate an interruption in connectivity, the server happily writes the next received string to the client, reports an empty response, but ONLY throws the broken pipe exception when a second string is written :/ Thus, two writes are "lost" before anything is caught. Here's my code:
import socket
import sys
from pprint import pprint

import redis

# IP and PORT are defined elsewhere in the original script.

# Create global Redis resource
rds_cnx = redis.StrictRedis(host='localhost', port=6379, db=6)


def initialise_server():
    """ Setup server socket """
    try:
        srv_skt = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv_skt.bind((IP, PORT))
        srv_skt.listen(1)
        print("Listening on:[{}]".format(IP, PORT))
        return srv_skt
    except socket.error as skt_err:  # e.g. port in use
        print("Could not initialise tcp server:[{}]".format(skt_err))
        sys.exit(1)
    except Exception as exp:
        print("Unable to setup server socket:[{}]".format(exp))
        sys.exit(1)


def main():
    server_socket = initialise_server()
    while True:
        client_socket, remote_address = server_socket.accept()
        try:
            # Block and wait for connection and data
            initial_data = client_socket.recv(1024).decode()
            print("Connection from [{}] - Data:[{}]".format(remote_address, initial_data))
            while True:
                wait_for_queue_command(client_socket)
        except (BrokenPipeError, socket.error, Exception) as sck_exp:
            print("Exception in client loop:[{}]".format(sck_exp))
            continue
        except KeyboardInterrupt:
            # Close client socket
            client_socket.shutdown(2)
            client_socket.close()
            print('Caught Ctrl+c ... Shutting down.')
            break
    # Tear down context
    server_socket.shutdown(2)  # Param ref: 0 = done receiving, 1 = done sending, 2 = both
    server_socket.close()


def wait_for_queue_command(client_skt):
    """ Blocking while waiting for command from Redis list

    :param client_skt: socket
    :return: None
    """
    print('Waiting for command...')
    queue_cmd = rds_cnx.blpop('queuetest', 0)
    print("Received something from the queue:")
    pprint(queue_cmd)
    try:
        #client_skt.settimeout(15)
        client_skt.send(queue_cmd[1])
        # Block for response
        response_data = client_skt.recv(1024).decode()
        print("Response:[{}]".format(response_data))
    except BrokenPipeError as brkn_p:
        print('Outbound write detected "Broken Pipe":[{}]'.format(brkn_p))
        ''' Here one would decide to either re-schedule the command or
            ignore the error and move on to the next command. A "pause"
            (sleep) could also be useful?
        '''
        raise
    except socket.timeout as sck_tmo:
        print('Socket timed out:[{}]'.format(sck_tmo))
    except socket.error as sck_err:
        print('Socket timed out:[{}]'.format(sck_err))
        raise
    print('Command handling complete.')
Is there any better way to handle such a situation? I've had a cursory look at Twisted, but it seems very difficult to achieve this specific blocking behaviour and the other handling I'd need for specific responses from the client.
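One pattern consistent with the behaviour described above, sketched here against the question's own code (the 15-second timeout is arbitrary), is to put a timeout on the socket and treat an empty recv() as the peer having gone away:

client_skt.settimeout(15)                  # arbitrary; catches a silent or unreachable peer
try:
    client_skt.send(queue_cmd[1])
    response_data = client_skt.recv(1024)
    if not response_data:                  # b'' means the client closed the connection
        raise ConnectionError('client disconnected')
except (socket.timeout, BrokenPipeError, ConnectionError):
    # Push the command back onto the Redis list so it is not lost,
    # then give up on this client.
    rds_cnx.lpush('queuetest', queue_cmd[1])
    raise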

How do I abort a socket.recvfrom() from another thread in python?

This looks like a duplicate of How do I abort a socket.recv() from another thread in Python, but it's not, since I want to abort recvfrom() in a thread, which is UDP, not TCP.
Can this be solved by poll() or select.select() ?
If you want to unblock a UDP read from another thread, send it a datagram!
Rgds,
Martin
A good way to handle this kind of asynchronous interruption is the old C pipe trick: create a pipe and use select/poll on both the socket and the pipe. When you want to interrupt the receiver, you just write a byte to the pipe.
pros:
Works for both UDP and TCP
Is protocol agnostic
cons:
select/poll on pipes is not available on Windows; in that case you should replace the pipe with another UDP socket used as the notification channel
Starting point
interruptable_socket.py
import os
import socket
import select


class InterruptableUdpSocketReceiver(object):
    def __init__(self, host, port):
        self._host = host
        self._port = port
        self._socket = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        self._socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        self._r_pipe, self._w_pipe = os.pipe()
        self._interrupted = False

    def bind(self):
        self._socket.bind((self._host, self._port))

    def recv(self, buffersize, flags=0):
        if self._interrupted:
            raise RuntimeError("Cannot be reused")
        read, _w, errors = select.select([self._r_pipe, self._socket], [], [self._socket])
        if self._socket in read:
            return self._socket.recv(buffersize, flags)
        return ""

    def interrupt(self):
        self._interrupted = True
        os.write(self._w_pipe, "I".encode())
A test suite:
test_interruptable_socket.py
import socket
from threading import Timer
import time
from interruptable_socket import InterruptableUdpSocketReceiver
import unittest


class Sender(object):
    def __init__(self, destination_host, destination_port):
        self._socket = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
        self._dest = (destination_host, destination_port)

    def send(self, message):
        self._socket.sendto(message, self._dest)


class Test(unittest.TestCase):
    def create_receiver(self, host="127.0.0.1", port=3010):
        receiver = InterruptableUdpSocketReceiver(host, port)
        receiver.bind()
        return receiver

    def create_sender(self, host="127.0.0.1", port=3010):
        return Sender(host, port)

    def create_sender_receiver(self, host="127.0.0.1", port=3010):
        return self.create_sender(host, port), self.create_receiver(host, port)

    def test_create(self):
        self.create_receiver()

    def test_recv_async(self):
        sender, receiver = self.create_sender_receiver()
        start = time.time()
        send_message = "TEST".encode('UTF-8')
        Timer(0.1, sender.send, (send_message, )).start()
        message = receiver.recv(128)
        elapsed = time.time() - start
        self.assertGreaterEqual(elapsed, 0.095)
        self.assertLess(elapsed, 0.11)
        self.assertEqual(message, send_message)

    def test_interrupt_async(self):
        receiver = self.create_receiver()
        start = time.time()
        Timer(0.1, receiver.interrupt).start()
        message = receiver.recv(128)
        elapsed = time.time() - start
        self.assertGreaterEqual(elapsed, 0.095)
        self.assertLess(elapsed, 0.11)
        self.assertEqual(0, len(message))

    def test_exception_after_interrupt(self):
        sender, receiver = self.create_sender_receiver()
        receiver.interrupt()
        with self.assertRaises(RuntimeError):
            receiver.recv(128)


if __name__ == '__main__':
    unittest.main()
Evolution
Now this code is just a starting point. To make it more generic, I think we should fix the following issues:
Interface: returning an empty message in the interrupt case is not a good design; it is better to use an exception to signal it
Generalization: we should have just one function to call before socket.recv(); extending the interrupt to other recv methods then becomes very simple
Portability: to make porting to Windows simple, we should isolate the async notification in an object and choose the right implementation for our operating system
First of all, we change test_interrupt_async() to expect an exception instead of an empty message:
from interruptable_socket import InterruptException

def test_interrupt_async(self):
    receiver = self.create_receiver()
    start = time.time()
    with self.assertRaises(InterruptException):
        Timer(0.1, receiver.interrupt).start()
        receiver.recv(128)
    elapsed = time.time() - start
    self.assertGreaterEqual(elapsed, 0.095)
    self.assertLess(elapsed, 0.11)
After this we can replace return '' with raise InterruptException and the tests pass again.
The ready-to-extend version can be:
interruptable_socket.py
import os
import socket
import select


class InterruptException(Exception):
    pass


class InterruptableUdpSocketReceiver(object):
    def __init__(self, host, port):
        self._host = host
        self._port = port
        self._socket = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        self._socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        self._async_interrupt = AsycInterrupt(self._socket)

    def bind(self):
        self._socket.bind((self._host, self._port))

    def recv(self, buffersize, flags=0):
        self._async_interrupt.wait_for_receive()
        return self._socket.recv(buffersize, flags)

    def interrupt(self):
        self._async_interrupt.interrupt()


class AsycInterrupt(object):
    def __init__(self, descriptor):
        self._read, self._write = os.pipe()
        self._interrupted = False
        self._descriptor = descriptor

    def interrupt(self):
        self._interrupted = True
        self._notify()

    def wait_for_receive(self):
        if self._interrupted:
            raise RuntimeError("Cannot be reused")
        read, _w, errors = select.select([self._read, self._descriptor], [], [self._descriptor])
        if self._descriptor not in read:
            raise InterruptException

    def _notify(self):
        os.write(self._write, "I".encode())
Now wrapping more recv functions, implementing a Windows version, or taking care of socket timeouts becomes really simple.
The solution here is to forcibly close the socket. The problem is that the method for doing this is OS-specific and Python does not do a good job of abstracting the way to do it or the consequences. Basically, you need to do a shutdown() followed by a close() on the socket. On POSIX systems such as Linux, the shutdown is the key element in forcing recvfrom to stop (a call to close() alone won't do it). On Windows, shutdown() does not affect the recvfrom and the close() is the key element. This is exactly the behavior that you would see if you were implementing this code in C and using either native POSIX sockets or Winsock sockets, so Python is providing a very thin layer on top of those calls.
On both POSIX and Windows systems, this sequence of calls results in an OSError being raised. However, the location of the exception and its details are OS-specific. On POSIX systems, the exception is raised on the call to shutdown() and the errno value of the exception is set to 107 (Transport endpoint is not connected). On Windows systems, the exception is raised on the call to recvfrom() and the winerror value of the exception is set to 10038 (An operation was attempted on something that is not a socket). This means that there's no way to do this in an OS-agnostic way; the code has to account for both Windows and POSIX behavior and errors. Here's a simple example I wrote up:
import socket
import threading
import time


class MyServer(object):
    def __init__(self, port: int = 0):
        if port == 0:
            raise AttributeError('Invalid port supplied.')
        self.port = port
        self.socket = socket.socket(family=socket.AF_INET,
                                    type=socket.SOCK_DGRAM)
        self.socket.bind(('0.0.0.0', port))
        self.exit_now = False
        print('Starting server.')
        self.thread = threading.Thread(target=self.run_server,
                                       args=[self.socket])
        self.thread.start()

    def run_server(self, socket: socket.socket = None):
        if socket is None:
            raise AttributeError('No socket provided.')
        buffer_size = 4096
        while self.exit_now == False:
            data = b''
            try:
                data, address = socket.recvfrom(buffer_size)
            except OSError as e:
                if e.winerror == 10038:
                    # Error is, "An operation was attempted on something that
                    # is not a socket". We don't care.
                    pass
                else:
                    raise e
            if len(data) > 0:
                print(f'Received {len(data)} bytes from {address}.')

    def stop(self):
        self.exit_now = True
        try:
            self.socket.shutdown(socket.SHUT_RDWR)
        except OSError as e:
            if e.errno == 107:
                # Error is, "Transport endpoint is not connected".
                # We don't care.
                pass
            else:
                raise e
        self.socket.close()
        self.thread.join()
        print('Server stopped.')


if __name__ == '__main__':
    server = MyServer(5555)
    time.sleep(2)
    server.stop()
    exit(0)
Implement a quit command on the server and client sockets. Should work something like this:
Thread1:
    status: listening
    handler: quit

Thread2: client
    exec: socket.send "quit" ---> Thread1.socket  # host:port

Thread1:
    status: socket closed()
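A minimal sketch of that idea for the UDP case in this question; the names, port, and quit payload are made up:

import socket
import threading

QUIT = b'quit'
PORT = 5005  # made-up port for this sketch

def recv_loop(sock):
    # Blocks on recvfrom(); exits cleanly when the quit datagram arrives.
    while True:
        data, addr = sock.recvfrom(4096)
        if data == QUIT:
            break
        # ... handle data ...

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(('127.0.0.1', PORT))
t = threading.Thread(target=recv_loop, args=(sock,))
t.start()

# From any other thread: unblock the receiver by sending the quit datagram.
ctl = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
ctl.sendto(QUIT, ('127.0.0.1', PORT))
t.join()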
To properly close a TCP socket in Python, you have to call socket.shutdown(arg) before calling socket.close(). See the Python socket documentation, the part about shutdown.
If the socket is UDP, you can't call socket.shutdown(...); it would raise an exception. And calling socket.close() alone would, as with TCP, leave the blocked operations blocking; close() alone won't interrupt them.
Many suggested solutions (not all) don't work or are seen as cumbersome because they involve third-party libraries. I haven't tested poll() or select(). What definitely works is the following:
First, create an official Thread object for whatever thread is running socket.recv(), and save a handle to it. Second, import signal. signal is a standard library module which enables sending/receiving Linux/POSIX signals to processes (read its documentation). Third, to interrupt, assuming the handle to your thread is called udpThreadHandle:
signal.pthread_kill(udpThreadHandle.ident, signal.SIGINT)
and of course, in the actual thread/loop doing the receiving:
try:
    while True:
        myUdpSocket.recv(...)
except KeyboardInterrupt:
    pass
Notice that the exception handler for KeyboardInterrupt (generated by SIGINT) is OUTSIDE the receive loop. This silently terminates the receive loop and its thread.

Paramiko SSH Tunnel Shutdown Issue

I'm working on a Python script to query a few remote databases over an established SSH tunnel every so often. I'm fairly familiar with the paramiko library, so that was my choice of route. I'd prefer to keep this entirely in Python so I can use paramiko to deal with key issues, as well as use Python to start, control, and shut down the SSH tunnels.
There have been a few related questions around here about this topic, but most of them seemed to have incomplete answers. My solution below is hacked together from the solutions I've found so far.
Now for the problem: I'm able to create the first tunnel quite easily (in a separate thread) and do my DB/Python stuff, but when attempting to close the tunnel, localhost won't release the local port I bound to. Below, I've included my source and the relevant netstat data at each step of the process.
#!/usr/bin/python

import select
import SocketServer
import sys
import paramiko
from threading import Thread
import time


class ForwardServer(SocketServer.ThreadingTCPServer):
    daemon_threads = True
    allow_reuse_address = True


class Handler(SocketServer.BaseRequestHandler):

    def handle(self):
        try:
            chan = self.ssh_transport.open_channel('direct-tcpip',
                                                   (self.chain_host, self.chain_port),
                                                   self.request.getpeername())
        except Exception, e:
            print('Incoming request to %s:%d failed: %s' % (self.chain_host, self.chain_port, repr(e)))
            return
        if chan is None:
            print('Incoming request to %s:%d was rejected by the SSH server.' % (self.chain_host, self.chain_port))
            return
        print('Connected! Tunnel open %r -> %r -> %r' % (self.request.getpeername(), chan.getpeername(), (self.chain_host, self.chain_port)))
        while True:
            r, w, x = select.select([self.request, chan], [], [])
            if self.request in r:
                data = self.request.recv(1024)
                if len(data) == 0:
                    break
                chan.send(data)
            if chan in r:
                data = chan.recv(1024)
                if len(data) == 0:
                    break
                self.request.send(data)
        chan.close()
        self.request.close()
        print('Tunnel closed from %r' % (self.request.getpeername(),))


class DBTunnel():
    def __init__(self, ip):
        self.c = paramiko.SSHClient()
        self.c.load_system_host_keys()
        self.c.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        self.c.connect(ip, username='someuser')
        self.trans = self.c.get_transport()

    def startTunnel(self):
        class SubHandler(Handler):
            chain_host = '127.0.0.1'
            chain_port = 5432
            ssh_transport = self.c.get_transport()

        def ThreadTunnel():
            global t
            t = ForwardServer(('', 3333), SubHandler)
            t.serve_forever()

        Thread(target=ThreadTunnel).start()

    def stopTunnel(self):
        t.shutdown()
        self.trans.close()
        self.c.close()
Although I will end up using a stopTunnel() type method, I realize that code isn't entirely correct; it's more an experiment in trying to get the tunnel to shut down properly and test my results.
When I first create the DBTunnel object and call startTunnel(), netstat yields the following:
tcp4 0 0 *.3333 *.* LISTEN
tcp4 0 0 MYIP.36316 REMOTE_HOST.22 ESTABLISHED
tcp4 0 0 127.0.0.1.5432 *.* LISTEN
Once I call stopTunnel(), or even delete the DBTunnel object itself, I'm left with this connection until I exit Python altogether, at which point what I assume to be the garbage collector takes care of it:
tcp4 0 0 *.3333 *.* LISTEN
It would be nice to figure out why this open socket is hanging around independently of the DBTunnel object, and how to close it properly from within my script. If I try to bind a different connection to a different IP using the same local port before completely exiting Python (TIME_WAIT is not the issue), then I get the infamous bind error 48, address in use. Thanks in advance :)
It appears SocketServer's shutdown method isn't properly shutting down/closing the socket. With the changes below, I retain access to the ForwardServer object and access its socket directly to close it. Note that socket.close() works in my case, but others might want socket.shutdown() followed by socket.close() if other resources are accessing that socket.
(Ref: socket.shutdown vs socket.close)
        def ThreadTunnel():
            self.t = ForwardServer(('127.0.0.1', 3333), SubHandler)
            self.t.serve_forever()

        Thread(target=ThreadTunnel).start()

    def stopTunnel(self):
        self.t.shutdown()
        self.trans.close()
        self.c.close()
        self.t.socket.close()
Note that you don't have to do the SubHandler hack shown in the demo code. The comment there is wrong: handlers do have access to their server's data. Inside a handler you can use self.server.instance_data.
If you use the following code, in your Handler, you would use
self.server.chain_host
self.server.chain_port
self.server.ssh_transport
class ForwardServer(SocketServer.ThreadingTCPServer):
    daemon_threads = True
    allow_reuse_address = True

    def __init__(self, connection, handler, chain_host, chain_port, ssh_transport):
        SocketServer.ThreadingTCPServer.__init__(self, connection, handler)
        self.chain_host = chain_host
        self.chain_port = chain_port
        self.ssh_transport = ssh_transport

...

server = ForwardServer(('', local_port), Handler,
                       remote_host, remote_port, transport)
server.serve_forever()
You may want to add some synchronization between the spawned thread and the caller so that you don't try to use the tunnel before it is ready. Something like:
from threading import Event

def startTunnel(self):
    class SubHandler(Handler):
        chain_host = '127.0.0.1'
        chain_port = 5432
        ssh_transport = self.c.get_transport()

    mysignal = Event()
    mysignal.clear()

    def ThreadTunnel():
        global t
        t = ForwardServer(('', 3333), SubHandler)
        mysignal.set()
        t.serve_forever()

    Thread(target=ThreadTunnel).start()
    mysignal.wait()
You can also try sshtunnel; it has two ways to close a tunnel: .stop() if you want to wait until the end of all active connections, or .stop(force=True) to close all active connections immediately.
If you don't want to use it, you can check the source code for this logic here: https://github.com/pahaz/sshtunnel/blob/090a1c1/sshtunnel.py#L1423-L1456
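A hedged sketch of what that might look like for the tunnel in this question; the host name and user are placeholders, and the exact parameters should be checked against the sshtunnel documentation:

from sshtunnel import SSHTunnelForwarder

tunnel = SSHTunnelForwarder(
    'remote.example.com',                     # placeholder SSH host
    ssh_username='someuser',
    remote_bind_address=('127.0.0.1', 5432),  # remote Postgres port
    local_bind_address=('127.0.0.1', 3333),
)
tunnel.start()
# ... query the database through 127.0.0.1:3333 ...
tunnel.stop()              # waits for active connections to finish
# tunnel.stop(force=True)  # or close active connections immediately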
