I am trying to create a separate thread in a client that receives messages from a server socket in a non-blocking manner. Since my original code is too long and a bit of a hassle to explain, I have created an example program that focuses on what I want to do. I create two separate threads, say Thread t1 and Thread t2. Thread t1 polls the socket to check for any received data, whereas Thread t2 does whatever task it is assigned. What I expect is that Thread t1 always polls and, if data is received, prints it on the screen, while Thread t2 executes in parallel doing whatever it is doing. But I cannot get it working for some reason.
My example program is:
import socket
import threading
from time import sleep

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect(('localhost', 5555))
s.setblocking(0)
s.sendall(str.encode('Initial Hello'))

def this_thing():
    while True:
        try:
            data = s.recv(4096)
            print(data.decode('utf-8'))
        except:
            pass  # or break? Not sure which one to use; neither works

def that_thing():
    for i in range(10000):
        sleep(3)
        s.sendall(str.encode('Hello'))
        print('happening2')

threading.Thread(target=this_thing(), args=[]).start()
threading.Thread(target=that_thing(), args=[]).start()
Note: The server socket is a simple server that sends a message to all connected sockets if a message was received by it.
When I run the program with break in the exception handler of Thread t1, only Thread t2 keeps running, i.e. Thread t1 does not receive any data sent from the server.
The reason this is happening is that the target argument takes a callable object.
From the docs (docs.python.org/2/library/threading.html):
"target is the callable object to be invoked by the run() method"
In your version,
threading.Thread(target=this_thing(), args=[]).start()
threading.Thread(target=that_thing(), args=[]).start()
when you write target=this_thing(), Python first evaluates the call to this_thing in the main thread. In your case, that call enters the while True loop and never returns; even if it did finish, it would evaluate to None, which is not a callable.
What you want to do is replace these two lines with:
threading.Thread(target=this_thing, args=[]).start()
threading.Thread(target=that_thing, args=[]).start()
Note that you are now passing in the function itself. A function is a callable object.
The correct solution for Python 3+ isn't multithreading but asyncio.
Check out this excellent talk by David Beazley on the matter (49 mins):
https://www.youtube.com/watch?v=ZzfHjytDceU
Asyncio / sockets example: https://gist.github.com/gregvish/7665915
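To give a flavour, here is a minimal asyncio sketch of the same client (assuming the same localhost:5555 server as in the question; a rough equivalent, not a drop-in replacement for the code above):

import asyncio

async def main():
    # One connection; the StreamReader/StreamWriter wrap the socket.
    reader, writer = await asyncio.open_connection('localhost', 5555)
    writer.write(b'Initial Hello')
    await writer.drain()

    async def receive():
        # Await incoming data without blocking the event loop.
        while True:
            data = await reader.read(4096)
            if not data:
                break
            print(data.decode('utf-8'))

    async def send():
        for _ in range(10000):
            await asyncio.sleep(3)
            writer.write(b'Hello')
            await writer.drain()

    # Run both coroutines concurrently in a single thread.
    await asyncio.gather(receive(), send())

asyncio.run(main())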
Please see the code below:
server.py
import zmq
import time
from multiprocessing import Process

class A:
    def __init__(self):
        ctx = zmq.Context(1)
        sock = zmq.Socket(ctx, zmq.PUB)
        sock.bind('ipc://test')
        p = Process(target=A.run, args=(sock,))
        p.start()  # Process calls run, but the client can't receive messages
        p.join()
        #A.run(sock)  # this one is OK, messages get received

    @staticmethod
    def run(sock):
        while True:
            sock.send('demo'.encode('utf-8'))
            print('sent')
            time.sleep(1)

if __name__ == '__main__':
    a = A()
client.py
import zmq

ctx = zmq.Context(1)
sock = zmq.Socket(ctx, zmq.SUB)
sock.connect('ipc://test')
sock.setsockopt_string(zmq.SUBSCRIBE, '')
while True:
    print(sock.recv())
In the constructor of server.py, if I call the run() method directly, the client can receive the message, but when I use multiprocessing.Process, it fails. Can anyone explain this and provide some advice?
Q : "Why ZeroMQ fails to communicate when I use multiprocessing.Process to run?"
Well, ZeroMQ does not fail to communicate; the problem is how the Python multiprocessing module operates.
The module is designed so that some processing may escape from the central Python GIL-lock (a re-[SERIAL]-iser that is used as the ever-present principal avoider of [CONCURRENT] situations).
This means that the call to multiprocessing.Process makes one exact "mirror-copy" of the Python interpreter state, "exported" into a new O/S-spawned process (the details depend on the localhost O/S).
Given that, there is zero chance the "mirror"-ed replica could get access to resources already owned by __main__: here the .bind() method has already acquired the ipc://test address, so the "remote" process will never get "permission" to touch this ZeroMQ AccessPoint, unless the code gets repaired and fully re-factored.
Q : "Can anyone explain on this and provide some advice?"
Sure. The best place to start is to fully understand the Pythonic culture of monopolistic GIL-lock re-[SERIAL]-isation, where no two things ever happen at the same time, so even adding more threads does not speed up the flow of the processing, as it all gets re-aligned by the central "monopolist", the GIL-lock.
Next, the promise of a fully reflected copy of the Python interpreter state, while it sounds attractive, has some obvious drawbacks: the new processes, being "mirror"-copies, cannot introduce colliding claims on already owned resources. If they try to, not-working-as-expected behaviour is the mildest of the problems in such principally ill-designed cases.
In your code, the first line of __main__ instantiates a = A(), whose .__init__() method immediately occupies the IPC resource via .bind('ipc://test'). The later step, p = Process(target=A.run, args=(sock,)), "mirror"-replicates the state of the Python interpreter (an as-is copy), and p.start() cannot but crash into the inability to "own" the same resource that __main__ already owns (yes, the ipc://test that the "mirror"-ed process was instructed to grab in .bind('ipc://test'), a non-free resource). This will never fly.
Last but not least, enjoy the Zen-of-Zero, the masterpiece of Martin SUSTRIK for distributed computing, so well crafted as an ultimately scalable, almost zero-latency, very comfortable, widely ported signalling and messaging framework.
Short answer: start your subprocesses, and create your zmq.Context and zmq.Socket instances from within your Producer.run() method inside each subprocess. Use the .bind() method on the side whose cardinality is 1, and the .connect() method on the side whose cardinality is greater than 1 (in this case, the "server").
My approach would be structured something like...
# server.py :
import zmq
from multiprocessing import Process

class Producer(Process):
    def __init__(self):
        ...
    def run(self):
        ctx = zmq.Context(1)
        sock = zmq.Socket(ctx, zmq.PUB)
        # Multiple producers, so connect instead of bind (consumer must bind)
        sock.connect('ipc://test')
        while True:
            ...

if __name__ == "__main__":
    producer = Producer()
    p = Process(target=producer.run)
    p.start()
    p.join()
# client.py :
import zmq

ctx = zmq.Context(1)
sock = zmq.Socket(ctx, zmq.SUB)
# Capture from multiple producers, so bind (producers must connect)
sock.bind('ipc://test')
sock.setsockopt_string(zmq.SUBSCRIBE, '')
while True:
    print(sock.recv())
I'm confused as to how to properly shut down a very simple server that I'm using.
I was thinking that this should be enough:
#!/usr/bin/python
import signal
import myhandler
import SocketServer

def terminate(signal, frame):
    print "terminating on %s at %s" % (signal, frame)
    server.shutdown()

if __name__ == "__main__":
    signal.signal(signal.SIGTERM, terminate)
    server = SocketServer.TCPServer(("localhost", 9999), myhandler.MyHandler)
    server.serve_forever()
The server works OK, but when I throw SIGTERM at it, it only prints terminating on 15 at ... but does not really shut down (i.e. close all sockets and exit).
Now pydoc explains it
shutdown(self)
Stops the serve_forever loop.
Blocks until the loop has finished. This must be called while
serve_forever() is running in another thread, or it will
deadlock.
but this is where I'm getting lost, since I'm only starting to wrap my head around threaded programming. For now I just need a simple TCP echo server that I can killall and restart at any time (which currently fails due to leftover LISTENING sockets).
So what is the correct way to achieve this?
Disclaimer: I have 0, nil, null, none, no experience with python.
Disclaimer 2: I in no way think that your server is "the way to go" when it comes to... anything server-related, not even the most basic things, outside of school homework stuff. It might be a decent sample to help people learn the basics, but it is at the same time misleading and wrong on so many levels that I lost count.
Back to your problem. I took your code and modified it to work as intended:
#!/usr/bin/python
import signal
import SocketServer
import threading

class DummyServer(SocketServer.BaseRequestHandler):
    def handle(self):
        data = self.request.recv(1024)
        self.request.send(data)
        return

def shutdownHandler(msg, evt):
    print "shutdown handler called. shutting down on thread id:%x" % (id(threading.currentThread()))
    server.shutdown()
    print "shutdown complete"
    evt.set()
    return

def terminate(signal, frame):
    print "terminate handle on thread id:%x" % (id(threading.currentThread()))
    t = threading.Thread(target=shutdownHandler, args=('SIGTERM received', doneEvent))
    t.start()

if __name__ == "__main__":
    doneEvent = threading.Event()
    print "main thread id:%x" % (id(threading.currentThread()))
    signal.signal(signal.SIGTERM, terminate)
    server = SocketServer.TCPServer(("localhost", 9999), DummyServer)
    server.serve_forever()
    doneEvent.wait()
You should check the code for SocketServer, especially the serve_forever() and shutdown() methods. You should also try to learn about threads and how to do any kind of communication/signaling between them. There are lots of good sources on these topics out there.
The basic thing to remember about threads is that, generally speaking, a thread can only do ONE thing at a time; your signal handler is one of those exceptions :) If a thread is stuck in serve_forever(), you can't expect the same thread to be able to run your call to shutdown() too. Python (check the signal docs) runs your signal handlers on the main thread, the same one that runs the serve_forever() loop in your code: calling shutdown() from within the signal handler therefore leads to the deadlock you noticed.
The way around it is to create a new thread for the sole purpose of running shutdown(). The new thread's shutdown() call will signal the main thread's serve_forever() that it's time to break the loop and exit. The main thread might even end before the thread running shutdown() is complete; generally speaking, when the main thread ends, any other threads will be suddenly killed too, without having the chance to finish whatever they were doing.
The doneEvent event is there to make sure that the main thread waits (doneEvent.wait()) until the shutdown thread completes its work (prints "shutdown complete") before exiting.
As a simple solution, you can call server_close() after serve_forever():
import socketserver

class StoppableServer(socketserver.TCPServer):
    def run(self):
        try:
            self.serve_forever()
        except KeyboardInterrupt:
            pass
        finally:
            # Clean up server (close socket, etc.)
            self.server_close()
Server stoppable with Ctrl+C or SIGTERM:
server = StoppableServer(("127.0.0.1", 8080), socketserver.BaseRequestHandler)
server.run()
Server running in a thread:
server = StoppableServer(("127.0.0.1", 8080), socketserver.BaseRequestHandler)
thread = threading.Thread(None, server.run)
thread.start()
# ... do things ...
server.shutdown()
thread.join()
It seems that asynchronous signals in multithreaded programs are not correctly handled by Python. But, I thought I would check here to see if anyone can spot a place where I am violating some principle, or misunderstanding some concept.
There are similar threads I've found here on SO, but none that seem to be quite the same.
The scenario is: I have two threads, reader thread and writer thread (main thread). The writer thread writes to a pipe that the reader thread polls. The two threads are coordinated using a threading.Event() primitive (which I assume is implemented using pthread_cond_wait). The main thread waits on the Event while the reader thread eventually sets it.
But, if I want to interrupt my program while the main thread is waiting on the Event, the KeyboardInterrupt is not handled asynchronously.
Here is a small program to illustrate my point:
#!/usr/bin/python
import os
import sys
import select
import time
import threading

pfd_r = -1
pfd_w = -1
reader_ready = threading.Event()

class Reader(threading.Thread):
    """Read data from pipe and echo to stdout."""
    def run(self):
        global pfd_r
        while True:
            if select.select([pfd_r], [], [], 1)[0] == [pfd_r]:
                output = os.read(pfd_r, 1000)
                sys.stdout.write("R> '%s'\n" % output)
                sys.stdout.flush()
                # Suppose there is some long-running processing happening:
                time.sleep(10)
                reader_ready.set()

# Set up pipe.
(pfd_r, pfd_w) = os.pipe()

rt = Reader()
rt.daemon = True
rt.start()

while True:
    reader_ready.clear()
    user_input = raw_input("> ").strip()
    written = os.write(pfd_w, user_input)
    assert written == len(user_input)
    # Wait for reply -- Try to ^C here and it won't work immediately.
    reader_ready.wait()
Start the program with './bug.py' and enter some input at the prompt. Once you see the reader reply with the prefix 'R>', try to interrupt using ^C.
What I see (Ubuntu Linux 10.10, Python 2.6.6) is that the ^C is not handled until after the blocking reader_ready.wait() returns. What I expected to see is that the ^C is raised asynchronously, resulting in the program terminating (because I do not catch KeyboardInterrupt).
This may seem like a contrived example, but I'm running into this in a real-world program where the time.sleep(10) is replaced by actual computation.
Am I doing something obviously wrong, like misunderstanding what the expected result would be?
Edit: I've also just tested with Python 3.1.1 and the same problem exists.
The wait() method of a threading._Event object actually relies on a thread.lock's acquire() method. However, the thread documentation states that a lock's acquire() method cannot be interrupted, and that any KeyboardInterrupt exception will be handled after the lock is released.
So basically, this is working as intended. Threading objects that implement this behavior rely on a lock at some point (including queues), so you might want to choose another path.
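If you only need the main thread to stay responsive to ^C, a common workaround (a sketch, not part of the original answer) is to wait in a loop with a short timeout; control returns to the interpreter between calls, so a pending KeyboardInterrupt can be raised promptly:

# Instead of reader_ready.wait():
while not reader_ready.is_set():
    # A short timeout hands control back to the interpreter periodically,
    # giving a pending KeyboardInterrupt a chance to be raised.
    reader_ready.wait(0.1)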
Alternatively, you could use the pause() function of the signal module instead of reader_ready.wait(). signal.pause() is a blocking function and gets unblocked when a signal is received by the process. In your case, when ^C is pressed, the SIGINT signal unblocks the function.
According to the documentation, the function is not available for Windows. I've tested it on Linux and it works. I think this is better than using wait() with a timeout.
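A rough sketch of that approach (assuming the reader thread signals the main process instead of setting the Event; SIGUSR1 is an arbitrary choice here):

import os
import signal

# Install a no-op handler so SIGUSR1 does not terminate the process.
signal.signal(signal.SIGUSR1, lambda signum, frame: None)

# In the reader thread, instead of reader_ready.set():
#     os.kill(os.getpid(), signal.SIGUSR1)

# In the main thread, instead of reader_ready.wait():
signal.pause()  # blocks until any signal arrives; ^C raises KeyboardInterrupt here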
This is my 'game server'. It's nothing serious, I thought this was a nice way of learning a few things about python and sockets.
First the server class initialized the server.
Then, when someone connects, we create a client thread. In this thread we continually listen on our socket.
Once a certain command comes in (I12345001001, for example) it spawns another thread.
The purpose of this last thread is to send updates to the client.
But even though I see the server is performing this code, the data isn't actually being sent.
Could anyone tell where it's going wrong?
It's like I have to receive something before I'm able to send, so I guess I'm missing something somewhere.
#!/usr/bin/env python
"""
An echo server that uses threads to handle multiple clients at a time.
Entering any line of input at the terminal will exit the server.
"""
import select
import socket
import sys
import threading
import time
import Queue

globuser = {}
queue = Queue.Queue()

class Server:
    def __init__(self):
        self.host = ''
        self.port = 2000
        self.backlog = 5
        self.size = 1024
        self.server = None
        self.threads = []

    def open_socket(self):
        try:
            self.server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            self.server.bind((self.host, self.port))
            self.server.listen(5)
        except socket.error, (value, message):
            if self.server:
                self.server.close()
            print "Could not open socket: " + message
            sys.exit(1)

    def run(self):
        self.open_socket()
        input = [self.server, sys.stdin]
        running = 1
        while running:
            inputready, outputready, exceptready = select.select(input, [], [])
            for s in inputready:
                if s == self.server:
                    # handle the server socket
                    c = Client(self.server.accept(), queue)
                    c.start()
                    self.threads.append(c)
                elif s == sys.stdin:
                    # handle standard input
                    junk = sys.stdin.readline()
                    running = 0
        # close all threads
        self.server.close()
        for c in self.threads:
            c.join()

class Client(threading.Thread):
    initialized = 0

    def __init__(self, (client, address), queue):
        threading.Thread.__init__(self)
        self.client = client
        self.address = address
        self.size = 1024
        self.queue = queue
        print 'Client thread created!'

    def run(self):
        running = 10
        isdata2 = 0
        receivedonce = 0
        while running > 0:
            if receivedonce == 0:
                print 'Wait for initialisation message'
                data = self.client.recv(self.size)
                receivedonce = 1
            if self.queue.empty():
                print 'Queue is empty'
            else:
                print 'Queue has information'
                data2 = self.queue.get(1, 1)
                isdata2 = 1
                if data2 == 'Exit':
                    running = 0
                    print 'Client is being closed'
                    self.client.close()
            if data:
                print 'Data received through socket! First char: "' + data[0] + '"'
                if data[0] == 'I':
                    print 'Initializing user'
                    user = {'uid': data[1:6], 'x': data[6:9], 'y': data[9:12]}
                    globuser[user['uid']] = user
                    print globuser
                    initialized = 1
                    self.client.send('Beginning - Initialized' + ';')
                    m = updateClient(user['uid'], queue)
                    m.start()
                else:
                    print 'Reset receivedonce'
                    receivedonce = 0
                    print 'Sending client data'
                    self.client.send('Feedback: ' + data + ';')
                    print 'Client Data sent: ' + data
                data = None
            if isdata2 == 1:
                print 'Data2 received: ' + data2
                self.client.sendall(data2)
                self.queue.task_done()
                isdata2 = 0
            time.sleep(1)
            running = running - 1
        print 'Client has stopped'

class updateClient(threading.Thread):
    def __init__(self, uid, queue):
        threading.Thread.__init__(self)
        self.uid = uid
        self.queue = queue
        global globuser
        print 'updateClient thread started!'

    def run(self):
        running = 20
        test = 0
        while running > 0:
            test = test + 1
            self.queue.put('Test Queue Data #' + str(test))
            running = running - 1
            time.sleep(1)
        print 'Updateclient has stopped'

if __name__ == "__main__":
    s = Server()
    s.run()
I don't understand your logic -- in particular, why you deliberately set up two threads writing at the same time on the same socket (which they both call self.client), without any synchronization or coordination, an arrangement that seems guaranteed to cause problems.
Anyway, a definite bug in your code is your use of the send method -- you appear to believe that it guarantees to send all of its argument string, but that's very definitely not the case, see the docs:
Returns the number of bytes sent. Applications are responsible for checking that all data has been sent; if only some of the data was transmitted, the application needs to attempt delivery of the remaining data.
sendall is the method that you probably want:
Unlike send(), this method continues to send data from string until either all data has been sent or an error occurs.
Other problems include the fact that updateClient is apparently designed to never terminate (differently from the other two thread classes: when those terminate, updateClient instances won't, and they'll just keep running and keep the process alive), redundant global statements (innocuous, just confusing), and some threads reading a dict (via the iteritems method) while other threads are changing it, again without any locking or coordination, etc. I'm sure there may be even more bugs or problems but, after spotting several, one's eyes tend to start to glaze over ;-).
You have three major problems. The first problem is likely the answer to your question.
Blocking (Question Problem)
socket.recv is blocking. This means that execution halts and the thread goes to sleep until it can read data from the socket. So your third (update) thread just fills the queue up, but the queue only gets emptied when you receive a message, and only one message at a time.
This is likely why it will not send data unless you send it data.
Message Protocol On Stream Protocol
You are trying to use the socket stream like a message stream. What I mean is you have:
self.server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
The SOCK_STREAM part says it is a stream, not messages as with SOCK_DGRAM. TCP does not preserve message boundaries. So what you have to do is build your own messages, such as:
data = struct.pack('I', len(msg)) + msg
socket.sendall(data)
Then the receiving end will look for the length field and read the data into a buffer. Once enough data is in the buffer, it can extract the entire message.
Your current setup works because your messages are small enough to be placed in the same packet and into the socket buffer together. However, once you start sending larger data over multiple calls to socket.send or socket.sendall, you are going to get multiple messages and partial messages read together, unless you implement a message protocol on top of the socket byte stream.
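To illustrate the receiving side (a sketch under the same 4-byte length-prefix convention as above; recv_exactly is a hypothetical helper, not a socket method):

import struct

def recv_exactly(sock, n):
    # recv() may return fewer bytes than requested, so loop until
    # exactly n bytes have been collected.
    buf = b''
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise EOFError('socket closed mid-message')
        buf += chunk
    return buf

def recv_message(sock):
    # Read the 4-byte length prefix, then the payload it announces.
    (length,) = struct.unpack('I', recv_exactly(sock, 4))
    return recv_exactly(sock, length)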
Threads
Even though threads can be easier to use when starting out, they come with a lot of problems and can degrade performance if used incorrectly, especially in Python. I love threads, so do not get me wrong. Python also has a problem with the GIL (global interpreter lock), so you get bad performance when using threads that are CPU bound. Your code is mostly I/O bound at the moment, but in the future it may become CPU bound. Also, you have to worry about locking with threading. A thread can be a quick fix but may not be the best fix. There are circumstances where threading is quite simply the easiest way to break up some time-consuming process, so do not discard threads as evil or terrible. In Python they are considered bad mainly because of the GIL, and in other languages (including Python) because of concurrency issues, so most people recommend using multiple processes with Python, or asynchronous code. Whether to use a thread or not is a complex question, as it depends on the language (the way your code is run), the system (single or multiple processors), and contention (trying to share a resource with locking), among other factors; but generally asynchronous code is faster because it utilizes more CPU with less overhead, especially if you are not CPU bound.
The solution is to use the select module in Python, or something similar. It will tell you when a socket has data to be read, and you can set a timeout parameter.
You can gain more performance by doing asynchronous work (asynchronous sockets). To turn a socket into asynchronous mode, you simply call socket.settimeout(0), which will make it not block. However, you will constantly eat CPU spinning while waiting for data. The select module and friends will prevent you from spinning.
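A minimal sketch of such a loop (sock and handle_data are placeholders, not names from the question's code):

import select

sock.setblocking(0)  # non-blocking mode
while True:
    # Wait up to one second for the socket to become readable.
    readable, _, _ = select.select([sock], [], [], 1.0)
    if sock in readable:
        data = sock.recv(4096)
        if not data:
            break  # peer closed the connection
        handle_data(data)
    # On timeout: do periodic work (e.g. drain the update queue) here.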
Generally, for performance, you want to do as much asynchronously (in the same thread) as possible, and then expand with more threads that also do as much as possible asynchronously. However, as previously noted, Python is an exception to this rule because of the GIL (global interpreter lock), which can actually degrade performance from what I have read. If you are interested, try writing a test case and find out!
You should also check out the thread locking primitives in the threading module. They are Lock, RLock, and Condition. They can help multiple threads share data without problems.
lock = threading.Lock()

def myfunction(arg):
    with lock:
        arg.do_something()
Some Python objects are thread safe and others are not.
Sending Updates Asynchronously (Improvement)
Instead of using a third thread only to send updates, you could use the client thread to send updates by comparing the current time with the time the last update was sent. This would remove the need for a Queue and a Thread. To do this, you must convert your client code into asynchronous code and put a timeout on your select, so that you can check the current time at intervals to see whether an update is needed.
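Continuing the select sketch above, the idea might look like this (UPDATE_INTERVAL and send_update are hypothetical names):

import select
import time

UPDATE_INTERVAL = 1.0  # seconds between periodic updates
last_update = time.time()
while True:
    readable, _, _ = select.select([sock], [], [], 0.1)
    if sock in readable:
        handle_data(sock.recv(4096))
    if time.time() - last_update >= UPDATE_INTERVAL:
        send_update(sock)  # push the periodic update to the client
        last_update = time.time()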
Summary
I would recommend you rewrite your code using asynchronous socket code. I would also recommend that you use a single thread for all clients and the server. This will improve performance and decrease latency. It would make debugging easier because you would have no possible concurrency issues like you have with threads. Also, fix your message protocol before it fails.
(I'm using the pyprocessing module in this example, but replacing processing with multiprocessing should probably work if you run Python 2.6 or use the multiprocessing backport.)
I currently have a program that listens to a unix socket (using a processing.connection.Listener), accepts connections and spawns a thread handling each request. At a certain point I want to quit the process gracefully, but the accept() call is blocking and I see no way of cancelling it in a nice way. I have one way that works here (OS X) at least: setting a signal handler and signalling the process from another thread, like so:
import processing
from processing.connection import Listener
import threading
import time
import os
import signal
import socket
import errno

# This is actually called by the connection handler.
def closeme():
    time.sleep(1)
    print 'Closing socket...'
    listener.close()
    os.kill(processing.currentProcess().getPid(), signal.SIGPIPE)

oldsig = signal.signal(signal.SIGPIPE, lambda s, f: None)

listener = Listener('/tmp/asdf', 'AF_UNIX')
# This is a thread that handles one already accepted connection, left out for brevity
threading.Thread(target=closeme).start()

print 'Accepting...'
try:
    listener.accept()
except socket.error, e:
    if e.args[0] != errno.EINTR:
        raise
# Cleanup here...
print 'Done...'
The only other way I've thought about is reaching deep into the connection (listener._listener._socket) and setting the non-blocking option...but that probably has some side effects and is generally really scary.
Does anyone have a more elegant (and perhaps even correct!) way of accomplishing this? It needs to be portable to OS X, Linux and BSD, but Windows portability etc is not necessary.
Clarification:
Thanks all! As usual, ambiguities in my original question are revealed :)
I need to perform cleanup after I have cancelled the listening, and I don't always want to actually exit that process.
I need to be able to access this process from other processes not spawned from the same parent, which makes Queues unwieldy
The reasons for threads are that:
They access a shared state. Actually more or less a common in-memory database, so I suppose it could be done differently.
I must be able to have several connections accepted at the same time, but the actual threads are blocking for something most of the time. Each accepted connection spawns a new thread, in order not to block all clients on I/O ops.
Regarding threads vs. processes, I use threads for making my blocking ops non-blocking and processes to enable multiprocessing.
Isn't that what select is for?
Only call accept on the socket if select indicates it will not block...
select takes a timeout, so you can break out occasionally to check if it's time to shut down...
I thought I could avoid it, but it seems I have to do something like this:
from processing import connection
connection.Listener.fileno = lambda self: self._listener._socket.fileno()

import select

l = connection.Listener('/tmp/x', 'AF_UNIX')
r, w, e = select.select((l,), (), ())
if l in r:
    print "Accepting..."
    c = l.accept()
    # ...
I am aware that this breaks the Law of Demeter and introduces some evil monkey-patching, but it seems this would be the easiest-to-port way of accomplishing this. If anyone has a more elegant solution, I would be happy to hear it :)
I'm new to the multiprocessing module, but it seems to me that mixing the processing module and the threading module is counter-intuitive; aren't they targeted at solving the same problem?
Anyway, how about wrapping your listen functions into a process itself? I'm not clear how this affects the rest of your code, but this may be a cleaner alternative.
from multiprocessing import Process
from multiprocessing.connection import Listener

class ListenForConn(Process):
    def run(self):
        listener = Listener('/tmp/asdf', 'AF_UNIX')
        listener.accept()
        # do your other handling here

listen_process = ListenForConn()
listen_process.start()
print listen_process.is_alive()

listen_process.terminate()
listen_process.join()
print listen_process.is_alive()
print 'No more listen process.'
Probably not ideal, but you can release the block by sending the socket some data from the signal handler or the thread that is terminating the process.
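A sketch of that trick (assuming the listener's AF_UNIX address from the question; connecting to our own listening socket forces the blocked accept() to return):

import socket

def unblock_accept(path='/tmp/asdf'):
    # A throwaway connection wakes up the thread blocked in accept().
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.connect(path)
    s.close()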
EDIT: Another way to implement this might be to use the Connection Queues, since they seem to support timeouts (apologies, I misread your code in my first read).
I ran into the same issue. I solved it by sending a "stop" command to the listener. In the listener's main thread (the one that processes the incoming messages), every time a new message is received, I just check to see if it's a "stop" command and exit out of the main thread.
Here's the code I'm using:
def start(self):
    """
    Start listening
    """
    # set the command being executed
    self.command = self.COMMAND_RUN
    # start the 'listener_main' method as a daemon thread
    self.listener = Listener(address=self.address, authkey=self.authkey)
    self._thread = threading.Thread(target=self.listener_main, daemon=True)
    self._thread.start()

def listener_main(self):
    """
    The main application loop
    """
    while self.command == self.COMMAND_RUN:
        # block until a client connection is received
        with self.listener.accept() as conn:
            # receive the subscription request from the client
            message = conn.recv()
            # if it's a shut down command, return to stop this thread
            if isinstance(message, str) and message == self.COMMAND_STOP:
                return
            # process the message

def stop(self):
    """
    Stops the listening thread
    """
    self.command = self.COMMAND_STOP
    client = Client(self.address, authkey=self.authkey)
    client.send(self.COMMAND_STOP)
    client.close()
    self._thread.join()
I'm using an authentication key to prevent would-be hackers from shutting down my service by sending a stop command from an arbitrary client.
Mine isn't a perfect solution. It seems a better one might be to revise the code in multiprocessing.connection.Listener and add a stop() method. But that would require sending the change through the Python team for approval.