How to exit a Python process if any parallel thread exits?

I have a Python process whose main thread starts two parallel threads: one gRPC server and one HTTP server. I want the OS process for this application to exit if ANY of the parallel threads exits.
I think the main thread, as coded here, will wait as long as a single parallel thread is still running. What do I need to change so that the main thread exits as soon as any parallel thread exits?
if __name__ == '__main__':
    svc = MyService()
    t1 = GrpcServer(svc)
    t1.start()
    t2 = HealthHttpServer()
    t2.start()
with the servers defined as
class GrpcServer(threading.Thread):
    def __init__(self, service):
        super().__init__()
        self.grpcServer = grpc.server(futures.ThreadPoolExecutor(max_workers=10))
        self.grpcServer.add_insecure_port('[::]:8000')
        myservice_pb2_grpc.add_MyServiceServicer_to_server(service, self.grpcServer)

    def run(self):
        self.grpcServer.start()
        self.grpcServer.wait_for_termination()  # start() returns immediately; block so the thread stays alive
class HealthHttpServer(threading.Thread):
    def __init__(self):
        super().__init__()

    def run(self):
        port = 2113
        httpd = HTTPServer(('localhost', port), HealthHTTPRequestHandler)
        httpd.serve_forever()
class HealthHTTPRequestHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        '''Respond to a GET request.'''
        if self.path == '/healthz':
            self.send_response(HTTPStatus.OK)
            self.end_headers()
            self.wfile.write(b'ok')
        else:
            self.send_response(HTTPStatus.NOT_FOUND)
            self.end_headers()

The cleanest approach I've found so far is:

- define all these threads as daemon threads
- define a global threading.Event object
- add a top-level try...finally in each thread, and call that event's set() in the finally
- in the main thread, wait on that event after the threads are started

If anything goes wrong in any of the threads, its finally block executes and sets the event, unblocking the main thread, which then exits. Since all the other threads are daemon threads, the process then exits.
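A minimal sketch of that pattern (the worker functions here are placeholders standing in for the two server loops):

import threading
import time

stop_event = threading.Event()

def run_guarded(target):
    # Whatever happens inside target(), the finally clause fires,
    # signalling the main thread that this worker is gone.
    try:
        target()
    finally:
        stop_event.set()

def grpc_worker():   # stand-in for the real gRPC server loop
    while True:
        time.sleep(1)

def http_worker():   # stand-in for httpd.serve_forever()
    while True:
        time.sleep(1)

if __name__ == '__main__':
    for worker in (grpc_worker, http_worker):
        threading.Thread(target=run_guarded, args=(worker,), daemon=True).start()
    stop_event.wait()  # unblocks as soon as ANY worker exits or raises
    # The main thread returns here; the remaining daemon threads die with the process.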

Related

Thread termination when starting one thread in another thread

I initially ran a loop that starts a thread for each client of my server, but this froze the GUI. So I put the loop itself inside a thread, but now the main thread seems to stop after creating that sub-thread, and no command in the main thread runs after Thread.start().
def start():
    startb.setEnabled(False)
    stopb.setEnabled(True)
    ev.set()
    Thread(target=listen).start()

def listen():
    so = socket.socket()
    so.bind((ip, port))
    so.listen(4)
    while ev.is_set():
        th = client_thread(so.accept()[0], response)
        th.start()
        # The program will not run from now on
        print('1')  # to debug
    print('2')  # to debug
    so.close()  # -> This needs to be implemented
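No answer is shown here, but a common way out of the blocking so.accept() call is to put a timeout on the listening socket so the ev flag is re-checked periodically. A sketch under that assumption, reusing the question's ip, port, ev, client_thread, and response names:

def listen():
    so = socket.socket()
    so.bind((ip, port))
    so.listen(4)
    so.settimeout(1.0)  # wake up once a second to re-check the flag
    while ev.is_set():
        try:
            conn = so.accept()[0]
        except socket.timeout:
            continue  # no client yet; loop around and test ev again
        client_thread(conn, response).start()
    so.close()  # now actually reachable once ev is cleared

With this, only the listener thread blocks, and for at most a second at a time, so clearing ev shuts it down promptly.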

How to kill a thread in Python with a blocking command?

I want to use Interactive Brokers' API, which opens a TCP connection on a different thread.
The problem is that app.run(), the function that needs to be called to establish the TCP connection, apparently uses a while True loop to process its queue, and that blocks every way I know of terminating the thread when exiting the program.
class TradeApp(EWrapper, EClient):
    def __init__(self):
        EClient.__init__(self, self)
    # ...

def websocket_con():
    app.run()

app = TradeApp()
app.connect("127.0.0.1", 7497, clientId=1)
con_thread = threading.Thread(target=websocket_con, daemon=True)
con_thread.start()
I've tried using both a daemon thread and a regular one.
I've also tried using events.
But whatever I do, the thread never exits from the app.run() function.
class TradingApp(EWrapper, EClient):
    def __init__(self):
        EClient.__init__(self, self)
    # ...

def websocket_conn():
    app.run()
    event.wait()
    if event.is_set():
        app.close()

event = threading.Event()
app = TradingApp()
app.connect("127.0.0.1", 7497, clientId=1)
conn_thread = threading.Thread(target=websocket_conn)
conn_thread.start()
# ...
event.set()
Am I doing something wrong?
How can I exit from the app.run() function?
I think you've misunderstood how the thread executes: the statements in websocket_conn() run sequentially, so event.wait() and the if event.is_set(): check can never run until app.run() has already returned.
Note also that threading.Thread has no kill() method; the supported way out is to make app.run() itself return, which EClient does once the connection is closed. With websocket_conn() reduced to just app.run(), try something like this:
conn_thread = threading.Thread(target=websocket_conn)
conn_thread.start()
# ...
event.wait()
app.disconnect()  # closes the connection, so the loop inside app.run() exits
conn_thread.join()

Can I run cleanup code in daemon threads in Python?

Suppose I have some consumer daemon threads that constantly take objects from a queue the main thread fills, and perform some long operation (a couple of seconds) on each one.
The problem is that whenever the main thread is done, the daemon threads are killed before they finish processing whatever is left in the queue.
I know one way to solve this would be to wait for the daemon threads to finish processing whatever is left in the queue and then exit, but I am curious whether there is any way for the daemon threads to "clean up" after themselves (i.e., finish processing whatever is left in the queue) when the main thread exits, without the main thread explicitly telling them to start cleaning up.
The motivation is that I made a Python package with a logging handler class that puts items into a queue whenever the user logs something (e.g. with logging.info("message")), and the handler has a daemon thread that sends the logs over the network. I'd prefer the daemon thread to clean up by itself, so users of the package wouldn't have to manually make their main thread wait for the log handler to finish processing.
Minimal working example
# this code is in my package
import logging
from queue import Queue
from threading import Thread

class MyHandler(logging.Handler):
    def __init__(self, level):
        super().__init__(level=level)
        self.queue = Queue()
        self.thread = Thread(target=self.consume, daemon=True)
        self.thread.start()

    def emit(self, record):
        # This gets called whenever the user does logging.info, or similar
        self.queue.put(record)

    def consume(self):
        while True:
            record = self.queue.get()
            send(record)  # send record over network, can take a few seconds (assume it never raises)
            self.queue.task_done()
# This is the user's main code.
# The user has to keep a reference to the handler for later; I want to avoid this.
my_handler = MyHandler(logging.INFO)
# set up logging
logging.basicConfig(..., handlers=[..., my_handler])
# do some stuff...
logging.info("this will be sent over network")
# some more stuff...
logging.error("also sent over network")
# even more stuff
# before exiting, the user must wait for the handler to finish sending;
# I don't want the user to have to do this
my_handler.queue.join()
You can use threading.main_thread().join(), which blocks until the main thread finishes (i.e. until interpreter shutdown begins), like so:
import threading
import logging
import queue

class MyHandler(logging.Handler):
    def __init__(self, level):
        super().__init__(level=level)
        self.queue = queue.Queue()
        self.thread = threading.Thread(target=self.consume)  # Not daemon
        # Shutdown thread: waits for the main thread to finish, then
        # puts a None sentinel on the queue so the consumer cleans up.
        threading.Thread(
            target=lambda: threading.main_thread().join() or self.queue.put(None)
        ).start()
        self.thread.start()

    def emit(self, record):
        # This gets called whenever the user does logging.info, or similar
        self.queue.put(record)

    def consume(self):
        while True:
            record = self.queue.get()
            if record is None:
                print("cleaning")
                return  # Cleanup
            print(record)  # send record over network, can take a few seconds (assume it never raises)
            self.queue.task_done()
Quick test code:
logging.getLogger().setLevel(logging.INFO)
logging.getLogger().addHandler(MyHandler(logging.INFO))
logging.info("Hello")
exit()
You can use atexit to signal the daemon thread and wait for it to shut down:
import queue, threading, time, logging, atexit

class MyHandler(logging.Handler):
    def __init__(self, level):
        super().__init__(level=level)
        self.queue = queue.Queue()
        self.thread = threading.Thread(target=self.consume, daemon=True)
        # Right before the main thread exits, signal cleanup and wait until done
        atexit.register(lambda: self.queue.put(None) or self.thread.join())
        self.thread.start()

    def emit(self, record):
        # This gets called whenever the user does logging.info, or similar
        self.queue.put(record)

    def consume(self):
        while True:
            record = self.queue.get()
            if record is None:  # Cleanup requested
                print("cleaning")
                time.sleep(5)
                return
            print(record)  # send record over network, can take a few seconds (assume it never raises)
            self.queue.task_done()
# Test code
logging.getLogger().setLevel(logging.INFO)
logging.getLogger().addHandler(MyHandler(logging.INFO))
logging.info("Hello")

Python3: Wait for Daemon to finish iteration

I'm writing a Python script that starts a local file server, and while that server is alive it writes to a file every 30 seconds. I would like the server and the writer function to run synchronously, so I made the writer function a daemon thread... My main question is: since this daemon thread quits once the server is stopped, if the daemon is in the middle of writing to a file, will it complete that operation before exiting? It would be really bad to be left with half a file. Here's the code; the actual file it writes is about 3k lines of JSON, hence the concern.
import http.server
import socketserver
from time import sleep
from threading import Thread

class Server:
    def __init__(self):
        self.PORT = 8000
        self.Handler = http.server.SimpleHTTPRequestHandler
        self.httpd = socketserver.TCPServer(("", self.PORT), self.Handler)
        print("Serving at port", self.PORT)

    def run(self):
        try:
            self.httpd.serve_forever()
        except KeyboardInterrupt:
            print("Server stopped")

def test():
    while True:
        with open('test', mode='w') as file:
            file.write('testing...')
        print('file updated')
        sleep(5)

if __name__ == "__main__":
    t = Thread(target=test, daemon=True)
    t.start()
    server = Server()
    server.run()
It looks like you may have made an incorrect decision in making the writer thread daemonic.
Making a thread daemonic does not mean it will run in parallel: it is still subject to the GIL, and if you wanted truly parallel execution you would have to use multiprocessing.
More to the point, from the Python docs:
Daemon threads are abruptly stopped at shutdown. Their resources (such as open files, database transactions, etc.) may not be released properly. If you want your threads to stop gracefully, make them non-daemonic and use a suitable signalling mechanism such as an Event.
That means daemon threads are only suitable for tasks that make sense solely in the context of the main thread and don't matter once the main thread has stopped. File I/O, particularly saving data, is not suitable for a daemon thread.
So the most obvious and logical solution is to make the writer thread non-daemonic. Then, even if the main thread exits, the Python process won't end until all non-daemonic threads have finished, which lets the file I/O complete and the program exit safely; a sketch follows.
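A minimal sketch of that signalling pattern, adapted to the question's code (the stop Event name is an assumption; Server is the class from the question):

import threading

stop = threading.Event()

def test():
    # Non-daemonic writer: the process waits for this thread, so a write
    # in progress always completes before the flag is checked again.
    while not stop.is_set():
        with open('test', mode='w') as file:
            file.write('testing...')
        print('file updated')
        stop.wait(5)  # like sleep(5), but wakes immediately if asked to stop

if __name__ == "__main__":
    t = threading.Thread(target=test)  # note: not a daemon
    t.start()
    try:
        server = Server()
        server.run()  # blocks until Ctrl+C
    finally:
        stop.set()    # ask the writer to finish its current iteration
        t.join()      # wait for the last write to complete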

Killing an infinitely-looping threaded application

Consider a hypothetical threaded Python application that runs each thread in an infinite loop:
import signal
import sys
import threading
import time

class CallSomebody(threading.Thread):
    def __init__(self, target, *args):
        threading.Thread.__init__(self)  # init the base class first, so it
        self._target = target            # doesn't overwrite these attributes
        self._args = args

    def run(self):
        self._target(*self._args)

def call(who):
    while True:
        print("Who you gonna call? %s" % str(who))

def signal_handler(signal, frame):
    sys.exit(0)

signal.signal(signal.SIGINT, signal_handler)

a = CallSomebody(call, 'Ghostbusters!')
a.daemon = True
b = CallSomebody(call, 'The Exorcist!')
b.daemon = True

a.start()
b.start()

a.join()
b.join()
When running the application, sending SIGINT by pressing Ctrl+C does not stop it. I tried removing the daemon statements, but that did not help. What fundamental idea am I missing?
Thanks.
When you join a thread, the calling thread blocks until the joined thread returns, and yours never return, so you won't want to join them this way.
Generally, background threads that run infinite loops should be marked daemon, never joined, and simply allowed to expire when your main thread does. If you happened to be using wx, for example, you'd call AppInstance.MainLoop() after starting the daemonic threads; when your Frame or whatever other top-level instances you had were closed, program execution would conclude and the daemons would be dealt with appropriately.
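A minimal sketch of that arrangement (the workers reuse the question's call function; keeping the main thread in an interruptible wait instead of join() lets the SIGINT handler run):

import signal
import sys
import threading
import time

def call(who):
    while True:
        print("Who you gonna call? %s" % who)
        time.sleep(1)

def signal_handler(signum, frame):
    sys.exit(0)  # main thread exits; the daemon threads die with the process

signal.signal(signal.SIGINT, signal_handler)

for who in ('Ghostbusters!', 'The Exorcist!'):
    threading.Thread(target=call, args=(who,), daemon=True).start()

while True:
    time.sleep(1)  # unlike join() on a never-ending thread, this wait is interruptible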
