Thread termination when starting one thread from another thread - Python

I initially ran a loop that starts a thread for each client of my server, but this froze the GUI. So I put the loop inside its own thread, but now the main thread seems to stop after creating that sub-thread, and nothing after Thread.start() runs.
def start():
    startb.setEnabled(False)
    stopb.setEnabled(True)
    ev.set()
    Thread(target=listen).start()

def listen():
    so = socket.socket()
    so.bind((ip, port))
    so.listen(4)
    while ev.is_set():
        th = client_thread(so.accept()[0], response)
        th.start()
        # The program will not run from now on
        print('1')  # to debug
        print('2')  # to debug
    so.close()  # -> This needs to be implemented
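The comment at the end points at the real blocker: so.accept() blocks indefinitely, so clearing ev is never noticed and so.close() is never reached. A minimal sketch of one common fix, using a socket timeout so the loop re-checks the event periodically (handle_client here is a stand-in for the asker's client_thread, and the ip/port values are placeholders):

```python
import socket
import threading

ev = threading.Event()

def handle_client(conn):
    # Stand-in for the asker's client_thread; just closes the connection.
    conn.close()

def listen(ip='127.0.0.1', port=5555):
    so = socket.socket()
    so.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    so.bind((ip, port))
    so.listen(4)
    so.settimeout(1.0)  # accept() now raises socket.timeout instead of blocking forever
    while ev.is_set():
        try:
            conn, addr = so.accept()
        except socket.timeout:
            continue  # no client this second; re-check the event and wait again
        threading.Thread(target=handle_client, args=(conn,)).start()
    so.close()  # reached once ev is cleared
```

With this shape, clearing the event from the GUI's stop handler lets the listener loop exit within a second and close its socket.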

Related

How to exit a Python process if any parallel thread exits?

I have a python process with a main thread starting parallel threads for 1 gRPC server and 1 HTTP server. I want the OS process for this application to exit if ANY of the parallel threads exits.
I think the main thread, as is coded here, would wait as long as there is a single parallel thread that is running. What do I need to do to change this so the main thread exits as soon as any parallel thread exits?
if __name__ == '__main__':
    svc = MyService()
    t1 = GrpcServer(svc)
    t1.start()
    t2 = HealthHttpServer()
    t2.start()
with the servers defined as
from concurrent import futures
from http import HTTPStatus
from http.server import BaseHTTPRequestHandler, HTTPServer
import threading

import grpc
import myservice_pb2_grpc

class GrpcServer(threading.Thread):
    def __init__(self, service):
        super().__init__()
        self.grpcServer = grpc.server(futures.ThreadPoolExecutor(max_workers=10))
        self.grpcServer.add_insecure_port('[::]:8000')
        myservice_pb2_grpc.add_MyServiceServicer_to_server(service, self.grpcServer)

    def run(self):
        self.grpcServer.start()

class HealthHttpServer(threading.Thread):
    def __init__(self):
        super().__init__()

    def run(self):
        port = 2113
        httpd = HTTPServer(('localhost', port), HealthHTTPRequestHandler)
        httpd.serve_forever()

class HealthHTTPRequestHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        '''Respond to a GET request.'''
        if self.path == '/healthz':
            self.send_response(HTTPStatus.OK)
            self.end_headers()
            self.wfile.write(b'ok')
        else:
            self.send_response(HTTPStatus.NOT_FOUND)
            self.end_headers()
The cleanest approach I've found so far is:

- define all these threads as daemon threads
- define a global threading.Event object
- add a top-level try...finally in each thread, and call that event's set() in the finally
- in the main thread, wait on that event after the threads are started

If anything goes wrong in any of the threads, its finally block executes, setting the event and unblocking the main thread, which then exits. Since all the other threads are daemon threads, the process exits with it.
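The steps above can be sketched in a few lines (the two server functions here are stand-ins for the asker's real GrpcServer/HealthHttpServer threads, not their actual code):

```python
import threading
import time

stop = threading.Event()

def guarded(target):
    # Run target(); on any exit -- normal return or exception -- set the event.
    def wrapper():
        try:
            target()
        finally:
            stop.set()
    return wrapper

def grpc_server():   # stand-in: exits after a moment, simulating a crash or shutdown
    time.sleep(0.1)

def http_server():   # stand-in: would normally block in serve_forever()
    stop.wait()

for body in (grpc_server, http_server):
    threading.Thread(target=guarded(body), daemon=True).start()

stop.wait()  # main thread unblocks as soon as any server thread exits
```

Because every worker is a daemon thread, nothing keeps the process alive once the main thread returns.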

Simple multithreaded Server Client Program

I have a multithreaded server and client program for a simple game. Whenever a client force-exits the game, I try to catch the exception with an except BrokenPipeError clause and inform the other players.
I also want to end the exited client thread; however, I take input such as this:
while True:
    client = serverSocket.accept()
    t = ServerThread(client)
    t.start()
I tried to use a threading event with a stop() function; however, I believe I cannot use a .join() statement to exit the thread because of the way I take input. How should I end the force-exited client's thread? I know the multiprocessing library has a terminate() function, but I am required to use the threading library. I tried os._exit(1), but I believe that command kills the entire process. What is the standard exit procedure for programs such as this?
First of all, join() does nothing except wait for the thread to stop.
A thread stops when it reaches the end of its threaded subroutine. For example:
import threading

class ServerThread(threading.Thread):
    def __init__(self, client, name):
        super().__init__()
        self.client = client
        self.name = name

    def inform(self, msg):
        print("{}: got message {}".format(self.name, msg))
        self.client[0].send(msg)

    def run(self):
        while True:
            try:
                self.client[0].recv(1024)
            except BrokenPipeError:  # client exits
                # do stuff
                break  # -> ends loop
        return  # -> thread exits, join() returns
If you want to inform other clients that someone has left, I would add a separate monitoring thread:
import threading
import time

class Monitoring(threading.Thread):
    def __init__(self):
        super().__init__(daemon=True)  # daemon means the thread stops when the main thread does
        self.clients = []

    def add_client(self, client):
        self.clients.append(client)

    def inform_client_leaves(self, client_leaved):
        for client in self.clients:
            if client.is_alive():
                client.inform("Client '{}' leaves".format(client_leaved.name))

    def run(self):
        while True:
            for thread in list(self.clients):
                if not thread.is_alive():  # client exited
                    self.clients.remove(thread)
                    self.inform_client_leaves(thread)
            time.sleep(1)
So the initial code would look like:
mon = Monitoring()
mon.start()
while True:
    client = serverSocket.accept()
    t = ServerThread(client, "Client1")
    t.start()
    mon.add_client(t)

How to make sure queue is empty before exiting main thread

I have a program that has two threads, the main thread and one additional that works on handling jobs from a FIFO queue.
Something like this:
import queue
import threading

q = queue.Queue()

def _worker():
    while True:
        msg = q.get(block=True)
        print(msg)
        q.task_done()

t = threading.Thread(target=_worker)
#t.daemon = True
t.start()

q.put('asdf-1')
q.put('asdf-2')
q.put('asdf-4')
q.put('asdf-4')
What I want to accomplish is basically to make sure the queue is emptied before the main thread exits.
If I set t.daemon to be True the program will exit before the queue is emptied, however if it's set to False the program will never exit. Is there some way to make sure the thread running the _worker() method clears the queue on main thread exit?
The comments touch on using .join(), but depending on your use case, using a join may make threading pointless.
I assume that your main thread will be doing things other than adding items to the queue - and may be shut down at any point, you just want to ensure that your queue is empty before shutting down is complete.
At the end of your main thread, you could add a simple empty check in a loop:
from time import sleep

while not q.empty():
    sleep(1)
If you don't set t.daemon = True then the thread will never finish. Setting the thread as a daemon thread will mean that the thread does not cause your program to stay running when the main thread finishes.
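If the goal is only to guarantee the queue has been drained before shutdown (rather than to stop the worker), the queue itself can do the waiting: since the worker already calls q.task_done(), Queue.join() blocks until every queued item has been processed. A sketch along the lines of the original snippet:

```python
import queue
import threading

q = queue.Queue()

def _worker():
    while True:
        msg = q.get()
        print(msg)
        q.task_done()  # mark this item as fully processed

t = threading.Thread(target=_worker, daemon=True)
t.start()

for msg in ('asdf-1', 'asdf-2', 'asdf-4', 'asdf-4'):
    q.put(msg)

q.join()  # blocks until task_done() has been called for every queued item
```

Unlike polling q.empty(), Queue.join() also waits for the item currently being processed, not just for the queue to look empty.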
Put a special item (e.g. None) in the queue that signals the worker thread to stop:
import queue
import threading

q = queue.Queue()

def _worker():
    while True:
        msg = q.get(block=True)
        if msg is None:
            return
        print(msg)  # do your stuff here

t = threading.Thread(target=_worker)
#t.daemon = True
t.start()

q.put('asdf-1')
q.put('asdf-2')
q.put('asdf-4')
q.put('asdf-4')
q.put(None)
t.join()

How to wait for a spawned thread to finish in Python

I want to use threads to do some blocking work. What should I do to:

- Spawn a thread safely
- Do useful work
- Wait until the thread finishes
- Continue with the function
Here is my code:
import threading

def my_thread():
    # Wait for the server to respond...
    pass

def main():
    a = threading.Thread(target=my_thread)
    a.start()
    # Do other stuff here
You can use Thread.join. A few lines from the docs:
Wait until the thread terminates. This blocks the calling thread until the thread whose join() method is called terminates – either normally or through an unhandled exception – or until the optional timeout occurs.
For your example it would be:
def main():
    a = threading.Thread(target=my_thread)
    a.start()
    a.join()
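The optional timeout the docs mention can be combined with is_alive() when you don't want to block forever. A small sketch (the 0.2-second sleep stands in for the blocking work):

```python
import threading
import time

def my_thread():
    time.sleep(0.2)  # stand-in for waiting on a server response

a = threading.Thread(target=my_thread)
a.start()

a.join(timeout=0.05)          # stop waiting after 50 ms
still_running = a.is_alive()  # the timeout expired first, so the thread is still alive

a.join()                      # now wait for the thread to actually finish
finished = not a.is_alive()
```

Note that join(timeout) returns None either way; checking is_alive() afterwards is how you tell whether the thread finished or the timeout expired.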

Python script doesn't receive exit signal sent by supervisor

I'm running a Python script that creates a Tornado server; the server is run by supervisor.
I want to gracefully terminate all WebSocket client connections when a supervisorctl reload is issued (normally after a deploy).
My problem is that I'm not able to get a function called when my server is killed by supervisor, though it works when the signal is sent with kill, or when the script is run in a console and stopped with Ctrl+C.
I have tried other signals and configurations without luck.
import signal, sys

def clean_resources(signum, frame):
    print "SIG: %d, clean me" % signum
    sys.exit(0)

if __name__ == '__main__':
    # Nicely handle closing the server
    for sig in (signal.SIGINT, signal.SIGTERM):
        signal.signal(sig, clean_resources)
This is my tornado_supervisor.conf
[program:tornado_server]
command = python /opt/tornado/server.py -p 8890
user = www-data
stdout_logfile = /var/log/tornado/tornado_server_sup.log
redirect_stderr = true
autorestart=true
environment=HOME='/var/www'
environment=PYTHONPATH="$PYTHONPATH:/opt/tornado/"
stopsignal = TERM
stopwaitsecs = 10
stopasgroup = true
I had a similar/same problem. Only the parent Tornado process got the signal, while the child processes were not killed.
I arranged for the parent process to kill the children manually using os.killpg(); the children also use a delay to (possibly) finish their current requests:
import os
import signal
import time

import tornado.httpserver
import tornado.ioloop
import tornado.web
from tornado.options import options, parse_command_line

LOOP_STOP_DELAY = 1  # seconds; value not given in the original snippet

# will be initialized in main()
server = None
loop = None

def stop_loop():
    global loop
    loop.stop()

def signal_handler_child_callback():
    global loop
    global server
    server.stop()
    # allow current requests to finish processing
    loop.add_timeout(time.time() + LOOP_STOP_DELAY, stop_loop)

def signal_handler(signum, frame):
    global loop
    global server
    if loop:
        # this is a child process; refuse new connections
        # and stop its ioloop after a delay
        loop.add_callback(signal_handler_child_callback)
    else:
        # this is the master process; refuse new incoming connections
        # and forward the signal to the child processes
        server.stop()
        signal.signal(signal.SIGTERM, signal.SIG_DFL)
        os.killpg(0, signal.SIGTERM)

def main():
    parse_command_line()
    signal.signal(signal.SIGTERM, signal_handler)
    # ...
    tornado_app = tornado.web.Application(
        [
            # ...
        ])

    global server
    server = tornado.httpserver.HTTPServer(tornado_app)
    server.bind(options.port)
    server.start(0)

    global loop
    loop = tornado.ioloop.IOLoop.instance()
    loop.start()

if __name__ == '__main__':
    main()
