How to kill a thread in Python that is stuck in a blocking call?

I want to use Interactive Brokers' API, which opens a TCP connection on a different thread.
The problem is that the app.run() function, which must be called to establish the TCP connection, apparently runs a while True loop to process its queue, and that blocks every way I know of to terminate the thread when exiting the program.
import threading
from ibapi.client import EClient
from ibapi.wrapper import EWrapper

class TradeApp(EWrapper, EClient):
    def __init__(self):
        EClient.__init__(self, self)
    #...

def websocket_con():
    app.run()

app = TradeApp()
app.connect("127.0.0.1", 7497, clientId=1)
con_thread = threading.Thread(target=websocket_con, daemon=True)
con_thread.start()
I've tried using both a daemon thread and a regular one.
I've also tried using events.
But whatever I do, the thread never exits from the app.run() function.
class TradingApp(EWrapper, EClient):
    def __init__(self):
        EClient.__init__(self, self)
    #...

def websocket_conn():
    app.run()
    event.wait()
    if event.is_set():
        app.close()

event = threading.Event()
app = TradingApp()
app.connect("127.0.0.1", 7497, clientId=1)
conn_thread = threading.Thread(target=websocket_conn)
conn_thread.start()
#...
event.set()
Am I doing something wrong?
How can I exit from the app.run() function?

I think you have misunderstood how the thread executes: when you start the websocket_conn() function in a new thread, its statements run one after another, so event.wait() and if event.is_set(): are not reached until app.run() has already returned.
Try waiting in the main thread instead, something like this:
conn_thread = threading.Thread(target=websocket_conn)
conn_thread.start()
event.wait()
if event.is_set():
    # threading.Thread has no kill() method; disconnecting the client
    # makes app.run() return instead
    app.disconnect()
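Putting it together, a minimal end-to-end sketch (assuming the standard ibapi package, where EClient.disconnect() closes the socket and lets run() return):

import threading
from ibapi.client import EClient
from ibapi.wrapper import EWrapper

class TradeApp(EWrapper, EClient):
    def __init__(self):
        EClient.__init__(self, self)

app = TradeApp()
app.connect("127.0.0.1", 7497, clientId=1)
con_thread = threading.Thread(target=app.run, daemon=True)
con_thread.start()

# ... place orders, request data, etc. ...

app.disconnect()            # closes the socket; run()'s message loop exits
con_thread.join(timeout=5)  # give the thread a moment to finish cleanly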

Related

How to exit a Python process if any parallel thread exits?

I have a python process with a main thread starting parallel threads for 1 gRPC server and 1 HTTP server. I want the OS process for this application to exit if ANY of the parallel threads exits.
I think the main thread, as coded here, will wait as long as a single parallel thread is still running. What do I need to change so that the main thread exits as soon as any parallel thread exits?
if __name__ == '__main__':
    svc = MyService()
    t1 = GrpcServer(svc)
    t1.start()
    t2 = HealthHttpServer()
    t2.start()
with the servers defined as:
class GrpcServer(threading.Thread):
    def __init__(self, service):
        super().__init__()
        self.grpcServer = grpc.server(futures.ThreadPoolExecutor(max_workers=10))
        self.grpcServer.add_insecure_port('[::]:8000')
        myservice_pb2_grpc.add_MyServiceServicer_to_server(service, self.grpcServer)

    def run(self):
        self.grpcServer.start()

class HealthHttpServer(threading.Thread):
    def __init__(self):
        super().__init__()

    def run(self):
        port = 2113
        httpd = HTTPServer(('localhost', port), HealthHTTPRequestHandler)
        httpd.serve_forever()

class HealthHTTPRequestHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        '''Respond to a GET request.'''
        if self.path == '/healthz':
            self.send_response(HTTPStatus.OK)
            self.end_headers()
            self.wfile.write(b'ok')
        else:
            self.send_response(HTTPStatus.NOT_FOUND)
            self.end_headers()
The cleanest approach I've found so far is:
- define all these threads as daemon threads
- define a global threading.Event object
- add a top-level try...finally in each thread, and call that event's set() in the finally
- in the main thread, wait on that event after the threads are started
If anything goes wrong in any of the threads, the finally block executes, signaling the event and unblocking the main thread, which then exits. Since all the other threads are daemon threads, the process exits as well.
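A minimal sketch of that pattern, with stand-in thread bodies instead of the real servers (names are illustrative, not from the question):

import threading
import time

stop_event = threading.Event()

def guarded(target):
    # Any exit from the thread body (normal return or exception) signals the event.
    def wrapper():
        try:
            target()
        finally:
            stop_event.set()
    return wrapper

def grpc_main():
    time.sleep(3)        # stand-in for the gRPC server loop

def http_main():
    while True:
        time.sleep(1)    # stand-in for serve_forever()

if __name__ == '__main__':
    for body in (grpc_main, http_main):
        threading.Thread(target=guarded(body), daemon=True).start()
    stop_event.wait()    # unblocks as soon as ANY thread exits
    # main thread returns; the remaining daemon threads don't keep the process alive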

JoinableQueue between two processes: both processes sometimes block forever

I am writing a multiprocess program. There are four classes: Main, Worker, Request and Ack. The Main class is the entry point of the program. It creates a subprocess called Worker to do some jobs. The main process puts a Request into a JoinableQueue, and the Worker gets the request from the queue. When the Worker has finished the request, it puts an Ack back onto the queue. Part of the code is shown below:
Main:
class Main():
    def __init__(self):
        self.cmd_queue = JoinableQueue()
        self.worker = Worker(self.cmd_queue)

    def call_worker(self, cmd_code):
        if self.cmd_queue.empty() is True:
            request = Request(cmd_code)
            self.cmd_queue.put(request)
            self.cmd_queue.join()
            ack = self.cmd_queue.get()
            self.cmd_queue.task_done()
            if ack.value == 0:
                return True
            else:
                return False
        else:
            # TODO: Error Handling.
            pass

    def run_worker(self):
        self.worker.start()
Worker:
class Worker(Process):
    def __init__(self, cmd_queue):
        super(Worker, self).__init__()
        self.cmd_queue = cmd_queue
        ...

    def run(self):
        while True:
            ack = Ack(0)
            try:
                request = self.cmd_queue.get()
                if request.cmd_code == ReqCmd.enable_handler:
                    self.enable_handler()
                elif request.cmd_code == ReqCmd.disable_handler:
                    self.disable_handler()
                else:
                    pass
            except Exception:
                ack.value = -1
            finally:
                self.cmd_queue.task_done()
                self.cmd_queue.put(ack)
                self.cmd_queue.join()
It usually works fine. But sometimes the Main process gets stuck at self.cmd_queue.join(), and sometimes the Worker gets stuck at self.cmd_queue.join(). It is so weird! Does anyone have any ideas? Thanks
There's nothing weird about this issue: you shouldn't call the queue's join() inside a typical single-worker processing loop, because
Queue.join()
Blocks until all items in the queue have been gotten and
processed.
Such calls, placed where they are in your current implementation, make the processing pipeline wait.
Usually queue.join() is called in the main (supervisor) thread after initiating/starting all threads/workers.
https://docs.python.org/3/library/queue.html#queue.Queue.join
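As an illustration of one deadlock-free layout (a sketch, not the asker's classes): use two plain queues, one per direction, so neither process ever calls join() on a queue the other side is also joining:

from multiprocessing import Process, Queue

def worker(requests, acks):
    while True:
        cmd = requests.get()
        if cmd is None:      # sentinel value tells the worker to shut down
            break
        # ... handle cmd ...
        acks.put(0)          # 0 = success

if __name__ == '__main__':
    requests, acks = Queue(), Queue()
    p = Process(target=worker, args=(requests, acks))
    p.start()
    requests.put('enable_handler')
    print(acks.get())        # blocks only until the worker answers this request
    requests.put(None)       # ask the worker to exit
    p.join()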

Stopping a server in a subprocess with its shutdown method

I am implementing a Server class in CPython 3.7 on Windows 10, with a Server.serve method that starts serving forever and a Server.shutdown method that stops the serving. I need to run multiple server instances in subprocesses.
Running a server instance in a subthread stops the instance as expected:
import threading
import time

class Server:
    def __init__(self):
        self.shutdown_request = False

    def serve(self):
        print("serving")
        while not self.shutdown_request:
            print("hello")
            time.sleep(1)
        print("done")

    def shutdown(self):
        print("stopping")
        self.shutdown_request = True

if __name__ == "__main__":
    server = Server()
    threading.Thread(target=server.serve).start()
    time.sleep(5)
    server.shutdown()
However, running a server instance in a subprocess does not stop the instance, unexpectedly:
import multiprocessing
import time

class Server:
    def __init__(self):
        self.shutdown_request = False

    def serve(self):
        print("serving")
        while not self.shutdown_request:
            print("hello")
            time.sleep(1)
        print("done")

    def shutdown(self):
        print("stopping")
        self.shutdown_request = True

if __name__ == "__main__":
    server = Server()
    multiprocessing.Process(target=server.serve).start()
    time.sleep(5)
    server.shutdown()
I suspect that in the multiprocessing case, the self.shutdown_request attribute is not shared between the parent process and the subprocess, and therefore the server.shutdown() call does not affect the running server instance in the subprocess.
I know I could solve this with multiprocessing.Event:
import multiprocessing
import time

class Server:
    def __init__(self, shutdown_event):
        self.shutdown_event = shutdown_event

    def serve(self):
        print("serving")
        while not self.shutdown_event.is_set():
            print("hello")
            time.sleep(1)
        print("done")

if __name__ == "__main__":
    shutdown_event = multiprocessing.Event()
    server = Server(shutdown_event)
    multiprocessing.Process(target=server.serve).start()
    time.sleep(5)
    shutdown_event.set()
But I want to keep the Server.shutdown method instead of changing the Server interface according to its usage (single processing v. multiprocessing) and I don't want clients to deal with multiprocessing.Event.
I finally figured out a solution myself:
import multiprocessing
import time

class Server:
    def __init__(self):
        self.shutdown_event = multiprocessing.Event()

    def serve(self):
        print("serving")
        while not self.shutdown_event.is_set():
            print("hello")
            time.sleep(1)
        print("done")

    def shutdown(self):
        print("stopping")
        self.shutdown_event.set()

if __name__ == "__main__":
    server = Server()
    multiprocessing.Process(target=server.serve).start()
    time.sleep(5)
    server.shutdown()
It works in either case: single processing (multithreading) and multiprocessing.
Remark: With a multiprocessing.Event() in the __init__ method, Server instances are no longer picklable. That can be a problem if one wants to run a Server instance in a process pool (either with multiprocessing.pool.Pool or concurrent.futures.ProcessPoolExecutor). In that case, one should replace multiprocessing.Event() with multiprocessing.Manager().Event() in the __init__ method.
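A sketch of that picklable variant (same toy Server as above):

import multiprocessing
import time

class Server:
    def __init__(self):
        # The manager-backed Event is a proxy object, so Server stays picklable.
        self.shutdown_event = multiprocessing.Manager().Event()

    def serve(self):
        print("serving")
        while not self.shutdown_event.is_set():
            print("hello")
            time.sleep(1)
        print("done")

    def shutdown(self):
        print("stopping")
        self.shutdown_event.set()

if __name__ == "__main__":
    server = Server()
    multiprocessing.Process(target=server.serve).start()
    time.sleep(5)
    server.shutdown()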

Terminate the main thread from another thread in Python

How do I terminate the program from a thread? I want to terminate the whole program if it cannot connect to the server and the user clicks Cancel.
class Client(tk.Frame):
    def __init__(self, master=None):
        super().__init__(master)
        t = threading.Thread(target=self.connect)
        t.setDaemon(True)
        t.start()

    def connect(self):
        try:
            r.connect("localhost", 28015)
            self.refresh_objects()
        except r.ReqlDriverError as e:
            self.db_exception_handler()

    def db_exception_handler(self):
        if tk.messagebox.askretrycancel('ERROR', 'ERROR: Unable to connect to the database.'):
            try:
                r.connect("localhost", 28015)
            except r.ReqlDriverError as e:
                self.db_exception_handler()
        else:
            root.destroy()  # I want to terminate the program here
            # I tried threading.main_thread().destroy(), but it is not working

def main():
    root = tk.Tk()
    root.title("IOT_LPS_Client")
    cln = Client(master=root)
    cln.mainloop()
    try:
        root.destroy()
    except:
        pass

if __name__ == '__main__':
    main()
I strongly suggest coding a disciplined shutdown in which the thread notifies the main thread of the need to exit, and the main thread then calls sys.exit. [Note that sys.exit kills the process only when called from the main thread.]
This blog post discusses some of the issues and solutions around this. In brief, you can use something like threading.Event() to convey the stop signal from any thread to the main thread.
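A minimal sketch of that idea for a tkinter program like the asker's (names are illustrative): the worker thread sets an Event, and the main thread polls it with after() and leaves mainloop():

import sys
import threading
import tkinter as tk

stop_event = threading.Event()

def worker():
    # ... try to connect; if the user cancels, request shutdown:
    stop_event.set()

def poll(root):
    if stop_event.is_set():
        root.destroy()               # ends mainloop() in the main thread
    else:
        root.after(200, poll, root)  # check again in 200 ms

if __name__ == '__main__':
    root = tk.Tk()
    threading.Thread(target=worker, daemon=True).start()
    root.after(200, poll, root)
    root.mainloop()
    sys.exit(0)  # sys.exit ends the whole process only from the main thread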

Interrupting a thread blocked in pika's start_consuming method

I have a thread that listens for new messages from RabbitMQ using pika. After configuring the connection using BlockingConnection, I start consuming messages through start_consuming. How can I interrupt the start_consuming call so that, for example, the thread can be stopped gracefully?
You can use the consume generator instead of start_consuming:
import threading
import pika

class WorkerThread(threading.Thread):
    def __init__(self):
        super(WorkerThread, self).__init__()
        self._is_interrupted = False

    def stop(self):
        self._is_interrupted = True

    def run(self):
        connection = pika.BlockingConnection(pika.ConnectionParameters())
        channel = connection.channel()
        channel.queue_declare("queue")
        for message in channel.consume("queue", inactivity_timeout=1):
            if self._is_interrupted:
                break
            if not message:
                continue
            method, properties, body = message
            print(body)

def main():
    thread = WorkerThread()
    thread.start()
    # some main thread activity ...
    thread.stop()
    thread.join()

if __name__ == "__main__":
    main()
