Stopping a server in a subprocess with its shutdown method - python

I am implementing a Server class in CPython 3.7 on Windows 10, with a Server.serve method that starts serving forever and a Server.shutdown method that stops the serving. I need to run multiple server instances in subprocesses.
Running a server instance in a subthread stops the instance as expected:
import threading
import time

class Server:
    def __init__(self):
        self.shutdown_request = False

    def serve(self):
        print("serving")
        while not self.shutdown_request:
            print("hello")
            time.sleep(1)
        print("done")

    def shutdown(self):
        print("stopping")
        self.shutdown_request = True

if __name__ == "__main__":
    server = Server()
    threading.Thread(target=server.serve).start()
    time.sleep(5)
    server.shutdown()
However, running a server instance in a subprocess unexpectedly does not stop the instance:
import multiprocessing
import time

class Server:
    def __init__(self):
        self.shutdown_request = False

    def serve(self):
        print("serving")
        while not self.shutdown_request:
            print("hello")
            time.sleep(1)
        print("done")

    def shutdown(self):
        print("stopping")
        self.shutdown_request = True

if __name__ == "__main__":
    server = Server()
    multiprocessing.Process(target=server.serve).start()
    time.sleep(5)
    server.shutdown()
I suspect that in the multiprocessing case, the self.shutdown_request attribute is not shared between the parent process and the subprocess, and therefore the server.shutdown() call does not affect the running server instance in the subprocess.
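For example, a minimal sketch (illustrative only; the names here are made up) makes the non-sharing visible: the child process works on its own copy of the object, so a change made in the parent after start() is never seen by the child:

import multiprocessing
import time

class Probe:
    def __init__(self):
        self.flag = False

    def report(self):
        time.sleep(1)
        # Runs in the child process and prints the child's own copy of the attribute.
        print("child sees flag =", self.flag)

if __name__ == "__main__":
    probe = Probe()
    multiprocessing.Process(target=probe.report).start()
    probe.flag = True  # mutates only the parent's copy
    print("parent sees flag =", probe.flag)  # True
    # one second later the child still prints: child sees flag = False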
I know I could solve this with multiprocessing.Event:
import multiprocessing
import time

class Server:
    def __init__(self, shutdown_event):
        self.shutdown_event = shutdown_event

    def serve(self):
        print("serving")
        while not self.shutdown_event.is_set():
            print("hello")
            time.sleep(1)
        print("done")

if __name__ == "__main__":
    shutdown_event = multiprocessing.Event()
    server = Server(shutdown_event)
    multiprocessing.Process(target=server.serve).start()
    time.sleep(5)
    shutdown_event.set()
But I want to keep the Server.shutdown method rather than change the Server interface depending on how it is used (single-process vs. multiprocessing), and I don't want clients to have to deal with multiprocessing.Event.

I have finally figured out a solution by myself:
import multiprocessing
import time

class Server:
    def __init__(self):
        self.shutdown_event = multiprocessing.Event()

    def serve(self):
        print("serving")
        while not self.shutdown_event.is_set():
            print("hello")
            time.sleep(1)
        print("done")

    def shutdown(self):
        print("stopping")
        self.shutdown_event.set()

if __name__ == "__main__":
    server = Server()
    multiprocessing.Process(target=server.serve).start()
    time.sleep(5)
    server.shutdown()
It works in both cases: single-process (multithreading) and multiprocessing.
Remark. — With a multiprocessing.Event() in the __init__ method, Server instances are no longer picklable. That can be a problem if one wants to run a Server instance in a process pool (either with multiprocessing.pool.Pool or concurrent.futures.ProcessPoolExecutor). In that case, one should replace multiprocessing.Event() with multiprocessing.Manager().Event() in the __init__ method.
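A minimal sketch of that picklable variant (illustrative; the pool usage and timings are assumptions, not from the original post):

import concurrent.futures
import multiprocessing
import time

class Server:
    def __init__(self):
        # A Manager-backed Event is a proxy object, so Server instances stay picklable.
        self.shutdown_event = multiprocessing.Manager().Event()

    def serve(self):
        print("serving")
        while not self.shutdown_event.is_set():
            print("hello")
            time.sleep(1)
        print("done")

    def shutdown(self):
        print("stopping")
        self.shutdown_event.set()

if __name__ == "__main__":
    server = Server()
    with concurrent.futures.ProcessPoolExecutor() as executor:
        future = executor.submit(server.serve)  # requires a picklable Server
        time.sleep(5)
        server.shutdown()
        future.result()  # wait for the worker to finish cleanly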

Related

How to kill a thread in python with blocking command?

I want to use Interactive Brokers' API, which opens a TCP connection on a different thread.
The problem is that app.run(), which has to be called to establish the TCP connection, apparently processes its queue in a while True loop, which blocks every way I know of to terminate the thread when exiting the program.
import threading

from ibapi.client import EClient
from ibapi.wrapper import EWrapper

class TradeApp(EWrapper, EClient):
    def __init__(self):
        EClient.__init__(self, self)
        #...

def websocket_con():
    app.run()

app = TradeApp()
app.connect("127.0.0.1", 7497, clientId=1)

con_thread = threading.Thread(target=websocket_con, daemon=True)
con_thread.start()
I've tried using a daemon thread and a plain one.
I've also tried using events.
But whatever I do, the thread never seems to exit from the app.run() call.
import threading

from ibapi.client import EClient
from ibapi.wrapper import EWrapper

class TradingApp(EWrapper, EClient):
    def __init__(self):
        EClient.__init__(self, self)
        #...

def websocket_conn():
    app.run()
    event.wait()
    if event.is_set():
        app.close()

event = threading.Event()
app = TradingApp()
app.connect("127.0.0.1", 7497, clientId=1)

conn_thread = threading.Thread(target=websocket_conn)
conn_thread.start()
#...
event.set()
Am I doing something wrong?
How could I exit from app.run() function?
I think you've misunderstood how this executes: when websocket_conn() runs in the new thread, it is executed step by step, so event.wait() and if event.is_set(): are not reached until app.run() has returned.
Try something like this:
conn_thread = threading.Thread(target=websocket_conn)
conn_thread.start()
event.wait()
if event.is_set():
    conn_thread.kill()
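Note that threading.Thread has no kill() method, so in practice the blocking call itself has to be unblocked. A minimal sketch of a cooperative variant, under the assumption that ibapi's EClient.disconnect() closes the connection and makes EClient.run() return (verify against your ibapi version):

import threading
import time

from ibapi.client import EClient
from ibapi.wrapper import EWrapper

class TradingApp(EWrapper, EClient):
    def __init__(self):
        EClient.__init__(self, self)

app = TradingApp()
app.connect("127.0.0.1", 7497, clientId=1)

# run() blocks inside this thread until the client is disconnected.
conn_thread = threading.Thread(target=app.run, daemon=True)
conn_thread.start()

time.sleep(5)  # ... main program work ...

app.disconnect()              # closes the connection; run() should then return
conn_thread.join(timeout=5)   # wait briefly for the thread to exit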

Python Quart Unable to shutdown background task

I am working on a Python app, but I am moving from Flask to Quart. The application needs a background task that runs constantly whilst the application is running.
When I try to stop the process with Ctrl-C, the thread doesn't close cleanly and sits in the while loop of the shutdown routine:
while not self._master_thread_class.shutdown_completed:
    if not pro:
        print('[DEBUG] Thread is not complete')
        pro = True
I have followed this Stack Overflow question, but I can't figure out how to cleanly shut down the background thread, so I would love an explanation, as the Quart documentation seems a bit lacking here.
MasterThread class:
import asyncio

class MasterThread:
    def __init__(self, shutdown_requested_event):
        self._shutdown_completed = False
        self._shutdown_requested_event = shutdown_requested_event
        self._shutdown_requested = False

    def __del__(self):
        print('Thread was deleted')

    def run(self, loop) -> None:
        asyncio.set_event_loop(loop)
        loop.run_until_complete(self._async_entrypoint())

    @asyncio.coroutine
    def _async_entrypoint(self) -> None:
        while not self._shutdown_requested and \
                not self._shutdown_requested_event.isSet():
            #print('_main_loop()')
            pass
            if self._shutdown_requested_event.wait(0.1):
                self._shutdown_requested = True
        print('[DEBUG] thread has completed....')
        self._shutdown_completed = True

    def _main_loop(self) -> None:
        print('_main_loop()')
Main application module:
import asyncio
import threading

from quart import Quart

from workthr import MasterThread

app = Quart(__name__)

class Service:
    def __init__(self):
        self._shutdown_thread_event = threading.Event()
        self._master_thread = MasterThread(self._shutdown_thread_event)
        self._thread = None

    def __del__(self):
        self.stop()

    def start(self):
        loop = asyncio.get_event_loop()
        self._thread = threading.Thread(target=self._master_thread.run, args=(loop,))
        self._thread.start()
        return True

    def stop(self) -> None:
        print('[DEBUG] Stop signal caught...')
        self._shutdown_thread_event.set()
        while not self._master_thread.shutdown_completed:
            print('[DEBUG] Thread is not complete')
        print('[DEBUG] Thread has completed')
        self._shutdown()

    def _shutdown(self):
        print('Shutting down...')

service = Service()
service.start()
Quart has startup and shutdown hooks that allow something to be started before the server starts serving and stopped after the server finishes serving. If your background task is mostly IO-bound, I'd recommend just using a coroutine function rather than a thread:
async def background_task():
    while True:
        ...

@app.before_serving
async def startup():
    app.background_task = asyncio.ensure_future(background_task())

@app.after_serving
async def shutdown():
    app.background_task.cancel()  # or use a variable in the while loop
Or you can do the same with your Service,
@app.before_serving
async def startup():
    service.start()

@app.after_serving
async def shutdown():
    service.stop()
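Putting the coroutine-based approach together, a minimal runnable sketch (illustrative; the route and prints are placeholders) in which the task catches asyncio.CancelledError so it can clean up before exiting:

import asyncio

from quart import Quart

app = Quart(__name__)

async def background_task():
    try:
        while True:
            print("working...")
            await asyncio.sleep(1)
    except asyncio.CancelledError:
        # Raised when the task is cancelled at shutdown; clean up here.
        print("background task stopped")
        raise

@app.before_serving
async def startup():
    app.background_task = asyncio.ensure_future(background_task())

@app.after_serving
async def shutdown():
    app.background_task.cancel()

@app.route("/")
async def index():
    return "Hello"

if __name__ == "__main__":
    app.run()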

CherryPy waits for extra thread to end that is stopped later

I am building an application that uses CherryPy to serve a REST API, plus another thread that does background work (in fact, it reads data from a serial port).
import cherrypy
import threading

class main:
    @cherrypy.expose
    def index(self):
        return "Hello World."

def run():
    while running == True:
        # read data from serial port and store in a variable
        pass

running = True
t = threading.Thread(target=run)
t.start()

if __name__ == '__main__':
    cherrypy.quickstart(main())
    running = False
Both api.pc_main() and run work fine. The trouble is that I use a running boolean to stop my thread, but that piece of code is never reached, because CherryPy waits for the thread to finish when I press Ctrl-C. I actually have to use kill -9 to stop the process.
I fixed it by making my thread a CherryPy plugin. I used the code found here: Why is CTRL-C not captured and signal_handler called?
import threading
import time

from cherrypy.process.plugins import SimplePlugin

class myplugin(SimplePlugin):
    running = False
    thread = None

    def __init__(self, bus):
        SimplePlugin.__init__(self, bus)

    def start(self):
        print("Starting thread.")
        self.running = True
        if not self.thread:
            self.thread = threading.Thread(target=self.run)
            self.thread.start()

    def stop(self):
        print("Stopping thread.")
        self.running = False
        if self.thread:
            self.thread.join()
            self.thread = None

    def run(self):
        while self.running == True:
            print("Thread runs.")
            time.sleep(1)
then in the main script:
if __name__ == '__main__':
    myplugin(cherrypy.engine).subscribe()
    cherrypy.quickstart(main())
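As a lighter-weight variant (a sketch, not from the original answer), CherryPy's bus also accepts plain callables subscribed to its 'start' and 'stop' channels, which can be enough when a full plugin class isn't needed:

import threading
import time

import cherrypy

class Root:
    @cherrypy.expose
    def index(self):
        return "Hello World."

running = threading.Event()

def worker():
    # Background work loop; exits shortly after the 'stop' channel fires.
    while running.is_set():
        # read data from the serial port here
        time.sleep(1)

def start_worker():
    running.set()
    threading.Thread(target=worker).start()

def stop_worker():
    running.clear()

if __name__ == '__main__':
    cherrypy.engine.subscribe('start', start_worker)
    cherrypy.engine.subscribe('stop', stop_worker)
    cherrypy.quickstart(Root())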

How to stop a simplehttpserver in python from httprequest handler?

I am new to Python and wrote a simple HTTP server. I am trying to shut the server down from a request it handles. How can I call a method of the server from the handler?
import threading
from http.server import HTTPServer, SimpleHTTPRequestHandler

class MyHandler(SimpleHTTPRequestHandler):
    def do_GET(self):
        if self.path == '/shutdown':
            pass  # I want to call MainServer.shutdown from here

class MainServer:
    def __init__(self, port=8123):
        self._server = HTTPServer(('0.0.0.0', port), MyHandler)
        self._thread = threading.Thread(target=self._server.serve_forever)
        self._thread.daemon = True

    def start(self):
        self._thread.start()

    def shut_down(self):
        self._thread.close()
In short, do not use server.serve_forever(...). The request handler has a self.server attribute that you can use to communicate with the main server instance and set some sort of flag that tells the server when to stop.
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class MyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == '/shutdown':
            self.server.running = False

class MainServer:
    def __init__(self, port=8123):
        self._server = HTTPServer(('0.0.0.0', port), MyHandler)
        self._thread = threading.Thread(target=self.run)
        self._thread.daemon = True

    def run(self):
        self._server.running = True
        while self._server.running:
            self._server.handle_request()

    def start(self):
        self._thread.start()

    def shut_down(self):
        self._thread.close()

m = MainServer()
m.start()
The server is normally accessible from the handler through the server attribute. An HTTPServer that was started with serve_forever can be shut down with its... shutdown() method. Unfortunately, even though it is not documented, you cannot call shutdown from the thread that runs the server loop, because it causes a deadlock. So you could write this in your do_GET handler method:
def do_GET(self):
    # send something to the requester...
    if self.path == '/shutdown':
        t = threading.Thread(target=self.server.shutdown)
        t.daemon = True
        t.start()
This lets the thread terminate cleanly, and you should also use it for your server's shut_down method, because Python threads cannot be closed abruptly:
def shut_down(self):
    self._server.shutdown()
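A minimal sketch combining both pieces of advice (illustrative only; the port and the /shutdown path match the question):

import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class MyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"OK\n")
        if self.path == '/shutdown':
            # shutdown() must not run on the serving thread itself,
            # so hand it off to a short-lived helper thread.
            threading.Thread(target=self.server.shutdown, daemon=True).start()

class MainServer:
    def __init__(self, port=8123):
        self._server = HTTPServer(('0.0.0.0', port), MyHandler)
        self._thread = threading.Thread(target=self._server.serve_forever, daemon=True)

    def start(self):
        self._thread.start()

    def shut_down(self):
        self._server.shutdown()
        self._thread.join()

if __name__ == '__main__':
    m = MainServer()
    m.start()
    m._thread.join()  # block until a GET /shutdown request arrives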

interrupt thread with start_consuming method of pika

I have a thread that listens for new messages from RabbitMQ using pika. After configuring the connection using BlockingConnection, I start consuming messages through start_consuming. How can I interrupt the start_consuming call to, for example, stop the thread gracefully?
You can use the consume generator instead of start_consuming.
import threading

import pika

class WorkerThread(threading.Thread):
    def __init__(self):
        super(WorkerThread, self).__init__()
        self._is_interrupted = False

    def stop(self):
        self._is_interrupted = True

    def run(self):
        connection = pika.BlockingConnection(pika.ConnectionParameters())
        channel = connection.channel()
        channel.queue_declare("queue")
        for message in channel.consume("queue", inactivity_timeout=1):
            if self._is_interrupted:
                break
            if not message:
                continue
            method, properties, body = message
            print(body)

def main():
    thread = WorkerThread()
    thread.start()
    # some main thread activity ...
    thread.stop()
    thread.join()

if __name__ == "__main__":
    main()
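One possible refinement (an assumption based on pika's BlockingChannel API rather than the original answer, so verify against your pika version): after leaving the generator, cancel the consumer and close the connection so the broker stops delivering messages. For example:

import threading

import pika

class WorkerThread(threading.Thread):
    def __init__(self):
        super().__init__()
        self._is_interrupted = False

    def stop(self):
        self._is_interrupted = True

    def run(self):
        connection = pika.BlockingConnection(pika.ConnectionParameters())
        channel = connection.channel()
        channel.queue_declare("queue")
        try:
            # inactivity_timeout makes the generator yield periodically even
            # when no message arrives, so the stop flag gets checked.
            for message in channel.consume("queue", inactivity_timeout=1):
                if self._is_interrupted:
                    break
                if not message:
                    continue
                method, properties, body = message
                print(body)
        finally:
            channel.cancel()    # cancel the consumer created by consume()
            connection.close()  # close the connection cleanly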
