Gracefully stopping greenlets in Windows service - python

Using pywin32 and gevent, I'm creating a Windows service that serves two functions:
It runs a web server for a simple web application (using bottle's gevent server adapter, which runs WSGIServer's serve_forever()).
It listens for incoming SIP calls (using a gevent-based SIP client) and runs some simple code to respond to calls.
I'd like the service to just keep the web server and SIP client running forever, but stop both immediately and gracefully if I try to stop the Windows service. This seems like it should be pretty simple.
What I'm currently doing to run the app is basically to run the web server and SIP client each in a greenlet, and run kill on both greenlets when I want to stop the app (simplified mockup):
from bottle import run
from mysipclient import SipClient
import gevent
import gevent.event

def sip_listen():
    client = SipClient()
    try:
        client.wait()  # This method blocks on a gevent queue.get call
    finally:
        client.close()  # This does some cleanup, like deregistering from the SIP server, that I really want to run when the service stops!

class App(object):
    def start(self):
        self.stop_event = gevent.event.Event()
        self.server_greenlet = gevent.spawn(run, server='gevent', host='0.0.0.0', port=8080)
        self.sip_greenlet = gevent.spawn(sip_listen)
        gevent.joinall([self.server_greenlet, self.sip_greenlet])

    def stop(self):
        self.server_greenlet.kill()
        self.sip_greenlet.kill()

if __name__ == "__main__":
    app = App()
    gevent.spawn_later(10, app.stop)
    app.start()
If I run this from the command line, it works great: it starts the app with both greenlets working, then ten seconds later it shuts itself down, running the cleanup code for the SIP client and all.
Now, though, I try to make this into a Windows service, using pywin32's win32serviceutil:
import sys

import win32serviceutil
import win32service
import win32event
import servicemanager

from app import App

class TestService(win32serviceutil.ServiceFramework):
    _svc_name_ = 'TestService'
    _svc_display_name_ = 'TestService'

    def __init__(self, args):
        win32serviceutil.ServiceFramework.__init__(self, args)
        self.service_obj = App()

    def SvcStop(self):
        self.ReportServiceStatus(win32service.SERVICE_STOP_PENDING)
        self.service_obj.stop()

    def SvcDoRun(self):
        servicemanager.LogMsg(servicemanager.EVENTLOG_INFORMATION_TYPE, servicemanager.PYS_SERVICE_STARTED, (self._svc_name_, ''))
        self.service_obj.start()

if __name__ == '__main__':
    if len(sys.argv) == 1:
        servicemanager.Initialize()
        servicemanager.PrepareToHostSingle(TestService)
        servicemanager.StartServiceCtrlDispatcher()
    else:
        win32serviceutil.HandleCommandLine(TestService)
When I install this as a service and run it, I get an exception when trying to kill the greenlets: LoopExit: This operation would block forever. Then the service fails to stop, and I have to kill it manually. (I can avoid this by catching the exception, using an event instead of joining the greenlets and setting the event on stop - but this means the cleanup doesn't run at all.)
I'm pretty new to both gevent and Windows services, and Google hasn't been terribly helpful. I thought the difference might be that in the command-line version stop runs in another greenlet, so I tried replacing self.service_obj.stop() in SvcStop with gevent.spawn(self.service_obj.stop).join(), but that way it doesn't even throw the exception; it just hangs until I kill the process.
What's going on here? Am I doing something fundamentally wrong? How do I stop the greenlets gracefully on SvcStop?

You need to trap the signal and shut down from the handler:
import signal  # stdlib module, for the signal numbers
import sys

import gevent

def shutdown():
    print('Shutting down ...')
    server.stop(timeout=60)  # `server` comes from the server startup code below
    sys.exit(signal.SIGTERM)

# gevent >= 1.5 spells this gevent.signal_handler; older releases use gevent.signal
gevent.signal_handler(signal.SIGTERM, shutdown)
gevent.signal_handler(signal.SIGINT, shutdown)  # Ctrl+C
if hasattr(signal, 'SIGQUIT'):  # SIGQUIT does not exist on Windows
    gevent.signal_handler(signal.SIGQUIT, shutdown)

# ... start server code
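Note that SvcStop runs on the Windows service control thread, not in the thread that is running the gevent hub, which is the likely reason kill() raises LoopExit there. As a minimal sketch of an alternative (not from the answer above, and assuming a recent gevent that exposes a thread-safe async watcher via loop.async_()): wake the hub from SvcStop and perform the kill() calls inside the hub's own thread, so the finally blocks (the SIP cleanup) still run.
import gevent

def worker():
    # stands in for serve_forever() / client.wait() from the question
    try:
        while True:
            gevent.sleep(1)
    finally:
        print("cleanup runs here")  # e.g. deregistering from the SIP server

class App(object):
    def start(self):
        hub = gevent.get_hub()
        self._stop_watcher = hub.loop.async_()          # thread-safe wake-up
        self._stop_watcher.start(self._kill_greenlets)  # callback runs in the hub's thread
        self.greenlets = [gevent.spawn(worker), gevent.spawn(worker)]
        gevent.joinall(self.greenlets)

    def _kill_greenlets(self):
        # block=False so the hub callback does not block; kill() raises
        # GreenletExit inside each greenlet, so the finally blocks still run
        gevent.killall(self.greenlets, block=False)

    def stop(self):
        # safe to call from SvcStop's thread: send() only signals the watcher
        self._stop_watcher.send()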

Related

Python - How to use FastAPI and uvicorn.run without blocking the thread?

I'm looking for a way to use uvicorn.run() with a FastAPI app without uvicorn.run() blocking the thread. I already tried to use processes, subprocesses and threads, but nothing worked.
My problem is that I want to start the server from another process that should go on with other tasks after starting the server. Additionally I have problems closing the server like this from another process.
Does anyone have an idea how to run uvicorn.run() without blocking and how to stop it from another process?
The approach given by @HadiAlqattan will not work because uvicorn.run expects to be run in the main thread. Errors such as "signal only works in main thread" will be raised.
The correct approach is:
import contextlib
import time
import threading

import uvicorn

class Server(uvicorn.Server):
    def install_signal_handlers(self):
        pass

    @contextlib.contextmanager
    def run_in_thread(self):
        thread = threading.Thread(target=self.run)
        thread.start()
        try:
            while not self.started:
                time.sleep(1e-3)
            yield
        finally:
            self.should_exit = True
            thread.join()

config = uvicorn.Config("example:app", host="127.0.0.1", port=5000, log_level="info")
server = Server(config=config)

with server.run_in_thread():
    # Server is started.
    ...
    # Server will be stopped once the code put here has completed.
    ...
# Server stopped.
Very handy to run a live test server locally using a pytest fixture:
# conftest.py
import pytest

@pytest.fixture(scope="session")
def server():
    server = ...
    with server.run_in_thread():
        yield
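A hypothetical test that exercises the live server through that fixture might look like this (assuming the requests package is installed and that the app behind the fixture serves something at the root path on port 5000):
# test_live_server.py (illustrative)
import requests

def test_live_root(server):
    response = requests.get("http://127.0.0.1:5000/")
    assert response.status_code == 200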
Credits: uvicorn#742 by florimondmanca
This is an alternate version which works and was inspired by Aponace uvicorn#1103. The uvicorn maintainers want more community engagement with this issue, so if you are experiencing it, please join the conversation.
Example conftest.py file.
import multiprocessing

import pytest
from fastapi.testclient import TestClient
from uvicorn import Config, Server

from app.main import app

class UvicornServer(multiprocessing.Process):
    def __init__(self, config: Config):
        super().__init__()
        self.server = Server(config=config)
        self.config = config

    def stop(self):
        self.terminate()

    def run(self, *args, **kwargs):
        self.server.run()

@pytest.fixture(scope="session")
def server():
    config = Config("app.main:app", host="127.0.0.1", port=5000, log_level="debug")
    instance = UvicornServer(config=config)
    instance.start()
    yield instance
    instance.stop()

@pytest.fixture(scope="module")
def mock_app(server):
    client = TestClient(app)
    yield client
Example test_app.py file.
def test_root(mock_app):
    response = mock_app.get("")
    assert response.status_code == 200
When I set reload to False, FastAPI will start a multi-process web service. If it is True, there will only be one process for the web service:
import os
from multiprocessing import cpu_count

import uvicorn
from fastapi import FastAPI, APIRouter

router = APIRouter()
app = FastAPI()

@router.post("/test")
async def detect_img():
    print("pid:{}".format(os.getpid()))
    return os.getpid()

if __name__ == '__main__':
    app.include_router(router)
    print("CPU count: {}".format(cpu_count()))
    workers = 2 * cpu_count() + 1
    print("workers: {}".format(workers))
    reload = False
    # reload = True
    uvicorn.run("__main__:app", host="0.0.0.0", port=8082, reload=reload, workers=workers, timeout_keep_alive=5,
                limit_concurrency=100)
According to the Uvicorn documentation there is no programmatic way to stop the server.
Instead, you can officially stop the server only by pressing Ctrl+C.
But I have a trick to solve this problem programmatically using the multiprocessing standard lib with these three simple functions:
A run function to run the server.
A start function to start a new process (start the server).
A stop function to join the process (stop the server).
from multiprocessing import Process

import uvicorn

# global process variable
proc = None

def run():
    """
    This function runs the configured uvicorn server.
    """
    uvicorn.run(app=app, host=host, port=port)

def start():
    """
    This function starts a new process (starts the server).
    """
    global proc
    # create a process instance and set the target to the run function.
    # use daemon mode to stop the process whenever the program is stopped.
    proc = Process(target=run, args=(), daemon=True)
    proc.start()

def stop():
    """
    This function joins (stops) the process (stops the server).
    """
    global proc
    # check that the process is not None
    if proc:
        # join (stop) the process with a timeout set to 0.25 seconds.
        # using the timeout (the optional arg) is important in order to
        # enforce that the server stops.
        proc.join(0.25)
With the same idea you can:
use the threading standard lib instead of the multiprocessing standard lib.
refactor these functions into a class (see the sketch after the usage example below).
Example of usage:
from time import sleep

if __name__ == "__main__":
    # to start the server, call the start function.
    start()
    # run some code ....
    # to stop the server, call the stop function.
    stop()
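As a rough sketch of the class-based refactor mentioned above (the class name is illustrative, and the app is passed as an import string, e.g. "example:app", so the child process can load it itself):
from multiprocessing import Process

import uvicorn

class UvicornProcess:
    """Illustrative wrapper around the run/start/stop functions above."""

    def __init__(self, app_ref="example:app", host="127.0.0.1", port=8000):
        self.app_ref = app_ref
        self.host = host
        self.port = port
        self.proc = None

    def _run(self):
        # same role as the run() function above
        uvicorn.run(self.app_ref, host=self.host, port=self.port)

    def start(self):
        # daemon mode so the server process stops whenever the program stops
        self.proc = Process(target=self._run, daemon=True)
        self.proc.start()

    def stop(self):
        if self.proc:
            # join with a short timeout, mirroring the stop() function above
            self.proc.join(0.25)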
You can read more about:
Uvicorn server.
multiprocessing standard lib.
threading standard lib.
Concurrency, to learn more about multiprocessing and threading in Python.

Create process in tornado web server

I have a multiprocessing tornado web server and I want to create another process that will do some things in the background.
I have a server with the following code:
start_background_process()

app = Application([<someurls>])
server = HTTPServer(app)
server.bind(8888)
server.start(4)  # Forks multiple sub-processes
IOLoop.current().start()

def start_background_process():
    process = multiprocessing.Process(target=somefunc)
    process.start()
and everything is working great.
However, when I try to close the server (by Ctrl+C or by sending a signal),
I get AssertionError: can only join a child process.
I understand the cause of this problem:
when I create a process with multiprocessing, a call to the process's join method is registered in atexit, and because tornado does a simple fork, all its children also call the join method of the process I created, and they can't, since that process is their sibling and not their child.
So how can I open a process normally in tornado?
"HTTPTserver start" uses os.fork to fork the 4 sub-processes as it can be seen in its source code.
If you want your method to be executed by all the 4 sub-processes, you have to call it after the processes have been forked.
Having that in mind your code can be changed to look as below:
import multiprocessing

import tornado.web
from tornado.httpserver import HTTPServer
from tornado.ioloop import IOLoop

# A simple external handler as an example for completion
from handlers.index import IndexHandler

def method_on_sub_process():
    print("Executing in sub-process")

def start_background_process():
    process = multiprocessing.Process(target=method_on_sub_process)
    process.start()

def main():
    app = tornado.web.Application([(r"/", IndexHandler)])
    server = HTTPServer(app)
    server.bind(8888)
    server.start(4)
    start_background_process()
    IOLoop.current().start()

if __name__ == "__main__":
    main()
Furthermore, to keep the behavior of your program clean during any keyboard interruption, surround the instantiation of the server with a try...except clause as below:
def main():
    try:
        app = tornado.web.Application([(r"/", IndexHandler)])
        server = HTTPServer(app)
        server.bind(8888)
        server.start(4)
        start_background_process()
        IOLoop.current().start()
    except KeyboardInterrupt:
        IOLoop.instance().stop()

how to write a multithread kivy game(on rasp Pi) that can listen to a port at the same time

I am writing a remote-control snake game on a Raspberry Pi using Kivy (output to the 7" display).
The socket is supposed to listen to the port while the game is running.
However, it turns out that the game loop and socketIO's wait loop cannot run together. I tried multithreading, but it didn't work as expected.
Code for socketIO:
import re

from socketIO_client import SocketIO, BaseNamespace

class Namespace(BaseNamespace):
    def on_connect(self):
        print('[Connected]')

    def on_message(self, packet):
        print packet
        self.get_data(packet)

    def get_data(self, packet):
        if(type(packet) is str):
            matches = re.findall(PATTERN, packet)
            if(matches[0][0] == '2'):
                dataMatches = re.findall(DATAPATTERN, matches[0][4])
                print dataMatches
                ......
Code for main that definitely does not work:
if __name__ == '__main__':
    MyKeyboardListener()  # keyboard listener, works fine
    SnakeApp().run()
    socketIO = SocketIO('10.0.0.4', 8080, Namespace)
    socketIO.wait()
I tried the following multithreading, but it didn't work:
if __name__ == '__main__':
    MyKeyboardListener()  # keyboard listener, works fine
    threading.Thread(target=SnakeApp().run).start()  # results in abort
    socketIO = SocketIO('10.0.0.4', 8080, Namespace)
    socketIO.wait()
The above code makes the program abort with the error message: "Fatal Python error: (pygame parachute) Segmentation Fault
Aborted"
I also tried another multithreading method, but it didn't work either. This is really frustrating. Is there any way to let the game loop and socketIO's wait loop run at the same time? Or did I just miss something?
UPDATE: working code for main:
def connect_socket():
    socketIO = SocketIO('10.0.0.4', 8080, Namespace)
    socketIO.wait()

if __name__ == '__main__':
    MyKeyboardListener()  # keyboard listener, works fine
    socketThread = threading.Thread(target=connect_socket)  # create thread for the socket
    socketThread.daemon = True  # set daemon flag
    socketThread.start()
    SnakeApp().run()
You should run the Kivy main loop in the primary thread, and the socket listening in a secondary thread (the reverse of your second try that didn't work).
But it will leave your app hanging when you simply close it, because the secondary thread will keep it alive despite the primary thread being dead.
The easiest solution to this problem is to start the secondary thread with a daemon = True flag, so it will be killed as soon as the primary thread is dead.

Stopping a tornado application

Let's take the hello world application in the Tornado home page:
import tornado.ioloop
import tornado.web

class MainHandler(tornado.web.RequestHandler):
    def get(self):
        self.write("Hello, world")

application = tornado.web.Application([
    (r"/", MainHandler),
])

if __name__ == "__main__":
    application.listen(8888)
    tornado.ioloop.IOLoop.instance().start()
Is there a way, after the IOLoop has been started and without stopping it, to essentially stop the application and start another one (on the same port or on another)?
I saw that I can add new applications (listening on different ports) at runtime, but I do not know how I could stop existing ones.
The Application.listen() method actually creates an HTTPServer and calls its listen() method. HTTPServer objects have a stop() method, which is probably what you need. But in order to use it you have to explicitly create the HTTPServer object in your script:
from tornado.httpserver import HTTPServer

server = HTTPServer(application)
server.listen(8888)
tornado.ioloop.IOLoop.instance().start()

# somewhere in your code
server.stop()
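Building on that, a minimal sketch of swapping applications at runtime, without stopping the IOLoop, might look like this (handler names and the 10-second delay are illustrative):
import tornado.ioloop
import tornado.web
from tornado.httpserver import HTTPServer

class FirstHandler(tornado.web.RequestHandler):
    def get(self):
        self.write("first app")

class SecondHandler(tornado.web.RequestHandler):
    def get(self):
        self.write("second app")

server = HTTPServer(tornado.web.Application([(r"/", FirstHandler)]))
server.listen(8888)

def swap():
    # stop the first server's listening sockets, then start a second
    # application on the same port, while the IOLoop keeps running
    server.stop()
    new_server = HTTPServer(tornado.web.Application([(r"/", SecondHandler)]))
    new_server.listen(8888)

tornado.ioloop.IOLoop.current().call_later(10, swap)
tornado.ioloop.IOLoop.current().start()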
Here is a gist about how to gracefully and safely shut down the Tornado IOLoop:
https://gist.github.com/nicky-zs/6304878
You can refer to this implementation to achieve your goal.
To add to @Alex Shkop's answer a few years later: as of Tornado 4.3, .listen() returns a reference to its HTTPServer!
https://www.tornadoweb.org/en/stable/web.html#tornado.web.Application.listen
server = app.listen()
... # later
server.stop()
Further, if you're working in a Jupyter notebook and for some reason need a Tornado server, you can try to close the HTTPServer before you recreate it, to avoid OSError: [Errno 98] Address already in use when re-running the cell:
# some Jupyter cell
#
import tornado.web

try:
    server.stop()  # NameError on first cell run
except Exception as ex:
    print(f"server not started to stop: {repr(ex)}")
else:  # did not raise NameError: server was running
    print(f"successfully stopped server: {server}")

app = tornado.web.Application(...)
server = app.listen(9006)  # arbitrary listening port

CherryPy interferes with Twisted shutting down on Windows

I've got an application that runs Twisted by starting the reactor with reactor.run() in my main thread after starting some other threads, including the CherryPy web server. Here's a program that shuts down cleanly when Ctrl+C is pressed on Linux but not on Windows:
from threading import Thread
from signal import signal, SIGINT

import cherrypy
from twisted.internet import reactor
from twisted.web.client import getPage

def stop(signum, frame):
    cherrypy.engine.exit()
    reactor.callFromThread(reactor.stop)

signal(SIGINT, stop)

class Root:
    @cherrypy.expose
    def index(self):
        reactor.callFromThread(kickoff)
        return "Hello World!"

cherrypy.server.socket_host = "0.0.0.0"
Thread(target=cherrypy.quickstart, args=[Root()]).start()

def print_page(html):
    print(html)

def kickoff():
    getPage("http://acpstats/account/login").addCallback(print_page)

reactor.run()
I believe that CherryPy is the culprit here, because here's a different program that I wrote without CherryPy that does shutdown cleanly on both Linux and Windows when Ctrl+C is pressed:
from time import sleep
from threading import Thread
from signal import signal, SIGINT

from twisted.internet import reactor
from twisted.web.client import getPage

keep_going = True

def stop(signum, frame):
    global keep_going
    keep_going = False
    reactor.callFromThread(reactor.stop)

signal(SIGINT, stop)

def print_page(html):
    print(html)

def kickoff():
    getPage("http://acpstats/account/login").addCallback(print_page)

def periodic_downloader():
    while keep_going:
        reactor.callFromThread(kickoff)
        sleep(5)

Thread(target=periodic_downloader).start()
reactor.run()
Does anyone have any idea what the problem is? Here's my conundrum:
On Linux everything works
On Windows, I can call functions from signal handlers using reactor.callFromThread when CherryPy is not running
When CherryPy is running, no function that I call using reactor.callFromThread from a signal handler will ever execute (I've verified that the signal handler itself does get called)
What can I do about this? How can I shut down Twisted on Windows from a signal handler while running CherryPy? Is this a bug, or have I simply missed some important part of the documentation for either of these two projects?
CherryPy handles signals by default when you call quickstart. In your case, you should probably just unroll quickstart, which is only a few lines, and pick and choose. Here's basically what quickstart does in trunk:
if config:
    cherrypy.config.update(config)

tree.mount(root, script_name, config)

if hasattr(engine, "signal_handler"):
    engine.signal_handler.subscribe()
if hasattr(engine, "console_control_handler"):
    engine.console_control_handler.subscribe()

engine.start()
engine.block()
In your case, you don't need the signal handlers, so you can omit those. You also don't need to call engine.block if you're not starting CherryPy from the main thread. Engine.block() is just a way to make the main thread not terminate immediately, but instead wait around for process termination (this is so autoreload works reliably; some platforms have issues calling execv from any thread but the main thread).
If you remove the block() call, you don't even need the Thread() around quickstart. So, replace your line:
Thread(target=cherrypy.quickstart, args=[Root()]).start()
with:
cherrypy.tree.mount(Root())
cherrypy.engine.start()
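Put together with the question's program, the revised flow might look roughly like this (a sketch that assumes Twisted keeps the main thread and no CherryPy signal or console handlers are subscribed):
from signal import signal, SIGINT

import cherrypy
from twisted.internet import reactor

class Root:
    @cherrypy.expose
    def index(self):
        return "Hello World!"

def stop(signum, frame):
    cherrypy.engine.exit()                # stop CherryPy's engine cleanly
    reactor.callFromThread(reactor.stop)  # then stop Twisted from its own thread

signal(SIGINT, stop)

cherrypy.server.socket_host = "0.0.0.0"
cherrypy.tree.mount(Root())
cherrypy.engine.start()  # non-blocking: no engine.block() and no extra Thread
reactor.run()            # Twisted owns the main thread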
