How to gracefully shut down any WSGI server? - python

I've been experimenting with several WSGI servers and am unable to find a way for them to gracefully shut down. What I mean by graceful is that the server stops listen()'ing for new requests, but finishes processing all connections that have been accept()'ed. The server process then exits.
So far I have spent some time with FAPWS, CherryPy, Tornado, and wsgiref. It seems like no matter what I do, some of the clients receive a "Connection reset by peer".
Can someone direct me to a WSGI server that handles this properly? Or does anyone know of a way to configure one of these servers to do a clean shutdown? I think my next step is to mock up a simple HTTP server that does what I want.
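For reference, here is a minimal sketch of that "simple HTTP server" idea using only the standard library. It assumes Python 3.7+ (so that server_close() joins the in-flight request threads); the app, host, and port are placeholders.

import signal
import threading
from socketserver import ThreadingMixIn
from wsgiref.simple_server import WSGIServer, make_server

class ThreadingWSGIServer(ThreadingMixIn, WSGIServer):
    daemon_threads = False   # request threads keep running during shutdown
    block_on_close = True    # server_close() waits for them (Python 3.7+)

def app(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'hello\n']

def handle_sigterm(server):
    # shutdown() must run in another thread; calling it from the thread
    # that is inside serve_forever() would deadlock
    threading.Thread(target=server.shutdown).start()

server = make_server('0.0.0.0', 8000, app, server_class=ThreadingWSGIServer)
signal.signal(signal.SIGTERM, lambda signum, frame: handle_sigterm(server))
server.serve_forever()   # returns once shutdown() has been called
server.server_close()    # joins in-flight request threads, then the process exits

On SIGTERM the server stops accept()'ing new requests but lets already-accepted connections run to completion before exiting.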

Apache HTTPD has the graceful-stop option for -k, which will allow it to bring down any workers after they have completed their requests. mod_wsgi is required to make it a WSGI container.
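In practice that is a single command (the path to apachectl varies by distribution):

apachectl -k graceful-stop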

Related

Uninterrupted nginx + 2 gunicorns setup

So what's the trick? Nginx is facing the client. Normally the requests are forwarded to gunicorn A at port 80.
You can't update the code in place, since something might go wrong. So you do a fresh code checkout and launch a separate gunicorn B on some port 5678.
Once you test the new code on a development/testing database, you:
Point gunicorn B at the database, but do not send it any requests yet.
Stop gunicorn A. Nginx now, ever so briefly, responds with an error.
Set nginx to point to gunicorn B, still at port 5678.
Restart nginx.
Is this about right? Do you just write a script to run the four actions faster and minimize the duration (between steps 2 and 4) during which the server responds with an error?
Nginx supports configuration reloading. Using this feature, updating your application can work like this:
Start a new instance Gunicorn B.
Adjust the nginx configuration to forward traffic to Gunicorn B.
Reload the nginx configuration with nginx -s reload. After this, Gunicorn B will serve new requests, while Gunicorn A will still finish serving old requests.
Wait for the old nginx worker process to exit (which means all requests initiated before the reload are now done) and then stop Gunicorn A.
Assuming your application works correctly with two concurrent instances, this gives you a zero-downtime update.
The relevant excerpt from the nginx documentation:
Once the master process receives the signal to reload configuration, it checks the syntax validity of the new configuration file and tries to apply the configuration provided in it. If this is a success, the master process starts new worker processes and sends messages to old worker processes, requesting them to shut down. Otherwise, the master process rolls back the changes and continues to work with the old configuration. Old worker processes, receiving a command to shut down, stop accepting new connections and continue to service current requests until all such requests are serviced. After that, the old worker processes exit.
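A hedged sketch of the update sequence described above, assuming gunicorn A is already running behind nginx; the port, module name, and pid file paths are illustrative:

# 1. Start the new code on its own port
gunicorn --bind 127.0.0.1:5678 --daemon --pid /tmp/gunicorn-b.pid newapp:application
# 2. Edit the nginx config so proxy_pass (or the upstream block) points at
#    127.0.0.1:5678 instead of gunicorn A's port
# 3. Apply the new configuration without dropping connections
nginx -s reload
# 4. Once the old nginx workers have exited, retire the old instance;
#    gunicorn treats SIGTERM as a graceful shutdown
kill -TERM "$(cat /tmp/gunicorn-a.pid)"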

Python bottle process reaches connection timeout

I have a problem that after a certain amount of time my bottle server is not reachable and you get a connection reset / connection timeout error.
When checking if the process is running, I found it running, but after killing the process and starting it again the server returns to serving requests.
Any idea what it could be?
I wrapped most of my functions with exception handling, but that didn't help me understand the problem.
I wonder if anybody has used bottle and encountered such a problem.
My guess is that because bottle is single-threaded, it's hanging on a request. I would suggest trying a multi-threaded server, such as CherryPy, to see if that resolves the issue. Then go back and see where the hang-up was.
Install cherrypy
pip install cherrypy
Update your python file
bottle.run(myapp, server='cherrypy')
Would need to see more code to identify any specific issue.
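A minimal, hedged example of that switch; the app object and route are placeholders.

import bottle

myapp = bottle.Bottle()

@myapp.route('/')
def index():
    return 'hello'

# server='cherrypy' hands the app to a multi-threaded server instead of
# Bottle's single-threaded wsgiref default
bottle.run(myapp, server='cherrypy', host='0.0.0.0', port=8080)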

How to set up a WSGI server to run similarly to Apache?

I'm coming from the PHP/Apache world where running an application is super easy. Whenever a PHP application crashes, the Apache process running that request will stop, but the server will still be running happily and responding to other clients. Is there a way to have a Python application work in a similar way? How would I set up a WSGI server like Tornado or CherryPy so it will work similarly? Also, how would I run several applications from one server with different domains?
What you are after would possibly happen anyway for WSGI servers. This is because any Python exception only affects the current request, and the framework or WSGI server will catch the exception, log it, and translate it to an HTTP 500 status page. The application will still be in memory and will continue to handle future requests.
What it comes down to is what exactly you mean by 'crashes Apache process'.
It would be rare for your code to crash in the sense of causing the whole process to exit, for example due to a core dump. So you may be confusing terminology by equating an application-level language error with a full process crash.
Even if you did find a way to crash a process, Apache/mod_wsgi handles that okay and the process will be replaced. The Gunicorn WSGI server will also do that. CherryPy will not, unless you have a process manager running which monitors it and restarts it. Tornado in its single-process mode will have the same problem. Using Tornado as the worker in Gunicorn is one way around that, plus I believe Tornado itself may now have a process manager for running multiple processes, which allows it to restart processes if they die.
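As a hedged illustration of that Gunicorn-with-Tornado-workers option (module name and worker count are made up):

gunicorn --worker-class tornado --workers 4 myapp:app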
Do note that if your application bug which caused the Python exception is bad enough and it corrupts state within the process, subsequent requests may possibly have issues. This is the one difference with PHP. With PHP, after any request, whether successful or not, the application is effectively thrown away and doesn't persist. So buggy code cannot affect subsequent requests. In Python, because the process with loaded code and retained state is kept between requests, then technically you could get things in a state where you would have to restart the process to fix it. I don't know of any WSGI server though that has a mechanism to automatically restart a process if one request returned an error response.
If you're in a UNIX-like environment, you can run mod_wsgi under Apache in daemon mode. This means there will be a separate process for the Python code, and even if it crashes the server will continue running normally (and hopefully the WSGI process will restart itself). A WSGI application can run under multiple processes and multiple threads per process.
As for running multiple domains in the same server, check Name-Based Virtual Hosts.
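A hedged sketch of what that looks like in the Apache configuration, with one mod_wsgi daemon process group per site; all domain names, paths, and process/thread counts are illustrative.

<VirtualHost *:80>
    ServerName app1.example.com
    WSGIDaemonProcess app1 processes=2 threads=15
    WSGIProcessGroup app1
    WSGIScriptAlias / /srv/app1/app1.wsgi
</VirtualHost>

<VirtualHost *:80>
    ServerName app2.example.com
    WSGIDaemonProcess app2 processes=2 threads=15
    WSGIProcessGroup app2
    WSGIScriptAlias / /srv/app2/app2.wsgi
</VirtualHost>

Each application then lives in its own daemon process group, so a crash in one does not take down the others.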

How to run an XMPP process and Django server simultaneously

I have tried to run an XMPP process along with the Django server, so I included the XMPP process in manage.py so that both of them run simultaneously. Now I have the problem that the XMPP process is in an infinite loop, and so the Django server won't start until I break the loop, which isn't what I wanted.
Is there a way I can run them simultaneously?
Your problem is probably that the XMPP process expects to be the only thread in the process, and so it blocks waiting for input.
You might be able to get around the problem by creating a new thread that then runs the XMPP process, see http://www.devshed.com/c/a/Python/Basic-Threading-in-Python/1/
Be aware that there might be other interactions between the XMPP process and Django that will lead to problems, because they share the same address space.
If you just want to start some process whenever you run the Django server, see: How do I run another script in Python without waiting for it to finish?
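A hedged sketch of the threading suggestion, where run_xmpp_client stands in for whatever function currently blocks in its infinite loop:

import threading

def start_xmpp_in_background(run_xmpp_client):
    # Run the blocking XMPP loop off the main thread so Django can start.
    t = threading.Thread(target=run_xmpp_client, name='xmpp-client')
    t.daemon = True  # don't keep the process alive after Django exits
    t.start()
    return t

You would call this from manage.py before handing control to Django's execute_from_command_line().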

Graceful shutdown of bottle python server

Hi, is there a way to gracefully shut down the bottle server? It should be able to carry out a few steps before it eventually stops. This is critical for some cleanup of threads, DB state, etc., avoiding a corrupt state during the restart.
I am using the mod_wsgi Apache module for running the bottle server.
In mod_wsgi you can register atexit callbacks and they will be called on normal process shutdown. You don't have too long to do your cleanup though. In embedded mode, or in daemon mode when the shutdown is caused by an Apache restart, you have only 3 seconds, as Apache will kill off processes forcibly after that. In daemon mode, when the trigger is touching the WSGI script file or you explicitly sending the daemon process a signal, you have 5 seconds, after which mod_wsgi will decide it is taking too long and forcibly kill the process.
See the 'atexit' module in Python.
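For example (the cleanup body is a placeholder, and it has to fit within the 3-5 second window described above):

import atexit

def cleanup():
    # close DB connections, stop background threads, flush queues, etc.
    pass

atexit.register(cleanup)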
