Question pretty much says it all. If I am running Tornado on a server with Supervisor, what happens to active requests when I deploy code and need to restart the Tornado server? Are they dropped mid-request? Are they allowed to finish?
Supervisord sends a signal such as HUP or TERM to the Tornado process; the important point is how Tornado handles it.
Unfortunately, Tornado simply exits when it receives a signal such as HUP, TERM, or INT, dropping any active requests.
Tornado has a submodule named autoreload that lets an application detect changes to its source files and reload itself, but it only works in debug mode for a single process, and not in WSGI applications. It is a development tool.
However, we can define a function that calls tornado.autoreload._reload() manually and register it as the handler for the HUP signal. tornado.autoreload.add_reload_hook() registers functions that should be called on reload.
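A minimal sketch of that idea (note that _reload() is a private API, so verify it against your Tornado version):

    import signal

    import tornado.autoreload
    import tornado.ioloop

    io_loop = tornado.ioloop.IOLoop.current()

    def before_reload():
        print("re-exec'ing the server")  # e.g. flush logs here

    tornado.autoreload.add_reload_hook(before_reload)

    def handle_sighup(signum, frame):
        # _reload() re-execs the current process; scheduling it on the
        # IOLoop keeps the signal handler itself minimal.
        io_loop.add_callback_from_signal(tornado.autoreload._reload)

    signal.signal(signal.SIGHUP, handle_sighup)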
Because Tornado does not manage its processes well in fork mode, it is better to run several independent processes on different ports. In that mode, _reload() behaves the way it does with the debug flag set.
Finally, test and benchmark this to make sure it works well in your application.
I have a Flask app that I run with uWSGI. I have configured logging to a file in the Python/Flask application, so on service start it logs that the application has started.
I want to be able to do this when the service stops as well, but I don't know how to implement it.
For example, if I run the uwsgi app in a console and then interrupt it with Ctrl-C, I get only the uWSGI logs ("Goodbye to uwsgi" etc.) in the console, but no logs from the stopped Python application. I'm not sure how to achieve this.
I would be glad if someone advised on possible solutions.
Edit:
I've tried Python's atexit module, but the function I registered to run on exit is executed not once but four times (which is the number of uWSGI workers).
There is no "stop" event in WSGI, so there is no way to detect when the application stops, only when the server / worker stops.
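That said, you can still log from each worker as it shuts down. A minimal sketch with atexit, using the uwsgi module (available only when running under uWSGI) to tell the workers apart:

    import atexit
    import logging

    try:
        import uwsgi  # provided by uWSGI at runtime, not installable via pip
    except ImportError:
        uwsgi = None

    logger = logging.getLogger(__name__)

    def log_shutdown():
        # atexit fires once per worker process, hence one log line per worker
        worker = uwsgi.worker_id() if uwsgi else "n/a"
        logger.info("Flask app shutting down (worker %s)", worker)

    atexit.register(log_shutdown)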
I am new to Python and work on Slackware Linux with Python 3.4.3. I prefer simple, no-dependency solutions within one single Python script.
I am building a daemonized server program (A) which I need to access through both a regular shell CLI and GUIs in my web browser: it serves various files, uses a corresponding database, and updates a Firefox tab through Python's webbrowser module. Currently, I access process (A) via the CLI or a threaded network socket. This all started to work in a localhost scenario with all processes running on one machine.
Now it turns out that the WebSocket protocol would make my setup dramatically simpler and cut out the traditional flows that use Apache and complex frameworks as middlemen.
1st central question: How do I access daemon (A) with WebSockets from the CLI? I thought about firing up a non-daemon version of my server program, now called (B), and sending a call to its counterpart (A) via the WebSocket protocol. This would make process (B) a WebSocket client and process (A) a WebSocket server. Is such communication possible at all today?
2nd question: Which is the best-suited template solution for this scenario that works with Python 3.4.3? I started to play with Pithikos' very sleek python-websocket-server template (see https://github.com/Pithikos/python-websocket-server), but I am unable to use it as a client (initiating the network call) to call its server equivalent (receiving the call while residing in a daemonized process).
Problem 'solved': I gave up on the zero-dependency, zero-library idea:
pip install websockets
https://websockets.readthedocs.io
It works like a charm. The WebSocket server sits in the daemon process and receives and processes WebSocket client calls coming from the CLI processes and from the HTML GUIs.
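For illustration, a minimal sketch of that layout with the websockets library (modern async/await syntax, which needs Python 3.5+; the 3.4-era API used @asyncio.coroutine instead). The port and command are made up:

    import asyncio
    import websockets

    # --- daemon side: WebSocket server ---
    async def handler(websocket):
        # note: older websockets versions pass a second `path` argument
        command = await websocket.recv()
        await websocket.send("ack: " + command)

    async def serve_forever():
        async with websockets.serve(handler, "localhost", 8765):
            await asyncio.Future()  # run until cancelled

    # --- CLI side: WebSocket client ---
    async def send_command(command):
        async with websockets.connect("ws://localhost:8765") as ws:
            await ws.send(command)
            print(await ws.recv())

    # daemon: asyncio.run(serve_forever())
    # CLI:    asyncio.run(send_command("status"))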
I'm coming from the PHP/Apache world, where running an application is super easy. Whenever a PHP application crashes, the Apache process running that request stops, but the server keeps running happily and responds to other clients. Is there a way to make a Python application work in a similar way? How would I set up a WSGI server like Tornado or CherryPy so it works similarly? Also, how would I run several applications with different domains from one server?
What you are after would possibly happen anyway with WSGI servers. Any Python exception affects only the current request; the framework or WSGI server catches the exception, logs it, and translates it into an HTTP 500 status page. The application stays in memory and continues to handle future requests.
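As a rough sketch of that behaviour, here is what such an error-catching layer looks like as plain WSGI middleware (frameworks and servers do this for you, usually with more care around partially sent responses):

    import logging
    import traceback

    def error_catching_middleware(app):
        def wrapped(environ, start_response):
            try:
                return app(environ, start_response)
            except Exception:
                # the process survives; only this request gets a 500
                logging.error("Unhandled exception:\n%s", traceback.format_exc())
                start_response("500 Internal Server Error",
                               [("Content-Type", "text/plain")])
                return [b"Internal Server Error"]
        return wrapped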
What it comes down to is what exactly you mean by 'crashes the Apache process'.
It would be rare for your code to truly crash, in the sense of causing the whole process to exit via something like a core dump. You may be conflating an application-level language error with a full process crash.
Even if you did find a way to crash a process, Apache/mod_wsgi handles that okay and the process will be replaced. The Gunicorn WSGI server will also do that. CherryPy will not, unless you run a process manager that monitors it and restarts it when it dies. Tornado in its single-process mode has the same problem. Using Tornado as the worker in Gunicorn is one way around that; I also believe Tornado itself may now have a process manager for running multiple processes, which would let it restart processes if they die.
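If you go the Gunicorn route, selecting the Tornado worker is a command-line flag (module and app names below are placeholders):

    gunicorn --worker-class tornado --workers 4 myapp:app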
Do note that if the application bug that caused the Python exception is bad enough to corrupt state within the process, subsequent requests may have issues. This is the one difference from PHP. With PHP, after any request, successful or not, the application is effectively thrown away and does not persist, so buggy code cannot affect subsequent requests. In Python, because the process keeps the loaded code and retained state between requests, you could technically get into a state where you would have to restart the process to fix things. I don't know of any WSGI server, though, that has a mechanism to automatically restart a process if one request returned an error response.
If you're in a UNIX-like environment, you can run mod_wsgi under Apache in daemon mode. This means there will be a separate process for the Python code, and even if it crashes, the server will continue running normally (and hopefully the WSGI process will restart itself). A WSGI application can run in multiple processes with multiple threads per process.
As for running multiple domains in the same server, check Name-Based Virtual Hosts.
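A sketch of what that combination looks like in Apache configuration (hostnames and paths are placeholders):

    <VirtualHost *:80>
        ServerName app1.example.com
        # daemon mode: this app gets its own process group
        WSGIDaemonProcess app1 processes=2 threads=15
        WSGIProcessGroup app1
        WSGIScriptAlias / /var/www/app1/app.wsgi
    </VirtualHost>

    <VirtualHost *:80>
        ServerName app2.example.com
        WSGIDaemonProcess app2 processes=2 threads=15
        WSGIProcessGroup app2
        WSGIScriptAlias / /var/www/app2/app.wsgi
    </VirtualHost>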
I use IDEA 10.5 for my Flask experimentation. Flask has an embedded test server (like Django does).
When I launch my test class, the dev server launches as well on port 5000. All good.
* Running on http://127.0.0.1:5000/
When I click the "Stop process" button (red square), I get a message saying the process has finished:
Process finished with exit code 143
However, the server is still alive (it responds to requests), and I can see I still have a Python process running.
Obviously this prevents me from relaunching the test straight away; I have to kill the server process first.
How do you manage to get both your program and the server to end at the same time?
I guess what happens is that you start your Flask app, which then forks the development server as a new process. If you stop the app, the forked process is still running.
This looks like a problem that cannot easily be solved within the means of your IDE. You could add something to your main to kill the already-running server process before starting the app again, but that seems ugly.
But why don't you just start your app with app.run(debug=True), as described in the Flask docs? The server will reload automatically every time you change your app, so you don't have to stop and restart it manually.
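For reference, a minimal version of that setup:

    from flask import Flask

    app = Flask(__name__)

    @app.route("/")
    def index():
        return "Hello!"

    if __name__ == "__main__":
        # debug=True enables the interactive debugger and the auto-reloader,
        # so the dev server restarts whenever a source file changes
        app.run(debug=True)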
EDIT:
Something a bit quirky just came to my mind: if you just need a comfortable way to kill the server from within the IDE, all you have to do is introduce a syntax error in one of the files the reloader monitors and save it; the server will choke on it and die :)
This doesn't happen anymore with newer versions (tested with PyCharm 2.0)
I've written a specialized JSON-RPC server and have just started working my way up into the application logic, and I'm finding it a tad annoying to constantly have to stop/restart the server to make certain changes.
Previously I had a handler that ran at intervals, comparing module modification timestamps against the previous check and reloading modules as needed. Unfortunately, I don't trust it to work correctly now.
Is there a way for a reactor to stop and restart itself in a manner similar to Paster's Reloadable HTTPServer?
Shipped with Twisted is the twisted.python.rebuild module, so that is probably a good place to start.
Also see this SO question: Checking for code changes in all imported python modules
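The basic usage is a one-liner; `handlers` below stands in for whatever module holds your JSON-RPC code:

    from twisted.python import rebuild
    import handlers  # hypothetical module with your JSON-RPC handlers

    # re-imports the module and patches existing instances to use the
    # new class definitions, without restarting the reactor
    rebuild.rebuild(handlers)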
You could write something similar to paster's reloader, which would work like this:
Start your main function, and before importing or using any Twisted code, fork/spawn a subprocess.
In the subprocess, run your Twisted application.
In the main process, run code that checks for changed files. If the code has changed, restart the subprocess.
However, the issue here is that unlike a development web server, most Twisted apps carry a lot more state, so just flat-out killing and restarting the process is a bad idea; you may lose some of that state.
There is a way to do it cleanly:
When you spawn the Twisted app, use subprocess.Popen() or similar to get stdin/stdout pipes. Then, in the subprocess, use the Twisted reactor to listen on stdin (there is code for this in Twisted: see twisted.internet.stdio, which lets a Protocol talk to a stdio transport in the usual Twisted non-blocking manner).
Finally, when you decide it's time to reload, write something to the stdin of the subprocess telling it to shut down. Your Twisted code can respond and shut down gracefully. Once it has cleanly quit, your master process can just spawn it again.
(Alternatively, you can use signals to achieve this, but that may not be OS-portable.)
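A rough sketch of the stdin-based approach, with both sides shown (the file name and the shutdown command are made up):

    # --- master process: spawn the app with a stdin pipe ---
    import subprocess

    proc = subprocess.Popen(["python", "twisted_app.py"], stdin=subprocess.PIPE)
    # ... later, when a watched file has changed:
    proc.stdin.write(b"shutdown\n")
    proc.stdin.flush()
    proc.wait()  # wait for a clean exit, then spawn it again

    # --- child process (twisted_app.py): listen on stdin, stop gracefully ---
    from twisted.internet import reactor, stdio
    from twisted.protocols.basic import LineReceiver

    class ControlProtocol(LineReceiver):
        delimiter = b"\n"

        def lineReceived(self, line):
            if line.strip() == b"shutdown":
                reactor.stop()  # runs normal shutdown triggers first

    stdio.StandardIO(ControlProtocol())
    reactor.run()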