I am building a Django project in which one of the apps has state I want to keep: a complex rate limiter, implemented as a Python object, that would be hard to reproduce through something like Redis.
If I make the server spawn new processes like
WSGIDaemonProcess localhost processes=2 threads=25
will the state lose sync between the processes? In standard Python, one would spin up a Manager that proxies the state across multiple processes.
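In a standard Python program, that Manager pattern looks roughly like this (a minimal sketch; the class, helper names, address and authkey are illustrative, and the real rate limiter would replace RateLimiter):

```python
from multiprocessing.managers import BaseManager

class RateLimiter:
    """Toy stand-in for the complex rate limiter kept as a Python object."""
    def __init__(self):
        self.hits = {}

    def allow(self, key, limit=10):
        count = self.hits.get(key, 0) + 1
        self.hits[key] = count
        return count <= limit

_shared = RateLimiter()  # single instance owned by the manager server process

class LimiterManager(BaseManager):
    pass

# Every connecting process gets a proxy to the same shared instance.
LimiterManager.register("get_limiter", callable=lambda: _shared)

# Server side (run once, in its own process):
#   m = LimiterManager(address=("127.0.0.1", 50000), authkey=b"change-me")
#   m.get_server().serve_forever()
# In each worker process:
#   m = LimiterManager(address=("127.0.0.1", 50000), authkey=b"change-me")
#   m.connect()
#   limiter = m.get_limiter()  # proxy to the shared object
```

With mod_wsgi daemon processes the same pattern applies: each process connects to the manager instead of holding its own copy of the state.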
I know it's not recommended to run a Bottle or Flask app in production with python myapp.py --port=80, because that uses a development-only server.
I think it's also not recommended to run it with python myapp.py --port=5000 and link it to Apache with RewriteEngine On, RewriteRule /(.*) http://localhost:5000/$1 [P,L] (or am I wrong?), because WSGI is preferred.
So I'm currently setting up Python app <-> mod_wsgi <-> Apache (without gunicorn or other tool to keep things simple).
Question: when using WSGI, I know it's Apache and mod_wsgi that automatically start/stop enough processes running myapp.py as requests come in, but:
how can I manually stop these processes?
more generally, is there a way to monitor them / know how many processes started by mod_wsgi are currently running? (One reason, among others: to check whether the processes terminate after a request or stay running.)
Example:
I made some changes in myapp.py, and I want to restart all the processes running it that were launched by mod_wsgi. (Note: I know mod_wsgi can watch the source code and relaunch, but this only works for changes made to the .wsgi file, not to the .py file. I've read that touch myapp.wsgi can be a workaround, but more generally I'd like to be able to stop and restart manually.)
I want to temporarily stop the whole application myapp.py (all instances of it)
I don't want to use service apache2 stop for that because I also run other websites with Apache, not just this one (I have a few VirtualHosts). For the same reason (I run other websites with Apache, and some client might be downloading a 1 GB file at the same time), I don't want to do service apache2 restart that would have an effect on all websites using Apache.
I'm looking for a cleaner way than kill pid, SIGTERM, etc. (I've read that using signals is not recommended in this case).
Note: I already read How to do graceful application shutdown from mod_wsgi, it helped, but here it's complementary questions, not a duplicate.
My current Python Bottle + Apache + mod_wsgi setup:
Installation:
apt-get install libapache2-mod-wsgi
a2enmod wsgi # might be done automatically by previous line, but just to be sure
Apache config (source: Bottle doc; a simpler config can be found here):
<VirtualHost *:80>
ServerName example.com
WSGIDaemonProcess yourapp user=www-data group=www-data processes=5 threads=5
WSGIScriptAlias / /home/www/wsgi_test/app.wsgi
<Directory />
Require all granted
</Directory>
</VirtualHost>
There should be up to 5 processes, is that right? As asked above, how can I know how many are running, and how can I stop them?
/home/www/wsgi_test/app.wsgi (source: Bottle doc)
import os
from bottle import route, template, default_app
os.chdir(os.path.dirname(__file__))
@route('/hello/<name>')
def index(name):
    return template('<b>Hello {{name}}</b>!', name=name)
application = default_app()
Taken partially from this question: add display-name to WSGIDaemonProcess so you can find the processes with a command like:
ps aux | grep modwsgi
Add this to your configuration:
Define GROUPNAME modwsgi
WSGIDaemonProcess yourapp user=www-data group=www-data processes=5 threads=5 display-name=%{GROUPNAME}
Update
There are a couple of reasons why ps might not show you the daemon process display-name.
As shown in the docs:
display-name=value Defines a different name to show for the daemon
process when using the ps command to list processes. If the value is
%{GROUP} then the name will be (wsgi:group) where group is replaced
with the name of the daemon process group.
Note that only as many characters of the supplied value can be
displayed as were originally taken up by argv0 of the executing
process. Anything in excess of this will be truncated.
This feature may not work as described on all platforms. Typically it
also requires a ps program with BSD heritage. Thus on some versions of
Solaris UNIX the /usr/bin/ps program doesn’t work, but /usr/ucb/ps
does. Other programs which can display this value include htop.
You could:
Set a display-name of smaller length:
WSGIDaemonProcess yourapp user=www-data group=www-data processes=5 threads=5 display-name=wsws
And try to find them by:
ps aux | grep wsws
Or set it to %{GROUP} and filter using the name of the daemon process group (wsgi:group).
The way processes are managed by mod_wsgi in each mode is described in:
http://modwsgi.readthedocs.io/en/develop/user-guides/processes-and-threading.html
For embedded mode, where your WSGI application runs inside the Apache child worker processes, Apache manages when processes are created and destroyed based on the Apache MPM settings. Because of how Apache manages the processes, they can be shut down at any time if there is insufficient request throughput, or more processes can be created if request throughput increases. While running, the same process will handle many requests over time until it gets shut down. In other words, Apache dynamically manages the number of processes.
Because of this dynamic process management, it is a bad idea to use embedded mode of mod_wsgi unless you know how to tune Apache properly and many other things as well. In short, never use embedded mode unless you have a good amount of experience with Apache and running Python applications with it. You can watch a video about why you wouldn't want to run in embedded mode at:
https://www.youtube.com/watch?v=k6Erh7oHvns
There is also the blog post:
http://blog.dscpl.com.au/2012/10/why-are-you-using-embedded-mode-of.html
So use daemon mode and verify that your configuration is correct and you are in fact using daemon mode by using the check in:
http://modwsgi.readthedocs.io/en/develop/user-guides/checking-your-installation.html#embedded-or-daemon-mode
For daemon mode, the WSGI application runs in a separate set of managed processes. These are created at startup and will run until Apache is restarted, or until reloading of the process is triggered for various reasons, including:
The daemon process is sent a direct signal to shutdown by a user.
The code of the application sends itself a signal.
The WSGI script file is modified, which will trigger a shutdown so the WSGI application can be reloaded.
A defined request timeout occurs due to a stuck or long-running request.
A defined maximum number of requests has occurred.
A defined inactivity timeout expires.
A defined timer for periodic process restart expires.
A startup timeout is defined and the WSGI application failed to load in that time.
In these cases, when the process is shut down, it is replaced.
More details about the various timeout options and how the processes respond to signals can be found in:
http://modwsgi.readthedocs.io/en/develop/configuration-directives/WSGIDaemonProcess.html
More details about source code reloading and touching of the WSGI script file can be found in:
http://modwsgi.readthedocs.io/en/develop/user-guides/reloading-source-code.html
One item which is documented is how you can incorporate code that watches for changes to any of the Python code files used by your application. When one of the files changes, the process restarts itself by sending itself a signal. This should only be used in development, never in production.
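The idea can be sketched as a background thread that polls the mtimes of loaded modules and signals its own process when one changes (a hypothetical simplification of the monitor code in the mod_wsgi docs; development only):

```python
import os
import signal
import sys
import threading
import time

def _monitor(interval=1.0):
    """Poll mtimes of loaded module files; on any change, signal our own process."""
    mtimes = {}
    while True:
        for module in list(sys.modules.values()):
            path = getattr(module, "__file__", None)
            if not path or not os.path.isfile(path):
                continue
            mtime = os.path.getmtime(path)
            if path in mtimes and mtime != mtimes[path]:
                # The daemon process replaces itself after shutdown.
                os.kill(os.getpid(), signal.SIGINT)
            mtimes[path] = mtime
        time.sleep(interval)

def start_monitor():
    """Start the polling thread; daemon=True so it never blocks process exit."""
    thread = threading.Thread(target=_monitor, daemon=True)
    thread.start()
    return thread
```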
If you are using mod_wsgi-express in development, which is preferable to hand configuring Apache yourself, you can use the --reload-on-changes option.
If you send a SIGTERM signal to the daemon process, there is a set shutdown sequence in which it will wait a few seconds for current requests to finish. If the requests don't finish, the process is shut down anyway. That period of time is dictated by the shutdown timeout. You shouldn't play with that value.
If you send a SIGUSR1 signal to the daemon process, by default it acts just like SIGTERM. If however you specify the graceful timeout for shutdown, you can extend how long it will wait for current requests to finish. New requests will be accepted during that period. That graceful timeout also applies in other cases, such as when the maximum number of requests has been received, or the timer for periodic restart has been triggered. If you need the timeout used with SIGUSR1 to be different from those cases, define the eviction timeout instead.
As to how to identify the daemon processes to be sent the signal, use the display-name option of WSGIDaemonProcess. Then use ps to identify the processes, or possibly use killall if it matches the modified process name on your platform. Send the daemon processes the SIGUSR1 signal if you want a more graceful shutdown, and SIGTERM if you want them to restart straight away.
If you want to track how long a daemon process has been running, you can use:
import mod_wsgi
metrics = mod_wsgi.process_metrics()
The metrics value will include output like the following for the process the call is made in:
{'active_requests': 1,
'cpu_system_time': 0.009999999776482582,
'cpu_user_time': 0.05000000074505806,
'current_time': 1525047105.710778,
'memory_max_rss': 11767808,
'memory_rss': 11767808,
'pid': 4774,
'request_busy_time': 0.001851,
'request_count': 2,
'request_threads': 2,
'restart_time': 1525047096.31548,
'running_time': 9,
'threads': [{'request_count': 2, 'thread_id': 1},
{'request_count': 1, 'thread_id': 2}]}
If you just want to know how many processes/threads are used for the current daemon process group you can use:
mod_wsgi.process_group
mod_wsgi.application_group
mod_wsgi.maximum_processes
mod_wsgi.threads_per_process
to get details about the process group. The number of processes is fixed at this time for daemon mode; the name maximum_processes is just to be consistent with what the name means in embedded mode.
If you need to run code on process shutdown, you should NOT try to define your own signal handlers. If you do, mod_wsgi will actually ignore them, as they would interfere with the normal operation of Apache and mod_wsgi. Instead, if you need to run code on process shutdown, use atexit.register(). Alternatively, you can subscribe to the special events generated by mod_wsgi and trigger something off the process shutdown event.
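For example, instead of installing a SIGTERM handler, register the cleanup with atexit (the cleanup function here is a hypothetical placeholder):

```python
import atexit

def cleanup():
    # Hypothetical example: flush state, close connections, write final logs.
    print("daemon process shutting down")

# mod_wsgi ignores user-defined signal handlers, but atexit callbacks do run
# when the daemon process goes through its normal shutdown sequence.
atexit.register(cleanup)
```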
Edit: a simpler WSGI config is given in my question Python WSGI handler directly in Apache .htaccess, not in VirtualHost.
Based on Evhz's answer, I made a simple test to check that the processes are still running:
Apache config:
<VirtualHost *:80>
ServerName example.com
<Directory />
AllowOverride All
Require all granted
</Directory>
WSGIScriptAlias / /home/www/wsgi_test/app.wsgi
WSGIDaemonProcess yourapp user=www-data group=www-data processes=5 threads=5 display-name=testwsgi
</VirtualHost>
app.wsgi file:
import os, time
from bottle import route, template, default_app
os.chdir(os.path.dirname(__file__))
@route('/hello/<name>')
def index(name):
    global i
    i += 1
    return template('<b>Hello {{name}}</b>! request={{i}}, pid={{pid}}',
                    name=name, i=i, pid=os.getpid())
i = 0
time.sleep(3) # wait 3 seconds to make the client notice we launch a new process!
application = default_app()
Now access http://www.example.com/hello/you many times:
The initial time.sleep(3) helps you see, from the client browser, exactly when a new process is started, and the request counter i lets you see how many requests have been served by each process.
The PIDs will correspond to those present in ps aux | grep testwsgi:
Also, the time.sleep(3) will happen at most 5 times (at the startup of each of the 5 processes); after that, the processes should run forever, until we restart/stop the server or modify the app.wsgi file (modifying it triggers a restart of the 5 processes, and you can see new PIDs).
[I'll check that by letting my test run now, and access http://www.example.com/hello/you in 2 days to see if it's still a previously-launched process or a new one!]
Edit: the next day, the same processes were still up and running. Two days after, when reloading the same URL, I noticed new processes had been created... (Is there a time after which a process with no requests dies?)
All righty, so I want to explain my small Django issue that I'm having trouble getting around.
The Problem
I have a small website, just a couple of pages that display a list of database records. The website is an internal render farm monitor for my company which will have perhaps a dozen or two active connections at any time. No more than 50.
The problem is that I have three update services that cause a real performance hit when turned on.
The update services each are python scripts that:
Use urllib2 to make an HTTP request to a URL.
Wait for the response
Print a success message with time stamps to a log.
Wait 10 seconds, and start again.
The URLs they send requests to cause my Django website to poll an external service and read new data into our Django database. The URLs look like this:
http://webgrid/updateJobs/ (takes about 5 - 15 seconds per update )
http://webgrid/updateTasks/ (takes about 25 - 45 seconds per update )
http://webgrid/updateHosts/ (takes about 5 - 15 seconds per update )
When these update services are turned on (especially updateTasks), it can take well over 10 seconds for http://webgrid/ to even start loading for normal users.
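Each update service boils down to a loop like this sketch (the URL is one of the three above; update_once, run_service and the fetch parameter are illustrative names; the original services used urllib2 on Python 2):

```python
import time
import urllib.request

def update_once(url, fetch=urllib.request.urlopen):
    """Request an update URL once, log a timestamped line, return elapsed seconds."""
    start = time.time()
    fetch(url).read()  # block until the update view has finished
    elapsed = time.time() - start
    print("%s  %s  ok in %.1fs" % (time.ctime(), url, elapsed))
    return elapsed

def run_service(url, interval=10):
    """The whole service: update, log, wait 10 seconds, repeat."""
    while True:
        update_once(url)
        time.sleep(interval)
```

Note that each iteration ties up a webserver worker for the full duration of the update view, which is what the answer below objects to.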
The Setup
Django 1.8, deployed with Gunicorn v18.
The main Gunicorn service is run with these arguments (split into a list for easier reading).
<PATH_TO_PYTHON>
<PATH_TO_GUNICORN>
-b localhost:80001
-u farmer
-t 600
-g <COMPANY_NAME>
--max-requests 10000
-n bb_webgrid
-w 17
-p /var/run/gunicorn_bb_webgrid.pid
-D
--log-file /xfs/GridEngine/bbgrid_log/bb_webgrid.log
bb_webgrid.wsgi:application
Apache config for this site:
<VirtualHost *:80>
ServerName webgrid.<INTERAL_COMPANY_URL>
ServerAlias webgrid
SetEnv force-proxy-request-1.0 1
DocumentRoot /xfs/GridEngine/bb_webgrid/www
CustomLog logs/webgrid_access.log combined
ErrorLog logs/webgrid_error.log
#LogLevel warn
<Directory "/xfs/GridEngine/bb_webgrid/www">
AllowOverride All
</Directory>
WSGIDaemonProcess webgrid processes=17 threads=17
WSGIProcessGroup webgrid
</VirtualHost>
This kind of thing shouldn't be done in the request/response cycle; by hitting a URL which directs to a view, you are unnecessarily tying up your webserver, which stops it from doing its real job: responding to user requests.
Instead, do this out-of-band. A really quick and easy way is to write a Django management command; that way you can easily call model methods from a command-line script. Then you can simply point your cron job, or whatever it is, at these commands, rather than calling a separate Python script which hits a URL on your site.
An alternative is to use Celery; it's a really good system for doing long-running asynchronous tasks. It even has its own scheduling system, so you could replace your cron jobs completely.
I'm using Apache with mod_wsgi and Flask to run my application, starting it with from yourapplication import app as application. That works fine so far. The problem is that with every request a new instance of my application is created, which leads to the unfortunate situation that my Flask application opens a new database connection but only closes it after about 15 minutes. Since my server allows only 16 open DB connections, it starts to block requests very soon. BTW: this does not happen when I run Flask without Apache/mod_wsgi, since it opens only one connection and serves all requests as I want.
What I want: to run only one Flask instance which then serves all requests.
The WSGIApplicationGroup directive may be what you're looking for, as long as you have the WSGI app running in daemon mode (otherwise I believe Apache's default behavior is to use prefork, which spins up a process to handle each individual request):
The WSGIApplicationGroup directive can be used to specify which application group a WSGI application or set of WSGI applications belongs to. All WSGI applications within the same application group will execute within the context of the same Python sub interpreter of the process handling the request.
You have to provide an argument to the directive that specifies a name for the application group. There are a few expanding variables: %{GLOBAL}, %{SERVER}, %{RESOURCE} and %{ENV:variable}; or you can specify your own explicit name. %{GLOBAL} is special in that it expands to the empty string, which has the following behavior:
The application group name will be set to the empty string.
Any WSGI applications in the global application group will always be executed within the context of the first interpreter created by Python when it is initialised. Forcing a WSGI application to run within the first interpreter can be necessary when a third party C extension module for Python has used the simplified threading API for manipulation of the Python GIL and thus will not run correctly within any additional sub interpreters created by Python.
I would recommend specifying something other than %{GLOBAL}.
For every process you have mod_wsgi spawn, everything will be executed in the same environment. You can then simply control the number of database connections via the number of processes you want mod_wsgi to spawn.
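Putting this together, a daemon-mode configuration along these lines (group name and paths are illustrative) runs the app in a single process, so one Flask instance, and one pool of database connections, serves all requests:

```apache
WSGIDaemonProcess yourapplication processes=1 threads=15
WSGIScriptAlias / /var/www/yourapplication/app.wsgi

<Directory /var/www/yourapplication>
    WSGIProcessGroup yourapplication
    WSGIApplicationGroup yourapplication
    Require all granted
</Directory>
```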
When I update the code on my website I (naturally) restart my apache instance so that the changes will take effect.
Unfortunately the first page served by each apache instance is quite slow while it loads everything into RAM for the first time (5-7 sec for this particular site).
Subsequent requests only take 0.5 - 1.5 seconds so I would like to eliminate this effect for my users.
Is there a better way to get everything loaded into RAM than to run wget x times (where x is the number of Apache instances defined by ServerLimit in my httpd.conf)?
Writing a restart script that restarts Apache and then runs wget 5 times seems kind of hacky to me.
Thanks!
The default for Apache/mod_wsgi is to load application code only on the first request to a process that requires the application. So, the first step is to configure mod_wsgi to preload your code when the process starts, rather than on the first request. In mod_wsgi 2.X this can be done using the WSGIImportScript directive.
Presuming daemon mode, which is the better option anyway, this means you would have something like:
# Define process group.
WSGIDaemonProcess django display-name=%{GROUP}
# Mount application.
WSGIScriptAlias / /usr/local/django/mysite/apache/django.wsgi
# Ensure application preloaded on process start. Must specify the
# process group and application group (Python interpreter) to use.
WSGIImportScript /usr/local/django/mysite/apache/django.wsgi \
process-group=django application-group=%{GLOBAL}
<Directory /usr/local/django/mysite/apache>
# Ensure application runs in same process group and application
# group as was preloaded into on process start.
WSGIProcessGroup django
WSGIApplicationGroup %{GLOBAL}
Order deny,allow
Allow from all
</Directory>
When you have made a code change, instead of touching the WSGI script file, which is only checked on the next request, send a SIGINT signal to the processes in the daemon process group.
With the 'display-name' option to WSGIDaemonProcess you can identify which processes these are by using a BSD-style 'ps' program. With 'display-name' set to '%{GROUP}', the 'ps' output should show '(wsgi:django)' as the process name. Identify the process ID and do:
kill -SIGINT pid
Swap 'pid' with the actual process ID. If there is more than one process in the daemon process group, send the signal to all of them.
Not sure if 'killall' can be used to do this in one step; I had problems doing that on MacOS X.
In mod_wsgi 3.X the configuration can be simpler and can use instead:
# Define process group.
WSGIDaemonProcess django display-name=%{GROUP}
# Mount application and designate which process group and
# application group (Python interpreter) to run it in. As
# process group and application group named, this will have
# side effect of preloading application on process start.
WSGIScriptAlias / /usr/local/django/mysite/apache/django.wsgi \
process-group=django application-group=%{GLOBAL}
<Directory /usr/local/django/mysite/apache>
Order deny,allow
Allow from all
</Directory>
That is, there is no need to use a separate WSGIImportScript directive, as you can specify the process group and application group as arguments to WSGIScriptAlias instead, with the side effect that it will preload the application.
How are you running Django (mod_python vs mod_wsgi)?
If you're running mod_wsgi (in daemon mode), restarting Apache isn't necessary to reload your application. All you need to do is update the mtime of your wsgi script (which is done easily with touch).
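That touch can also be done from Python with os.utime (the path shown in the comment is illustrative):

```python
import os

def touch(path):
    """Bump the file's mtime so mod_wsgi reloads the WSGI script on the next request."""
    os.utime(path, None)  # None = set atime/mtime to the current time

# touch("/usr/local/django/mysite/apache/django.wsgi")
```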
mod_wsgi's documentation has a pretty thorough explanation of the process:
ReloadingSourceCode