Django state sharing between server processes - python

I am building a Django project in which one of the apps has state I want to keep in memory (a complex rate limiter, implemented as a Python object, that would be hard to rebuild on top of something like Redis).
If I make the server spawn multiple processes like
WSGIDaemonProcess localhost processes=2 threads=25
will the state lose sync between the processes? In plain Python one would spin up a Manager that proxies the state across multiple processes.
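For reference, the multiprocessing.Manager pattern mentioned above looks roughly like this (a standalone sketch; the hit counter stands in for the rate limiter state):

```python
from multiprocessing import Manager, Process

def worker(shared, lock, n):
    # each process mutates state through the Manager proxy; the lock
    # makes the read-modify-write atomic across processes
    for _ in range(n):
        with lock:
            shared["hits"] = shared.get("hits", 0) + 1

def run_demo(processes=2, hits=50):
    with Manager() as manager:
        shared = manager.dict()
        lock = manager.Lock()
        procs = [Process(target=worker, args=(shared, lock, hits))
                 for _ in range(processes)]
        for p in procs:
            p.start()
        for p in procs:
            p.join()
        return dict(shared)

if __name__ == "__main__":
    print(run_demo())
```

Note, though, that mod_wsgi daemon processes are forked by Apache, not by your own code, so a Manager would have to run as a separate service that each daemon process connects to; at that point an external store, or a daemon process group with processes=1, is often simpler.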

Related

Manually stop processes launched by mod_wsgi, and monitor how many processes are running

I know it's not recommended to run a Bottle or Flask app in production with python myapp.py --port=80, because that's a development server only.
I think it's also not recommended to run it with python myapp.py --port=5000 and link it to Apache with RewriteEngine On, RewriteRule /(.*) http://localhost:5000/$1 [P,L] (or am I wrong?), because WSGI is preferred.
So I'm currently setting up Python app <-> mod_wsgi <-> Apache (without gunicorn or other tools, to keep things simple).
Question: when using WSGI, I know it's Apache and mod_wsgi that will automatically start/stop enough processes running myapp.py as requests come in, but:
how can I manually stop these processes?
more generally, is there a way to monitor them / to know how many processes started by mod_wsgi are currently still running? (One reason, among others, is to check whether the processes terminate after a request or stay running.)
Example:
I made some changes in myapp.py, and I want to restart all the processes running it that were launched by mod_wsgi. (Note: I know that mod_wsgi can watch for changes in the source code and relaunch, but this only works for changes to the .wsgi file, not the .py file. I have already read that touch myapp.wsgi can be a solution for that, but more generally I'd like to be able to stop and restart manually.)
I want to temporarily stop the whole application myapp.py (all instances of it)
I don't want to use service apache2 stop for that, because I also run other websites with Apache, not just this one (I have a few VirtualHosts). For the same reason (I run other websites with Apache, and some client might be downloading a 1 GB file at the same time), I don't want to do service apache2 restart, which would affect all the websites served by Apache.
I'm looking for a cleaner way than kill pid or SIGTERM, etc. (because I read it's not recommended to use signals in this case).
Note: I already read How to do graceful application shutdown from mod_wsgi, it helped, but here it's complementary questions, not a duplicate.
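The touch myapp.wsgi trick can also be scripted from Python; a minimal sketch (the path is whatever your WSGI script file is):

```python
import os

def trigger_reload(wsgi_path):
    # bumping the WSGI script file's modification time makes mod_wsgi's
    # daemon mode reload the application on the next request
    os.utime(wsgi_path, None)
```

For example, trigger_reload('/home/www/wsgi_test/app.wsgi') is equivalent to running touch on the file from the shell.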
My current Python Bottle + Apache + mod_wsgi setup:
Installation:
apt-get install libapache2-mod-wsgi
a2enmod wsgi # might be done automatically by previous line, but just to be sure
Apache config (source: the Bottle docs; a simpler config can be found here):
<VirtualHost *:80>
ServerName example.com
WSGIDaemonProcess yourapp user=www-data group=www-data processes=5 threads=5
WSGIScriptAlias / /home/www/wsgi_test/app.wsgi
<Directory />
Require all granted
</Directory>
</VirtualHost>
There should be up to 5 processes, is that right? As stated earlier in the question: how do I know how many are running, and how do I stop them?
/home/www/wsgi_test/app.wsgi (source: Bottle doc)
import os
from bottle import route, template, default_app
os.chdir(os.path.dirname(__file__))
@route('/hello/<name>')
def index(name):
    return template('<b>Hello {{name}}</b>!', name=name)
application = default_app()
Taken partially from this question: add a display-name to WSGIDaemonProcess so you can find the processes with a command like:
ps aux | grep modwsgi
Add this to your configuration:
Define GROUPNAME modwsgi
WSGIDaemonProcess yourapp user=www-data group=www-data processes=5 threads=5 display-name=%{GROUPNAME}
Update
There are a couple of reasons why ps might not show you the daemon process display-name.
As shown in the docs:
display-name=value Defines a different name to show for the daemon
process when using the ps command to list processes. If the value is
%{GROUP} then the name will be (wsgi:group) where group is replaced
with the name of the daemon process group.
Note that only as many characters of the supplied value can be
displayed as were originally taken up by argv0 of the executing
process. Anything in excess of this will be truncated.
This feature may not work as described on all platforms. Typically it
also requires a ps program with BSD heritage. Thus on some versions of
Solaris UNIX the /usr/bin/ps program doesn’t work, but /usr/ucb/ps
does. Other programs which can display this value include htop.
You could:
Set a display-name of smaller length:
WSGIDaemonProcess yourapp user=www-data group=www-data processes=5 threads=5 display-name=wsws
And try to find them by:
ps aux | grep wsws
Or set it to %{GROUP} and filter using the name of the daemon process group (wsgi:group).
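If you want to script the check, a small helper can count matching daemon processes from ps output (a sketch; pass whatever display-name you configured):

```python
import subprocess

def count_daemons(pattern):
    # list every process's command line and count those containing the
    # display-name configured on WSGIDaemonProcess (e.g. "wsws")
    out = subprocess.run(["ps", "ax", "-o", "pid=,command="],
                         capture_output=True, text=True).stdout
    return sum(1 for line in out.splitlines() if pattern in line)
```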
The way in which processes are managed by mod_wsgi in each mode is described in:
http://modwsgi.readthedocs.io/en/develop/user-guides/processes-and-threading.html
For embedded mode, where your WSGI application is run inside of the Apache child worker processes, Apache manages when processes are created and destroyed based on the Apache MPM settings. Because of how Apache manages the processes, they can be shutdown at any time if there is insufficient request throughput, or more processes could be created if request throughput increases. When running, the same process will handle many requests over time until it gets shutdown. In other words, Apache dynamically manages the number of processes.
Because of this dynamic process management, it is a bad idea to use embedded mode of mod_wsgi unless you know how to tune Apache properly and many other things as well. In short, never use embedded mode unless you have a good amount of experience with Apache and running Python applications with it. You can watch a video about why you wouldn't want to run in embedded mode at:
https://www.youtube.com/watch?v=k6Erh7oHvns
There is also the blog post:
http://blog.dscpl.com.au/2012/10/why-are-you-using-embedded-mode-of.html
So use daemon mode and verify that your configuration is correct and you are in fact using daemon mode by using the check in:
http://modwsgi.readthedocs.io/en/develop/user-guides/checking-your-installation.html#embedded-or-daemon-mode
For daemon mode, the WSGI application runs in a separate set of managed processes. These are created at startup and will run until Apache is restarted, or until reloading of the process is triggered for one of various reasons, including:
The daemon process is sent a direct signal to shutdown by a user.
The code of the application sends itself a signal.
The WSGI script file is modified, which will trigger a shutdown so the WSGI application can be reloaded.
A defined request timeout occurs due to a stuck or long-running request.
A defined maximum number of requests has occurred.
A defined inactivity timeout expires.
A defined timer for periodic process restart expires.
A startup timeout is defined and the WSGI application failed to load in that time.
In these cases, when the process is shutdown, it is replaced.
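Most of these triggers correspond to WSGIDaemonProcess options; a sketch with purely illustrative values (option names as in the mod_wsgi documentation):

```apache
# Each option below maps to one of the reload triggers listed above.
WSGIDaemonProcess yourapp user=www-data group=www-data \
    processes=5 threads=5 \
    request-timeout=60 \
    maximum-requests=1000 \
    inactivity-timeout=300 \
    restart-interval=86400 \
    graceful-timeout=15 \
    startup-timeout=15
```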
More details about the various timeout options and how the processes respond to signals can be found in:
http://modwsgi.readthedocs.io/en/develop/configuration-directives/WSGIDaemonProcess.html
More details about source code reloading and touching of the WSGI script file can be found in:
http://modwsgi.readthedocs.io/en/develop/user-guides/reloading-source-code.html
One item which is documented is how you can incorporate code which will look for any changes to Python code files used by your application. When a change occurs to any of the files, the process will be restarted by sending itself a signal. This should only be used for development and never in production.
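The documented approach boils down to a monitor thread along these lines (a condensed sketch of the idea, for development only; the real code in the mod_wsgi docs is more thorough):

```python
import os
import signal
import sys
import threading
import time

def iter_module_files():
    # yield the source file of every Python module currently loaded
    for module in list(sys.modules.values()):
        path = getattr(module, "__file__", None)
        if path and os.path.isfile(path):
            yield path

def monitor(interval=1.0):
    mtimes = {}
    while True:
        for path in iter_module_files():
            mtime = os.path.getmtime(path)
            if mtimes.setdefault(path, mtime) != mtime:
                # a .py file changed: signal ourselves so mod_wsgi
                # restarts this daemon process
                os.kill(os.getpid(), signal.SIGINT)
                return
        time.sleep(interval)

def start_monitor():
    thread = threading.Thread(target=monitor, daemon=True)
    thread.start()
```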
If you are using mod_wsgi-express in development, which is preferable to hand configuring Apache yourself, you can use the --reload-on-changes option.
If you send a SIGTERM signal to the daemon process, there is a set shutdown sequence in which it waits a few seconds for current requests to finish. If the requests don't finish, the process is shut down anyway. That period of time is dictated by the shutdown timeout. You shouldn't play with that value.
If you send a SIGUSR1 signal to the daemon process, by default it acts just like a SIGTERM signal. If, however, you specify the graceful timeout for shutdown, you can extend how long it will wait for current requests to finish. New requests will still be accepted during that period. The graceful timeout also applies in other cases, such as when the maximum number of requests has been received or the timer for periodic restart has triggered. If you need the timeout used for SIGUSR1 to differ from those cases, define the eviction timeout instead.
As to how to identify the daemon processes to be sent the signal, use the display-name option of WSGIDaemonProcess. Then use ps to identify the processes, or possibly use killall if it works with the modified process name on your platform. Send the daemon processes SIGUSR1 if you want a more graceful shutdown, and SIGTERM if you want them to restart straight away.
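Scripted, that ps/kill combination might look like this (a sketch; find_pids and signal_daemons are hypothetical helpers, and the pattern is your configured display-name):

```python
import os
import signal
import subprocess

def find_pids(pattern):
    # ask pgrep for processes whose command line contains the
    # display-name configured on WSGIDaemonProcess
    result = subprocess.run(["pgrep", "-f", pattern],
                            capture_output=True, text=True)
    return [int(pid) for pid in result.stdout.split()]

def signal_daemons(pids, sig=signal.SIGUSR1):
    # SIGUSR1 = graceful restart, SIGTERM = immediate restart
    for pid in pids:
        os.kill(pid, sig)
```

For example, signal_daemons(find_pids('wsws')) would gracefully restart the daemons; pass signal.SIGTERM instead for an immediate restart.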
If you want to track how long a daemon process has been running, you can use:
import mod_wsgi
metrics = mod_wsgi.process_metrics()
The metrics value will include output like the following for the process the call is made in:
{'active_requests': 1,
'cpu_system_time': 0.009999999776482582,
'cpu_user_time': 0.05000000074505806,
'current_time': 1525047105.710778,
'memory_max_rss': 11767808,
'memory_rss': 11767808,
'pid': 4774,
'request_busy_time': 0.001851,
'request_count': 2,
'request_threads': 2,
'restart_time': 1525047096.31548,
'running_time': 9,
'threads': [{'request_count': 2, 'thread_id': 1},
{'request_count': 1, 'thread_id': 2}]}
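For example, the process uptime can be derived from those fields; a small helper, shown here against a sample dict so it also runs outside mod_wsgi (inside a daemon process you would pass it mod_wsgi.process_metrics()):

```python
def summarize(metrics):
    # seconds since this daemon process (re)started, plus how busy it is
    return {
        "pid": metrics["pid"],
        "uptime": metrics["current_time"] - metrics["restart_time"],
        "requests": metrics["request_count"],
        "active": metrics["active_requests"],
    }

# sample values copied from the output above
sample = {"pid": 4774, "active_requests": 1, "request_count": 2,
          "current_time": 1525047105.710778,
          "restart_time": 1525047096.31548}

print(summarize(sample))
```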
If you just want to know how many processes/threads are used for the current daemon process group you can use:
mod_wsgi.process_group
mod_wsgi.application_group
mod_wsgi.maximum_processes
mod_wsgi.threads_per_process
to get details about the process group. The number of processes is fixed at this time for daemon mode; the name maximum_processes is just for consistency with what it means in embedded mode.
If you need to run code on process shutdown, you should NOT try to define your own signal handlers. If you do, mod_wsgi will actually ignore them, as they would interfere with the normal operation of Apache and mod_wsgi. Instead, if you need to run code on process shutdown, use atexit.register(). Alternatively, you can subscribe to the special events generated by mod_wsgi and trigger something off the process-shutdown event.
Edit: a simpler WSGI config is given in my question Python WSGI handler directly in Apache .htaccess, not in VirtualHost.
Based on Evhz's answer, I made a simple test to check that the processes are still running:
Apache config:
<VirtualHost *:80>
ServerName example.com
<Directory />
AllowOverride All
Require all granted
</Directory>
WSGIScriptAlias / /home/www/wsgi_test/app.wsgi
WSGIDaemonProcess yourapp user=www-data group=www-data processes=5 threads=5 display-name=testwsgi
</VirtualHost>
app.wsgi file:
import os, time
from bottle import route, template, default_app
os.chdir(os.path.dirname(__file__))
@route('/hello/<name>')
def index(name):
    global i
    i += 1
    return template('<b>Hello {{name}}</b>! request={{i}}, pid={{pid}}',
                    name=name, i=i, pid=os.getpid())
i = 0
time.sleep(3) # wait 3 seconds to make the client notice we launch a new process!
application = default_app()
Now access http://www.example.com/hello/you many times:
The initial time.sleep(3) will help you see, from the client browser, exactly when a new process is started, and the request counter i will show how many requests have been served by each process.
The PIDs will correspond to those present in ps aux | grep testwsgi:
Also, the time.sleep(3) will happen at most 5 times (at the startup of each of the 5 processes); after that the processes should run indefinitely, until we restart/stop the server or modify the app.wsgi file (modifying it triggers a restart of the 5 processes, and you can see new PIDs).
[I'll check that by letting my test run now and accessing http://www.example.com/hello/you in 2 days, to see if it's still a previously-launched process or a new one!]
Edit: the next day, the same processes were still up and running. Two days later, however, when reloading the same URL, I noticed new processes had been created... (Is there a time after which a process that receives no requests dies?)

Django talking to multiple servers

There's a lot of info out there and honestly it's a bit too much to digest and I'm a bit lost.
My web app has to do some very resource-intensive tasks. My standard setup right now is the app on one server, with static/media on another for hosting. What I would like to do is set up Celery so I can call task.delay for these resource-intensive tasks.
I'd like to dedicate the resources of entire separate servers to these resource intensive tasks.
Here's the question: How do I setup celery in this way so that from my main server (where the app is hosted) the calls for .delay are sent from the apps to these servers?
Note: these functions will be kicking data back to the database / affecting models, so data integrity is important here. So, how does the retrieved data (assuming the above is possible...) get sent back to the database from the separate servers while preserving integrity?
Is this possible, and if so where on earth do I begin? Information overload!
If not what should I be doing / what am I doing wrong?
The whole point of Celery is to work in exactly this way, i.e. as a distributed task queue. You can spin up workers on as many machines as you like, and the broker - i.e. RabbitMQ - will distribute the tasks to them as necessary.
I'm not sure what you're asking about data integrity, though. Data doesn't get "sent back" to the database; the workers connect directly to the database in exactly the same way as the rest of your Django code.

start only one flask instance using apache + wsgi

I'm using mod_wsgi, Apache and Flask to run my application. I use from yourapplication import app as application to start it. That works fine so far. The problem is that with every request a new instance of my application is created, which leads to the unfortunate situation that my Flask application opens a new database connection but only closes it after about 15 minutes. Since my server allows only 16 open DB connections, it soon starts to block requests. BTW: this does not happen when I run Flask without Apache/mod_wsgi, since it opens only one connection and serves all requests as I want.
What I want: I want to run only one Flask instance which then serves all requests.
The WSGIApplicationGroup directive may be what you're looking for, as long as you have the WSGI app running in daemon mode (otherwise I believe Apache's default behavior is to use prefork, which spins up a process to handle each individual request):
The WSGIApplicationGroup directive can be used to specify which application group a WSGI application or set of WSGI applications belongs to. All WSGI applications within the same application group will execute within the context of the same Python sub interpreter of the process handling the request.
You have to provide an argument to the directive that specifies a name for the application group. There are a few expanding variables: %{GLOBAL}, %{SERVER}, %{RESOURCE} and %{ENV:variable}; or you can specify your own explicit name. %{GLOBAL} is special in that it expands to the empty string, which has the following behavior:
The application group name will be set to the empty string.
Any WSGI applications in the global application group will always be executed within the context of the first interpreter created by Python when it is initialised. Forcing a WSGI application to run within the first interpreter can be necessary when a third party C extension module for Python has used the simplified threading API for manipulation of the Python GIL and thus will not run correctly within any additional sub interpreters created by Python.
I would recommend specifying something other than %{GLOBAL}.
Every process you have mod_wsgi spawn will execute in the same environment. You can then simply control the number of database connections based on the number of processes you have mod_wsgi spawn.
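One way to keep the per-process connection count bounded is a module-level pool created once at import time; a sketch using a plain Queue as a stand-in for real DB connections (connect_fn is whatever your driver provides):

```python
import queue

class ConnectionPool:
    """Fixed-size pool built once per daemon process at import time."""

    def __init__(self, connect_fn, size=3):
        self._pool = queue.Queue(maxsize=size)
        for _ in range(size):
            self._pool.put(connect_fn())

    def acquire(self):
        # blocks if every connection is checked out, so a process can
        # never exceed its configured share of the DB connection limit
        return self._pool.get()

    def release(self, conn):
        self._pool.put(conn)

# hypothetical sizing: with processes=5 and size=3, at most 15 of the
# server's 16 connections can ever be open at once
pool = ConnectionPool(connect_fn=object, size=3)
```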

Can I have some code constantly run inside Django like a daemon

I'm using mod_wsgi to serve a Django site through Apache. I also have some Python code that runs as a background process (daemon?). It keeps polling a server and inserts data into one of the Django models. This works fine, but can this code be part of my Django application and still run constantly in the background? It doesn't need to be a separate process per se, just a part of the Django site that is always active. If so, could you point me to an example or some documentation that would help me accomplish this?
Thanks.
You could either set up a cron job that runs some function you have defined, or - the more advanced and probably recommended method - integrate Celery into your project (which is actually quite easy).
You could create a background thread from the WSGI script when it is first being imported.
import threading
import time

def do_stuff():
    while True:
        time.sleep(60)
        ...  # do periodic job

_thread = threading.Thread(target=do_stuff)
_thread.daemon = True
_thread.start()
For this to work, though, you would have to be using only one daemon process; otherwise each process would be doing the same thing, which you probably do not want.
If you are using multiple processes in the daemon process group, an alternative is to create a special daemon process group whose only purpose is to run this background thread. In other words, that process doesn't actually receive any requests.
You can do this by having:
WSGIDaemonProcess django-jobs processes=1 threads=1
WSGIImportScript /usr/local/django/mysite/apache/django.wsgi \
process-group=django-jobs application-group=%{GLOBAL}
The WSGIImportScript directive says to load that script and run it on startup in the context of the process group 'django-jobs'.
To save having multiple scripts, I have pointed it at what would be your original WSGI script file you used for WSGIScriptAlias. We don't want it to run when it is loaded by that directive though, so we do:
import mod_wsgi

if mod_wsgi.process_group == 'django-jobs':
    _thread = threading.Thread(target=do_stuff)
    _thread.daemon = True
    _thread.start()
Here it looks at the name of the daemon process group and only starts the thread when running within the special daemon process group, the one set up with a single process just for this purpose.
Overall you are just using Apache as a big glorified process manager, albeit one which is already known to be robust. It is a bit of overkill, as this process will consume additional memory on top of the processes accepting and handling requests, but depending on the complexity of what you are doing it can still be useful.
One cute aspect of doing this is that since it is still a full Django application in there, you could map specific URLs to just this process and so provide a remote API to manage or monitor the background task and what it is doing.
WSGIDaemonProcess django-jobs processes=1 threads=1
WSGIImportScript /usr/local/django/mysite/apache/django.wsgi \
process-group=django-jobs application-group=%{GLOBAL}
WSGIDaemonProcess django-site processes=4 threads=5
WSGIScriptAlias / /usr/local/django/mysite/apache/django.wsgi
WSGIProcessGroup django-site
WSGIApplicationGroup %{GLOBAL}
<Location /admin>
WSGIProcessGroup django-jobs
</Location>
Here, all URLs except for stuff under /admin run in 'django-site', with /admin in 'django-jobs'.
Anyway, that addresses the specific question of doing it within the Apache mod_wsgi daemon process as requested.
As pointed out, the alternative is to have a command-line script which sets up and loads Django, does the work, and is executed from a cron job. A command-line script means only transient memory usage per run, but the startup cost per job is higher, since everything needs to be loaded each time.
I previously used a cron job, but I'm telling you, you will switch to Celery after a while.
Celery is the way to go. Plus you can run long tasks asynchronously, which also speeds up the request/response time.

When using paster web server, does it service requests by creating a new thread?

Does paster create a new thread per request?
Can you set the maximum number of threads for paster to use i.e. a thread pool? How can you if this is possible?
Per the docs, paster supports different server choices, depending on the configuration -- including wsgiutils, "the start of support for twisted.web2 ... patches welcome" (that would be an async server instead), and "SCGI, FastCGI and AJP protocols, for connecting an external web server (like Apache) to your application. Both threaded and forking versions are available. This is based on flup."
You can configure maximum number of threads (and/or forked processes) on Apache, for example, and quite independently from paster, by working exclusively on the Apache configuration; clearly that is what you'll want to do if you've picked the flup/Apache/threaded combo.
At (roughly) the other extreme of the simplicity/functionality spectrum, I don't believe wsgiutils, out of the box, can be configured to use a thread pool (i.e., if I'm not mistaken, coding a new server kind around the minimal skeleton that wsgiutils provides would be needed to use a thread pool with it).
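For what it's worth, bolting a bounded thread pool onto a minimal WSGI server is not much code; a stdlib-only sketch (paster/wsgiutils would need their own equivalent of this):

```python
from concurrent.futures import ThreadPoolExecutor
from wsgiref.simple_server import WSGIServer, make_server

class PooledWSGIServer(WSGIServer):
    # at most max_workers requests are handled concurrently; further
    # connections simply queue inside the executor
    pool = ThreadPoolExecutor(max_workers=10)

    def process_request(self, request, client_address):
        self.pool.submit(self._handle, request, client_address)

    def _handle(self, request, client_address):
        try:
            self.finish_request(request, client_address)
        except Exception:
            self.handle_error(request, client_address)
        finally:
            self.shutdown_request(request)

def app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello"]

server = make_server("127.0.0.1", 0, app, server_class=PooledWSGIServer)
```

Tuning max_workers then caps the number of concurrent request threads, which is the thread-pool knob the question asks about.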
Clearly, if you need any kind of advanced configuration options, Apache's enormous power and flexibility are hard to beat:-).
