Running an arbitrary Django task along with "runserver" forever - python

For a Django-based server I need to run scripts periodically, in a fashion similar to cronjobs. I want to avoid explicit cronjobs and instead tie these periodic tasks into the HTTP server's initialization - that is, when I run either manage.py runserver or a very similar management command, two other processes start alongside the HTTP daemon and perform my tasks periodically.
I already created management commands for these scripts. What are my options?
My best guess is starting two threads, either in AppConfig.ready() as suggested here or somehow in manage.py itself. I'm not entirely sure what caveats that approach has, though.

Since asking this question, I have realized that starting threads is my only real option, and that I should do it explicitly in either asgi.py or wsgi.py, depending on my production setup - the runserver management command is not suitable for production.
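As a minimal sketch of that approach in wsgi.py (the settings module and command names below are placeholders; adapt them to your project):
import os
import threading
import time

from django.core.wsgi import get_wsgi_application

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myproject.settings")
application = get_wsgi_application()

def run_periodically(interval_seconds, command_name):
    # run a management command forever at a fixed interval,
    # in a daemon thread so it never blocks server shutdown
    from django.core.management import call_command

    def loop():
        while True:
            call_command(command_name)
            time.sleep(interval_seconds)

    threading.Thread(target=loop, daemon=True).start()

# start the two task threads alongside the HTTP daemon
run_periodically(60, "my_first_task")
run_periodically(300, "my_second_task")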

Related

what is a robust way to execute long-running tasks/batches under Django?

I have a Django app that is intended to be run on Virtualbox VMs on LANs. The basic user will be a savvy IT end-user, not a sysadmin.
Part of that app's job is to connect to external databases on the LAN, run some python batches against those databases and save the results in its local db. The user can then explore the systems using Django pages.
Run time for the batches isn't all that long - minutes, potentially tens of minutes, but not seconds. Runs are infrequent; you could go days without needing a refresh.
This is not celery's normal use case of long tasks which will eventually push the results back into the web UI via ajax and/or polling. It is more similar to a dev's occasional use of the django-admin commands, but this time intended for an end user.
The user should be able to initiate a run of one or several of those batches when they want in order to refresh the calculations of a given external database (the target db is a parameter to the batch).
Until the batches are done for a given db, the app really isn't usable. You can access its pages, but many functions won't be available.
It is very important, from a support point of view, that the batches remain easily runnable at all times. Dropping down to the VM's shell over SSH would probably require frequent handholding, which wouldn't be good - it is best if they can be launched from the Django web pages.
What I currently have:
Each batch is in its own script.
I can run it on the command line (via if __name__ == "__main__":).
The batches are also hooked up as celery tasks and work fine that way.
Given the way I have written them, it would be relatively easy for me to allow running them from subprocess calls in Python. I haven't really looked into it, but I suppose I could make them into django-admin commands as well.
The batches already have their own rudimentary status checks. For example, they can look at the calculated data and tell whether they have been run and display that in Django pages without needing to look at celery task status backends.
The batches themselves are relatively robust and I can make them more so. This is about their launch mechanism.
What's not so great:
In my Mac dev environment I find the celery/celerycam/rabbitmq stack somewhat unstable. Sometimes the rabbitmq daemon balloons in CPU/RAM use and needs to be terminated. That mightily confuses the celery processes, and I find I have to kill -9 various tasks and relaunch them manually. Sometimes celery still works but celerycam doesn't, so there are no task updates. Some of these issues may be OS X specific, or may be due to the DEBUG flag being switched on for now, which celery warns about.
So then, until the whole celery stack has been reset, I need to run the batches on the command line, which is what I was trying to avoid.
This might be acceptable on a normal website, with an admin watching over it. But I can't have that happen on a remote VM to which only the user has access.
Given that these are somewhat fire-and-forget batches, I am wondering if celery isn't overkill at this point.
Some options I have thought about:
writing a cleanup shell/Python script to restart rabbitmq/celery/celerycam and generally make the stack more robust, i.e. whatever is required to make celery and company more stable. I've already used psutil to figure out whether the rabbit/celery processes are running and display their status in Django.
running the batches via subprocess instead and avoiding celery. What about django-admin commands here - does that make a difference? They still need to be launched from the web pages (see the sketch after this list).
an alternative task/process manager to celery, with less capability but also fewer moving parts?
not using subprocess but relying on Python's multiprocessing module? To be honest, I have no idea how that compares to launching via subprocess.
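A hedged sketch of the subprocess option above - the command name, view wiring, and working directory are all illustrative assumptions:
import subprocess
import sys

from django.http import HttpResponse

def launch_batch(request, target_db):
    # fire-and-forget: the batch runs in its own process and records its
    # own status, as described above; assumes the web process's working
    # directory is the project root where manage.py lives
    subprocess.Popen([sys.executable, "manage.py", "run_batch", target_db])
    return HttpResponse("Batch started for %s" % target_db)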
Environment:
nginx, WSGI, Ubuntu on VirtualBox, Chef to build the VMs.
I'm not sure why your celery configuration is unstable, but it sounds like celery is still the best fit for your problem. I'm using Redis as the queue system, and in my experience it works better than rabbitmq. Maybe you can try it and see if it improves things.
Otherwise, just use cron as the driver for your periodic tasks. Let it run your script periodically and update the database; your UI component will poll the database with no conflict.
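For example, a sketch of the cron approach, assuming the batches are exposed as management commands (the command name and schedule here are illustrative):
# crontab entry (paths assumed):
# */30 * * * * /path/to/venv/bin/python /path/to/manage.py refresh_batches
from django.core.management.base import BaseCommand

class Command(BaseCommand):
    help = "Refresh calculations from the external databases"

    def handle(self, *args, **options):
        # placeholder: call into the existing batch scripts here and
        # write their results to the local DB; the UI polls the DB
        self.stdout.write("refreshing batches...")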

Long running tasks in Pyramid web app

I need to run some tasks in the background of a web app (checking code out, etc.) without blocking the views.
The twist on the typical queue/Celery scenario is that I have to ensure the tasks survive even a web app crash or restart, and keep going until they complete, whatever their final result.
I was thinking about recording the parameters for a multiprocessing.Pool in a database and restarting all the incomplete tasks at web app restart. It's doable, but I'm wondering if there's a simpler or more cost-effective approach?
UPDATE: Why not Celery itself? Well, I used Celery in some projects and it's really a great solution, but for this task it's on the big side: it requires a separate server, communication, etc., while all I need is spawning a few processes/threads, doing some work in them (git clone ..., svn co ...) and checking whether they succeeded or failed. Another issue is that I need the solution to be as small as possible since I have to make it follow elaborate corporate guidelines, procedures, etc., and the human administrative and bureaucratic overhead I'd have to go through to get Celery onboard is something I'd prefer to avoid if I can.
I would suggest you use Celery.
Celery does not require its own server, you can have a worker running on the same machine. You can also have a "poor man's queue" using an SQL database instead of a "real" queue/messaging server such as RabbitMQ - this setup would look very much like what you're describing, only with a separate process doing the long-running tasks.
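A minimal sketch of that poor man's queue, using kombu's SQLAlchemy transport as the broker (the broker URL and the task itself are illustrative):
from celery import Celery

# database-backed broker instead of RabbitMQ; requires kombu's
# SQLAlchemy transport (the sqlite path here is just an example)
app = Celery("tasks", broker="sqla+sqlite:///celery_broker.sqlite")

@app.task
def checkout_code(url, dest):
    import subprocess
    subprocess.check_call(["git", "clone", url, dest])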
The problem with starting long-running tasks from the webserver process is that, in a production environment, the web "workers" are normally managed by the webserver - multiple workers can be spawned or killed at any time. The viability of your approach would depend heavily on the web server you're using and its configuration. Also, with multiple workers each trying to run a task, you may have concurrency issues.
Apart from Celery, another option is to look at uWSGI's spooler subsystem, especially if you're already using uWSGI.
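A hedged sketch of a spooler task, assuming uWSGI is started with a spooler directory (e.g. --spooler ./tasks); the function name and arguments are illustrative. Spool files persist on disk, so pending tasks survive an app crash or restart, which matches your requirement:
import subprocess
from uwsgidecorators import spool

@spool
def checkout(arguments):
    # spooled arguments arrive as a dict; decode defensively, since some
    # uWSGI/Python 3 combinations deliver byte strings
    args = {k.decode() if isinstance(k, bytes) else k:
            v.decode() if isinstance(v, bytes) else v
            for k, v in arguments.items()}
    subprocess.check_call(["git", "clone", args["url"], args["dest"]])

# enqueue from a view:
# checkout.spool(url="https://example.org/repo.git", dest="/tmp/checkout")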

django server sharing scope with celery workers

I am working on a web application that uses a permanent object, MyService. Using a web interface I dynamically update its state and monitor its behavior. Now I would like to call one of its methods periodically. I was thinking of using celery's PeriodicTask, but ran into some scope issues. It seems I need to execute three different processes:
python manage.py runserver
python manage.py celery worker
python manage.py celerybeat
The problem is that even if I ensure that MyService is a singleton that can be safely used by more than one thread, celery creates its own fresh copy of the object. Is there a way I could share this object between the django server and the celery main process? I tried to find a way to start celery from within a django script, but so far with no success. I would appreciate any help.
If you need to share something between multiple processes, or maybe even multiple machines (e.g. your workers could run on a separate machine), the best (and probably easiest) way to share information is an external service.
In the simplest case you could use Django's DB. If that turns out not to be suitable - for example, if you have a heavy write load - you can use something like Redis or Memcached (which you can also talk to via Django's caching API). These can handle a big write load, and you can use Redis as a queue for celery as well.
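As a minimal sketch, keeping MyService's state in Django's cache so the web process and the celery workers see the same data (the key name is illustrative; assumes a shared backend such as Redis or Memcached is configured in CACHES):
from django.core.cache import cache

STATE_KEY = "myservice_state"

def save_state(state):
    # timeout=None means the entry never expires
    cache.set(STATE_KEY, state, timeout=None)

def load_state():
    return cache.get(STATE_KEY, {})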

How to start Django from code in background?

One can easily start Django using a management command like this:
management.call_command('runserver', interactive=False)
But it actually blocks execution.
Is there any workaround apart from subprocess/threading/multiprocessing?
I mean, how do I do it in a more native fashion?
A management command is not "starting django".
You "start django" by deploying on any number of web servers, each of which has methods to run in the background.
https://docs.djangoproject.com/en/dev/howto/deployment/
Dynamically deploying django isn't something I've seen, but I suppose you could write some scripts that generate webserver configuration files.
manage.py runserver should never be used in production environments.
If that was just an example, and you actually want to run other asynchronous management commands, the accepted community answer is to use a task queue like Celery.
http://docs.celeryproject.org/en/latest/django/
You could then fire off 10000 non-blocking management commands to be consumed "in the future" at some point by celery workers.
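A hedged sketch of such a task (the task name is illustrative). Note that wrapping runserver itself this way would just block a worker forever, so this fits short-lived commands:
from celery import shared_task
from django.core import management

@shared_task
def call_command_async(name, *args, **options):
    # runs any management command on a celery worker
    # instead of blocking the caller
    management.call_command(name, *args, **options)

# fire-and-forget from anywhere in the app:
# call_command_async.delay("my_batch_command")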

Testing a Django app in many separate threads

I'd like to run my Django app's tests in several threads (possibly dozens) in parallel. This is because my app spends almost all of its time waiting for remote requests, and I reckon that if I run the tests in parallel, they would all work at the same time without slowing each other down, and the whole suite would be over pretty quickly.
But... Tests are currently running with Django's runserver, which is single-threaded. So it won't be able to serve dozens of requests in parallel.
(I use Django's ./manage.py test with django_nose to invoke the tests.)
One idea I have is to use devserver instead. The question is, will it automatically be used when invoking ./manage.py test?
And another question is: I ran into devserver rather randomly, and I don't know whether it has any competitors that might be better. Does it?
use uWSGI
pip install uwsgi
Create an .ini file for your project:
[uwsgi]
# set the http port
http = :8000
# change to django project directory
chdir = /var/www/myapp
# add /var/www to the pythonpath, in this way we can use the project.app format
pythonpath = /var/www
# set the project settings name
env = DJANGO_SETTINGS_MODULE=myapp.settings
# load django
module = django.core.handlers.wsgi:WSGIHandler()
Start it with the built-in HTTP server:
uwsgi --ini django.ini --async 10
async — the number of async cores, i.e. how many requests are handled concurrently
http://projects.unbit.it/uwsgi/wiki/Quickstart
http://projects.unbit.it/uwsgi/wiki/Doc095
I've recently begun delving into django-celery, an asynchronous task queue for Django. It lets you queue up tasks to run asynchronously so that you don't have to wait for responses. It's simple to install and get started with, and it would let your whole application use asynchronous queueing, not just your test suite.
http://django-celery.readthedocs.org/en/latest/getting-started/index.html
