How do I start a redis queue worker on Django start?

I decided I need an asynchronous queue system, and I am setting up Redis/RQ/django-rq. I am wondering how I can start workers in my project.
django-rq provides a management command, which is great; it looks like:
python manage.py rqworker high default low
But is it possible to start the worker when you start the Django instance, or is it something I will always have to start manually?
Thanks.

Django operates inside the request-response cycle and is started by incoming requests, so it is a bad idea to attach such a command to Django startup.
Instead, I would recommend looking at supervisord - a process manager that can automate launching services at system start, among other things.
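For example, a minimal supervisord program entry for the rqworker command might look like this (the paths and project name are placeholders for your own setup):
; /etc/supervisor/conf.d/rqworker.conf (hypothetical path and names)
[program:rqworker]
; run the worker under the project's virtualenv
command=/opt/venv/bin/python /opt/myproject/manage.py rqworker high default low
directory=/opt/myproject
; start at boot and restart the worker if it dies
autostart=true
autorestart=true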

When I host a Django project on Heroku, Heroku provides a Procfile in which you can specify what to start with the project.
This is my Procfile:
web: gunicorn RestApi.wsgi
worker: python manage.py rqworker default

Related

Run Django with python as daemon

We have a web application running with Django, Python and PostgreSQL. We are also using virtualenv.
To start the web service, we first activate the virtualenv and then start Python as a service on port 8080 with nohup.
But after some time the nohup process dies. Is there any way to launch the service as a daemon like Apache, or to use something like monit?
I am new to this, please excuse my mistakes.
The runserver command should only be used in testing environments, and as @Alasdair said, the Django docs already have good information on that topic.
I would suggest using gunicorn as the WSGI server, with nginx as a reverse proxy; more information is available in the gunicorn documentation.
I would also suggest using supervisor to monitor and control your gunicorn workers; more information can be found in the supervisor documentation.
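As a rough sketch (myproject and the port are placeholders), the gunicorn side of that setup is a single command, which supervisor would then keep running:
gunicorn myproject.wsgi:application --workers 3 --bind 127.0.0.1:8000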
It may be a good idea to deploy your application using Apache or nginx. There is official Django documentation on how to do it with Apache: https://docs.djangoproject.com/en/1.10/howto/deployment/wsgi/modwsgi/
Apache does support virtual environments - just add python-home=<path_to_your_virtual_env> to the WSGIDaemonProcess directive when using daemon mode of mod_wsgi:
WSGIDaemonProcess django python-path=/opt/portal/src/ python-home=/opt/venv/django home=/opt/portal/
Best practice for how to use mod_wsgi and virtual environments is explained in:
http://modwsgi.readthedocs.io/en/develop/user-guides/virtual-environments.html
I was able to do it, but forgot to update the answer. If anyone is looking for the same thing, they can follow this.
The best way to run a Django app in production is with
django + gunicorn + supervisor + nginx.
I used gunicorn, a Python WSGI HTTP server for UNIX that lets you control thread count, timeout settings and much more. gunicorn was running on a Unix socket; it could have run on a TCP port, but we used a socket to reduce TCP overhead.
Supervisor is used to run the gunicorn script, as supervisor is a simple tool for controlling your processes.
And with nginx as a reverse proxy in front, our Django site was live.
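As a sketch of the two key pieces (the socket path and project name are made up for illustration), gunicorn binds to a Unix socket:
gunicorn myproject.wsgi:application --workers 3 --bind unix:/run/gunicorn.sock
and nginx proxies requests to that socket:
server {
    listen 80;
    server_name example.com;
    location / {
        proxy_pass http://unix:/run/gunicorn.sock;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}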
For more details, see this blog post:
http://michal.karzynski.pl/blog/2013/06/09/django-nginx-gunicorn-virtualenv-supervisor/

How to start Django from code in background?

One can easily start Django using a management command like this:
from django.core import management
management.call_command('runserver', interactive=False)
But it actually blocks execution.
Is there any workaround apart from subprocess/threading/multiprocessing?
I mean, how do I do this in a more native fashion?
A management command is not "starting Django".
You "start Django" by deploying it on any number of web servers, each of which has methods to run in the background.
https://docs.djangoproject.com/en/dev/howto/deployment/
Dynamically deploying Django isn't something I've seen, but I suppose you could write some scripts that generate webserver configuration files.
manage.py runserver should never be used in production.
If that was just an example, and you actually want to run other asynchronous management commands, the accepted community answer is to use a task queue like Celery.
http://docs.celeryproject.org/en/latest/django/
You could then fire off 10,000 non-blocking tasks to be consumed "in the future" at some point by Celery workers.
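For illustration, here is a minimal sketch of such a task (the task name and arguments are made up; shared_task requires Celery 3.1+, older setups used the @task decorator instead):
# tasks.py in your Django app
from celery import shared_task

@shared_task
def process_record(record_id):
    # the body runs in a worker process, not in the caller
    ...

# the caller returns immediately; a worker picks the job up later
process_record.delay(42)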

Redis, celeryd and Apache

I'm a bit new to Redis and Celery. Do I need to restart celeryd and Redis every time I restart Apache? I'm using Celery and Redis with a Django project hosted on WebFaction.
Thanks for the info in advance.
Provided you're running daemon processes of Redis and Celery, you do not need to restart them when you restart Apache.
Generally, you will only need to restart them when you make configuration changes to either Redis or Celery, as the applications are dependent on each other.
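For example, on a typical Linux box with init scripts (the exact script names depend on your setup; WebFaction uses per-user scripts rather than system-wide ones), the three services restart independently:
sudo /etc/init.d/apache2 restart        # Apache alone; Redis and Celery keep running
sudo /etc/init.d/redis-server restart   # only needed after changing redis.conf
sudo /etc/init.d/celeryd restart        # only needed after changing Celery settings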

Testing a Django app in many separate threads

I'd like to run my Django app's tests in several threads (possibly dozens) in parallel. This is because my app spends almost all of its time waiting for remote requests, and I reckon that if I run the tests in parallel, they would all work at the same time without slowing each other down, and the whole suite would be over pretty quickly.
But... Tests are currently running with Django's runserver, which is single-threaded. So it won't be able to serve dozens of requests in parallel.
(I use Django's ./manage.py test with django_nose to invoke the tests.)
One idea I have is to use devserver instead. The question is, will it automatically be used when invoking ./manage.py test?
And another question is: I ran into devserver rather randomly, and I don't know whether it has any competitors that might be better. Does it?
Use uWSGI:
pip install uwsgi
Create a .ini file for your project:
[uwsgi]
# set the http port
http = :8000
# change to django project directory
chdir = /var/www/myapp
# add /var/www to the pythonpath, in this way we can use the project.app format
pythonpath = /var/www
# set the project settings name
env = DJANGO_SETTINGS_MODULE=myapp.settings
# load django
module = django.core.handlers.wsgi:WSGIHandler()
Start it with the built-in HTTP server:
uwsgi --ini django.ini --async 10
The --async option sets the number of async cores, i.e. how many requests uWSGI can handle concurrently.
http://projects.unbit.it/uwsgi/wiki/Quickstart
http://projects.unbit.it/uwsgi/wiki/Doc095
I've recently begun delving into django-celery, which is an asynchronous task queue for Django. It allows you to queue up tasks to run asynchronously so that you don't have to wait for responses. It's simple to install and get started with, and it would allow your application to utilize asynchronous queueing instead of just your test suite.
http://django-celery.readthedocs.org/en/latest/getting-started/index.html

Celery with Django - deployment

I am considering using Celery in my project. I found a lot of information about how to use it, etc. What I am interested in is how to deploy/package my solution.
I need to run two components - the Django app and the celeryd worker (the component that sends emails). For example, I would like my Django app to use an email_ticket task that would email support tickets. I create tasks.py in the Django app:
from celery import task  # on older django-celery installs: from celery.task import task

@task
def email_ticket(sender, message):  # 'from' is a reserved word in Python, so the argument is renamed here
    ...
Do I deploy my Django app and then just run celeryd as a separate process from the same path?
./manage.py celeryd ...
What about workers on different servers? Do I deploy the whole Django application and run only celeryd? I understand I could use Celery only for the worker, but I would like to use celerycam and celerybeat.
Any feedback is appreciated. Thanks
This is covered in Celery's daemonization documentation. The gist is that you need to download some init scripts and set up some config. Once that's done, celeryd will start on boot and you'll be off and running.
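As a hedged sketch of that config (the paths, node name, and user are placeholders; see the daemonization docs for the full list of variables), /etc/default/celeryd might contain:
# names of the worker nodes to start
CELERYD_NODES="worker1"
# where manage.py lives
CELERYD_CHDIR="/opt/myproject"
# extra worker options
CELERYD_OPTS="--time-limit=300 --concurrency=8"
# unprivileged user/group to run the workers as
CELERYD_USER="celery"
CELERYD_GROUP="celery"
# log and pid file locations
CELERYD_LOG_FILE="/var/log/celery/%n.log"
CELERYD_PID_FILE="/var/run/celery/%n.pid"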
