celery -A app worker -Q priority_high -B -l debug --purge -n priority_high_worker
celery -A app worker -Q default -B -l debug --purge -n default_worker
celery -A app beat -l info
As of now we are running the three commands in screen sessions. What is a more production-ready way of running these commands?
The easiest way to create daemons is with supervisord. Sentry, which also uses Django and Celery, recommends using supervisord to run the workers - you can adjust the configuration to suit your setup:
[program:celery-priority-high]
directory=/www/my_app/
command=/path/to/celery -A app worker -Q priority_high -B -l debug --purge -n priority_high_worker
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=syslog
stderr_logfile=syslog
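The other two commands from the question get their own program blocks in the same style (a sketch mirroring the paths above; the program names are illustrative):

; program names and paths below mirror the question's commands; adjust to your setup
[program:celery-default]
directory=/www/my_app/
command=/path/to/celery -A app worker -Q default -B -l debug --purge -n default_worker
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=syslog

[program:celery-beat]
directory=/www/my_app/
command=/path/to/celery -A app beat -l info
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=syslog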
You can, of course, also run Django itself using this method.
If supervisord is too much bloat for your needs, you can also create init scripts for your init system of choice (e.g. systemd).
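For example, a minimal systemd unit for the high-priority worker might look like this (a sketch; the unit name, user, and paths are assumptions to adapt):

# User, WorkingDirectory, and the celery path are placeholders; adapt to your deployment
[Unit]
Description=Celery high-priority worker
After=network.target

[Service]
Type=simple
User=www-data
WorkingDirectory=/www/my_app/
ExecStart=/path/to/celery -A app worker -Q priority_high -B -l debug --purge -n priority_high_worker
Restart=always

[Install]
WantedBy=multi-user.target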
I want to deploy a dev server, but I have a problem with starting Celery and Gunicorn. I'm using the following scripts:
celery.sh
#!/bin/bash
cd /home/dev/app
pipenv run celery -A config worker -B -l info
and start.sh for Gunicorn:
#!/bin/bash
cd /home/dev/app
pipenv run gunicorn config.wsgi:application -b 127.0.0.1:8005 -w 2 -t 60 \
--env DJANGO_SETTINGS_MODULE=$DJANGO_SETTINGS_MODULE \
--env DSN=$SENTRY_DSN \
--env DATABASE_URL=$DATABASE_URL \
--log-file - \
--error-logfile /home/dev/app/errors.log
Also, here is my supervisor config:
[program:back]
directory=/home/dev/app/
command=/home/dev/bin/start
user=dev
autostart=true
autorestart=true
redirect_stderr=true
stopsignal=QUIT
stopasgroup=true
killasgroup=true
[program:celery]
directory=/home/dev/app/
command=/home/dev/bin/celery
user=dev
autostart=true
autorestart=true
redirect_stderr=true
stopsignal=QUIT
stopasgroup=true
killasgroup=true
When I run sudo supervisorctl start celery, I get the following error:
/home/dev/bin/celery: line 3: pipenv: command not found
I also added the following lines, as the pipenv documentation suggests (https://pipenv.readthedocs.io/en/latest/diagnose/):
[supervisord]
environment=LC_ALL='en_US.UTF-8',LANG='en_US.UTF-8'
UPDATE
Changed my supervisor config:
[program:back]
directory=/home/dev/app/
command=pipenv run gunicorn config.wsgi:application --bind 127.0.0.1:8005
user=dev
autostart=true
autorestart=true
redirect_stderr=true
stopsignal=QUIT
stopasgroup=true
killasgroup=true
[program:celery]
directory=/home/dev/app/
command=pipenv run celery -A config:celery_app worker -B -l info
user=dev
autostart=true
autorestart=true
redirect_stderr=true
stopsignal=QUIT
stopasgroup=true
killasgroup=true
And now I'm getting an error:
back: ERROR (no such file)
You need to give the explicit path to gunicorn. I'm not sure about pipenv, but the error you are getting happens because supervisor tries to find gunicorn in the directory. You should change your config file to something like this:
[program:back]
directory=/home/dev/app/
command=/path/to/pipenv run /path/to/gunicorn config.wsgi:application --bind 127.0.0.1:8005
Then you must restart supervisord in order for it to load the new settings:
sudo service supervisord reload
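Depending on how supervisord is managed on your system, reloading just the configuration may also be enough (these are standard supervisorctl subcommands):

# re-read the config files, then apply any changes
sudo supervisorctl reread
sudo supervisorctl update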
In your config file, change command= to bash -c followed by the full path and the file to execute. This should do the trick.
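For example, the celery program block might become something like this (a sketch; the absolute pipenv path is an assumption, find yours with which pipenv):

[program:celery]
directory=/home/dev/app/
; /usr/local/bin/pipenv is a placeholder; replace with the output of `which pipenv`
command=bash -c "/usr/local/bin/pipenv run celery -A config:celery_app worker -B -l info"
user=dev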
I cannot see the tasks in the Django admin.
I followed the steps in https://github.com/jezdez/django-celery-monitor
I used
celery==4.1.1
django-celery-results==1.0.1
django-celery-beat==1.0.1
django_celery_monitor==1.1.2
I ran manage.py migrate celery_monitor and the migrations went well. Then I ran celery -A lbb events -l info --camera django_celery_monitor.camera.Camera --frequency=2.0 and celery -A lbb worker -l info in separate shells. But I still cannot see the tasks I ran in the celery-monitor > Tasks table.
Running the celery worker with -E to force task events worked for me:
celery -A proj worker -l info -E
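If you would rather not pass the flag on every invocation, the equivalent setting can go in the Celery configuration instead (a sketch for Celery 4; app is assumed to be the Celery instance defined in your project, e.g. in lbb/celery.py):

# enable task events without the -E flag (worker_send_task_events is the Celery 4 setting name)
app.conf.worker_send_task_events = True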
The scenario:
Two unrelated web apps with celery background tasks running on same server.
One RabbitMQ instance
Each web app has its own virtualenv (including celery). Same celery version in both virtualenvs.
I use the following command lines to start a worker and a beat process for each application.
celery -A firstapp.tasks worker
celery -A firstapp.tasks beat
celery -A secondapp.tasks worker --hostname foobar
celery -A secondapp.tasks beat
Now everything seems to work OK, but in the worker process of secondapp I get the following error:
Received unregistered task of type 'firstapp.tasks.do_something'
Is there a way to isolate the two Celery setups from each other?
I'm using Celery version 3.1.16, BTW.
I believe I fixed the problem by creating a RabbitMQ vhost and configuring the second app to use that one.
Create vhost (and set permissions):
sudo rabbitmqctl add_vhost /secondapp
sudo rabbitmqctl set_permissions -p /secondapp guest ".*" ".*" ".*"
And then change the command lines for the second app:
celery -A secondapp.tasks -b amqp://localhost//secondapp worker
celery -A secondapp.tasks -b amqp://localhost//secondapp beat
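To avoid repeating -b on every invocation, the broker can also be set in the second app's configuration (a sketch; BROKER_URL is the setting name for the Celery 3.x series the question uses):

# point the second app at its own vhost (the URL mirrors the -b flag above)
BROKER_URL = 'amqp://localhost//secondapp'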
In my Procfile I have the following:
worker: cd appname && celery -A appname worker -l info --app=appname.celery_setup:app
However, when my app submits a task, it never runs. Still, I think the celery worker is at least sort of working, because
heroku logs --app appname
every so often gives me one of these:
2016-07-22T07:53:21+00:00 app[heroku-redis]: source=REDIS sample#active-connections=14 sample#load-avg-1m=0.03 sample#load-avg-5m=0.09 sample#load-avg-15m=0.085 sample#read-iops=0 sample#write-iops=0 sample#memory-total=15664884.0kB sample#memory-free=13458244.0kB sample#memory-cached=187136kB sample#memory-redis=566800bytes sample#hit-rate=0.17778 sample#evicted-keys=0
Also, when I open up bash by running
heroku run bash --app appname
and then type in
cd appname && celery -A appname worker -l info --app=appname.celery_setup:app
It immediately tells me the task has been received and then executes it. I would like this to happen without me having to manually log in and execute the command - is that possible? Do I need a paid account on Heroku to do that?
I figured it out. It turns out you also have to run
heroku ps:scale worker=1 --app appname
Or else you won't actually be running a worker.
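You can verify that the worker dyno is actually up with the standard process listing (assuming the same app name):

# lists your dynos and their current state
heroku ps --app appname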
I am trying to set up celery with supervisord on an AWS instance. I have the project up and running in a virtual environment, and I tested celery with python2.7 manage.py celery worker. But I am getting an Unknown command: 'celery' error while running it with supervisor.
My supervisor configurations are below.
1)
[program:celery-eatio]
environment = PYTHONPATH="/home/ubuntu/eatio_env/lib/python2.7:/home/ubuntu/eatio-server:/home/ubuntu/eatio_env/lib/python2.7/site-packages:$PYTHONPATH",DJANGO_SETTINGS_MODULE="eatio_web.settings"
command=/home/ubuntu/eatio-server/manage.py celery worker
user=ubuntu
numprocs=1
stdout_logfile=/home/ubuntu/celery_error.log
stderr_logfile=/home/ubuntu/celery_out.log
autostart=true
autorestart=true
2)
[program:celery-eatio]
command=/home/ubuntu/eatio_env/bin/python2.7 /home/ubuntu/eatio-server/manage.py celery worker --loglevel=info -c 1 -E -B -Q celery_periodic -n periodic_worker
directory=/home/ubuntu/eatio-server
user=ubuntu
autostart=true
autorestart=true
stdout_logfile=/home/ubuntu/celery_error.log
stderr_logfile=/home/ubuntu/celery_out.log