This is the file provided at https://github.com/ask/django-celery/blob/master/contrib/supervisord/celeryd.conf. How can I run this conf file?
I am running my Django app using gunicorn.
; =======================================
; celeryd supervisor example for Django
; =======================================
[program:celery]
command=/path/to/project/manage.py celeryd --loglevel=INFO
directory=/path/to/project
user=nobody
numprocs=1
stdout_logfile=/var/log/celeryd.log
stderr_logfile=/var/log/celeryd.log
autostart=true
autorestart=true
startsecs=10
; Need to wait for currently executing tasks to finish at shutdown.
; Increase this if you have very long running tasks.
stopwaitsecs = 600
; if rabbitmq is supervised, set its priority higher
; so it starts first
priority=998
Thanks
That configuration file can't be run on its own; it is meant to be used with supervisord.
You need to install supervisord (if you have pip, use pip install supervisor), generate a configuration file for it with echo_supervisord_conf, and then paste the contents of the file above into that configuration file, as described in the supervisord documentation under Adding a Program. Note that sudo echo_supervisord_conf > /etc/supervisord.conf does not work as intended, because the redirection is performed by your unprivileged shell rather than by sudo; pipe through sudo tee instead.
So basically run the following at your shell:
pip install supervisor
echo_supervisord_conf | sudo tee /etc/supervisord.conf
wget -O - -o /dev/null https://raw.github.com/ask/django-celery/master/contrib/supervisord/celeryd.conf | sudo tee -a /etc/supervisord.conf
sudo $EDITOR /etc/supervisord.conf
and edit the config file to your heart's content.
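Once the [program:celery] block is in place, start the supervisord daemon against that file and manage the worker with supervisorctl. A minimal sketch, using the program name from the config above:
# Start the supervisord daemon with the new configuration.
sudo supervisord -c /etc/supervisord.conf
# Confirm that the celery program came up and stayed up.
sudo supervisorctl -c /etc/supervisord.conf status celery
# Restart the worker after changing the config or your task code.
sudo supervisorctl -c /etc/supervisord.conf restart celery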
Related
I have a Django application which uses Celery and which I'm moving onto AWS; I upgraded from Python 2.7 to 3.6 at the same time.
I'm having trouble getting Celery to run on AWS, with this error:
Application deployment failed at 2018-08-06T14:02:02Z with exit status 127 and error: Hook /opt/elasticbeanstalk/hooks/appdeploy/post/run_supervised_celeryd.sh failed.
[program:django-celery-worker]
; Set full path to celery program if using virtualenv
command=/opt/python/run/venv/bin/python /opt/python/current/app/manage.py celery worker -A myapp --loglevel=INFO
directory=/opt/python/current/app
user=nobody
numprocs=1
stdout_logfile=/var/log/celery-worker.log
stderr_logfile=/var/log/celery-worker.log
autostart=true
autorestart=true
startsecs=10
; Need to wait for currently executing tasks to finish at shutdown.
; Increase this if you have very long running tasks.
stopwaitsecs = 600
; When resorting to send SIGKILL to the program to terminate it
; send SIGKILL to its whole process group instead,
; taking care of its children as well.
killasgroup=true
; if rabbitmq is supervised, set its priority higher
; so it starts first
priority=998
environment=PYTHONPATH="/opt/python/current/app/:",PATH="/opt/python/run/venv/bin/:%(ENV_PATH)s",RDS_PORT="5432",RDS_PASSWORD="stmtrx321",RDS_DB_NAME="ebdb",RDS_USERNAME="postgres",RDS_HOSTNAME="aan9bvuq3a4wd1.cxzlzbczjyp1.us-east-1.rds.amazonaws.com"
/opt/elasticbeanstalk/hooks/appdeploy/post/run_supervised_celeryd.sh: line 47: supervisorctl: command not found
/opt/elasticbeanstalk/hooks/appdeploy/post/run_supervised_celeryd.sh: line 50: supervisorctl: command not found
/opt/elasticbeanstalk/hooks/appdeploy/post/run_supervised_celeryd.sh: line 53: supervisorctl: command not found
Incorrect application version "app-b38e-180802_150757" (deployment 23). Expected version "app-0def-180806_140451" (deployment 26).
supervisor is installed.
This is my config file:
files:
  "/opt/elasticbeanstalk/hooks/appdeploy/post/run_supervised_celeryd.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/usr/bin/env bash

      # Get django environment variables
      celeryenv=`cat /opt/python/current/env | tr '\n' ',' | sed 's/export //g' | sed 's/$PATH/%(ENV_PATH)s/g' | sed 's/$PYTHONPATH//g' | sed 's/$LD_LIBRARY_PATH//g'`
      celeryenv=${celeryenv%?}

      # Create celery configuration script
      celeryconf="[program:django-celery-worker]
      ; Set full path to celery program if using virtualenv
      command=/opt/python/run/venv/bin/python /opt/python/current/app/manage.py celery worker -A myapp --loglevel=INFO
      directory=/opt/python/current/app
      user=nobody
      numprocs=1
      stdout_logfile=/var/log/celery-worker.log
      stderr_logfile=/var/log/celery-worker.log
      autostart=true
      autorestart=true
      startsecs=10

      ; Need to wait for currently executing tasks to finish at shutdown.
      ; Increase this if you have very long running tasks.
      stopwaitsecs = 600

      ; When resorting to send SIGKILL to the program to terminate it
      ; send SIGKILL to its whole process group instead,
      ; taking care of its children as well.
      killasgroup=true

      ; if rabbitmq is supervised, set its priority higher
      ; so it starts first
      priority=998

      environment=$celeryenv"

      # Create the celery supervisord conf script
      echo "$celeryconf" | tee /opt/python/etc/celery.conf

      # Add configuration script to supervisord conf (if not there already)
      if ! grep -Fxq "[include]" /opt/python/etc/supervisord.conf
      then
          echo "[include]" | tee -a /opt/python/etc/supervisord.conf
          echo "files: celery.conf" | tee -a /opt/python/etc/supervisord.conf
      fi

      # Reread the supervisord config
      supervisorctl -c /opt/python/etc/supervisord.conf reread

      # Update supervisord in cache without restarting all services
      supervisorctl -c /opt/python/etc/supervisord.conf update

      # Start/Restart celeryd through supervisord
      supervisorctl -c /opt/python/etc/supervisord.conf restart django-celery-worker

container_commands:
  01_celery_tasks_run:
    command: "/opt/elasticbeanstalk/hooks/appdeploy/post/run_supervised_celeryd.sh"

option_settings:
  aws:elasticbeanstalk:container:python:
    WSGIPath: myapp/wsgi.py
  aws:elb:listener:443:
    SSLCertificateId: arn:aws:acm:us-east-1:349705394408:certificate/5e94d3d2-039a-4ffc-bf7e-06995dc65fc9
    ListenerProtocol: HTTPS
    InstancePort: 80
What have I got wrong?
I have configured a Celery worker and Celery beat on EB. There are no errors in the logs during deploy and the Celery worker works fine, but it doesn't see periodic tasks. On my local machine everything works smoothly.
Here is my config file for Celery:
files:
  "/opt/elasticbeanstalk/hooks/appdeploy/post/run_supervised_celeryd.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/usr/bin/env bash

      # Create required directories
      sudo mkdir -p /var/log/celery/
      sudo mkdir -p /var/run/celery/

      # Create group called 'celery'
      sudo groupadd -f celery
      # add the user 'celery' if it doesn't exist and add it to the group with same name
      id -u celery &>/dev/null || sudo useradd -g celery celery
      # add permissions to the celery user for r+w to the folders just created
      sudo chown -R celery:celery /var/log/celery/
      sudo chmod -R 777 /var/log/celery/
      sudo chown -R celery:celery /var/run/celery/
      sudo chmod -R 777 /var/run/celery/

      # Get django environment variables
      celeryenv=`cat /opt/python/current/env | tr '\n' ',' | sed 's/%/%%/g' | sed 's/export //g' | sed 's/$PATH/%(ENV_PATH)s/g' | sed 's/$PYTHONPATH//g' | sed 's/$LD_LIBRARY_PATH//g'`
      celeryenv=${celeryenv%?}

      # Create CELERY configuration script
      celeryconf="[program:celeryd]
      directory=/opt/python/current/app
      ; Set full path to celery program if using virtualenv
      command=/opt/python/run/venv/bin/celery worker -A config.celery:app --loglevel=INFO --logfile="/var/log/celery/celery_worker.log" --pidfile="/var/run/celery/celery_worker_pid.pid"
      user=celery
      numprocs=1
      stdout_logfile=/var/log/std_celery_worker.log
      stderr_logfile=/var/log/std_celery_worker_errors.log
      autostart=true
      autorestart=true
      startsecs=10
      startretries=10
      ; Need to wait for currently executing tasks to finish at shutdown.
      ; Increase this if you have very long running tasks.
      stopwaitsecs = 60
      ; When resorting to send SIGKILL to the program to terminate it
      ; send SIGKILL to its whole process group instead,
      ; taking care of its children as well.
      killasgroup=true
      ; if rabbitmq is supervised, set its priority higher
      ; so it starts first
      priority=998
      environment=$celeryenv"

      # Create CELERY BEAT configuration script
      celerybeatconf="[program:celerybeat]
      directory=/opt/python/current/app
      ; Set full path to celery program if using virtualenv
      command=/opt/python/run/venv/bin/celery beat -A config.celery:app --loglevel=INFO --scheduler django_celery_beat.schedulers:DatabaseScheduler --logfile="/var/log/celery/celery_beat.log" --pidfile="/var/run/celery/celery_beat_pid.pid"
      user=celery
      numprocs=1
      stdout_logfile=/var/log/std_celery_beat.log
      stderr_logfile=/var/log/std_celery_beat_errors.log
      autostart=true
      autorestart=true
      startsecs=10
      startretries=10
      ; Need to wait for currently executing tasks to finish at shutdown.
      ; Increase this if you have very long running tasks.
      stopwaitsecs = 60
      ; When resorting to send SIGKILL to the program to terminate it
      ; send SIGKILL to its whole process group instead,
      ; taking care of its children as well.
      killasgroup=true
      ; if rabbitmq is supervised, set its priority higher
      ; so it starts first
      priority=999
      environment=$celeryenv"

      # Create the celery supervisord conf script
      echo "$celeryconf" | tee /opt/python/etc/celery.conf
      echo "$celerybeatconf" | tee /opt/python/etc/celerybeat.conf

      # Add configuration script to supervisord conf (if not there already)
      if ! grep -Fxq "[include]" /opt/python/etc/supervisord.conf
      then
          echo "[include]" | tee -a /opt/python/etc/supervisord.conf
          echo "files: uwsgi.conf celery.conf celerybeat.conf" | tee -a /opt/python/etc/supervisord.conf
      fi

      # Enable supervisor to listen for HTTP/XML-RPC requests.
      # supervisorctl will use XML-RPC to communicate with supervisord over port 9001.
      # Source: https://askubuntu.com/questions/911994/supervisorctl-3-3-1-http-localhost9001-refused-connection
      if ! grep -Fxq "[inet_http_server]" /opt/python/etc/supervisord.conf
      then
          echo "[inet_http_server]" | tee -a /opt/python/etc/supervisord.conf
          echo "port = 127.0.0.1:9001" | tee -a /opt/python/etc/supervisord.conf
      fi

      # Reread the supervisord config
      supervisorctl -c /opt/python/etc/supervisord.conf reread
      # Update supervisord in cache without restarting all services
      supervisorctl -c /opt/python/etc/supervisord.conf update
      # Start/Restart celeryd through supervisord
      supervisorctl -c /opt/python/etc/supervisord.conf restart celeryd
      supervisorctl -c /opt/python/etc/supervisord.conf restart celerybeat

commands:
  01_kill_other_beats:
    command: "ps auxww | grep 'celery beat' | awk '{print $2}' | sudo xargs kill -9 || true"
    ignoreErrors: true
  02_restart_beat:
    command: "supervisorctl -c /opt/python/etc/supervisord.conf restart celerybeat"
    leader_only: true
  03_upgrade_pip_global:
    command: "if test -e /usr/bin/pip; then sudo /usr/bin/pip install --upgrade pip; fi"
  04_upgrade_pip_global:
    command: "if test -e /usr/local/bin/pip; then sudo /usr/local/bin/pip install --upgrade pip; fi"
  05_upgrade_pip_for_venv:
    command: "if test -e /opt/python/run/venv/bin/pip; then sudo /opt/python/run/venv/bin/pip install --upgrade pip; fi"
Can somebody tell me where the error is?
I start the periodic task like this:
@app.on_after_configure.connect
def setup_periodic_tasks(sender, **kwargs):
    pass
Update:
supervisord logs
2018-07-10 12:56:18,683 INFO stopped: celerybeat (terminated by SIGTERM)
2018-07-10 12:56:18,691 INFO spawned: 'celerybeat' with pid 1626
2018-07-10 12:56:19,181 INFO stopped: celerybeat (terminated by SIGTERM)
2018-07-10 12:56:20,187 INFO spawned: 'celerybeat' with pid 1631
2018-07-10 12:56:30,200 INFO success: celerybeat entered RUNNING state, process has stayed up for > than 10 seconds (startsecs)
2018-07-10 12:56:30,466 INFO stopped: celeryd (terminated by SIGTERM)
2018-07-10 12:56:31,472 INFO spawned: 'celeryd' with pid 1638
2018-07-10 12:56:41,486 INFO success: celeryd entered RUNNING state, process has stayed up for > than 10 seconds (startsecs)
2018-07-10 13:28:32,572 CRIT Supervisor running as root (no user in config file)
2018-07-10 13:28:32,573 WARN No file matches via include "/opt/python/etc/uwsgi.conf"
2018-07-10 13:28:32,573 WARN Included extra file "/opt/python/etc/celery.conf" during parsing
2018-07-10 13:28:32,573 WARN Included extra file "/opt/python/etc/celerybeat.conf" during parsing
2018-07-10 13:28:32,591 INFO RPC interface 'supervisor' initialized
2018-07-10 13:28:32,591 CRIT Server 'inet_http_server' running without any HTTP authentication checking
The source of the issue was the imports used when setting up the periodic tasks:
celery.py
@app.on_after_configure.connect
def setup_periodic_tasks(sender, **kwargs):
    from my_app.tasks.task1 import some_task
    sender.add_periodic_task(
        60.0, some_task.s(), name='call every 60 seconds'
    )
The solution is to use a task defined inside the celery app:
celery.py
@app.task
def temp_task():
    from my_app.tasks.task1 import some_task
    some_task()
so that setting up the periodic task looks like this:
@app.on_after_configure.connect
def setup_periodic_tasks(sender, **kwargs):
    sender.add_periodic_task(
        60.0, temp_task.s(), name='call every 60 seconds'
    )
It was difficult to track down the source of the issue, as there were no error logs and the regular logs were empty; indeed, Celery was never actually started.
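One way to surface startup failures that supervisord swallows is to run beat in the foreground with verbose logging and watch it fail directly; a sketch, assuming the same app path as in the config above:
/opt/python/run/venv/bin/celery beat -A config.celery:app --loglevel=DEBUG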
I am trying to launch Celery v3.1.25 in production. As I found in the docs, there are four ways to run it; one of them is to use Supervisor. I have used Supervisor before: my Django project runs under it. I took the following steps:
1. Created /etc/supervisor/conf.d/mycelery.conf based on the official git repo:
[program:mycelery]
command=celery worker -A cosmetics_crawler_project --loglevel=INFO
directory=/home/chiefir/polo/Cosmo
user=chiefir
numprocs=1
stdout_logfile=/home/chiefir/logs/celery/celery.log
stderr_logfile=/home/chiefir/logs/celery/celery.log
autostart=true
autorestart=true
startsecs=10
stopasgroup=true
priority=1000
2. Created /etc/supervisor/conf.d/mycelerybeat.conf:
[program:mycelerybeat]
command=celery beat -A cosmetics_crawler_project --schedule /var/lib/celery/beat.db --loglevel=INFO
directory=/home/chiefir/polo/Cosmo
user=chiefir
numprocs=1
stdout_logfile=/home/chiefir/logs/celery/celerybeat.log
stderr_logfile=/home/chiefir/logs/celery/celerybeat.log
autostart=true
autorestart=true
startsecs=10
stopasgroup=true
priority=999
3. And my /etc/supervisord.conf is set to:
[unix_http_server]
file=/var/run/supervisor.sock ; (the path to the socket file)
chmod=0700 ; socket file mode (default 0700)
[supervisord]
logfile=/var/log/supervisor/supervisord.log ; (main log file;default $CWD/supervisord.log)
pidfile=/var/run/supervisord.pid ; (supervisord pidfile;default supervisord.pid)
childlogdir=/var/log/supervisor ; ('AUTO' child log dir, default $TEMP)
[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface
[supervisorctl]
serverurl=unix:///var/run/supervisor.sock ; use a unix:// URL for a unix socket
[include]
files = /etc/supervisor/conf.d/*.conf
I have found in this question that I have to run supervisord to start Celery, but it throws an error:
Traceback (most recent call last):
File "/usr/bin/supervisord", line 9, in <module>
load_entry_point('supervisor==3.2.0', 'console_scripts', 'supervisord')()
File "/usr/lib/python2.7/dist-packages/supervisor/supervisord.py", line 367, in main
go(options)
File "/usr/lib/python2.7/dist-packages/supervisor/supervisord.py", line 377, in go
d.main()
File "/usr/lib/python2.7/dist-packages/supervisor/supervisord.py", line 77, in main
info_messages)
File "/usr/lib/python2.7/dist-packages/supervisor/options.py", line 1388, in make_logger
stdout = self.nodaemon,
File "/usr/lib/python2.7/dist-packages/supervisor/loggers.py", line 346, in getLogger
handlers.append(RotatingFileHandler(filename,'a',maxbytes,backups))
File "/usr/lib/python2.7/dist-packages/supervisor/loggers.py", line 172, in __init__
FileHandler.__init__(self, filename, mode)
File "/usr/lib/python2.7/dist-packages/supervisor/loggers.py", line 98, in __init__
self.stream = open(filename, mode)
IOError: [Errno 13] Permission denied: '/var/log/supervisor/supervisord.log'
More confusing here: why does it try to use Python 2.7 when my project runs on Python 3.5? And what should I do next, given how little information there is about this in the official Celery docs?
UPDATE 1: If I run supervisord as the root user, I get:
/usr/lib/python2.7/dist-packages/supervisor/options.py:297: UserWarning: Supervisord is running as root and it is searching for its configuration file in default locations (including its current working directory); you probably want to specify a "-c" argument specifying an absolute path to a configuration file for improved security.
'Supervisord is running as root and it is searching '
Error: Another program is already listening on a port that one of our HTTP servers is configured to use. Shut this program down first before starting supervisord.For help, use /usr/bin/supervisord -h
UPDATE 2:
Might there be a problem with a wrong supervisor installation? I have found here that I should install supervisor like this:
sudo apt-get install -y supervisor
pip install supervisor==3.3.3
But I have done only the first part, and with that my project works (without Celery, obviously). Should I also install supervisor with pip?
UPDATE 3: When I tried the recipe from UPDATE 2, I got a message that it is impossible to pip install supervisor with Python 3+ :/
Supervisor requires Python 2.4 or later but does not work on any version of Python 3. You are using version 3.5.2
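(That limitation applied to the supervisor 3.x line only: supervisor 4.0 and later run on Python 3, so on a Python 3 environment a current release can be installed with pip. The version pin below is just illustrative:
pip install 'supervisor>=4.0'
)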
I'm following a tutorial on how to use Celery on my Django production server.
When I get to the bit where it says:
Now reread the configuration and add the new process:
sudo supervisorctl reread
sudo supervisorctl update
When I perform sudo supervisorctl reread in my server (Ubuntu 16.04) terminal, it returns this:
ERROR: CANT_REREAD:
The directory named as part of the path /home/app/logs/celery.log does not exist.
in section 'app-celery' (file: '/etc/supervisor/conf.d/app-celery.conf')
I've followed all of the instructions prior to this, including installing supervisor and creating a file named mysite-celery.conf (app-celery.conf) in the folder /etc/supervisor/conf.d.
If you're curious, my app-celery.conf file looks like this:
[program:app-celery]
command=/home/app/bin/celery worker -A draft1 --loglevel=INFO
directory=/home/app/draft1
user=zorgan
numprocs=1
stdout_logfile=/home/app/logs/celery.log
stderr_logfile=/home/app/logs/celery.log
autostart=true
autorestart=true
startsecs=10
; Need to wait for currently executing tasks to finish at shutdown.
; Increase this if you have very long running tasks.
stopwaitsecs = 600
stopasgroup=true
; Set Celery priority higher than default (999)
; so, if rabbitmq is supervised, it will start first.
priority=1000
Any idea what the problem is?
Somehow supervisor is not able to create the folder /home/app/logs/.
You can create it manually using mkdir and then restart the supervisor service:
mkdir /home/app/logs
sudo service supervisor restart
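After the restart, a quick status check confirms the worker is up (the program name comes from the config above):
sudo supervisorctl status app-celery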
I added my username to the supervisord.conf file under the [unix_http_server] section, like so:
[unix_http_server]
file=/var/run/supervisor.sock ; (the path to the socket file)
chmod=0770 ; socket file mode (default 0700)
chown=appuser:supervisor ; (username:group)
This seemed to work; time will tell if it continues working after I manage to solve the rest of the supervisor issues.
I have a Django app that uses Celery tasks to perform some CSV processing in the background. I installed rabbitmq-server with sudo apt-get install rabbitmq-server; with this command rabbitmq-server was installed and runs successfully.
I have some Celery task code in a tasks.py module inside an app, and I run Celery like this:
celery -A app.tasks worker --loglevel=info
This was working fine and processing the CSV files in the background successfully, but now I want to daemonize the above command. I searched for a way to do this but didn't find any argument like -D to daemonize it. Is there any way I can daemonize the above command and keep Celery running?
I think you're looking for the --detach option. [1]
But it is recommended that you use something like systemd.
The Celery docs have a whole page on this topic. [2]
[1] http://celery.readthedocs.org/en/latest/reference/celery.bin.base.html#daemon-options
[2] http://celery.readthedocs.org/en/latest/tutorials/daemonizing.html
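A minimal sketch of the --detach form (the log and pid file paths here are illustrative, and their directories must exist and be writable by the worker's user):
celery -A app.tasks worker --loglevel=info --detach \
    --logfile=/var/log/celery/worker.log \
    --pidfile=/var/run/celery/worker.pid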
Supervisor will be a better bet for this.
Installation: sudo apt-get install supervisor
The main configuration file of supervisor is here: /etc/supervisor/supervisord.conf
Run $vim /etc/supervisor/supervisord.conf to inspect it. Looking at the bottom of the file, you'll notice:
[include]
files = /etc/supervisor/conf.d/*.conf
This basically means that the config files for your projects can be stored in /etc/supervisor/conf.d/ and will be included automatically.
Run sudo vim /etc/supervisor/conf.d/myapp.conf. Your configuration may look like this:
[program:myapp]
command={{ your celery commands without curly braces }}
directory=/directory/to/myapp
autostart=true
autorestart=true
stderr_logfile=/var/log/myapp.err.log
stdout_logfile=/var/log/myapp.out.log
To restart the service: $sudo service supervisor restart
To re-read after making updates to any *.conf file: $sudo supervisorctl reread
To record the updates: $sudo supervisorctl update
To check the status of a specific program: $sudo supervisorctl status myapp
Check your log files for more status data.
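As a concrete illustration, the command= line for a Celery worker running from a virtualenv might look like this (the virtualenv path and app name are hypothetical):
command=/home/user/venv/bin/celery -A myapp worker --loglevel=INFO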