I installed Celery as a Windows service. My code moves the *.pid and Celery log files into another directory, but there are three files (celerybeat-schedule.bak, celerybeat-schedule.dir, celerybeat-schedule.dat) that I am not able to move.
I used the code below to change the other files' default paths:
command = '"{celery_path}" -A {proj_dir} beat -f "{log_path}" -l info --pidfile="{pid_path}" '.format(
celery_path=os.path.join(PYTHONSCRIPTPATH, 'celery.exe'),
proj_dir=PROJECTDIR,
# log_path_1=os.path.join(INSTDIR,'celery_2.log')),
log_path=os.path.join(tmpdir,'celery_'+cur_date_time+'.log'),
pid_path = os.path.join(tmpdir,'celerybeat_'+cur_date_time+'.pid'))
How can I change the default path of the Celery beat service?
If you run celery -A your.project.app beat --help, it prints very useful CLI help, where you will find the solution to your problem: the -s <path to the schedule database file> flag.
-s SCHEDULE, --schedule SCHEDULE
Path to the schedule database. Defaults to celerybeat-
schedule. The extension '.db' may be appended to the
filename. Default is celerybeat-schedule.
All you have to do is pass a full path to the schedule database file to your Celery beat process, for example -s C:/services/celery/celerybeat-schedule.db. (The .bak/.dir/.dat trio you see is how Python's shelve module, which backs the default scheduler, stores that database on Windows, so pointing -s at a new directory relocates all three files at once.)
Finally, I was able to change the path of the Celery beat files using the code below.
command = '"{celery_path}" -A {proj_dir} beat -f "{log_path}" -l info --pidfile="{pid_path}" '.format(
celery_path=os.path.join(PYTHONSCRIPTPATH, 'celery.exe'),
proj_dir=PROJECTDIR,
# log_path_1=os.path.join(INSTDIR,'celery_2.log')),
log_path=os.path.join(CELERYDIR,'celery_'+cur_date_time+'.log'),
# bak_path=os.path.join(CELERYDIR,'celerybeat-schedule'),
pid_path = os.path.join(CELERYDIR,'celerybeat_'+cur_date_time+'.pid'))
Celery worker suddenly not working, displaying an error message saying unknown option -A.
I am running Celery 5.0.0 on Windows within a Python virtual environment.
The command is:
pipenv run celery worker -A <celery_file> -l info
The error message is as follows:
Usage: celery worker [OPTIONS]
Try 'celery worker --help' for help.
Error: no such option: -A
Please let me know why this error is occurring, as I am unable to find the cause of it.
The worker sub-command has no -A flag; I think you want to use that at the celery level.
Like this:
pipenv run celery -A <celery_file> worker -l info
I am not on Windows so I can't verify, but this is in line with the commands in the official documentation on workers:
$ celery -A proj worker -l info
3.1.25 was the last version that works on Windows (just tested on my Win10 machine):
pip install celery==3.1.25
In your Python interpreter, type the following commands:
>>> import os
>>> import sys
>>> os.path.dirname(sys.executable)
'C:\\python\\python'
Note: Celery has dropped support for Windows (since v4).
"c:\python\python" -m celery -A your-application worker -Q your-queue -l info --concurrency=300
or using the other format:
celery worker --app=app.app --pool=your-pool --loglevel=INFO
The correct way (for those using pipenv) to start the worker should be something like pipenv run celery -A <package.module> worker -l info. Note that -A comes before the worker command, as it is a general Celery option. Look at pipenv run celery --help for more details.
Also, I notice you use the latest Celery 5.0.0: the command-line handling has changed there, so switching to 5.0.0 may break some of your old startup scripts, as the comparison below shows.
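For example, here is the same worker start-up before and after the CLI change (a sketch, reusing the proj placeholder from the documentation example above):
# Celery 4.x still accepted the app option after the sub-command:
celery worker -A proj -l info
# Celery 5.x accepts -A only before the sub-command:
celery -A proj worker -l info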
I'm following a tutorial on how to use Celery on my Django production server.
When I get to the bit where it says:
Now reread the configuration and add the new process:
sudo supervisorctl reread
sudo supervisorctl update
When I perform sudo supervisorctl reread in my server (Ubuntu 16.04) terminal, it returns this:
ERROR: CANT_REREAD:
The directory named as part of the path /home/app/logs/celery.log does not exist.
in section 'app-celery' (file: '/etc/supervisor/conf.d/app-celery.conf')
I've followed all of the instructions prior to this, including installing Supervisor and creating a file named mysite-celery.conf (app-celery.conf) in the folder /etc/supervisor/conf.d.
If you're curious, my app-celery.conf file looks like this:
[program:app-celery]
command=/home/app/bin/celery worker -A draft1 --loglevel=INFO
directory=/home/app/draft1
user=zorgan
numprocs=1
stdout_logfile=/home/app/logs/celery.log
stderr_logfile=/home/app/logs/celery.log
autostart=true
autorestart=true
startsecs=10
; Need to wait for currently executing tasks to finish at shutdown.
; Increase this if you have very long running tasks.
stopwaitsecs = 600
stopasgroup=true
; Set Celery priority higher than default (999)
; so, if rabbitmq is supervised, it will start first.
priority=1000
Any idea what the problem is?
Supervisor is not able to create the folder /home/app/logs/ on its own.
You can create it manually with mkdir and then restart the Supervisor service:
mkdir /home/app/logs
sudo service supervisor restart
I added my username to the supervisord.conf file under the [unix_http_server] section, like so:
[unix_http_server]
file=/var/run/supervisor.sock ; (the path to the socket file)
chmod=0770 ; socket file mode (default 0700)
chown=appuser:supervisor ; (username:group)
This seemed to work; time will tell if it keeps working after I solve the rest of the Supervisor issues.
I cannot set up Celery as a daemon on my server (Django 1.6.11, Celery 3.1, Ubuntu 14.04).
I have tried a lot of options; can anyone post the full settings of a working configuration to run Celery as a daemon?
I am very disappointed with the official docs http://docs.celeryproject.org/en/latest/tutorials/daemonizing.html#generic-init-scripts: none of it works for me, there is no full step-by-step tutorial, and there are zero (!!!) videos on YouTube on how to set up the daemon.
Right now I am able to run Celery simply with celery worker -A engine -l info -E, and tasks from Django are executed successfully.
I have created these configs:
/etc/defaults/celery
# Name of nodes to start
# here we have a single node
CELERYD_NODES="w1"
# or we could have three nodes:
#CELERYD_NODES="w1 w2 w3"
# Absolute path to "manage.py"
CELERY_BIN="/var/www/engine/manage.py"
# How to call manage.py
CELERYD_MULTI="celery multi"
# Extra command-line arguments to the worker
CELERYD_OPTS="--time-limit=300 --concurrency=2"
# %N will be replaced with the first part of the nodename.
CELERYD_LOG_FILE="/var/log/celery/%N.log"
CELERYD_PID_FILE="/var/run/celery/%N.pid"
# Workers should run as an unprivileged user.
CELERYD_USER="root"
CELERYD_GROUP="root"
/etc/init.d/celeryd
taken from https://github.com/celery/celery/blob/3.1/extra/generic-init.d/celeryd without changes
Now, when I go to the console and run:
cd /etc/init.d
celery multi start w1
I see this output:
celery multi v3.1.11 (Cipater)
> Starting nodes...
> w1@engine: OK
So, no errors! But tasks are not invoked, and I cannot figure out what's wrong.
I would suggest using Supervisor. It's a better way than init scripts because you can run multiple Celery instances for different projects on one server. You can find an example Supervisor config in the Celery repo, or use this fully working example from my project:
# /etc/supervisor/conf.d/celery.conf
[program:celery]
command=/home/newspos/.virtualenvs/newspos/bin/celery worker -A newspos --loglevel=INFO
user=newspos
environment=DJANGO_SETTINGS_MODULE="newspos.settings"
directory=/home/newspos/projects/newspos/
autostart=true
autorestart=true
stopwaitsecs = 600
killasgroup=true
startsecs=10
stdout_logfile=/var/log/celery/newspos-celeryd.log
stderr_logfile=/var/log/celery/newspos-celeryd.log
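After saving this file under /etc/supervisor/conf.d/, load the new program with sudo supervisorctl reread followed by sudo supervisorctl update, and confirm it is running with sudo supervisorctl status celery.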
I have set up an environment variable which I export locally using a .sh file:
.sh file:
#!/bin/sh
echo "environment variables"
export BROKER="amqp://admin:password@11.11.11.11:4672//"
Locally, inside a virtual environment, I can now read this in Python using:
BROKER = os.environ['BROKER']
However, on my production server (Ubuntu) I run the same file (chmod +x name_of_file.sh, then source settings.sh) and can see the variable using printenv, but Python gives the error KeyError: 'BROKER'. Why?
This only happens on my production machine, despite the fact that I can see the variable using printenv. Note that my production machine does not use virtualenv.
If I run the Python shell on Ubuntu and do os.environ['BROKER'], it prints out the correct value. So I have no idea why the app does not find it.
This is the task that gets run (a Supervisor program) which cannot find the variable:
[program:celery]
directory = /srv/app_test/
command=celery -A tasks worker -l info
stdout_logfile = /var/log/celeryd_.log
autostart=true
autorestart=true
startsecs=5
stopwaitsecs = 600
killasgroup=true
priority=998
user=ubuntu
Celery config (which does not find the variable when executed under Supervisor):
from kombu import Exchange, Queue
import os
# Celery Settings
BROKER = os.environ['BROKER']
When I restart Supervisor, it gives the KeyError.
The environment variables from your shell will not be visible within supervisor tasks.
You need to use the environment setting in your supervisor config:
[program:celery]
...
environment=BROKER="amqp://admin:password@11.11.11.11:4672//"
This requires supervisor 3.0+.
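If you would rather fail with an actionable message than a bare KeyError when the variable is missing, here is a minimal defensive sketch (the error text is mine, not from the question):

import os

# Read the broker URL; complain clearly if it is absent from the
# process environment instead of raising a bare KeyError.
BROKER = os.environ.get('BROKER')
if BROKER is None:
    raise RuntimeError(
        "BROKER is not set. When running under Supervisor, add it to the "
        "'environment=' line of the [program:celery] section."
    )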
I have a Django app for which I am using Celery tasks to perform some CSV processing in the background, so I installed rabbitmq-server with sudo apt-get install rabbitmq-server; with this command RabbitMQ was installed and is running successfully.
I have some Celery task code in a tasks.py module inside an app, and I run Celery like below:
celery -A app.tasks worker --loglevel=info
This works fine and processes the CSV files in the background successfully, but now I just want to daemonize the above command. I searched for a way to daemonize it but didn't find any argument like -D to pass. So, is there any way I can daemonize the above command and keep Celery running?
I think you're looking for the --detach option. [1]
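For example (a sketch only; the pid and log file paths below are placeholders of mine):

celery -A app.tasks worker --loglevel=info --detach --pidfile=/var/run/celery/worker.pid --logfile=/var/log/celery/worker.log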
But it is recommended that you use something like systemd.
The Celery docs have a whole page on this topic. [2]
[1] http://celery.readthedocs.org/en/latest/reference/celery.bin.base.html#daemon-options
[2] http://celery.readthedocs.org/en/latest/tutorials/daemonizing.html
Supervisor will be a better bet for this.
Installation: sudo apt-get install supervisor
The main configuration file of supervisor is here: /etc/supervisor/supervisord.conf
Run $ vim /etc/supervisor/supervisord.conf to inspect it. Looking at the bottom of the file, you'll notice:
[include]
files = /etc/supervisor/conf.d/*.conf
This basically means that the config files for your projects can be stored in /etc/supervisor/conf.d/ and they will be included automatically.
Run: sudo vim /etc/supervisor/conf.d/myapp.conf. Your configuration may look like this:
[program:myapp]
command={{ your celery commands without curly braces }}
directory=/directory/to/myapp
autostart=true
autorestart=true
stderr_logfile=/var/log/myapp.err.log
stdout_logfile=/var/log/myapp.out.log
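For the worker command in this question, that command line would be, for instance, command=celery -A app.tasks worker --loglevel=info, with directory set to the project root from which you normally run that command.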
To restart the service: $ sudo service supervisor restart
To re-read after making updates to any *.conf file: $ sudo supervisorctl reread
To record updates: $ sudo supervisorctl update
To check the status of a specific *.conf: $ sudo supervisorctl status myapp
Check your log files for more status data.