Celery worker displaying "unknown option -A" on Windows - Python

My Celery worker suddenly stopped working and displays an error message saying "unknown option -A".
I am running Celery 5.0.0 on Windows within a Python virtual environment.
The command is
pipenv run celery worker -A <celery_file> -l info
Error message is as follows:
Usage: celery worker [OPTIONS]
Try 'celery worker --help' for help.
Error: no such option: -A
Please let me know why this error is occurring, as I am unable to find the cause of it.

The worker sub-command has no -A flag; I think you want to pass that option at the celery level.
Like this:
pipenv run celery -A <celery_file> worker -l info
I am not on Windows so I can't verify, but it is in line with the commands in the official documentation on workers:
$ celery -A proj worker -l info

3.1.25 was the last version that works on Windows (just tested on my Win10 machine):
pip install celery==3.1.25
To find where your Python is installed, type the following commands in your Python interpreter:
>>> import os
>>> import sys
>>> os.path.dirname(sys.executable)
'C:\\python'
Note that Celery has dropped official support for Windows (since v4). You can still invoke it as a module through that interpreter:
"c:\python\python" -m celery -A your-application worker -Q your-queue -l info --concurrency=300
Or using another form of the command:
celery worker --app=app.app --pool=your-pool --loglevel=INFO

The correct way (for those using pipenv) to start the worker should be something like pipenv run celery -A <package.module> worker -l info. Note that -A comes before the worker command, as it is a general Celery option. Look at pipenv run celery --help for more details.
Also, I notice you use the latest Celery 5.0.0 - they have changed the command-line handling, so switching to 5.0.0 may break some of your old startup scripts.

Related

Detect and Initiate celery worker in Python Code

Normally I run the following in a terminal to start the worker process:
celery -A myapp worker --loglevel=info
What I want to achieve now is, from Python code, to check whether a worker process has already been started, and only run this command (again from Python code) if it has not.
How can I achieve that?
There is no need for that, as Celery gives you a standard way to do it:
--pidfile PIDFILE   Optional file used to store the process pid. The
                    program won't start if this file already exists and
                    the pid is still alive.
So simply change how you start your worker to something like celery -A myapp worker --loglevel=info --pidfile celery1.pid
If you open another terminal and run the command above, the second worker will not start, because the PID file already exists and its PID is still alive.
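If you want to drive this from Python instead, you can perform the same pidfile check yourself before spawning the worker. A minimal sketch (the pidfile path and app name are just the ones from the example above, and launching the worker with subprocess is an assumption, not the only way):
import os
import subprocess

PIDFILE = "celery1.pid"  # same hypothetical pidfile as in the command above

def worker_running(pidfile=PIDFILE):
    # Return True if the pidfile exists and the recorded PID is still alive.
    try:
        with open(pidfile) as f:
            pid = int(f.read().strip())
    except (OSError, ValueError):
        return False
    try:
        os.kill(pid, 0)  # POSIX-style liveness check: signal 0 sends nothing
    except OSError:
        return False
    return True

if not worker_running():
    # Start the worker; --pidfile makes Celery itself refuse to start a duplicate.
    subprocess.Popen([
        "celery", "-A", "myapp", "worker",
        "--loglevel=info", "--pidfile", PIDFILE,
    ])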

Trying to get supervisor to create a worker for python-rq

I am trying to get supervisor to spawn a worker following this pattern using python-RQ, much like what is mentioned in this stackoverflow question. I can start workers manually from the terminal as follows:
$ venv/bin/rq worker
14:35:27 Worker rq:worker:fd403822518a4f21802fa0dc417e526a: started, version 1.2.2
14:35:27 *** Listening on default...
It works great. I can confirm the worker exists in another terminal:
$ venv/bin/rq info
0 queues, 0 jobs total
fd403822518a4f21802fa0dc417e526a (b'' 41735): idle default
1 workers, 0 queues
Now to start a worker using supervisor.... Here is my supervisord.conf file, located in the same directory.
[supervisord]
;[program:worker]
command=venv/bin/rq worker
process_name=%(program_name)s-%(process_num)s
numprocs=1
directory=.
stopsignal=TERM
autostart=false
autorestart=false
I can start supervisor as follows:
$ venv/bin/supervisord -n
2020-03-05 14:36:45,079 INFO supervisord started with pid 41757
However, checking for a new worker, I see it's not there.
$ venv/bin/rq info
0 queues, 0 jobs total
0 workers, 0 queues
I have tried a multitude of other ways to get this worker to start, such as...
... within the virtual environment:
$ source venv/bin/activate
(venv) $ rq worker
*** Listening on default...
... using a shell file
#!/bin/bash
source /venv/bin/activate
rq worker low
$ ./start.sh
*** Listening on default...
... using a python script
$ venv/bin/python3 worker.py
*** Listening on default...
When started manually they all work fine. Changing the command= in supervisord.conf doesn't seem to make a difference. There is no worker to be found. What am I missing? Why won't supervisor start a worker? I am running this on macOS and my file structure is as follows:
.
|--__pycache__
|--supervisord.conf
|--supervisord.log
|--supervisord.pid
|--main.py
|--task.py
|--venv
|  |--bin
|  |  |--rq
|  |  |--supervisord
|  |  |--...etc
|  |--include
|  |--lib
|  |--pyenv.cfg
Thanks in advance.
I had two problems with supervisord.conf, which were preventing the worker from starting. The corrected config file is as follows:
[supervisord]
[program:worker]
command=venv/bin/rq worker
process_name=%(program_name)s-%(process_num)s
numprocs=1
directory=.
stopsignal=TERM
autostart=true
autorestart=false
First, the line [program:worker] was in fact commented out. I must have taken this line from the commented-out sample file and not realized it. However, removing the comment still didn't start the worker... I also had to set autostart=true, as starting supervisor does not automatically start a command.
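As an aside, the worker.py script mentioned in the question is not shown; a minimal python-rq worker script, written as a sketch (assuming Redis on localhost and the default queue - this is not the asker's actual file), could look like:
# worker.py - minimal python-rq worker (sketch)
from redis import Redis
from rq import Queue, Worker

redis_conn = Redis()  # assumes Redis on localhost:6379, db 0
queue = Queue("default", connection=redis_conn)

if __name__ == "__main__":
    # Blocks and processes jobs from the "default" queue until stopped.
    Worker([queue], connection=redis_conn).work()
Under supervisor, such a script could be launched with something like command=venv/bin/python3 worker.py, provided the relative paths resolve from the configured directory= setting.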

Starting Celery worker in Windows

I'm trying to start a Celery worker on Windows 7 with the following command:
celery worker -A routes.celery --loglevel=info
The result of the above command is:
c:\users\xxxxx\appdata\local\continuum\anaconda2\python.exe: can't open file 'C:\Users\xxxxx\AppData\Local\Continuum\Anaconda2\Scripts\celery': [Errno 2] No such file or directory
Is the "celery" command designed only for Unix-like systems?
If so, how can I start a Celery worker from a Python script instead of the command line?
You probably didn't install the celery package in that environment. That is why your Python doesn't recognize the 'celery' command.
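As for the second part of the question - starting a worker from a Python script instead of the command line - a Celery application object has a worker_main() method that takes the same arguments as the CLI. A minimal sketch, assuming the app instance is defined in routes.py under the name celery (matching -A routes.celery above):
# run_worker.py - start a Celery worker from Python (sketch)
from routes import celery  # assumed: the Celery() app instance from the question

if __name__ == "__main__":
    # Same arguments as the command line; --pool=solo is commonly used on
    # Windows, where the default prefork pool tends to be problematic.
    celery.worker_main(["worker", "--loglevel=info", "--pool=solo"])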

Cannot setup Celery as daemon on server

I cannot set up Celery as a daemon on my server (Django 1.6.11, Celery 3.1, Ubuntu 14.04).
I have tried a lot of options; can anyone post a full, working configuration to run Celery as a daemon?
I am very disappointed with the official docs http://docs.celeryproject.org/en/latest/tutorials/daemonizing.html#generic-init-scripts - none of this works, and there is no full step-by-step tutorial. Zero (!!!) videos on YouTube on how to set up the daemon.
Right now I am able to run Celery simply with celery worker -A engine -l info -E,
and tasks from Django are executed successfully.
I have created these configs:
/etc/defaults/celery
# Name of nodes to start
# here we have a single node
CELERYD_NODES="w1"
# or we could have three nodes:
#CELERYD_NODES="w1 w2 w3"
# Absolute path to "manage.py"
CELERY_BIN="/var/www/engine/manage.py"
# How to call manage.py
CELERYD_MULTI="celery multi"
# Extra command-line arguments to the worker
CELERYD_OPTS="--time-limit=300 --concurrency=2"
# %N will be replaced with the first part of the nodename.
CELERYD_LOG_FILE="/var/log/celery/%N.log"
CELERYD_PID_FILE="/var/run/celery/%N.pid"
# Workers should run as an unprivileged user.
CELERYD_USER="root"
CELERYD_GROUP="root"
/etc/init.d/celeryd
got from https://github.com/celery/celery/blob/3.1/extra/generic-init.d/celeryd without changes
Now, when I go to the console and run:
cd /etc/init.d
celery multi start w1
I see this output:
celery multi v3.1.11 (Cipater)
> Starting nodes...
> w1#engine: OK
So, no errors! But tasks are not invoked, and I cannot figure out what's wrong.
I would suggest using Supervisor. It's a better way than init scripts, because you can run multiple Celery instances for different projects on one server. You can find an example Supervisor config in the Celery repo, or here is a fully working example from my project:
# /etc/supervisor/conf.d/celery.conf
[program:celery]
command=/home/newspos/.virtualenvs/newspos/bin/celery worker -A newspos --loglevel=INFO
user=newspos
environment=DJANGO_SETTINGS_MODULE="newspos.settings"
directory=/home/newspos/projects/newspos/
autostart=true
autorestart=true
stopwaitsecs = 600
killasgroup=true
startsecs=10
stdout_logfile=/var/log/celery/newspos-celeryd.log
stderr_logfile=/var/log/celery/newspos-celeryd.log

How to run celery as daemon with normal celery command

I have a Django app in which I am using Celery tasks to perform some CSV processing in the background. I installed rabbitmq-server with sudo apt-get install rabbitmq-server, and with this command rabbitmq-server was installed and is running successfully.
I have some Celery task code in a tasks.py module inside an app, and I run Celery like below:
celery -A app.tasks worker --loglevel=info
This was working fine and processing the CSV files in the background successfully, but now I just want to daemonize the above command. I searched for a way to daemonize it but didn't find any argument to pass, like -D. So is there any way I can daemonize the above command and keep Celery running?
I think you're looking for the --detach option. [1]
But it is recommended that you use something like systemd.
The Celery docs have a whole page on this topic. [2]
[1] http://celery.readthedocs.org/en/latest/reference/celery.bin.base.html#daemon-options
[2] http://celery.readthedocs.org/en/latest/tutorials/daemonizing.html
supervisorctl will be a better bet for this.
Installation: sudo apt-get install supervisor
The main configuration file of supervisor is here: /etc/supervisor/supervisord.conf
Run $ vim /etc/supervisor/supervisord.conf to inspect it. Looking into the file, at the bottom, you'll notice:
[include]
files = /etc/supervisor/conf.d/*.conf
This basically means that config files for your projects can be stored in /etc/supervisor/conf.d/ and they will be automatically included.
Run: sudo vim /etc/supervisor/conf.d/myapp.conf. Your configuration may look like:
[program:myapp]
command={{ your celery commands without curly braces }}
directory=/directory/to/myapp
autostart=true
autorestart=true
stderr_logfile=/var/log/myapp.err.log
stdout_logfile=/var/log/myapp.out.log
To restart the service: $ sudo service supervisor restart
To re-read after making updates to any *.conf file: $ sudo supervisorctl reread
To apply the recorded updates: $ sudo supervisorctl update
To check the status of a specific program: $ sudo supervisorctl status myapp
Check your log files for more status data.
