app-celery: ERROR (no such file)

I've followed the tutorial on how to implement celery on my django production server, using supervisor.
I've done this successfully; however, when I try to start the program with sudo supervisorctl start app-celery, it returns:
app-celery: ERROR (no such file)
Here is my config in the folder /etc/supervisor/conf.d (app-celery.conf):
[program:app-celery]
command=/home/app/bin/celery worker -A draft1 --loglevel=INFO
directory=/home/app/draft1
numprocs=1
stdout_logfile=/var/log/supervisor/celery.log
stderr_logfile=/var/log/supervisor/celery.log
autostart=true
autorestart=true
startsecs=10
; Need to wait for currently executing tasks to finish at shutdown.
; Increase this if you have very long running tasks.
stopwaitsecs = 600
stopasgroup=true
; Set Celery priority higher than default (999)
; so, if rabbitmq is supervised, it will start first.
priority=1000
Any idea what the problem is?

I had the same issue. Adding the following resolved the issue for me.
environment=DJANGO_SETTINGS_MODULE="my_proj.settings"
I'm not sure why this is necessary. It's not listed in the documentation I've seen, and running the raw command either inside or outside of the virtual environment seems to be fine. Nevertheless, celery now starts and restarts without issue for me.
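For context, this is roughly where that line goes; a minimal sketch of the [program:app-celery] section, assuming the settings module is draft1.settings (guessed from the project name in the question; adjust it to your own project):
[program:app-celery]
command=/home/app/bin/celery worker -A draft1 --loglevel=INFO
directory=/home/app/draft1
environment=DJANGO_SETTINGS_MODULE="draft1.settings" ; assumed module path - change to your settings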

Trying to get supervisor to create a worker for python-rq

I am trying to get supervisor to spawn a worker following this pattern using python-RQ, much like what is mentioned in this stackoverflow question. I can start workers manually from the terminal as follows:
$ venv/bin/rq worker
14:35:27 Worker rq:worker:fd403822518a4f21802fa0dc417e526a: started, version 1.2.2
14:35:27 *** Listening on default...
It works great. I can confirm the worker exists in another terminal:
$ venv/bin/rq info
0 queues, 0 jobs total
fd403822518a4f21802fa0dc417e526a (b'' 41735): idle default
1 workers, 0 queues
Now to start a worker using supervisor.... Here is my supervisord.conf file, located in the same directory.
[supervisord]
;[program:worker]
command=venv/bin/rq worker
process_name=%(program_name)s-%(process_num)s
numprocs=1
directory=.
stopsignal=TERM
autostart=false
autorestart=false
I can start supervisor as follows:
$ venv/bin/supervisord -n
2020-03-05 14:36:45,079 INFO supervisord started with pid 41757
However, checking for a new worker, I see it's not there.
$ venv/bin/rq info
0 queues, 0 jobs total
0 workers, 0 queues
I have tried a multitude of other ways to get this worker to start, such as...
... within the virtual environment:
$ source venv/bin/activate
(venv) $ rq worker
*** Listening on default...
... using a shell file
#!/bin/bash
source /venv/bin/activate
rq worker low
$ ./start.sh
*** Listening on default...
... using a python script
$ venv/bin/python3 worker.py
*** Listening on default...
When started manually they all work fine. Changing the command= in supervisord.conf doesn't seem to make a difference. There is no worker to be found. What am I missing? Why won't supervisor start a worker? I am running this on macOS and my file structure is as follows:
.
|--__pycache__
|--supervisord.conf
|--supervisord.log
|--supervisord.pid
|--main.py
|--task.py
|--venv
   |--bin
      |--rq
      |--supervisord
      |--...etc
   |--include
   |--lib
   |--pyenv.cfg
Thanks in advance.
I had two problems with supervisord.conf which were preventing the worker from starting. The corrected config file is as follows:
[supervisord]
[program:worker]
command=venv/bin/rq worker
process_name=%(program_name)s-%(process_num)s
numprocs=1
directory=.
stopsignal=TERM
autostart=true
autorestart=false
First, the line [program:worker] was in fact commented out. I must have taken this line from the commented-out sample file and not realized it. However, removing the comment still didn't start the worker. I also had to set autostart=true, as starting supervisor does not automatically start a program.
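If you do want to keep autostart=false, the program can instead be started on demand once supervisord is running; a small sketch assuming the config above, where process_name expands to worker-0:
venv/bin/supervisord -c supervisord.conf
venv/bin/supervisorctl -c supervisord.conf start worker:worker-0
venv/bin/supervisorctl -c supervisord.conf status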

ERROR: CANT_REREAD: The directory named as part of the path /home/app/logs/celery.log does not exist

I'm following a tutorial on how to use Celery on my Django production server.
When I get to the bit where it says:
Now reread the configuration and add the new process:
sudo supervisorctl reread
sudo supervisorctl update
When I perform sudo supervisorctl reread in my server (Ubuntu 16.04) terminal, it returns this:
ERROR: CANT_REREAD:
The directory named as part of the path /home/app/logs/celery.log does not exist.
in section 'app-celery' (file: '/etc/supervisor/conf.d/app-celery.conf')
I've followed all of the instructions prior to this, including installing supervisor and creating a file named mysite-celery.conf (app-celery.conf in my case) in the folder /etc/supervisor/conf.d.
If you're curious my app-celery.conf file looks like this:
[program:app-celery]
command=/home/app/bin/celery worker -A draft1 --loglevel=INFO
directory=/home/app/draft1
user=zorgan
numprocs=1
stdout_logfile=/home/app/logs/celery.log
stderr_logfile=/home/app/logs/celery.log
autostart=true
autorestart=true
startsecs=10
; Need to wait for currently executing tasks to finish at shutdown.
; Increase this if you have very long running tasks.
stopwaitsecs = 600
stopasgroup=true
; Set Celery priority higher than default (999)
; so, if rabbitmq is supervised, it will start first.
priority=1000
Any idea what the problem is?
Supervisor is not able to create the folder /home/app/logs/ on its own.
You can create it manually using mkdir and then restart the supervisor service:
mkdir /home/app/logs
sudo service supervisor restart
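Once the directory exists, rereading and updating the configuration (the same step the tutorial was on) should also work instead of a full service restart; a short sketch:
sudo mkdir -p /home/app/logs
sudo supervisorctl reread
sudo supervisorctl update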
I added my username to the supervisord.conf file under the [unix_http_server] section like so:
[unix_http_server]
file=/var/run/supervisor.sock ; (the path to the socket file)
chmod=0770 ; socket file mode (default 0700)
chown=appuser:supervisor ;(username:group)
This seemed to work; time will tell if it continues working after I manage to solve the rest of the supervisor issues.
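Note that [unix_http_server] settings are applied when supervisord creates the socket, so restart the service after editing supervisord.conf, for example:
sudo service supervisor restart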

Why does supervisor keep changing the celery worker from RUNNING to STARTING?

Background
The system is CentOS 7, which has Python 2.x, 1 GB of memory and a single core.
I installed Python 3.x, which I can run with the python3 command.
The django-celery project is based on a Python 3.x virtualenv, and I have already set it up with nginx, uwsgi and mariadb. At least I think so, since no errors occurred.
I am trying to use supervisor to control the django-celery worker, like below:
command=env/bin/python project/manage.py celeryd -l INFO -n worker_%(process_num)s
numprocs=4
process_name=projects_worker_%(process_num)s
stdout_logfile=logfile.log
stderr_logfile=logfile_err.log
I have also configured celery events and celery beat; that part works fine, with no errors. The error comes from the worker part.
When I set the number of processes greater than 1, everything runs at first: when I do supervisorctl status, all the workers are RUNNING.
But when I run the same command a few more times to check the status, some processes have changed to STARTING.
Trying this repeatedly, I found that the workers' status keeps changing from RUNNING to STARTING and back to RUNNING, without ever stopping.
When I check supervisor's logfile at tmp/supervisor.log, it shows entries like:
exit status 1; not expected
entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
'project_worker_0' with pid 2284
Maybe this shows why the workers change status all the time.
What's more, when I change the number of processes to 1, the worker can still fail. The worker's log shows me:
stale pidfile exists. Removing it
But I did not point the worker at any pidfile path. I only found the events' and beat's pidfiles under the / path, and no worker pidfile. I also tried find / -name '*.pid' to look for a pidfile for the worker or celeryd, but none exists.
Question
First, I want to deploy the project, so is there any other way to deploy django-celery with the virtualenv's celery?
Second, if anyone can tell me why this phenomenon happens, I would be happy to keep using supervisor to deploy the celery part. Can anyone help me with it?
PS
Any of your thoughts may be helpful to me, best wishes!
I finally solved this problem last night.
About the reason
I had made the project run successfully on a Windows 10 system, but did not check it again when I moved it to CentOS 7. The command env/bin/python project/manage.py celeryd could not run successfully, so supervisor kept starting a process that failed shortly afterwards.
Why could the command not succeed? I had pip installed all the packages it needs, but it showed the error below:
Running a worker with superuser privileges when the worker accepts messages serialized with pickle is a very bad idea!
If you really want to continue then you have to set the C_FORCE_ROOT
environment variable (but please think about this before you do).
User information: uid=0 euid=0 gid=0 egid=0
I searched some blogs about this error and got the answer:
export C_FORCE_ROOT='true'  # in the CentOS environment
Steps to solve it (after meeting an error like this):
Add export C_FORCE_ROOT='true' to the CentOS environment file and source it.
Check that the command env/bin/python project/manage.py celeryd now runs successfully.
Restart supervisord. Attention please! Not supervisorctl reload; that just reloads the .conf file, not the environment file. Instead, kill the supervisord -c xx.conf process (ps aux | grep supervisord, then kill -9 process_number; be careful) and start it again.
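As a possible alternative to the shell environment file (just a sketch, not part of the original answer): supervisor can set environment variables per program with the environment= option, so the variable could also be declared directly in the worker's section. The section name below is assumed:
[program:project_worker]
command=env/bin/python project/manage.py celeryd -l INFO -n worker_%(process_num)s
environment=C_FORCE_ROOT="true" ; set here because supervisord does not read shell profile files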
Reference: a blog post (in Chinese) about the error when running celeryd does not succeed.

Celery is constantly starting with supervisor

How to run celery in Supervisor?
This is my .conf file:
[program:celery_worker]
command=celery -A urlextractor worker -l info
process_name=%(program_name)s ; process_name expr
numprocs=1
directory=/home/omuntean/Django/urlextractor/urlextractor ; directory to cwd to before exec (def no cwd)
autostart=true ; start at supervisord start (default: true)
autorestart=unexpected ; when to restart if exited after running
user=root
stopasgroup=true
stopsignal=QUIT
stdout_logfile=/var/log/urlextractor/celery_w_out.log
stderr_logfile=/var/log/urlextractor/celery_w_err.log
If I run the celery command normally it works fine without any errors, however, when I type:
sudo service supervisor start
Then see the status with:
supervisorctl status
It gives me:
celery_worker RUNNING pid 10651, uptime 0:00:02
urlextractor RUNNING pid 9761, uptime 0:08:08
And then when I check again it gives me:
celery_worker STARTING
urlextractor RUNNING pid 9761, uptime 0:08:09
Why is this happening and how can I make it work?
I have found the problem. It's the user. Mine is set to root. Celery does not permit running under root unless it is forced. I only had to change the user.
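In other words, the fix is to point the user= line in the [program:celery_worker] section at an unprivileged account instead of root; the username below is only a guess based on the paths in the question:
user=omuntean ; any non-root user that can read the project directory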

Cannot setup Celery as daemon on server

I cannot set up Celery as a daemon on my server (Django 1.6.11, Celery 3.1, Ubuntu 14.04).
I've tried a lot of options; can anyone post the full settings of a working configuration to run celery as a daemon?
I am very disappointed with the official docs http://docs.celeryproject.org/en/latest/tutorials/daemonizing.html#generic-init-scripts: none of this is working for me, and there is no full step-by-step tutorial. Zero (!!!) videos on YouTube on how to set up the daemon.
Right now I am able to run celery simply with celery worker -A engine -l info -E
and tasks from Django are executed successfully.
I have created these configs:
/etc/defaults/celery
# Name of nodes to start
# here we have a single node
CELERYD_NODES="w1"
# or we could have three nodes:
#CELERYD_NODES="w1 w2 w3"
# Absolute path to "manage.py"
CELERY_BIN="/var/www/engine/manage.py"
# How to call manage.py
CELERYD_MULTI="celery multi"
# Extra command-line arguments to the worker
CELERYD_OPTS="--time-limit=300 --concurrency=2"
# %N will be replaced with the first part of the nodename.
CELERYD_LOG_FILE="/var/log/celery/%N.log"
CELERYD_PID_FILE="/var/run/celery/%N.pid"
# Workers should run as an unprivileged user.
CELERYD_USER="root"
CELERYD_GROUP="root"
/etc/init.d/celeryd
got from https://github.com/celery/celery/blob/3.1/extra/generic-init.d/celeryd without changes
Now, when I go to the console and run:
cd /etc/init.d
celery multi start w1
i see output:
celery multi v3.1.11 (Cipater)
> Starting nodes...
> w1#engine: OK
So, no errors! But tasks are not invoked and I cannot figure out what's wrong.
I would suggest using Supervisor. It's a better way than init scripts, because you can run multiple Celery instances for different projects on one server. You can find an example Supervisor config in the Celery repo, or here is a fully working example from my project:
# /etc/supervisor/conf.d/celery.conf
[program:celery]
command=/home/newspos/.virtualenvs/newspos/bin/celery worker -A newspos --loglevel=INFO
user=newspos
environment=DJANGO_SETTINGS_MODULE="newspos.settings"
directory=/home/newspos/projects/newspos/
autostart=true
autorestart=true
stopwaitsecs = 600
killasgroup=true
startsecs=10
stdout_logfile=/var/log/celery/newspos-celeryd.log
stderr_logfile=/var/log/celery/newspos-celeryd.log
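After saving the file under /etc/supervisor/conf.d/, load it and check the worker the same way as earlier in this thread; a short sketch:
sudo supervisorctl reread
sudo supervisorctl update
sudo supervisorctl status celery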
