Celery Running as Daemon stops - python

I have installed Celery with Redis on a CentOS Linux server.
I start a celery worker using the following command:
celery multi start worker1 -A proj -Q lopri,lopri2 -l info --pidfile="$HOME/run/celery/%n.pid" --logfile="$HOME/log/celery/%n.log"
The problem is that after a few hours the worker no longer responds to task creation. I have to restart the worker for it to process tasks again.
Here is the celery settings file located at /etc/default/celeryd:
# Names of nodes to start
# most people will only start one node:
CELERYD_NODES="worker1"
# but you can also start multiple and configure settings
# for each in CELERYD_OPTS (see `celery multi --help` for examples):
#CELERYD_NODES="worker1 worker2 worker3"
# alternatively, you can specify the number of nodes to start:
#CELERYD_NODES=10
# Absolute or relative path to the 'celery' command:
CELERY_BIN="/usr/local/bin/celery"
#CELERY_BIN="/virtualenvs/def/bin/celery"
# App instance to use
# comment out this line if you don't use an app
CELERY_APP="proj"
# or fully qualified:
#CELERY_APP="proj.tasks:app"
#django settings
#export DJANGO_SETTINGS_MODULE="settings"
# Where to chdir at start.
CELERYD_CHDIR="/var/www/html/proj/"
# Extra command-line arguments to the worker
CELERYD_OPTS="--time-limit=300 --concurrency=8"
# %N will be replaced with the first part of the nodename.
CELERYD_LOG_FILE="/var/log/celery/%N.log"
CELERYD_PID_FILE="/var/run/celery/%N.pid"
# Workers should run as an unprivileged user.
# You need to create this user manually (or you can choose
# a user/group combination that already exists, e.g. nobody).
CELERYD_USER="ec2-user"
CELERYD_GROUP="ec2-user"
# If enabled pid and log directories will be created if missing,
# and owned by the userid/group configured.
CELERY_CREATE_DIRS=1
One problem that may be a clue to what is happening is that I have to run the "start" command from the project folder, otherwise I receive the error:
ImportError: No module named proj
Shouldn't the CELERYD_CHDIR setting take care of this? Does that mean that my start command is not using this Celery default settings file?
Thanks for any light you can shed on this.

If you are running celery from the command line, your settings file is pretty much useless. I mean, it is not being applied, which is why you get an error saying your project can't be imported.
This is the script to run celery as daemon:
https://github.com/celery/celery/blob/3.1/extra/generic-init.d/celeryd
As you can see at https://github.com/celery/celery/blob/3.1/extra/generic-init.d/celeryd#L56, that is where the script imports your settings.
I'm not quite sure why your celeryd stops working after a few hours, but this shows that you're not really running it as a daemon.
Maybe setting up the proper init.d script with your settings will fix that (see the sketch below), but I doubt it.
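As a rough sketch, assuming the generic init script from that repo is saved as /etc/init.d/celeryd and your settings stay in /etc/default/celeryd (paths follow the Celery docs; adjust to your layout), starting through the script instead of celery multi would look like this:
sudo cp extra/generic-init.d/celeryd /etc/init.d/celeryd   # copied from a celery checkout
sudo chmod 755 /etc/init.d/celeryd
sudo /etc/init.d/celeryd start    # sources /etc/default/celeryd, so CELERYD_CHDIR applies
sudo /etc/init.d/celeryd status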

Related

Why does supervisor make the celery worker change from running to starting all the time?

background
The system is CentOS 7, which has Python 2.x, 1 GB of memory and a single core.
I installed Python 3.x, so I can run Python 3 code with python3.
The django-celery project is based on a Python 3.x virtualenv, and I have already set it up with nginx, uWSGI and MariaDB. At least, I think so, since no errors happened there.
I am trying to use supervisor to control the django-celery worker, like below:
command=env/bin/python project/manage.py celeryd -l INFO -n worker_%(process_num)s
numprocs=4
process_name=projects_worker_%(process_num)s
stdout_logfile=logfile.log
stderr_logfile=logfile_err.log
I have also set up celery events and celery beat; that part works fine, no errors happened. The error comes from the worker part.
When I keep numprocs bigger than 1, it runs at first; when I do supervisorctl status, all processes are running.
But when I run the same command to check the status a few more times, some processes' status has changed to starting.
So I tried more times and found that the workers' status keeps changing from running to starting and then from starting back to running, without ever stopping.
When I check supervisor's logfile at tmp/supervisor.log, it shows lines like:
exit status 1; not expected
entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
'project_worker_0' with pid 2284
Maybe that shows why the workers change status all the time.
What's more, when I change numprocs to 1, the worker still fails. The worker's log shows me:
stale pidfile exists.Removing it
But I did not point the worker at any pidfile path. I only found the events' and beat's pidfiles under /, and no worker pidfile. I also tried find / -name '*.pid' to look for a pidfile for the worker or celeryd, but none existed.
question
Firstly, I want to deploy the project, so is there any other way to deploy django-celery with the virtualenv's celery part?
Secondly, if anyone can tell me how this phenomenon comes about, I would prefer to keep using supervisor to deploy the celery part. Can anyone help me with it?
PS
Any of your thoughts may be helpful to me, best wishes!
Finally, I solved this problem last night.
about the reason
I had the project running successfully on a Windows 10 system, but did not check it again when I moved the project to CentOS 7.x. The command env/bin/python project/manage.py celeryd could not run successfully, so supervisor kept starting a process that failed soon afterwards.
Why couldn't the command succeed? I had pip-installed all the packages needed, but it showed the error below:
Running a worker with superuser privileges when the worker accepts messages serialized with pickle is a very bad idea!
If you really want to continue then you have to set the C_FORCE_ROOT
environment variable (but please think about this before you do).
User information: uid=0 euid=0 gid=0 egid=0
I searched some blogs about this error and got the answer:
export C_FORCE_ROOT='true'  # in the CentOS environment
action to solve (after meeting an error like this)
Add export C_FORCE_ROOT='true' to CentOS's environment file and source it.
Check that the command env/bin/python project/manage.py celeryd now runs successfully.
Restart supervisord. Attention please: not supervisorctl reload, which only reloads the .conf file, not the environment. Kill the supervisord process and start it again with supervisord -c xx.conf (ps aux | grep supervisord and kill -9 process_number, be careful). See the sketch below.
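A minimal shell sketch of those three steps (the profile file name and supervisord config path are assumptions; use whatever your system actually has):
echo "export C_FORCE_ROOT='true'" >> /etc/profile.d/celery.sh   # assumed environment file
source /etc/profile.d/celery.sh
env/bin/python project/manage.py celeryd -l INFO                # verify the worker now starts
ps aux | grep supervisord                                       # find the old supervisord PID
kill -9 <supervisord_pid>                                       # be careful to kill only supervisord
supervisord -c xx.conf                                          # start it again with the new environment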
some URLs about the blog
the error when celeryd alone does not run successfully (in Chinese)

yowsup-celery: How to run yowsup-celery in daemon mode, passing the WhatsApp config file as an argument

I am using:
yowsup-celery: https://github.com/jlmadurga/yowsup-celery
I am trying to integrate WhatsApp into my system.
I have been able to store messages successfully and now want to run celery in daemon mode rather than in a terminal.
To run it normally we use:
celery multi start -P gevent -c 2 -l info --yowconfig:conf_wasap
To run daemon mode we use:
sudo /etc/init.d/celeryd start
Here, how can I pass the config file as an argument, or is there a way to remove the dependency on passing it as an argument and instead read the file inside the script?
Since yowsup-celery 0.2.0 it is possible to pass the config file path through configuration instead of as an argument:
YOWSUPCONFIG = "path/to/credentials/file"
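With YOWSUPCONFIG set in the configuration, the daemon start no longer has to pass --yowconfig on the command line; a rough sketch of the relevant /etc/default/celeryd lines (the app name and pool options here are assumptions, keep whatever you already use):
# /etc/default/celeryd (sketch)
CELERY_APP="proj"                      # assumption: your Celery app module
CELERYD_OPTS="-P gevent -c 2 -l info"  # no --yowconfig argument needed any more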

Cannot setup Celery as daemon on server

I cannot set up Celery as a daemon on the server (Django 1.6.11, Celery 3.1, Ubuntu 14.04).
I have tried lots of options; can anyone post a full, working configuration for running Celery as a daemon?
I am very disappointed by the official docs http://docs.celeryproject.org/en/latest/tutorials/daemonizing.html#generic-init-scripts - none of this works, and there is no full step-by-step tutorial. Zero (!!!) videos on YouTube on how to set up the daemon.
Right now I am able to run Celery simply with celery worker -A engine -l info -E
and tasks from Django are executed successfully.
I have these configs:
/etc/defaults/celery
# Name of nodes to start
# here we have a single node
CELERYD_NODES="w1"
# or we could have three nodes:
#CELERYD_NODES="w1 w2 w3"
# Absolute path to "manage.py"
CELERY_BIN="/var/www/engine/manage.py"
# How to call manage.py
CELERYD_MULTI="celery multi"
# Extra command-line arguments to the worker
CELERYD_OPTS="--time-limit=300 --concurrency=2"
# %N will be replaced with the first part of the nodename.
CELERYD_LOG_FILE="/var/log/celery/%N.log"
CELERYD_PID_FILE="/var/run/celery/%N.pid"
# Workers should run as an unprivileged user.
CELERYD_USER="root"
CELERYD_GROUP="root"
/etc/init.d/celeryd
taken from https://github.com/celery/celery/blob/3.1/extra/generic-init.d/celeryd without changes
Now, when I go to the console and run:
cd /etc/init.d
celery multi start w1
I see the output:
celery multi v3.1.11 (Cipater)
> Starting nodes...
> w1#engine: OK
So, no errors! But tasks are not invoked and I cannot figure out what's wrong.
I would suggest using Supervisor. It's a better way than init scripts, because you can run multiple Celery instances for different projects on one server. You can find an example Supervisor config in the Celery repo, or use this fully working example from my project:
# /etc/supervisor/conf.d/celery.conf
[program:celery]
command=/home/newspos/.virtualenvs/newspos/bin/celery worker -A newspos --loglevel=INFO
user=newspos
environment=DJANGO_SETTINGS_MODULE="newspos.settings"
directory=/home/newspos/projects/newspos/
autostart=true
autorestart=true
stopwaitsecs = 600
killasgroup=true
startsecs=10
stdout_logfile=/var/log/celery/newspos-celeryd.log
stderr_logfile=/var/log/celery/newspos-celeryd.log
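After dropping that file into /etc/supervisor/conf.d/, Supervisor has to pick it up; a minimal sketch (assuming a stock Ubuntu supervisor install, adjust paths otherwise):
sudo supervisorctl reread            # detect the new celery.conf
sudo supervisorctl update            # start the newly added program
sudo supervisorctl status celery     # should show RUNNING after startsecs
tail -f /var/log/celery/newspos-celeryd.log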

Running and stopping gunicorn server using monit

I am trying to manage stuff on my server using monit. What I would like to do is to run 3 different gunicorn servers on 3 different ports.
Currently I am able to run all the servers at once, for example in screen. I can launch the servers with these commands:
gunicorn -c app1.http_server.config app1.http_server.server:app
gunicorn -c app2.http_server.config app2.http_server.server:app
gunicorn -c app3.http_server.config app3.http_server.server:app
From what I understand of how monit works, I should edit the monitrc file and specify everything there, something like:
#set mailserver localhost
#set alert myemail@gmail.com
check process app1 with pidfile /var/run/app1.pid
start program = "gunicorn -c app1.http_server.config app1.http_server.server:app"
stop program = "???"
if failed unixsocket ??? then start
if cpu > 50% for 5 cycles then alert
# TODO app2, app3
check system resources
if loadavg (1min) > 4 then alert
if loadavg (5min) > 2 then alert
if memory usage > 75% then alert
if cpu usage (user) > 70% then alert
if cpu usage (system) > 30% then alert
if cpu usage (wait) > 20% then alert
check filesystem rootfs with path /
if space usage > 80% then alert
I have tried putting various things into the stop program field, and likewise for start program, but monit is not able to launch the gunicorn server. So my question is: how can I run and stop a gunicorn server from monit? And what would gunicorn's unixsocket be when I launch it? Could anyone provide an example that might help me set this up?
I had the same problem. Monit is happier with full paths like "/virtualenv_path/bin/gunicorn". If you don't use virtualenv, just remove it wherever I placed it. The commands are quite long, but it worked that way:
check process pymonit with pidfile /path/to/pid/gunicorn.pid
start program = "/virtualenv_path/bin/python /virtualenv_path/bin/gunicorn -c /project/path/gunicorn.conf.py /project/path/wsgi:application"
stop program = "/usr/bin/pkill -f '/virtualenv_path/bin/python /virtualenv_path/bin/gunicorn -c /project/path/gunicorn.conf.py /project/path/wsgi:application'"
if failed host 127.0.0.1 port 8011 protocol http then restart
if 5 restarts within 5 cycles then alert
In your gunicorn.conf, you should have something like this:
bind = '127.0.0.1:8011'
...
pidfile = '/path/to/pid/gunicorn.pid'
It seems like you are using monit as a startup service?
Depending on your OS, you should still create a sysvinit, systemd or upstart file for your apps; that also means they will launch on boot and will not depend on monit being up and having had its first tick.
This also simplifies your monit config to just service app1 start / stop.
If the OS doesn't have any of those systems, create a shell script that sources your virtualenv properly and changes directory (not needed with gunicorn's chdir setting, though); then you have a more readable monit config, as in the sketch below.
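A minimal sketch of such a wrapper script (every path and name here is an assumption; substitute your real virtualenv, project directory and config file):
#!/bin/sh
# start_app1.sh - wrapper that monit (or an init file) can call
. /virtualenv_path/bin/activate          # put the virtualenv's gunicorn on PATH
cd /project/path                         # or rely on gunicorn's chdir setting
exec gunicorn -c app1.http_server.config \
    app1.http_server.server:app \
    --pid /var/run/app1.pid --daemon     # daemonize and write the pidfile monit watches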

Cannot read environment variable in Python on Ubuntu

I have set up an environment variable which I export locally using a .sh file:
.sh file:
#!/bin/sh
echo "environment variables"
export BROKER="amqp://admin:password@11.11.11.11:4672//"
Locally inside a virtual environment I can now read this in Python using:
BROKER = os.environ['BROKER']
However, on my production server (Ubuntu) I run the same file (chmod +x name_of_file.sh, then source settings.sh) and can see the variable using printenv, but Python gives the error KeyError: 'BROKER'. Why?
This only happens on my production machine, despite the fact that I can see the variable using printenv. Note that my production machine does not use virtualenv.
If I run the Python shell on Ubuntu and do os.environ['BROKER'], it prints out the correct value. So I have no idea why the app cannot find it.
This is the task that gets run and cannot find the variable (a supervisor task):
[program:celery]
directory = /srv/app_test/
command=celery -A tasks worker -l info
stdout_logfile = /var/log/celeryd_.log
autostart=true
autorestart=true
startsecs=5
stopwaitsecs = 600
killasgroup=true
priority=998
user=ubuntu
Celery config (which does not find the variable when executed under supervisor):
from kombu import Exchange, Queue
import os
# Celery Settings
BROKER = os.environ['BROKER']
When I restart supervisor it gives the KeyError.
The environment variables from your shell will not be visible within supervisor tasks.
You need to use the environment setting in your supervisor config:
[program:celery]
...
environment=BROKER="amqp://admin:password@11.11.11.11:4672//"
This requires supervisor 3.0+.
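If you want to confirm that the variable really reaches the worker, one way (a sketch, assuming a Linux host and the program name celery from the config above) is to restart the program and inspect the spawned process's environment:
sudo supervisorctl restart celery
# dump the worker's environment, one variable per line, and look for BROKER
sudo cat /proc/$(pgrep -f "celery -A tasks worker" | head -n1)/environ | tr '\0' '\n' | grep BROKER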
