celery daemon stopped working after I killed celery process - python

I started a job with wrong parameters that would have caused it to run forever, so I killed the main celery process with kill -9. Since then the service will not start, no matter what I do. When run from the command line, it starts fine. I'm running Ubuntu 18.04. The log file is created but has 0 bytes.
Here is the output of systemctl status celery:
user@ubuntu18-0:~/repository/backend$ systemctl status celery
● celery.service - Celery Service
Loaded: loaded (/lib/systemd/system/celery.service; disabled; vendor preset: enabled)
Active: inactive (dead)
Mar 12 09:00:10 ubuntu18-0 sh[5889]: celery multi v4.3.0 (rhubarb)
Mar 12 09:00:10 ubuntu18-0 sh[5889]: > ubuntu18-0@ubuntu18-0: DOWN
Mar 12 09:40:51 ubuntu18-0 systemd[1]: Starting Celery Service...
Mar 12 09:40:54 ubuntu18-0 sh[9234]: 2020-03-12 09:40:54,234 - Asynchronous - INFO - ######################## NEW LOG ########################
Mar 12 09:40:54 ubuntu18-0 sh[9234]: celery multi v4.3.0 (rhubarb)
Mar 12 09:40:54 ubuntu18-0 sh[9234]: > Starting nodes...
Mar 12 09:40:54 ubuntu18-0 sh[9234]: > ubuntu18-0@ubuntu18-0: OK
Mar 12 09:40:54 ubuntu18-0 systemd[1]: Started Celery Service.
Mar 12 09:40:55 ubuntu18-0 sh[9266]: celery multi v4.3.0 (rhubarb)
Mar 12 09:40:55 ubuntu18-0 sh[9266]: > ubuntu18-0@ubuntu18-0: DOWN
user@ubuntu18-0:~/repository/backend$
Here is the output of starting it manually:
./start_celery.sh
celery multi v4.3.0 (rhubarb)
> Starting nodes...
2020-03-12 09:44:12,459 - Asynchronous - INFO - ######################## NEW LOG ########################
> ubuntu18-0@ubuntu18-0: OK
Here is my celery.service:
[Unit]
Description=Celery Service
After=network.target
[Service]
Type=forking
EnvironmentFile=/etc/celery.conf
WorkingDirectory=/home/user/repository/backend
ExecStart=/bin/sh -c '${CELERY_BIN} multi start ${CELERYD_NODES} -A ${CELERY_APP} --pidfile=${CELERYD_PID_FILE} --logfile=${CELERYD_LOG_FILE} --loglevel=${CELERYD_LOG_LEVEL} ${CELERYD_OPTS}'
ExecStop=/bin/sh -c '${CELERY_BIN} multi stopwait ${CELERYD_NODES} --pidfile=${CELERYD_PID_FILE}'
ExecReload=/bin/sh -c '${CELERY_BIN} multi restart ${CELERYD_NODES} -A ${CELERY_APP} --pidfile=${CELERYD_PID_FILE} --logfile=${CELERYD_LOG_FILE} --loglevel=${CELERYD_LOG_LEVEL} ${CELERYD_OPTS}'
Restart=no
#RestartSec=60
[Install]
WantedBy=multi-user.target
Here is /etc/celery.conf:
# Names of node, adapt depending on host
CELERYD_NODES="ubuntu18-0"
ENV_PYTHON="/home/user/venv/bin/python3"
# Absolute or relative path to the 'celery' command:
CELERY_BIN="/home/user/venv/bin/celery"
# App instance to use
# comment out this line if you don't use an app
CELERY_APP="asynchronous.async_tasks.celery"
# Where to chdir at start.
CELERYD_CHDIR="/home/user/repository/backend/"
# Extra command-line arguments to the worker
CELERYD_OPTS="--concurrency=4 -Ofair -Q hcs_async,celery"
# Set logging level to DEBUG
CELERYD_LOG_LEVEL="INFO"
# %n will be replaced with the first part of the nodename.
CELERYD_LOG_FILE="/home/user/celery/%n%I.log"
CELERYD_PID_FILE="/home/user/celery/%n.pid"
# If enabled pid and log directories will be created if missing,
# and owned by the userid/group configured.
CELERY_CREATE_DIRS=1
Rebooting did not help. I suspect there might be a file somewhere that tells systemd or celery that the service is in some bad state.
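One thing worth checking, not visible in the logs above but a common aftermath of kill -9: the worker's pid file (CELERYD_PID_FILE, where %n expands to the node name) is left behind, and systemd may also remember the unit as failed. A minimal cleanup sketch, assuming the paths from /etc/celery.conf above:
# Look for a leftover pid file from the killed worker
ls -l /home/user/celery/
# If the pid recorded in it no longer exists, remove the stale file
cat /home/user/celery/ubuntu18-0.pid
rm /home/user/celery/ubuntu18-0.pid
# Clear any failed state systemd keeps for the unit, then retry
sudo systemctl reset-failed celery.service
sudo systemctl start celery.service
systemctl status celery.service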

Related

systemd service keeps giving me an error when I start it or check its status

I have a Python application and I need it to run as a service. I tried many methods and was advised to make it a systemd service.
I searched and tried some code.
Here is my unit file:
[Unit]
Description=Airnotifier Service
After=network.target
[Service]
Type=idle
Restart=on-failure
User=root
ExecStart=python3 /home/airnotifier/airnotifier/app.py
[Install]
WantedBy=multi-user.target
Then I ran the following commands:
sudo systemctl daemon-reload
sudo systemctl enable airnotifier.service
sudo systemctl start airnotifier.service
sudo systemctl status airnotifier.service
The service does not run and I am getting these errors:
airnotifier@airnotifier:~$ sudo systemctl status airnotifier.service
● airnotifier.service - Airnotifier Service
Loaded: loaded (/lib/systemd/system/airnotifier.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Mon 2023-01-09 14:07:38 UTC; 1s ago
Process: 2072 ExecStart=/usr/bin/python3 /home/airnotifier/airnotifier/app.py (code=exited, status=1/FAILURE)
Main PID: 2072 (code=exited, status=1/FAILURE)
Jan 09 14:07:38 airnotifier systemd[1]: airnotifier.service: Scheduled restart job, restart counter is at 5.
Jan 09 14:07:38 airnotifier systemd[1]: Stopped Airnotifier Service.
Jan 09 14:07:38 airnotifier systemd[1]: airnotifier.service: Start request repeated too quickly.
Jan 09 14:07:38 airnotifier systemd[1]: airnotifier.service: Failed with result 'exit-code'.
Jan 09 14:07:38 airnotifier systemd[1]: Failed to start Airnotifier Service.
This is the configuration that works for me:
[Unit]
Description=Airnotifier Service
[Install]
WantedBy=multi-user.target
[Service]
Type=simple
WorkingDirectory=/home/airnotifier/airnotifier
ExecStart=python3 app.py
Restart=always
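For anyone comparing the two unit files: the "Start request repeated too quickly" message only means systemd gave up after five rapid restarts; the real problem is app.py exiting with status 1. A quick way to see the underlying Python error before changing anything (commands assumed, using the unit name from the question):
# Show the most recent journal entries for the unit, including app.py's stderr
sudo journalctl -u airnotifier.service -n 50 --no-pager
# After fixing the unit, clear the rate-limit counter and retry
sudo systemctl daemon-reload
sudo systemctl reset-failed airnotifier.service
sudo systemctl start airnotifier.service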

celery.service: Failed with result 'signal'

I am setting up my Django production project to work with Celery on an Ubuntu server. It works fine when I run the worker manually with
celery -A project worker -l INFO
but the systemd service stops every time I set it up and run it. Configuration settings below.
/etc/default/celeryd
# most people will only start one node:
CELERYD_NODES="worker1"
# but you can also start multiple and configure settings
# for each in CELERYD_OPTS
#CELERYD_NODES="worker1 worker2 worker3"
# alternatively, you can specify the number of nodes to start:
#CELERYD_NODES=10
# Absolute or relative path to the 'celery' command:
CELERY_BIN="/home/steph/icecream/venv/bin/celery"
#CELERY_BIN="/virtualenvs/def/bin/celery"
# App instance to use
# comment out this line if you don't use an app
CELERY_APP="homestud"
# or fully qualified:
#CELERY_APP="proj.tasks:app"
# Where to chdir at start.
CELERYD_CHDIR="/home/steph/icecream/hometutors/homestud/"
# Extra command-line arguments to the worker
CELERYD_OPTS="--time-limit=300 --concurrency=8"
# Configure node-specific settings by appending node name to arguments:
#CELERYD_OPTS="--time-limit=300 -c 8 -c:worker2 4 -c:worker3 2 -Ofair:worker1"
# Set logging level to DEBUG
#CELERYD_LOG_LEVEL="DEBUG"
# %n will be replaced with the first part of the node name.
CELERYD_LOG_FILE="/var/log/celery/%n%I.log"
CELERYD_PID_FILE="/var/run/celery/%n.pid"
# Workers should run as an unprivileged user.
# You need to create this user manually (or you can choose
# a user/group combination that already exists (e.g., nobody).
CELERYD_USER="celery"
CELERYD_GROUP="celery"
CELERYD_LOG_LEVEL="INFO"
# If enabled PID and log directories will be created if missing,
# and owned by the userid/group configured.
CELERY_CREATE_DIRS=1
/etc/systemd/system/celery.service
[Unit]
Description=Celery Service
After=network.target
[Service]
Type=forking
User=celery
Group=celery
EnvironmentFile=/etc/default/celeryd
WorkingDirectory=/home/steph/icecream/hometutors/homestud
ExecStart=/home/steph/icecream/venv/bin/celery multi start ${CELERYD_NODES} \
-A ${CELERY_APP} --pidfile=${CELERYD_PID_FILE} \
--logfile=${CELERYD_LOG_FILE} --loglevel=${CELERYD_LOG_LEVEL} ${CELERYD_OPTS}
ExecStop=/home/steph/icecream/venv/bin/celery ${CELERY_BIN} multi stopwait ${CELERYD_NODES} \
--pidfile=${CELERYD_PID_FILE}
ExecReload=/home/steph/icecream/venv/bin/celery ${CELERY_BIN} multi restart ${CELERYD_NODES} \
-A ${CELERY_APP} --pidfile=${CELERYD_PID_FILE} \
--logfile=${CELERYD_LOG_FILE} --loglevel=${CELERYD_LOG_LEVEL} ${CELERYD_OPTS}
[Install]
WantedBy=multi-user.target
When I run sudo systemctl status celery:
● celery.service - Celery Service
Loaded: loaded (/etc/systemd/system/celery.service; enabled; vendor preset: enabled)
Active: failed (Result: signal) since Tue 2021-04-06 12:38:51 UTC; 5min ago
Process: 80579 ExecStart=/home/steph/icecream/venv/bin/celery multi start ${CELERYD_NODES} -A ${CELERY_APP} --pidfile=${CELERYD_PID_FILE} --logfile=${CELERYD_LOG_FILE} --loglevel=${CEL>
Main PID: 80594 (code=killed, signal=KILL)
Apr 06 12:37:15 homestud-ubuntu-server systemd[1]: Starting Celery Service...
Apr 06 12:37:16 homestud-ubuntu-server celery[80579]: celery multi v5.0.5 (singularity)
Apr 06 12:37:16 homestud-ubuntu-server celery[80579]: > Starting nodes...
Apr 06 12:37:16 homestud-ubuntu-server celery[80579]: > worker1@homestud-ubuntu-server: OK
Apr 06 12:37:17 homestud-ubuntu-server systemd[1]: Started Celery Service.
Apr 06 12:37:21 homestud-ubuntu-server systemd[1]: celery.service: Main process exited, code=killed, status=9/KILL
Apr 06 12:38:51 homestud-ubuntu-server systemd[1]: celery.service: State 'stop-sigterm' timed out. Killing.
Apr 06 12:38:51 homestud-ubuntu-server systemd[1]: celery.service: Killing process 80606 (python) with signal SIGKILL.
Apr 06 12:38:51 homestud-ubuntu-server systemd[1]: celery.service: Failed with result 'signal'.
I also had the same issue; it turned out to be caused by low RAM (1 GB) on the production server. Upgrading it to at least 2 GB resolved the problem.
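To check whether the worker really was killed by the kernel's OOM killer (rather than something else sending SIGKILL), the kernel log is the place to look. A quick check, with commands assumed rather than taken from the original post:
# Out-of-memory kills are recorded in the kernel log
sudo dmesg -T | grep -i -E 'out of memory|oom'
sudo journalctl -k | grep -i oom
# Current memory and swap on the box
free -h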

run Gunicorn with WSGI inside a folder

I have my project structure as below; I use folders to group the settings files.
~/myproject/
- env
- server
  - api
  - home
    - settings
      - dev.py
      - prod.py
    - wsgi
      - dev.py
      - prod.py
The file myproject/server/home/wsgi/dev.py is:
import os
from django.core.wsgi import get_wsgi_application
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "home.settings.dev")
application = get_wsgi_application()
Also, inside myproject/server/home/settings/dev.py:
WSGI_APPLICATION = 'home.wsgi.dev.application'
With all of the above, the development server runs perfectly. When I try to deploy and run Gunicorn, it just fails. Here is my gunicorn.service:
[Unit]
Description=gunicorn daemon
After=network.target
[Service]
User=demouser
Group=www-data
WorkingDirectory=/home/demouser/myproject
ExecStart=/home/demouser/myproject/env/bin/gunicorn --access-logfile - --workers 3 --bind unix:/home/demouser/myproject.sock home.wsgi.dev:application
[Install]
WantedBy=multi-user.target
I am not sure why I get this error; Gunicorn never starts:
Loaded: loaded (/etc/systemd/system/gunicorn.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Sun 2020-08-30 05:04:48 UTC; 14min ago
Process: 13354 ExecStart=/home/demouser/myproject/env/bin/gunicorn --access-logfile - --workers 3 --bind unix:/home/
Main PID: 13354 (code=exited, status=203/EXEC)
Aug 30 05:04:48 localhost systemd[1]: Started gunicorn daemon.
Aug 30 05:04:48 localhost systemd[13354]: gunicorn.service: Failed to execute command: No such file or directory
Aug 30 05:04:48 localhost systemd[13354]: gunicorn.service: Failed at step EXEC spawning /home/demouser/myproject/env/
Aug 30 05:04:48 localhost systemd[1]: gunicorn.service: Main process exited, code=exited, status=203/EXEC
Aug 30 05:04:48 localhost systemd[1]: gunicorn.service: Failed with result 'exit-code'.
I hope I made things clear for everyone. Thanks.
I solved it, and here is how, if you are interested:
I installed gunicorn3 and then edited these lines in the gunicorn.service file:
WorkingDirectory=/home/demouser/myproject/server
ExecStart=/usr/bin/gunicorn3 --access-logfile - --workers 3 --bind unix:/home/demouser/myproject.sock home.wsgi.dev:application
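status=203/EXEC together with "Failed to execute command: No such file or directory" means systemd could not even launch the binary named in ExecStart, so it is worth confirming that path before (or after) switching to the system-wide gunicorn3. A small sanity check, using the paths from the question (the env/bin/python interpreter path is an assumption):
# Does the virtualenv's gunicorn exist and is it executable?
ls -l /home/demouser/myproject/env/bin/gunicorn
# Can the WSGI module be imported from the chosen WorkingDirectory?
cd /home/demouser/myproject/server
/home/demouser/myproject/env/bin/python -c 'import home.wsgi.dev'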

python django celery systemd

I'm trying to build an NGINX/Gunicorn/Celery/RabbitMQ/Django server for several services. I am failing to daemonize Celery.
My project directory, where Celery is installed in a virtualenv, is:
home/ubuntu/fanvault/bin/fanvault
My configuration file, in /etc/conf.d/celery:
CELERYD_NODES="w1 w2 w3"
CELERY_BIN="home/ubuntu/fanvault/bin/celery"
CELERY_APP="fanvault"
CELERYD_MULTI="multi"
CELERYD_OPTS="--time-limit=300 --concurrency=8"
CELERYD_PID_FILE="/var/run/celery/%n.pid"
CELERYD_LOG_FILE="/var/log/celery/%n%I.log"
CELERYD_LOG_LEVEL="INFO"
My celery.service in /etc/systemd/system/:
[unit]
Description=Celery Service
After=network.target
[Service]
Type=forking
User=ubuntu
Group=ubuntu
EnvironmentFile=-/etc/conf.d/celery
WorkingDirectory=/home/ubuntu/fanvault/bin/fanvault/fanvault/
ExecStart=/home/ubuntu/fanvault/bin/python3.5 -c '${CELERY_BIN} multi start ${CELERYD_NODES} -A ${CELERY_APP} --pidfile=${CELERYD_PID_FILE} --logfile=${CELERYD_LOG_FILE} --loglevel=${CELERYD_LOG_LEVEL} ${CELERYD_OPTS}'
ExecStop=/home/ubuntu/fanvault/bin/python3.5 -c '${CELERY_BIN} multi stopwait ${CELERYD_NODES} --pidfile=${CELERYD_PID_FILE}'
ExecReload=/home/ubuntu/fanvault/bin/python3.5 -c '${CELERY_BIN} multi restart ${CELERYD_NODES} -A ${CELERY_APP} --pidfile=${CELERYD_PID_FILE} --logfile=${CELERYD_LOG_FILE} --loglevel=${CELERYD_LOG_LEVEL} ${CELERYD_OPTS}'
[Install]
WantedBy=multi-user.target
My celery.py file in home/ubuntu/fanvault/bin/fanvault/fanvault
from __future__ import absolute_import, unicode_literals
import os
from celery import Celery
from datetime import timedelta
from fanvault.settings import DEBUG

if DEBUG is True:
    os.environ.setdefault("DJANGO_SETTINGS_MODULE", "fanvault.local_settings")
else:
    os.environ.setdefault("DJANGO_SETTINGS_MODULE", "fanvault.aws_settings")

app = Celery('fanvault')
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks()

app.conf.beat_schedule = {
    'pull_movie_home': {
        'task': 'movies.tasks.pull_movie_explore',
        'schedule': timedelta(minutes=3)
    }
}
app.conf.timezone = 'UTC'
When I do sudo service celery start, I get the following error:
Job for celery.service failed because the control process exited with error code. See "systemctl status celery.service" and "journalctl -xe" for details.
When I do sudo journalctl -xe, I get the following:
-- Unit celery.service has begun starting up.
Apr 06 12:00:11 ip-172-31-53-174 python3.5[23368]: File "<string>", line 1
Apr 06 12:00:11 ip-172-31-53-174 python3.5[23368]: home/ubuntu/fanvault/bin/celery multi start w1 w2 w3 -A fanvault --pidfile=/var/run/celery/%n.pid --
Apr 06 12:00:11 ip-172-31-53-174 python3.5[23368]: ^
Apr 06 12:00:11 ip-172-31-53-174 python3.5[23368]: SyntaxError: invalid syntax
Apr 06 12:00:11 ip-172-31-53-174 systemd[1]: celery.service: Control process exited, code=exited status=1
Apr 06 12:00:11 ip-172-31-53-174 sudo[23337]: pam_unix(sudo:session): session closed for user root
Apr 06 12:00:11 ip-172-31-53-174 systemd[1]: Failed to start celery.service.
-- Subject: Unit celery.service has failed
-- Defined-By: systemd
I'm not sure why you are passing the Celery startup commands to Python3 in your service file. Those are shell commands, to be executed directly.
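Concretely, the usual pattern is to hand those commands to a shell, as in the celery.service at the top of this page. A sketch of how the three Exec lines might look instead (paths kept exactly as in the question, untested):
ExecStart=/bin/sh -c '${CELERY_BIN} multi start ${CELERYD_NODES} -A ${CELERY_APP} --pidfile=${CELERYD_PID_FILE} --logfile=${CELERYD_LOG_FILE} --loglevel=${CELERYD_LOG_LEVEL} ${CELERYD_OPTS}'
ExecStop=/bin/sh -c '${CELERY_BIN} multi stopwait ${CELERYD_NODES} --pidfile=${CELERYD_PID_FILE}'
ExecReload=/bin/sh -c '${CELERY_BIN} multi restart ${CELERYD_NODES} -A ${CELERY_APP} --pidfile=${CELERYD_PID_FILE} --logfile=${CELERYD_LOG_FILE} --loglevel=${CELERYD_LOG_LEVEL} ${CELERYD_OPTS}'
Note that CELERY_BIN in /etc/conf.d/celery is written without a leading slash (home/ubuntu/... rather than /home/ubuntu/...), which would still trip up the shell version until corrected.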

ImportError when starting Celery in daemon mode

My Celery works well in CLI mode.
My project folder is organised this way: jive/jive.py
The jive.py file looks like this (excerpt):
from celery import Celery

app = Celery(include=['tasks.dummy_tasks', 'tasks.people_tasks'])
app.config_from_object(CELERY_CONFIG_PATH)
I run a worker in CLI this way: celery worker -A jive and it works when I'm inside the jive folder.
Recently, I tried to daemonize Celery using systemd.
For this, two files are required. I will paste only the important parts of both:
/etc/celery/conf.d
CELERYD_NODES="w1"
CELERY_BIN="/home/user1/venv/bin/celery"
CELERY_APP="jive"
CELERYD_MULTI="multi"
/etc/systemd/system/celery.service
[Service]
Type=forking
User=user1
EnvironmentFile=-/etc/celery/conf.d
WorkingDirectory=/home/user1
ExecStart=/bin/sh -c '${CELERY_BIN} multi start ${CELERYD_NODES} \
-A ${CELERY_APP} --pidfile=${CELERYD_PID_FILE} \
--logfile=${CELERYD_LOG_FILE} --loglevel=${CELERYD_LOG_LEVEL} ${CELERYD_OPTS}'
When running the service, it fails; systemctl status shows the following errors:
(venv) [user1@localhost jive]$ systemctl status celery.service
● celery.service - Celery Service
Loaded: loaded (/etc/systemd/system/celery.service; disabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Tue 2017-03-07 14:59:56 CET; 2s ago
Process: 16493 ExecStart=/bin/sh -c ${CELERY_BIN} multi start ${CELERYD_NODES} -A ${CELERY_APP} --pidfile=${CELERYD_PID_FILE} --logfile=${CELERYD_LOG_FILE} --loglevel=${CELERYD_LOG_LEVEL} ${CELERYD_OPTS} (code=exited, status=1/FAILURE)
Mar 07 14:59:56 localhost.localdomain sh[16493]: File "<frozen importlib._bootstrap>", line 2224, in _find_and_load_unlocked
Mar 07 14:59:56 localhost.localdomain sh[16493]: ImportError: No module named 'jive'
Mar 07 14:59:56 localhost.localdomain sh[16493]: celery multi v4.0.2 (latentcall)
There is also an AttributeError: 'module' object has no attribute 'celery'. I suspect a path problem but am not sure how to deal with this in a service. Thanks for your help.
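Given that the worker only starts from inside the jive folder on the command line, the ImportError is consistent with WorkingDirectory=/home/user1 pointing one level too high. A sketch of the usual adjustment, assuming the project actually lives at /home/user1/jive:
# In /etc/systemd/system/celery.service, start the worker from the project folder
WorkingDirectory=/home/user1/jive
# or, alternatively, make the project importable regardless of the working directory
Environment="PYTHONPATH=/home/user1/jive"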
