celery.service: Failed with result 'signal' - python

I am setting up my Django production project to work with Celery on an Ubuntu server. It works fine when I run the worker manually with
celery -A project worker -l INFO
but the systemd service stops every time I set it up and start it. My configuration is below.
/etc/default/celeryd
# most people will only start one node:
CELERYD_NODES="worker1"
# but you can also start multiple and configure settings
# for each in CELERYD_OPTS
#CELERYD_NODES="worker1 worker2 worker3"
# alternatively, you can specify the number of nodes to start:
#CELERYD_NODES=10
# Absolute or relative path to the 'celery' command:
CELERY_BIN="/home/steph/icecream/venv/bin/celery"
#CELERY_BIN="/virtualenvs/def/bin/celery"
# App instance to use
# comment out this line if you don't use an app
CELERY_APP="homestud"
# or fully qualified:
#CELERY_APP="proj.tasks:app"
# Where to chdir at start.
CELERYD_CHDIR="/home/steph/icecream/hometutors/homestud/"
# Extra command-line arguments to the worker
CELERYD_OPTS="--time-limit=300 --concurrency=8"
# Configure node-specific settings by appending node name to arguments:
#CELERYD_OPTS="--time-limit=300 -c 8 -c:worker2 4 -c:worker3 2 -Ofair:worker1"
# Set logging level to DEBUG
#CELERYD_LOG_LEVEL="DEBUG"
# %n will be replaced with the first part of the node name.
CELERYD_LOG_FILE="/var/log/celery/%n%I.log"
CELERYD_PID_FILE="/var/run/celery/%n.pid"
# Workers should run as an unprivileged user.
# You need to create this user manually (or you can choose
# a user/group combination that already exists (e.g., nobody).
CELERYD_USER="celery"
CELERYD_GROUP="celery"
CELERYD_LOG_LEVEL="INFO"
# If enabled PID and log directories will be created if missing,
# and owned by the userid/group configured.
CELERY_CREATE_DIRS=1
/etc/systemd/system/celery.service
[Unit]
Description=Celery Service
After=network.target
[Service]
Type=forking
User=celery
Group=celery
EnvironmentFile=/etc/default/celeryd
WorkingDirectory=/home/steph/icecream/hometutors/homestud
ExecStart=/home/steph/icecream/venv/bin/celery multi start ${CELERYD_NODES} \
-A ${CELERY_APP} --pidfile=${CELERYD_PID_FILE} \
--logfile=${CELERYD_LOG_FILE} --loglevel=${CELERYD_LOG_LEVEL} ${CELERYD_OPTS}
ExecStop=/home/steph/icecream/venv/bin/celery ${CELERY_BIN} multi stopwait ${CELERYD_NODES} \
--pidfile=${CELERYD_PID_FILE}
ExecReload=/home/steph/icecream/venv/bin/celery ${CELERY_BIN} multi restart ${CELERYD_NODES} \
-A ${CELERY_APP} --pidfile=${CELERYD_PID_FILE} \
--logfile=${CELERYD_LOG_FILE} --loglevel=${CELERYD_LOG_LEVEL} ${CELERYD_OPTS}
[Install]
WantedBy=multi-user.target
When I run sudo systemctl status celery:
● celery.service - Celery Service
Loaded: loaded (/etc/systemd/system/celery.service; enabled; vendor preset: enabled)
Active: failed (Result: signal) since Tue 2021-04-06 12:38:51 UTC; 5min ago
Process: 80579 ExecStart=/home/steph/icecream/venv/bin/celery multi start ${CELERYD_NODES} -A ${CELERY_APP} --pidfile=${CELERYD_PID_FILE} --logfile=${CELERYD_LOG_FILE} --loglevel=${CEL>
Main PID: 80594 (code=killed, signal=KILL)
Apr 06 12:37:15 homestud-ubuntu-server systemd[1]: Starting Celery Service...
Apr 06 12:37:16 homestud-ubuntu-server celery[80579]: celery multi v5.0.5 (singularity)
Apr 06 12:37:16 homestud-ubuntu-server celery[80579]: > Starting nodes...
Apr 06 12:37:16 homestud-ubuntu-server celery[80579]: > worker1@homestud-ubuntu-server: OK
Apr 06 12:37:17 homestud-ubuntu-server systemd[1]: Started Celery Service.
Apr 06 12:37:21 homestud-ubuntu-server systemd[1]: celery.service: Main process exited, code=killed, status=9/KILL
Apr 06 12:38:51 homestud-ubuntu-server systemd[1]: celery.service: State 'stop-sigterm' timed out. Killing.
Apr 06 12:38:51 homestud-ubuntu-server systemd[1]: celery.service: Killing process 80606 (python) with signal SIGKILL.
Apr 06 12:38:51 homestud-ubuntu-server systemd[1]: celery.service: Failed with result 'signal'.

I had the same issue, and it turned out to be caused by low RAM (1 GB) on the production server. Upgrade it to at least 2 GB and this issue will be resolved.
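If you want to confirm that the kernel's OOM killer (rather than systemd itself) sent the SIGKILL, the kernel log is the place to look, and a temporary swap file can serve as a stopgap until the RAM is upgraded. A minimal sketch, with illustrative sizes and paths:
dmesg | grep -iE 'killed process|out of memory'
sudo fallocate -l 2G /swapfile   # create a 2 GB swap file
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
Reducing CELERYD_OPTS="--time-limit=300 --concurrency=8" to a lower concurrency (e.g. --concurrency=2) also shrinks the worker's memory footprint, since each prefork child is a separate Python process.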

Related

systemd service keep giving me error when start or get status

I have a Python application that I need to run as a service. I tried many methods and was advised to make it a systemd service. I searched and tried some code; here is my unit file:
[Unit]
Description=Airnotifier Service
After=network.target
[Service]
Type=idle
Restart=on-failure
User=root
ExecStart=python3 /home/airnotifier/airnotifier/app.py
[Install]
WantedBy=multi-user.target
and then I run the following commands
sudo systemctl daemon-reload
sudo systemctl enable airnotifier.service
sudo systemctl start airnotifier.service
sudo systemctl status airnotifier.service
The service does not run, and I am getting these errors:
airnotifier@airnotifier:~$ sudo systemctl status airnotifier.service
● airnotifier.service - Airnotifier Service
Loaded: loaded (/lib/systemd/system/airnotifier.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Mon 2023-01-09 14:07:38 UTC; 1s ago
Process: 2072 ExecStart=/usr/bin/python3 /home/airnotifier/airnotifier/app.py (code=exited, status=1/FAILURE)
Main PID: 2072 (code=exited, status=1/FAILURE)
Jan 09 14:07:38 airnotifier systemd[1]: airnotifier.service: Scheduled restart job, restart counter is at 5.
Jan 09 14:07:38 airnotifier systemd[1]: Stopped Airnotifier Service.
Jan 09 14:07:38 airnotifier systemd[1]: airnotifier.service: Start request repeated too quickly.
Jan 09 14:07:38 airnotifier systemd[1]: airnotifier.service: Failed with result 'exit-code'.
Jan 09 14:07:38 airnotifier systemd[1]: Failed to start Airnotifier Service.
This is the unit file that works for me:
[Unit]
Description=Airnotifier Service
[Install]
WantedBy=multi-user.target
[Service]
Type=simple
WorkingDirectory=/home/airnotifier/airnotifier
ExecStart=python3 app.py
Restart=always
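For reference, the original failure was status=1/FAILURE, i.e. the application's own exit code rather than a systemd error (one plausible cause is app.py loading files by relative path, which WorkingDirectory= fixes). When a unit crash-loops like this, the traceback can usually be read from the journal, assuming journald captures the unit's output:
sudo journalctl -u airnotifier.service -n 50 --no-pager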

celery daemon stopped working after I killed celery process

I started a job with wrong parameters that would have caused it to run forever, so I killed the main celery process with kill -9. After that, the service does not start no matter what I do. When run from the command line, it starts fine. I'm running Ubuntu 18.04. The log file is created but has 0 bytes.
Here is the output of systemctl status celery:
user@ubuntu18-0:~/repository/backend$ systemctl status celery
● celery.service - Celery Service
Loaded: loaded (/lib/systemd/system/celery.service; disabled; vendor preset: enabled)
Active: inactive (dead)
Mar 12 09:00:10 ubuntu18-0 sh[5889]: celery multi v4.3.0 (rhubarb)
Mar 12 09:00:10 ubuntu18-0 sh[5889]: > ubuntu18-0@ubuntu18-0: DOWN
Mar 12 09:40:51 ubuntu18-0 systemd[1]: Starting Celery Service...
Mar 12 09:40:54 ubuntu18-0 sh[9234]: 2020-03-12 09:40:54,234 - Asynchronous - INFO - ######################## NEW LOG ########################
Mar 12 09:40:54 ubuntu18-0 sh[9234]: celery multi v4.3.0 (rhubarb)
Mar 12 09:40:54 ubuntu18-0 sh[9234]: > Starting nodes...
Mar 12 09:40:54 ubuntu18-0 sh[9234]: > ubuntu18-0@ubuntu18-0: OK
Mar 12 09:40:54 ubuntu18-0 systemd[1]: Started Celery Service.
Mar 12 09:40:55 ubuntu18-0 sh[9266]: celery multi v4.3.0 (rhubarb)
Mar 12 09:40:55 ubuntu18-0 sh[9266]: > ubuntu18-0@ubuntu18-0: DOWN
user@ubuntu18-0:~/repository/backend$
Here is the output of starting it manually:
./start_celery.sh
celery multi v4.3.0 (rhubarb)
> Starting nodes...
2020-03-12 09:44:12,459 - Asynchronous - INFO - ######################## NEW LOG ########################
> ubuntu18-0@ubuntu18-0: OK
Here is my celery.service:
[Unit]
Description=Celery Service
After=network.target
[Service]
Type=forking
EnvironmentFile=/etc/celery.conf
WorkingDirectory=/home/user/repository/backend
ExecStart=/bin/sh -c '${CELERY_BIN} multi start ${CELERYD_NODES} -A ${CELERY_APP} --pidfile=${CELERYD_PID_FILE} --logfile=${CELERYD_LOG_FILE} --loglevel=${CELERYD_LOG_LEVEL} ${CELERYD_OPTS}'
ExecStop=/bin/sh -c '${CELERY_BIN} multi stopwait ${CELERYD_NODES} --pidfile=${CELERYD_PID_FILE}'
ExecReload=/bin/sh -c '${CELERY_BIN} multi restart ${CELERYD_NODES} -A ${CELERY_APP} --pidfile=${CELERYD_PID_FILE} --logfile=${CELERYD_LOG_FILE} --loglevel=${CELERYD_LOG_LEVEL} ${CELERYD_OPTS}'
Restart=no
#RestartSec=60
[Install]
WantedBy=multi-user.target
Here is /etc/celery.conf:
# Names of node, adapt depending on host
CELERYD_NODES="ubuntu18-0"
ENV_PYTHON="/home/user/venv/bin/python3"
# Absolute or relative path to the 'celery' command:
CELERY_BIN="/home/user/venv/bin/celery"
# App instance to use
# comment out this line if you don't use an app
CELERY_APP="asynchronous.async_tasks.celery"
# Where to chdir at start.
CELERYD_CHDIR="/home/user/repository/backend/"
# Extra command-line arguments to the worker
CELERYD_OPTS="--concurrency=4 -Ofair -Q hcs_async,celery"
# Set logging level to DEBUG
CELERYD_LOG_LEVEL="INFO"
# %n will be replaced with the first part of the nodename.
CELERYD_LOG_FILE="/home/user/celery/%n%I.log"
CELERYD_PID_FILE="/home/user/celery/%n.pid"
# If enabled pid and log directories will be created if missing,
# and owned by the userid/group configured.
CELERY_CREATE_DIRS=1
Rebooting did not help. I suspect there might be a file somewhere which tells systemd or celery that the service is in some bad state.
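One plausible leftover from the kill -9 is the pid file that celery multi wrote: if it still names a dead process, a later start can report DOWN even though the log shows OK. Using the paths from /etc/celery.conf above, a quick check would be (a guess, not a confirmed fix):
cat /home/user/celery/ubuntu18-0.pid                 # pid recorded by the last run
ps -p "$(cat /home/user/celery/ubuntu18-0.pid)"      # is that process still alive?
rm /home/user/celery/ubuntu18-0.pid                  # remove only if it is stale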

systemd execstart python daemon dynamically using virtualenv from environment variable

I have a Python script that works as a systemd daemon on CentOS 7. The daemon is executed by the version of Python I created in a virtualenv. I am trying to tweak the setup so that the virtualenv path is set in an environment variable, so I can easily switch to a different virtualenv by changing that one variable and restarting the service. I have created my systemd units to be able to initialize multiple instances of the daemon, and this works great. When I try to use an environment variable to point to my Python interpreter, things break. Here is what I have so far.
/etc/systemd/system/pipeline-remove@.service:
[Unit]
Description=pipeline remove tickets worker instances as a service, instance %i
Requires=pipeline-remove.service
Before=pipeline-remove.service
BindsTo=pipeline-remove.service
[Service]
PermissionsStartOnly=true
Type=idle
User=root
ExecStart=/path/to/venv/bin/python /pipeline/python/daemons/remove_tickets.py
Restart=always
TimeoutStartSec=10
RestartSec=10
[Install]
WantedBy=pipeline-remove.service
/etc/systemd/system/pipeline-remove.service (to start all instances):
[Unit]
Description=manages pipeline remove tickets worker instances as a service, instance
[Service]
Type=oneshot
ExecStart=/usr/bin/sh /usr/bin/pipeline-remove-start.sh
RemainAfterExit=yes
[Install]
WantedBy=multi-user.target
pipeline-remove-start.sh:
#!/bin/bash
systemctl start pipeline-remove@{1..2}
This works great for me, but things break when I try to set the Python directory the following way:
/etc/profile.d/pipeline_envvars.sh:
PIPELINE_VIRTUALENV=/path/to/venv
/etc/systemd/system/pipeline-remove@.service:
[Unit]
Description=pipeline remove tickets worker instances as a service, instance %i
Requires=pipeline-remove.service
Before=pipeline-remove.service
BindsTo=pipeline-remove.service
[Service]
PermissionsStartOnly=true
Type=idle
User=root
EnvironmentFile=/etc/profile.d/pipeline_envvars.sh
ExecStart=/${PIPELINE_VIRTUALENV}/bin/python /pipeline/python/daemons/remove_tickets.py
Restart=always
TimeoutStartSec=10
RestartSec=10
[Install]
WantedBy=pipeline-remove.service
I then try to start it:
sudo systemctl daemon-reload
sudo systemctl restart pipeline-remove@{1..1}
sudo systemctl status pipeline-remove@{1..1}
The status shows exit code 203, which means the executable was not found:
● pipeline-remove@1.service - pipeline remove tickets worker instances as a service, instance 1
Loaded: loaded (/etc/systemd/system/pipeline-remove@.service; disabled; vendor preset: disabled)
Active: activating (auto-restart) (Result: exit-code) since Fri 2018-01-26 15:04:50 UTC; 6s ago
Process: 11716 ExecStart=/${PIPELINE_VIRTUALENV}/bin/python /pipeline/python/daemons/remove_tickets.py (code=exited, status=203/EXEC)
Main PID: 11716 (code=exited, status=203/EXEC)
Jan 26 15:04:50 dev systemd[1]: pipeline-remove@1.service: main process exited, code=exited, status=203/EXEC
Jan 26 15:04:50 dev systemd[1]: Unit pipeline-remove@1.service entered failed state.
Jan 26 15:04:50 dev systemd[1]: pipeline-remove@1.service failed.
I found the meaning of that exec code here. I also found this in syslog (/var/log/messages):
Jan 26 15:07:13 dev systemd: Starting pipeline remove tickets worker instances as a service, instance 1...
Jan 26 15:07:13 dev systemd: Failed at step EXEC spawning /${PIPELINE_VIRTUALENV}/bin/python: No such file or directory
Jan 26 15:07:13 dev systemd: pipeline-remove@1.service: main process exited, code=exited, status=203/EXEC
Jan 26 15:07:13 dev systemd: Unit pipeline-remove@1.service entered failed state.
Jan 26 15:07:13 dev systemd: pipeline-remove@1.service failed.
Jan 26 15:07:23 dev systemd: pipeline-remove@1.service holdoff time over, scheduling restart.
Jan 26 15:07:23 dev systemd: Started pipeline remove tickets worker instances as a service, instance 1.
When I try to drop the leading / in ExecStart, I get a relative-path error even though my env var does contain an absolute path:
Failed to start pipeline-remove@1.service: Unit is not loaded properly:
Invalid argument.
See system logs and 'systemctl status pipeline-remove@1.service' for
details.
And status shows the following:
vagrant@dev:~$ sudo systemctl status pipeline-remove@{1..1}
● pipeline-remove@1.service - pipeline remove tickets worker instances as a service, instance 1
Loaded: error (Reason: Invalid argument)
Active: inactive (dead)
Jan 26 15:11:39 dev systemd[1]: pipeline-remove@1.service failed.
Jan 26 15:11:42 dev systemd[1]: Stopped pipeline remove tickets worker instances as a service, instance 1.
Jan 26 15:11:42 dev systemd[1]: [/etc/systemd/system/pipeline-remove@.service:12] Executable path is not absolute, ignoring: ${PIPELINE_VIRTUALENV}/bin/python /pipel...e_tickets.py
Jan 26 15:11:42 dev systemd[1]: pipeline-remove@1.service lacks both ExecStart= and ExecStop= setting. Refusing.
I used this guide to help me get started, but now I'm stuck. How can I get my Python daemon to come up while taking the Python executable path from an environment variable?
After some more reading, I stumbled on the answer here. The problem is that the first argument of ExecStart= must be a literal path:
ExecStart= Commands with their arguments that are executed when this
service is started. For each of the specified commands, the first
argument must be an absolute and literal path to an executable.
Reading on further on ExecStart it says:
Variables whose value is not known at expansion time are treated as
empty strings. Note that the first argument (i.e. the program to
execute) may not be a variable.
I also ended up stumbling across this answer, which looks like the same problem. In the end, what worked was wrapping the entire command in a shell invocation:
[Unit]
Description=pipeline remove tickets worker instances as a service, instance %i
Requires=pipeline-remove.service
Before=pipeline-remove.service
BindsTo=pipeline-remove.service
[Service]
PermissionsStartOnly=true
Type=idle
User=root
EnvironmentFile=/etc/profile.d/pipeline_envvars.sh
ExecStart=/bin/sh -c '${PIPELINE_VIRTUALENV}/bin/python /pipeline/python/daemons/remove_tickets.py'
Restart=always
TimeoutStartSec=10
RestartSec=10
[Install]
WantedBy=pipeline-remove.service
So ExecStart=/bin/sh -c '<your command>' saves the day; I can now take the Python interpreter path from my environment variable. I'll leave the answer up; hopefully it saves others some time.
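With that wrapper in place, switching virtualenvs is just a matter of editing the one variable and restarting the instances; EnvironmentFile= is re-read each time the service starts, so no daemon-reload is needed. An illustrative example (the target venv path is made up):
echo 'PIPELINE_VIRTUALENV=/path/to/other-venv' | sudo tee /etc/profile.d/pipeline_envvars.sh
sudo systemctl restart pipeline-remove@{1..2}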

python django celery systemd

I'm trying to build an NGINX/Gunicorn/Celery/RabbitMQ/Django server for several services. I am failing to daemonize Celery.
My project directory, where Celery is installed in a virtualenv, is:
home/ubuntu/fanvault/bin/fanvault
My configuration file in /etc/conf.d/celery:
CELERYD_NODES="w1 w2 w3"
CELERY_BIN="home/ubuntu/fanvault/bin/celery"
CELERY_APP="fanvault"
CELERYD_MULTI="multi"
CELERYD_OPTS="--time-limit=300 --concurrency=8"
CELERYD_PID_FILE="/var/run/celery/%n.pid"
CELERYD_LOG_FILE="/var/log/celery/%n%I.log"
CELERYD_LOG_LEVEL="INFO"
My celery.service in /etc/systemd/system/:
[unit]
Description=Celery Service
After=network.target
[Service]
Type=forking
User=ubuntu
Group=ubuntu
EnvironmentFile=-/etc/conf.d/celery
WorkingDirectory=/home/ubuntu/fanvault/bin/fanvault/fanvault/
ExecStart=/home/ubuntu/fanvault/bin/python3.5 -c '${CELERY_BIN} multi start ${CELERYD_NODES} -A ${CELERY_APP} --pidfile=${CELERYD_PID_FILE} --logfile=${CELERYD_LOG_FILE} --loglevel=${CELERYD_LOG_LEVEL} ${CELERYD_OPTS}'
ExecStop=/home/ubuntu/fanvault/bin/python3.5 -c '${CELERY_BIN} multi stopwait ${CELERYD_NODES} --pidfile=${CELERYD_PID_FILE}'
ExecReload=/home/ubuntu/fanvault/bin/python3.5 -c '${CELERY_BIN} multi restart ${CELERYD_NODES} -A ${CELERY_APP} --pidfile=${CELERYD_PID_FILE} --logfile=${CELERYD_LOG_FILE} --loglevel=${CELERYD_LOG_LEVEL} ${CELERYD_OPTS}'
[Install]
WantedBy=multi-user.target
My celery.py file in home/ubuntu/fanvault/bin/fanvault/fanvault
from __future__ import absolute_import, unicode_literals
import os
from celery import Celery
from datetime import timedelta
from fanvault.settings import DEBUG
if DEBUG is True:
    os.environ.setdefault("DJANGO_SETTINGS_MODULE", "fanvault.local_settings")
else:
    os.environ.setdefault("DJANGO_SETTINGS_MODULE", "fanvault.aws_settings")
app = Celery('fanvault')
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks()
app.conf.beat_schedule = {
    'pull_movie_home': {
        'task': 'movies.tasks.pull_movie_explore',
        'schedule': timedelta(minutes=3)
    }
}
app.conf.timezone = 'UTC'
when I do "sudo service celery start" getting following error:
Job for celery.service failed because the control process exited with error code. See "systemctl status celery.service" and "journalctl -xe" for details.
When I do "sudo journalctl -xe" getting following:
-- Unit celery.service has begun starting up.
Apr 06 12:00:11 ip-172-31-53-174 python3.5[23368]: File "<string>", line 1
Apr 06 12:00:11 ip-172-31-53-174 python3.5[23368]: home/ubuntu/fanvault/bin/celery multi start w1 w2 w3 -A fanvault --pidfile=/var/run/celery/%n.pid --
Apr 06 12:00:11 ip-172-31-53-174 python3.5[23368]: ^
Apr 06 12:00:11 ip-172-31-53-174 python3.5[23368]: SyntaxError: invalid syntax
Apr 06 12:00:11 ip-172-31-53-174 systemd[1]: celery.service: Control process exited, code=exited status=1
Apr 06 12:00:11 ip-172-31-53-174 sudo[23337]: pam_unix(sudo:session): session closed for user root
Apr 06 12:00:11 ip-172-31-53-174 systemd[1]: Failed to start celery.service.
-- Subject: Unit celery.service has failed
-- Defined-By: systemd
I'm not sure why you are passing the Celery startup commands to Python3 in your service file. Those are shell commands, to be executed directly.
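A minimal correction, keeping the same variables from /etc/conf.d/celery and mirroring the unit file shown in the earlier question, is to let a shell run the command instead of python3.5 -c (a sketch, not tested against this project):
ExecStart=/bin/sh -c '${CELERY_BIN} multi start ${CELERYD_NODES} -A ${CELERY_APP} --pidfile=${CELERYD_PID_FILE} --logfile=${CELERYD_LOG_FILE} --loglevel=${CELERYD_LOG_LEVEL} ${CELERYD_OPTS}'
with ExecStop and ExecReload wrapped the same way. Two other details in the posted files would still bite: CELERY_BIN="home/ubuntu/fanvault/bin/celery" is missing its leading slash, and the [unit] section header must be capitalized as [Unit] for systemd to recognize it.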

SystemD python task in Ubuntu 16

I am trying to execute a Python 3 script with systemd on Ubuntu 16.
The configs are as follows:
[Unit]
Description=email notification server
After=multi-user.target
[Service]
WorkingDirectory=/home/ubuntu/email-noti
Restart=on-failure
ExecStart=/usr/lib/python3 /home/ubuntu/email-noti/email_reader.py
[Install]
WantedBy=multi-user.target
Please note that email_reader.py imports a config file and a few Python 3 files from /home/ubuntu/email-noti/.
But it always ends with the following error:
● email-noti.service - email notification server
Loaded: loaded (/etc/systemd/system/email-noti.service; enabled; vendor preset: enabled)
Active: inactive (dead) (Result: exit-code) since Tue 2017-03-07 18:00:38 UTC; 6s ago
Process: 11859 ExecStart=/usr/lib/python3 /home/ubuntu/email-noti/email_reader.py (code=exited, status=203/EXEC)
Main PID: 11859 (code=exited, status=203/EXEC)
Mar 07 18:00:38 ip-172-31-24-115 systemd[1]: email-noti.service: Service hold-off time over, scheduling restart.
Mar 07 18:00:38 ip-172-31-24-115 systemd[1]: Stopped email notification server.
Mar 07 18:00:38 ip-172-31-24-115 systemd[1]: email-noti.service: Start request repeated too quickly.
Mar 07 18:00:38 ip-172-31-24-115 systemd[1]: Failed to start email notification server.
But when I execute email_reader.py manually, it works totally fine.
Any help is appreciated.
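For what it's worth, status=203/EXEC means systemd could not execute the path given in ExecStart= (the same failure explained in the pipeline-remove question above). On Ubuntu, /usr/lib/python3 is normally a library directory rather than the interpreter itself, so checking where python3 actually lives is a reasonable first step:
command -v python3          # typically /usr/bin/python3
ls -ld /usr/lib/python3     # a directory, not an executable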
