I'm trying to build an NGINX/Gunicorn/Celery/RabbitMQ/Django server for several services, and I am failing to daemonize Celery.
My project directory, where Celery is installed in a virtualenv, is below:
home/ubuntu/fanvault/bin/fanvault
My configuration file, in /etc/conf.d/celery:
CELERYD_NODES="w1 w2 w3"
CELERY_BIN="home/ubuntu/fanvault/bin/celery"
CELERY_APP="fanvault"
CELERYD_MULTI="multi"
CELERYD_OPTS="--time-limit=300 --concurrency=8"
CELERYD_PID_FILE="/var/run/celery/%n.pid"
CELERYD_LOG_FILE="/var/log/celery/%n%I.log"
CELERYD_LOG_LEVEL="INFO"
My celery.service, in /etc/systemd/system/:
[unit]
Description=Celery Service
After=network.target
[Service]
Type=forking
User=ubuntu
Group=ubuntu
EnvironmentFile=-/etc/conf.d/celery
WorkingDirectory=/home/ubuntu/fanvault/bin/fanvault/fanvault/
ExecStart=/home/ubuntu/fanvault/bin/python3.5 -c '${CELERY_BIN} multi start ${CELERYD_NODES} -A ${CELERY_APP} --pidfile=${CELERYD_PID_FILE} --logfile=${CELERYD_LOG_FILE} --loglevel=${CELERYD_LOG_LEVEL} ${CELERYD_OPTS}'
ExecStop=/home/ubuntu/fanvault/bin/python3.5 -c '${CELERY_BIN} multi stopwait ${CELERYD_NODES} --pidfile=${CELERYD_PID_FILE}'
ExecReload=/home/ubuntu/fanvault/bin/python3.5 -c '${CELERY_BIN} multi restart ${CELERYD_NODES} -A ${CELERY_APP} --pidfile=${CELERYD_PID_FILE} --logfile=${CELERYD_LOG_FILE} --loglevel=${CELERYD_LOG_LEVEL} ${CELERYD_OPTS}'
[Install]
WantedBy=multi-user.target
My celery.py file, in home/ubuntu/fanvault/bin/fanvault/fanvault:
from __future__ import absolute_import, unicode_literals
import os
from celery import Celery
from datetime import timedelta
from fanvault.settings import DEBUG
if DEBUG is True:
    os.environ.setdefault("DJANGO_SETTINGS_MODULE", "fanvault.local_settings")
else:
    os.environ.setdefault("DJANGO_SETTINGS_MODULE", "fanvault.aws_settings")
app = Celery('fanvault')
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks()
app.conf.beat_schedule = {
    'pull_movie_home': {
        'task': 'movies.tasks.pull_movie_explore',
        'schedule': timedelta(minutes=3)
    }
}
app.conf.timezone = 'UTC'
When I run "sudo service celery start" I get the following error:
Job for celery.service failed because the control process exited with error code. See "systemctl status celery.service" and "journalctl -xe" for details.
When I run "sudo journalctl -xe" I get the following:
-- Unit celery.service has begun starting up.
Apr 06 12:00:11 ip-172-31-53-174 python3.5[23368]: File "<string>", line 1
Apr 06 12:00:11 ip-172-31-53-174 python3.5[23368]: home/ubuntu/fanvault/bin/celery multi start w1 w2 w3 -A fanvault --pidfile=/var/run/celery/%n.pid --
Apr 06 12:00:11 ip-172-31-53-174 python3.5[23368]: ^
Apr 06 12:00:11 ip-172-31-53-174 python3.5[23368]: SyntaxError: invalid syntax
Apr 06 12:00:11 ip-172-31-53-174 systemd[1]: celery.service: Control process exited, code=exited status=1
Apr 06 12:00:11 ip-172-31-53-174 sudo[23337]: pam_unix(sudo:session): session closed for user root
Apr 06 12:00:11 ip-172-31-53-174 systemd[1]: Failed to start celery.service.
-- Subject: Unit celery.service has failed
-- Defined-By: systemd
I'm not sure why you are passing the Celery startup commands to Python 3 in your service file: python3.5 -c treats its argument as Python source, which is exactly why the journal shows a SyntaxError. Those are shell commands, to be executed directly.
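For reference, a sketch of what those Exec lines could look like when run through a shell instead, following the pattern of Celery's generic systemd example (note also that CELERY_BIN in /etc/conf.d/celery is missing its leading slash; it should be /home/ubuntu/fanvault/bin/celery):

```ini
# /etc/systemd/system/celery.service (Exec lines only; values come from
# EnvironmentFile=-/etc/conf.d/celery, so a shell must expand them)
ExecStart=/bin/sh -c '${CELERY_BIN} multi start ${CELERYD_NODES} -A ${CELERY_APP} --pidfile=${CELERYD_PID_FILE} --logfile=${CELERYD_LOG_FILE} --loglevel=${CELERYD_LOG_LEVEL} ${CELERYD_OPTS}'
ExecStop=/bin/sh -c '${CELERY_BIN} multi stopwait ${CELERYD_NODES} --pidfile=${CELERYD_PID_FILE}'
ExecReload=/bin/sh -c '${CELERY_BIN} multi restart ${CELERYD_NODES} -A ${CELERY_APP} --pidfile=${CELERYD_PID_FILE} --logfile=${CELERYD_LOG_FILE} --loglevel=${CELERYD_LOG_LEVEL} ${CELERYD_OPTS}'
```

After editing the unit, run sudo systemctl daemon-reload before starting the service again.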
Related
I am trying to configure a Gunicorn service on a Red Hat EC2 VM on Amazon.
I created the service file, but when I run it and check the status it tells me that it failed:
[Unit]
Description=Gunicorn instance for a simple hello world app
After=network.target
[Service]
User=ec2-user
Group=nginx
WorkingDirectory=/home/ec2-user/webserverflask
Environment="PATH=/home/ec2-user/webserverflask/venv/bin"
ExecStart=/home/ec2-user/webserverflask/venv/bin/gunicorn --workers 3 --bind unix:webserverflask.sock -m 007 wsgi
Restart=always
[Install]
WantedBy=multi-user.target
The error message:
● webserver.service - Gunicorn instance for a simple hello world app
   Loaded: loaded (/etc/systemd/system/webserver.service; enabled; vendor preset: disabled)
   Active: failed (Result: exit-code) since Wed 2022-07-06 19:31:08 UTC; 20h ago
 Main PID: 25957 (code=exited, status=203/EXEC)

Jul 06 19:31:08 ip-172-31-95-13.ec2.internal systemd[1]: webserver.service: Main process exited, code=exited, status=203/EXEC
Jul 06 19:31:08 ip-172-31-95-13.ec2.internal systemd[1]: webserver.service: Failed with result 'exit-code'.
Jul 06 19:31:08 ip-172-31-95-13.ec2.internal systemd[1]: webserver.service: Service RestartSec=100ms expired, scheduling restart.
Jul 06 19:31:08 ip-172-31-95-13.ec2.internal systemd[1]: webserver.service: Scheduled restart job, restart counter is at 5.
Jul 06 19:31:08 ip-172-31-95-13.ec2.internal systemd[1]: Stopped Gunicorn instance for a simple hello world app.
Jul 06 19:31:08 ip-172-31-95-13.ec2.internal systemd[1]: webserver.service: Start request repeated too quickly.
Jul 06 19:31:08 ip-172-31-95-13.ec2.internal systemd[1]: webserver.service: Failed with result 'exit-code'.
Jul 06 19:31:08 ip-172-31-95-13.ec2.internal systemd[1]: Failed to start Gunicorn instance for a simple hello world app.
and here is my wsgi:
from app import app as application

if __name__ == "__main__":
    application.run()
and flask app:
from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello_world():
    return 'Hello World!'

if __name__ == "__main__":
    app.run()
I would start debugging by trying to run the ExecStart command manually, and see if that works (or what error you get):
$ cd /home/ec2-user/webserverflask
$ /home/ec2-user/webserverflask/venv/bin/gunicorn --workers 3 --bind unix:webserverflask.sock -m 007 wsgi
I managed to solve the issue.
If anyone hits the same issue while trying to deploy Flask with Gunicorn using the tutorials on the internet, here is my answer:
The problem was that the gunicorn binary wasn't accessible. I still don't know why, but I fixed the issue by moving gunicorn to /usr/local/bin/gunicorn.
So my service file looks like this now:
[Unit]
Description=Gunicorn instance for a simple hello world app
After=network.target
[Service]
User=ec2-user
Group=nginx
ExecStart=/usr/local/bin/gunicorn --workers 3 --chdir /home/ec2-user --bind unix:webserverflask.sock -m 007 webserverflask.wsgi
[Install]
WantedBy=multi-user.target
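For context, status=203/EXEC is systemd reporting that it could not exec the file at all: wrong path, missing execute bit, or a parent directory the service user cannot traverse. A small self-contained sketch of that failure mode (the temp file here is a stand-in, not part of the original setup):

```shell
# simulate status=203/EXEC: exec-ing a file without the execute bit fails
tmp=$(mktemp)                        # mktemp creates the file mode 600 (no +x)
printf '#!/bin/sh\necho hello\n' > "$tmp"

"$tmp" 2>/dev/null && echo "ran" || echo "exec failed: no +x"

chmod +x "$tmp"                      # after adding the execute bit it runs
"$tmp"

rm -f "$tmp"
```

Against the real unit, the equivalent checks would be `namei -l /home/ec2-user/webserverflask/venv/bin/gunicorn` (shows which path component blocks access) and running the ExecStart binary as the service user with `sudo -u ec2-user`.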
I am setting up my Django production project to work with Celery on an Ubuntu server. It works fine when I run the task manually with
celery -A project worker -l INFO
but the systemd service stops every time I set it up and run it. Configuration settings below.
/etc/default/celeryd
# most people will only start one node:
CELERYD_NODES="worker1"
# but you can also start multiple and configure settings
# for each in CELERYD_OPTS
#CELERYD_NODES="worker1 worker2 worker3"
# alternatively, you can specify the number of nodes to start:
#CELERYD_NODES=10
# Absolute or relative path to the 'celery' command:
CELERY_BIN="/home/steph/icecream/venv/bin/celery"
#CELERY_BIN="/virtualenvs/def/bin/celery"
# App instance to use
# comment out this line if you don't use an app
CELERY_APP="homestud"
# or fully qualified:
#CELERY_APP="proj.tasks:app"
# Where to chdir at start.
CELERYD_CHDIR="/home/steph/icecream/hometutors/homestud/"
# Extra command-line arguments to the worker
CELERYD_OPTS="--time-limit=300 --concurrency=8"
# Configure node-specific settings by appending node name to arguments:
#CELERYD_OPTS="--time-limit=300 -c 8 -c:worker2 4 -c:worker3 2 -Ofair:worker1"
# Set logging level to DEBUG
#CELERYD_LOG_LEVEL="DEBUG"
# %n will be replaced with the first part of the node name.
CELERYD_LOG_FILE="/var/log/celery/%n%I.log"
CELERYD_PID_FILE="/var/run/celery/%n.pid"
# Workers should run as an unprivileged user.
# You need to create this user manually (or you can choose
# a user/group combination that already exists (e.g., nobody).
CELERYD_USER="celery"
CELERYD_GROUP="celery"
CELERYD_LOG_LEVEL="INFO"
# If enabled PID and log directories will be created if missing,
# and owned by the userid/group configured.
CELERY_CREATE_DIRS=1
/etc/systemd/system/celery.service
[Unit]
Description=Celery Service
After=network.target
[Service]
Type=forking
User=celery
Group=celery
EnvironmentFile=/etc/default/celeryd
WorkingDirectory=/home/steph/icecream/hometutors/homestud
ExecStart=/home/steph/icecream/venv/bin/celery multi start ${CELERYD_NODES} \
-A ${CELERY_APP} --pidfile=${CELERYD_PID_FILE} \
--logfile=${CELERYD_LOG_FILE} --loglevel=${CELERYD_LOG_LEVEL} ${CELERYD_OPTS}
ExecStop=/home/steph/icecream/venv/bin/celery ${CELERY_BIN} multi stopwait ${CELERYD_NODES} \
--pidfile=${CELERYD_PID_FILE}
ExecReload=/home/steph/icecream/venv/bin/celery ${CELERY_BIN} multi restart ${CELERYD_NODES} \
-A ${CELERY_APP} --pidfile=${CELERYD_PID_FILE} \
--logfile=${CELERYD_LOG_FILE} --loglevel=${CELERYD_LOG_LEVEL} ${CELERYD_OPTS}
[Install]
WantedBy=multi-user.target
When I run sudo systemctl status celery:
● celery.service - Celery Service
Loaded: loaded (/etc/systemd/system/celery.service; enabled; vendor preset: enabled)
Active: failed (Result: signal) since Tue 2021-04-06 12:38:51 UTC; 5min ago
Process: 80579 ExecStart=/home/steph/icecream/venv/bin/celery multi start ${CELERYD_NODES} -A ${CELERY_APP} --pidfile=${CELERYD_PID_FILE} --logfile=${CELERYD_LOG_FILE} --loglevel=${CEL>
Main PID: 80594 (code=killed, signal=KILL)
Apr 06 12:37:15 homestud-ubuntu-server systemd[1]: Starting Celery Service...
Apr 06 12:37:16 homestud-ubuntu-server celery[80579]: celery multi v5.0.5 (singularity)
Apr 06 12:37:16 homestud-ubuntu-server celery[80579]: > Starting nodes...
Apr 06 12:37:16 homestud-ubuntu-server celery[80579]: > worker1@homestud-ubuntu-server: OK
Apr 06 12:37:17 homestud-ubuntu-server systemd[1]: Started Celery Service.
Apr 06 12:37:21 homestud-ubuntu-server systemd[1]: celery.service: Main process exited, code=killed, status=9/KILL
Apr 06 12:38:51 homestud-ubuntu-server systemd[1]: celery.service: State 'stop-sigterm' timed out. Killing.
Apr 06 12:38:51 homestud-ubuntu-server systemd[1]: celery.service: Killing process 80606 (python) with signal SIGKILL.
Apr 06 12:38:51 homestud-ubuntu-server systemd[1]: celery.service: Failed with result 'signal'.
I also had the same issue; it turned out to be caused by low RAM (1 GB) on the production server. Upgrading it to at least 2 GB resolved the issue.
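That diagnosis fits the log above: a clean worker startup followed by code=killed, status=9/KILL with nothing in Celery's own logs is the classic OOM-killer signature. A quick way to confirm it on a typical Linux box (either log source may require sudo):

```shell
# look for OOM-killer events in the kernel log
dmesg | grep -i -E 'out of memory|oom-kill|killed process' || echo "no OOM events found"

# check how much memory the box actually has; each of the workers started
# by --concurrency=8 loads a full copy of the Django app
free -h
awk '/MemTotal/ {print "MemTotal: " $2 " kB"}' /proc/meminfo
```

If adding RAM is not an option, lowering --concurrency in CELERYD_OPTS reduces the number of forked workers and therefore the memory footprint.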
I started a job with wrong parameters which would have caused it to run forever, so I killed the main Celery process with kill -9. After that the service does not start, whatever I do. When run from the command line, it starts fine. I'm running Ubuntu 18.04. The log file is created but has 0 bytes.
Here is the output of systemctl status celery:
user@ubuntu18-0:~/repository/backend$ systemctl status celery
● celery.service - Celery Service
   Loaded: loaded (/lib/systemd/system/celery.service; disabled; vendor preset: enabled)
   Active: inactive (dead)

Mar 12 09:00:10 ubuntu18-0 sh[5889]: celery multi v4.3.0 (rhubarb)
Mar 12 09:00:10 ubuntu18-0 sh[5889]: > ubuntu18-0@ubuntu18-0: DOWN
Mar 12 09:40:51 ubuntu18-0 systemd[1]: Starting Celery Service...
Mar 12 09:40:54 ubuntu18-0 sh[9234]: 2020-03-12 09:40:54,234 - Asynchronous - INFO - ######################## NEW LOG ########################
Mar 12 09:40:54 ubuntu18-0 sh[9234]: celery multi v4.3.0 (rhubarb)
Mar 12 09:40:54 ubuntu18-0 sh[9234]: > Starting nodes...
Mar 12 09:40:54 ubuntu18-0 sh[9234]: > ubuntu18-0@ubuntu18-0: OK
Mar 12 09:40:54 ubuntu18-0 systemd[1]: Started Celery Service.
Mar 12 09:40:55 ubuntu18-0 sh[9266]: celery multi v4.3.0 (rhubarb)
Mar 12 09:40:55 ubuntu18-0 sh[9266]: > ubuntu18-0@ubuntu18-0: DOWN
user@ubuntu18-0:~/repository/backend$
Here is the output of starting it manually:
./start_celery.sh
celery multi v4.3.0 (rhubarb)
> Starting nodes...
2020-03-12 09:44:12,459 - Asynchronous - INFO - ######################## NEW LOG ########################
> ubuntu18-0@ubuntu18-0: OK
Here is my celery.service:
[Unit]
Description=Celery Service
After=network.target
[Service]
Type=forking
EnvironmentFile=/etc/celery.conf
WorkingDirectory=/home/user/repository/backend
ExecStart=/bin/sh -c '${CELERY_BIN} multi start ${CELERYD_NODES} -A ${CELERY_APP} --pidfile=${CELERYD_PID_FILE} --logfile=${CELERYD_LOG_FILE} --loglevel=${CELERYD_LOG_LEVEL} ${CELERYD_OPTS}'
ExecStop=/bin/sh -c '${CELERY_BIN} multi stopwait ${CELERYD_NODES} --pidfile=${CELERYD_PID_FILE}'
ExecReload=/bin/sh -c '${CELERY_BIN} multi restart ${CELERYD_NODES} -A ${CELERY_APP} --pidfile=${CELERYD_PID_FILE} --logfile=${CELERYD_LOG_FILE} --loglevel=${CELERYD_LOG_LEVEL} ${CELERYD_OPTS}'
Restart=no
#RestartSec=60
[Install]
WantedBy=multi-user.target
Here is /etc/celery.conf:
# Names of node, adapt depending on host
CELERYD_NODES="ubuntu18-0"
ENV_PYTHON="/home/user/venv/bin/python3"
# Absolute or relative path to the 'celery' command:
CELERY_BIN="/home/user/venv/bin/celery"
# App instance to use
# comment out this line if you don't use an app
CELERY_APP="asynchronous.async_tasks.celery"
# Where to chdir at start.
CELERYD_CHDIR="/home/user/repository/backend/"
# Extra command-line arguments to the worker
CELERYD_OPTS="--concurrency=4 -Ofair -Q hcs_async,celery"
# Set logging level to DEBUG
CELERYD_LOG_LEVEL="INFO"
# %n will be replaced with the first part of the nodename.
CELERYD_LOG_FILE="/home/user/celery/%n%I.log"
CELERYD_PID_FILE="/home/user/celery/%n.pid"
# If enabled pid and log directories will be created if missing,
# and owned by the userid/group configured.
CELERY_CREATE_DIRS=1
Rebooting did not help. I suspect there might be a file somewhere that tells systemd or Celery that the service is in some bad state.
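One concrete "bad state" worth checking after a kill -9 is a stale pidfile: the config above writes pidfiles to /home/user/celery/%n.pid, a forced kill never removes them, and a leftover pidfile can stop the node from coming back up. A small self-contained sketch of the staleness check (the temp file is a stand-in for the real pidfile):

```shell
# a kill -9 leaves the pidfile behind; if the PID recorded in it is no
# longer alive, the file is stale and can be removed safely
pidfile=$(mktemp)            # stand-in for /home/user/celery/ubuntu18-0.pid

( : ) &                      # short-lived process standing in for the worker
pid=$!
wait "$pid"                  # the PID is now dead, so the pidfile is stale
echo "$pid" > "$pidfile"

if ! kill -0 "$(cat "$pidfile")" 2>/dev/null; then
    echo "stale pidfile, removing"
    rm -f "$pidfile"
fi
```

In the real layout that amounts to checking `kill -0 $(cat /home/user/celery/*.pid)` and deleting any pidfiles whose process is gone before starting the service again.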
I followed this article to deploy my Django project. I created gunicorn.service in /etc/systemd/system/gunicorn.service with this configuration:
[Unit]
Description=gunicorn daemon
After=network.target
[Service]
User=azizbek
Group=www-data
WorkingDirectory=/home/admin/respositories/ninersComingSoon
ExecStart=/root/.local/share/virtualenvs/ninersComingSoon-_UZsUc5R/bin/gunicorn --access-logfile - --workers 3 --bind unix:/home/admin/repositories/ninersComingSoon/niners.sock ninersComingSoon.wsgi:application
[Install]
WantedBy=multi-user.target
Location of my project is /home/admin/respositories/ninersComingSoon
And when I run
systemctl start gunicorn
systemctl enable gunicorn
it should create a niners.sock file inside the project directory, but it doesn't.
Then I typed this command to figure out what I did wrong:
journalctl -u gunicorn
And the result was
Dec 05 02:05:26 server.niners.uz systemd[1]: Started gunicorn daemon.
Dec 05 02:05:26 server.niners.uz systemd[1]: gunicorn.service: Main process exited, code=exited, status=203/EXEC
Dec 05 02:05:26 server.niners.uz systemd[1]: gunicorn.service: Unit entered failed state.
Dec 05 02:05:26 server.niners.uz systemd[1]: gunicorn.service: Failed with result 'exit-code'.
So can you help me to solve this problem?
The problem was in WorkingDirectory: the path was incorrect. It should be .../repositories/... instead of .../respositories/...
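A cheap guard against this class of typo is to check every absolute path a unit references before starting it. A sketch, assuming the gunicorn unit name from the question (the grep pattern is a rough heuristic, so expect some noise such as socket paths that do not exist yet):

```shell
# print the unit as systemd sees it, then verify each absolute path exists
systemctl cat gunicorn | grep -o '/[^ "]*' | sort -u | while read -r p; do
    [ -e "$p" ] && echo "ok: $p" || echo "missing: $p"
done
```

Any "missing:" line pointing at a directory or binary the unit needs is a likely culprit for 203/EXEC-style failures.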
My Celery works well in CLI mode.
My project folder is organised this way: jive/jive.py
The jive.py file looks like:
app = Celery(include=['tasks.dummy_tasks','tasks.people_tasks',])
app.config_from_object(CELERY_CONFIG_PATH)
I run a worker from the CLI this way: celery worker -A jive, and it works when I'm inside the jive folder.
Recently, I tried to daemonize Celery using systemd.
For this, two files are required. I will paste only the important parts of both:
/etc/celery/conf.d
CELERYD_NODES="w1"
CELERY_BIN="/home/user1/venv/bin/celery"
CELERY_APP="jive"
CELERYD_MULTI="multi"
/etc/systemd/system/celery.service
[Service]
Type=forking
User=user1
EnvironmentFile=-/etc/celery/conf.d
WorkingDirectory=/home/user1
ExecStart=/bin/sh -c '${CELERY_BIN} multi start ${CELERYD_NODES} \
-A ${CELERY_APP} --pidfile=${CELERYD_PID_FILE} \
--logfile=${CELERYD_LOG_FILE} --loglevel=${CELERYD_LOG_LEVEL} ${CELERYD_OPTS}'
When I run the service, it fails; the status output shows the following errors:
(venv) [user1@localhost jive]$ systemctl status celery.service
● celery.service - Celery Service
Loaded: loaded (/etc/systemd/system/celery.service; disabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Tue 2017-03-07 14:59:56 CET; 2s ago
Process: 16493 ExecStart=/bin/sh -c ${CELERY_BIN} multi start ${CELERYD_NODES} -A ${CELERY_APP} --pidfile=${CELERYD_PID_FILE} --logfile=${CELERYD_LOG_FILE} --loglevel=${CELERYD_LOG_LEVEL} ${CELERYD_OPTS} (code=exited, status=1/FAILURE)
Mar 07 14:59:56 localhost.localdomain sh[16493]: File "<frozen importlib._bootstrap>", line 2224, in _find_and_load_unlocked
Mar 07 14:59:56 localhost.localdomain sh[16493]: ImportError: No module named 'jive'
Mar 07 14:59:56 localhost.localdomain sh[16493]: celery multi v4.0.2 (latentcall)
AttributeError: 'module' object has no attribute 'celery'
I suspect a PATH problem but I am not sure how to deal with this in a service. Thanks for your help.
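The traceback supports that suspicion: the CLI run only works from inside the jive folder, but the unit sets WorkingDirectory=/home/user1, so celery multi cannot import jive. A likely fix (a sketch, untested against this exact layout) is to point the working directory at the project folder, mirroring the CLI invocation:

```ini
[Service]
Type=forking
User=user1
EnvironmentFile=-/etc/celery/conf.d
# match the directory the CLI run happens from, so that 'jive' is importable
WorkingDirectory=/home/user1/jive
ExecStart=/bin/sh -c '${CELERY_BIN} multi start ${CELERYD_NODES} \
    -A ${CELERY_APP} --pidfile=${CELERYD_PID_FILE} \
    --logfile=${CELERYD_LOG_FILE} --loglevel=${CELERYD_LOG_LEVEL} ${CELERYD_OPTS}'
```

An alternative with the same effect would be exporting PYTHONPATH=/home/user1/jive via an Environment= line instead of changing the working directory.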