How to start Celery Flower as a daemon in RHEL? - python

I'm trying to run Flower as a daemon. My flower.service file reads as follows:
[Unit]
Description=Flower Service
After=network.target
[Service]
Type=forking
User=maas
Group=maas
PermissionsStartOnly=true
ExecStart=/bin/flower --broker=amqp://oser000300//
[Install]
WantedBy=multi-user.target
But when I start the service, it gives an error:
$ systemctl status flower.service
* flower.service - Flower Service
Loaded: loaded (/etc/systemd/system/flower.service; enabled; vendor preset: disabled)
Active: failed (Result: timeout) since Mon 2017-07-10 20:25:59 UTC; 4min 38s ago
Process: 49255 ExecStart=/bin/flower --broker=amqp://oser000300// (code=exited, status=0/SUCCESS)
Connected to amqp://guest:**@oser000300:5672//
flower.service start operation timed out. Terminating.
SIGTERM detected, shutting down
Failed to start Flower Service.
Unit flower.service entered failed state.
flower.service failed.

I had the same timeout problem when starting the service.
These parameters did the trick for me (I already had a Celery service running with Type=forking). Flower does not fork on its own, so with Type=forking systemd waits for a fork that never happens and eventually kills the service on timeout:
Type=simple
Restart=on-failure
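Putting those together, a minimal sketch of the corrected unit (reusing the paths and broker URL from the question; adjust them to your install):
[Unit]
Description=Flower Service
After=network.target
[Service]
Type=simple
Restart=on-failure
User=maas
Group=maas
ExecStart=/bin/flower --broker=amqp://oser000300//
[Install]
WantedBy=multi-user.target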

Related

Service not running on startup - RabbitMQ used as well

I created a service on a Raspberry Pi; however, after rebooting and checking the service's status, I get the output below:
pi@raspberrypi:~ $ sudo systemctl status non_verbal.service
non_verbal.service - My Sample Service
Loaded: loaded (/lib/systemd/system/non_verbal.service; enabled; vendor preset: enabled)
Active: activating (auto-restart) (Result: exit-code) since Mon 2022-06-27 12:06:54 BST;
51s ago
Process: 4378 ExecStart=/home/pi/env/bin/python3 /home/pi/linux-sdk-python-2/examples/audio/classify3.py
Main PID: 4378 (code=exited, status=2)
Please note that this service should send a message to RabbitMQ, which is already running. I have another service as well, which receives messages from RabbitMQ and performs a certain action.
This is the content of the service file
[Unit]
Description=My Sample Service
After=multi-user.target
[Service]
Type=idle
user=pi
ExecStart=/home/pi/env/bin/python3 /home/pi/linux-sdk-python-2/examples/audio/classify3.py
Restart=always
RestartSec=60
[Install]
WantedBy=multi-user.target
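One thing that stands out in this unit, whether or not it explains the exit status 2: systemd directive names are case-sensitive, so the lowercase user=pi is ignored (systemd logs an "Unknown lvalue" warning) and the script actually runs as root, which can change which files and devices it may access. The corrected line would be:
User=pi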

Deploying a Django project on CentOS 8 using Gunicorn and Nginx (gunicorn.service problem)

I followed this tutorial:
https://www.digitalocean.com/community/tutorials/how-to-set-up-django-with-postgres-nginx-and-gunicorn-on-centos-7
and tried to deploy a Django project on CentOS 8.
Everything went fine and worked, except gunicorn.service:
[Unit]
Description=gunicorn daemon
After=network.target
[Service]
User=facealatoo
Group=nginx
WorkingDirectory=/home/facealatoo/nadyr/promed
ExecStart=/home/facealatoo/nadyr/promed/venv/bin/gunicorn \
--workers 3 \
--bind unix:/home/facealatoo/nadyr/promed/promed.sock \
configs.wsgi:application
[Install]
WantedBy=multi-user.target
Folder destinations:
project folder: /home/facealatoo/nadyr/promed
settings.py file: /home/facealatoo/nadyr/promed/configs/settings.py
server user name: facealatoo
After running:
sudo systemctl daemon-reload
sudo systemctl start gunicorn
sudo systemctl enable gunicorn
sudo systemctl status gunicorn.service
Error message
● gunicorn.service - gunicorn daemon
Loaded: loaded (/etc/systemd/system/gunicorn.service; enabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Fri 2020-05-15 18:37:22 +06; 13s ago
Main PID: 32293 (code=exited, status=203/EXEC)
May 15 18:37:22 facealatoo.net.kg systemd[1]: Started gunicorn daemon.
May 15 18:37:22 facealatoo.net.kg systemd[1]: gunicorn.service: Main process exited, code=exited, status=203/EXEC
May 15 18:37:22 facealatoo.net.kg systemd[1]: gunicorn.service: Failed with result 'exit-code'.
Please help me! ;) Thanks in advance ))))
I just changed the socket file location (to the user's home directory, /home/facealatoo/) and the Gunicorn path (to /usr/local/bin/gunicorn), and these changes solved my problem.
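For reference, the relevant [Service] lines after those changes would look roughly like this (a sketch based on the answer; the exact socket filename is an assumption):
[Service]
User=facealatoo
Group=nginx
WorkingDirectory=/home/facealatoo/nadyr/promed
ExecStart=/usr/local/bin/gunicorn \
--workers 3 \
--bind unix:/home/facealatoo/promed.sock \
configs.wsgi:application
The status=203/EXEC in the log means systemd could not execute the file given in ExecStart, which is why pointing ExecStart at the real gunicorn location fixes it.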

Systemd service sometimes crashes on startup

I'm trying to start my Gunicorn server using systemd.
The service definition is presented below:
[Unit]
Description=Gunicorn instance to serve Flask application using gunicorn
After=network.target
[Service]
PIDFile=/home/username/application/app.pid
User=username
Group=nginx
WorkingDirectory=/home/username/application
Environment=PATH=/opt/venv/bin/
ExecStart=/opt/venv/bin/gunicorn --pid /home/username/application/app.pid --workers 3 --bind unix:socket.sock -m 007 app:app
ExecReload=/bin/kill -s HUP $MAINPID
ExecStop=/bin/kill -s TERM $MAINPID
[Install]
WantedBy=multi-user.target
The problem is that sometimes the app crashes on startup with the following messages in journalctl:
gunicorn[28376]: [2019-08-22 15:01:48 +0300] [28379] [INFO] Worker exiting (pid: 28379)
gunicorn[28376]: [2019-08-22 15:01:48 +0300] [28381] [INFO] Worker exiting (pid: 28381)
gunicorn[28376]: [2019-08-22 15:01:48 +0300] [28376] [INFO] Shutting down: Master
gunicorn[28376]: [2019-08-22 15:01:48 +0300] [28376] [INFO] Reason: Worker failed to boot.
systemd[1]: urzchat-dev.service: main process exited, code=exited, status=3/NOTIMPLEMENTED
systemd[1]: urzchat-dev.service: control process exited, code=exited status=1
systemd[1]: Unit urzchat-dev.service entered failed state.
systemd[1]: urzchat-dev.service failed.
This happens on roughly a third of all starts.
Why might that be, and how can I fix it?
The value of the Environment directive should be enclosed in quotes:
Environment="PATH=/opt/venv/bin/"
Also, you can make the unit wait for the network to be fully online before it starts:
After=network.target network-online.target
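Note that network-online.target only takes effect if the unit also pulls it in with Wants=, so a sketch of the amended [Unit] section would be:
[Unit]
Description=Gunicorn instance to serve Flask application using gunicorn
Wants=network-online.target
After=network.target network-online.target
The [Service] section then keeps the quoted Environment line and the remaining directives from the question.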

Python script (which opens a reverse SSH tunnel) called by a service doesn't work

Here is my Python script:
#!/usr/bin/env python3
import subprocess
# Open a reverse SSH tunnel: -f backgrounds ssh, -N runs no remote command,
# -T disables TTY allocation; exit if the port forward cannot be set up.
subprocess.run(['ssh', '-fNT', '-o', 'ExitOnForwardFailure=yes',
                '-R', '2222:localhost:22', 'martin@192.168.11.111'])
called by my service:
[Unit]
Description=reverse SSH
After=multi-user.target
Conflicts=getty@tty1.service
[Service]
Type=simple
ExecStart=/usr/bin/python3 /home/pi/Public/OnPushButton_PULLUP.py
User=pi
Group=pi
WorkingDirectory=/home/pi/Public/
StandardInput=tty-force
[Install]
WantedBy=multi-user.target
According to systemctl, this script exits 0/SUCCESS, even though the SSH tunnel doesn't work afterwards:
● reverse_ssh.service - reverse SSH
Loaded: loaded (/lib/systemd/system/reverse_ssh.service; enabled; vendor preset: enabled)
Active: inactive (dead) since Thu 2019-08-01 10:01:21 CEST; 6min ago
Process: 549 ExecStart=/usr/bin/python3 /home/pi/Public/OnPushButton_PULLUP.py (code=exited, status=0/SUCCESS)
Main PID: 549 (code=exited, status=0/SUCCESS)
Aug 01 10:01:19 raspberrypi systemd[1]: Started reverse SSH.
If I execute this script standalone (i.e. ./script.py), it works.
The moment I call it from the service, this issue occurs. Where did I go wrong?
Thanks!
EDIT
Problem solved. The problem was in my service file.
I had to change "Type=simple" to "Type=forking" because my Python script launches another process (ssh -f forks into the background).
I also had to wait until the device got an IP address, otherwise the script threw "Host unreachable".
For this I used this service file at the end:
[Unit]
Description=reverse SSH
Wants=network-online.target
After=network.target network-online.target
[Service]
Type=forking
ExecStartPre=/bin/sleep 10
ExecStart=/usr/bin/python3 /home/pi/Public/OnPushButton_PULLUP.py
User=pi
Group=pi
WorkingDirectory=/home/pi/Public/
TimeoutSec=infinity
[Install]
WantedBy=multi-user.target
Normally, adding just this works:
Wants=network-online.target
After=network.target network-online.target
But it didn't for me. That's why I added:
ExecStartPre=/bin/sleep 10
This line makes the service wait 10 seconds before executing, which gives the device time to get an IP address from DHCP.
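As an aside, network-online.target only delays startup when a "wait online" service is enabled to back it; otherwise the target is reached immediately. Depending on how the Pi manages its network, that would be something like:
sudo systemctl enable systemd-networkd-wait-online.service
or, on NetworkManager-based systems:
sudo systemctl enable NetworkManager-wait-online.service
If neither matches your setup, the sleep workaround above is a pragmatic fallback.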
Finally, forking wasn't the solution. Forking worked, but with that Type the service was stuck in "activating" until the user pushed the button. That was a problem, because other services were waiting for this one to be running, stopped, or at least loaded, not stuck in "activating". The hang was caused by the while loop that runs until the user pushes the button; only after the button press would the service become "running" or exit 0, not before.
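To illustrate the shape of the problem (the GPIO details here are hypothetical, since the full script isn't shown), the structure was roughly:
#!/usr/bin/env python3
# Hypothetical sketch of the blocking structure described above
import subprocess
import time

def button_pressed():
    # Placeholder for the real GPIO read; the actual script returns True
    # once the physical button is pressed
    return False

# Under Type=forking the unit sits in "activating" for as long as this loop runs
while not button_pressed():
    time.sleep(0.1)

subprocess.run(['ssh', '-fNT', '-o', 'ExitOnForwardFailure=yes',
                '-R', '2222:localhost:22', 'martin@192.168.11.111'])
With Type=simple the unit counts as started as soon as the process launches, so the loop no longer holds up other units.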
I changed the service to the following one and it worked:
[Unit]
After=network.target network-online.target
Description=reverse SSH
Wants=network-online.target
[Service]
ExecStart=/usr/bin/python3 /home/pi/OnPushButton_PULLUP.py
ExecStartPre=/bin/sleep 10
Group=pi
RemainAfterExit=yes
TimeoutSec=infinity
Type=simple
User=pi
WorkingDirectory=/home/pi/
[Install]
WantedBy=multi-user.target
Notice the "RemainAfterExit=yes"; without it, the SSH tunnel process spawned by this script would be killed when the program exits.

How to get Celery to work with SCL Python and systemd?

I have a Python application that uses Celery for background tasks, with the Python interpreter provided by SCL.
I was able to create a systemd unit file for the app: How to use user's pipenv via systemd? Python is installed via SCL
But I do not understand how to write a similar systemd unit file for Celery.
I tried:
[Unit]
Description=app celery service
# Requirements
Requires=network.target
# Dependency ordering
After=network.target
[Service]
# Let processes take awhile to start up
TimeoutStartSec=0
Type=forking
RestartSec=10
Restart=always
Environment="APP_SITE_SETTINGS=/home/app/.config/settings.cfg"
Environment="PYTHONPATH=/home/app/.local/lib/python3.6/site-packages"
WorkingDirectory=/home/app/app-site/app
User=app
Group=app
PermissionsStartOnly=true
KillSignal=SIGQUIT
Type=notify
NotifyAccess=all
# Main process
ExecStart=/usr/bin/scl enable rh-python36 -- /home/app/.local/bin/pipenv run celery -A celery_service.celery worker
[Install]
WantedBy=multi-user.target
When I start the systemd unit, I see in the journal that the Celery app starts. After a few seconds, the service fails:
Job for app_celery.service failed because a timeout was exceeded. See "systemctl status app_celery.service" and "journalctl -xe" for details.
Here's a journal entry:
Jul 17 07:43:31 some.host.com scl[5181]: worker: Cold shutdown (MainProcess)
I tried with Type=oneshot and Type=simple too. None of them worked. I suspect this has something to do with SCL.
Is there a way to get the celery app to work with SCL and systemd?
Celery has a command-line option, --detach. With --detach, Celery starts the workers as a background process.
Here's the working systemd unit file:
[Unit]
Description=app celery service
# Requirements
Requires=network.target
# Dependency ordering
After=network.target
[Service]
# Let processes take awhile to start up
TimeoutStartSec=0
Type=simple
RemainAfterExit=yes
RestartSec=10
Restart=always
Environment="SETTINGS=/home/app/.config/settings.cfg"
Environment="PYTHONPATH=/home/app/.local/lib/python3.6/site-packages"
WorkingDirectory=/home/app/app-site/app/
User=app
Group=app
PermissionsStartOnly=true
KillSignal=SIGQUIT
Type=notify
NotifyAccess=all
LimitMEMLOCK=infinity
LimitNOFILE=20480
LimitNPROC=8192
# Main process
ExecStart=/usr/bin/scl enable rh-python36 -- /home/app/.local/bin/pipenv run celery -A celery_service.celery worker --detach
[Install]
WantedBy=multi-user.target
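After installing the unit (assuming it is saved as /etc/systemd/system/app_celery.service, the name from the error above), reload systemd and start it the usual way:
sudo systemctl daemon-reload
sudo systemctl enable --now app_celery.service
journalctl -u app_celery.service -f
One caveat: the file declares Type= twice (simple, then notify); systemd takes the last value, so it may be worth removing the stale line. Since the worker detaches, RemainAfterExit=yes is what keeps the unit from being considered dead once the launching process exits.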
