Systemd ExecStart - Python

I have a Python script which I want to run as a daemon. I am doing that by creating a unit file /etc/systemd/system/service,
and I want to run it as systemctl start/stop/restart myservice.
Depending on these start/stop arguments I am handling system signals like SIGHUP and SIGINT.
The problem is that I can run my script as python main.py start/stop/restart and my logic works.
But after converting it into a unit file, the Python script is invoked by ExecStart, and I don't know how to pass arguments there.
[Unit]
Description=This service monitors docker daemon for events
After=multi-user.target
[Service]
Type=simple
ExecStart=/home/PycharmProjects/python_tests/service-discovery/utils/auto_registeration_script/main.py
User=root
WorkingDirectory=/home/PycharmProjects/python_tests/service-discovery/utils/auto_registeration_script/
Restart=on-failure
[Install]
WantedBy=multi-user.target

Aren't you actually running "main.py start" and "main.py stop"? In that case you have programmed a "forking" service.
[Unit]
Description=This service monitors docker daemon for events
After=multi-user.target
[Service]
Type=forking
Environment=script=/home/PycharmProjects/python_tests/service-discovery/utils/auto_registeration_script/main.py
ExecStart=$script start
ExecStop=$script stop
User=root
WorkingDirectory=/home/PycharmProjects/python_tests/service-discovery/utils/auto_registeration_script/
Restart=on-failure
[Install]
WantedBy=multi-user.target
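Note that systemd documents that the first argument of an Exec command line (the program to execute) may not be a variable, so if ExecStart=$script start fails to expand on your systemd version, write the full path out in both ExecStart and ExecStop.
For reference, here is a minimal sketch of what such a start/stop script can look like (the pidfile path and handler bodies are assumptions, since the question does not show the script's internals):
#!/usr/bin/env python3
# Minimal start/stop daemon skeleton matching the Type=forking unit above.
import os
import signal
import sys
import time

PIDFILE = '/run/myservice.pid'  # assumed location

def handle_signal(signum, frame):
    os.unlink(PIDFILE)          # clean up before exiting
    sys.exit(0)

def start():
    if os.fork():               # Type=forking: parent exits once the child runs
        sys.exit(0)
    with open(PIDFILE, 'w') as f:
        f.write(str(os.getpid()))
    for sig in (signal.SIGTERM, signal.SIGINT, signal.SIGHUP):
        signal.signal(sig, handle_signal)
    while True:                 # the actual docker-event monitoring loop
        time.sleep(5)

def stop():
    with open(PIDFILE) as f:
        os.kill(int(f.read()), signal.SIGTERM)

if __name__ == '__main__':
    action = sys.argv[1] if len(sys.argv) > 1 else ''
    if action == 'start':
        start()
    elif action == 'stop':
        stop()
    else:
        sys.exit('usage: main.py start|stop')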

Python script (which opens a reverse SSH tunnel) called by a service doesn't work

Here is my Python script:
#!/usr/bin/env python3
import subprocess

# -f backgrounds ssh once the forward is set up; -N runs no remote command
subprocess.run(['ssh', '-fNT', '-o', 'ExitOnForwardFailure=yes',
                '-R', '2222:localhost:22', 'martin@192.168.11.111'])
called by my service:
[Unit]
Description=reverse SSH
After=multi-user.target
Conflicts=getty@tty1.service
[Service]
Type=simple
ExecStart=/usr/bin/python3 /home/pi/Public/OnPushButton_PULLUP.py
User=pi
Group=pi
WorkingDirectory=/home/pi/Public/
StandardInput=tty-force
[Install]
WantedBy=multi-user.target
This script exits with 0/SUCCESS if I trust systemctl, even though the SSH tunnel doesn't work afterwards.
● reverse_ssh.service - reverse SSH
Loaded: loaded (/lib/systemd/system/reverse_ssh.service; enabled; vendor preset: enabled)
Active: inactive (dead) since Thu 2019-08-01 10:01:21 CEST; 6min ago
Process: 549 ExecStart=/usr/bin/python3 /home/pi/Public/OnPushButton_PULLUP.py (code=exited, status=0/SUCCESS)
Main PID: 549 (code=exited, status=0/SUCCESS)
août 01 10:01:19 raspberrypi systemd[1]: Started reverse SSH.
If I execute this script standalone (I mean like ./script.py), it works.
The moment I use the service to call it, this issue occurs... Where did I go wrong?
Thanks!
EDIT
Problem solved. The problem was in my service file.
I had to change Type=simple to Type=forking, because I need to call another process from my Python script.
I also had to wait until the device got an IP address; otherwise the script threw "Host unreachable".
For this I used the following service file in the end:
[Unit]
Description=reverse SSH
Wants=network-online.target
After=network.target network-online.target
[Service]
Type=forking
ExecStartPre=/bin/sleep 10
ExecStart=/usr/bin/python3 /home/pi/Public/OnPushButton_PULLUP.py
User=pi
Group=pi
WorkingDirectory=/home/pi/Public/
TimeoutSec=infinity
[Install]
WantedBy=multi-user.target
Normally adding just this works:
Wants=network-online.target
After=network.target network-online.target
But it didn't for me. That's why I added:
ExecStartPre=/bin/sleep 10
This line tells the service to wait 10 seconds before executing, which gives the device time to get an IP address from DHCP.
Finally, forking wasn't the solution. Forking was okay, but with that Type the service was stuck on "activating" until the user pushed the button. This was a problem: other services were waiting for this service to be running, stopped, or at least loaded, not stuck on "activating". The issue was caused by the while loop (true until the user pushes the button); only once the user pushed the button would the service be running or exit 0, not before.
I changed the service with this following one and it worked:
[Unit]
After=network.target network-online.target
Description=reverse SSH
Wants=network-online.target
[Service]
ExecStart=/usr/bin/python3 /home/pi/OnPushButton_PULLUP.py
ExecStartPre=/bin/sleep 10
Group=pi
RemainAfterExit=yes
TimeoutSec=infinity
Type=simple
User=pi
WorkingDirectory=/home/pi/
[Install]
WantedBy=multi-user.target
Notice the RemainAfterExit=yes; otherwise the SSH tunnel process (spawned by this script) gets killed when the program exits.
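As an aside, instead of a fixed ExecStartPre=/bin/sleep 10, the script itself can wait until the target host is actually reachable before opening the tunnel. A minimal sketch (the host and ssh arguments are copied from the script above; the port and timeout values are arbitrary):
#!/usr/bin/env python3
# Wait for the SSH server to become reachable, then open the reverse tunnel.
import socket
import subprocess
import sys
import time

def wait_for_host(host, port=22, timeout=60):
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=5):
                return True
        except OSError:
            time.sleep(2)   # not up yet; retry
    return False

if not wait_for_host('192.168.11.111'):
    sys.exit('host unreachable')
subprocess.run(['ssh', '-fNT', '-o', 'ExitOnForwardFailure=yes',
                '-R', '2222:localhost:22', 'martin@192.168.11.111'])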

How to get celery to work with SCL Python and systemd?

I have a Python application that uses Celery for background tasks. I use the Python interpreter provided by SCL.
I was able to create a systemd unit file for the app (see: How to use user's pipenv via systemd? Python is installed via SCL).
But I do not understand how to write a similar systemd unit file for Celery.
I tried:
[Unit]
Description=app celery service
# Requirements
Requires=network.target
# Dependency ordering
After=network.target
[Service]
# Let processes take awhile to start up
TimeoutStartSec=0
Type=forking
RestartSec=10
Restart=always
Environment="APP_SITE_SETTINGS=/home/app/.config/settings.cfg"
Environment="PYTHONPATH=/home/app/.local/lib/python3.6/site-packages"
WorkingDirectory=/home/app/app-site/app
User=app
Group=app
PermissionsStartOnly=true
KillSignal=SIGQUIT
Type=notify
NotifyAccess=all
# Main process
ExecStart=/usr/bin/scl enable rh-python36 -- /home/app/.local/bin/pipenv run celery -A celery_service.celery worker
[Install]
WantedBy=multi-user.target
When I start the systemd unit, in the journal I see that the celery app starts. After a few seconds, the service fails.
Job for app_celery.service failed because a timeout was exceeded. See "systemctl status app_celery.service" and "journalctl -xe" for details.
Here's a journal entry:
Jul 17 07:43:31 some.host.com scl[5181]: worker: Cold shutdown (MainProcess)
I tried with Type=oneshot and Type=simple too. None of them worked. I suspect this has something to do with SCL.
Is there a way to get the celery app to work with SCL and systemd?
Celery has a command-line option --detach. When you use --detach, Celery starts the worker as a background process.
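Run standalone it looks like this (a sketch; the app name celery_service.celery is taken from the unit file below, and --pidfile/--logfile are optional but useful, since a detached worker no longer writes to the terminal):
celery -A celery_service.celery worker --detach --pidfile=/var/run/celery/worker.pid --logfile=/var/log/celery/worker.log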
Here's the working systemd unit file:
[Unit]
Description=app celery service
# Requirements
Requires=network.target
# Dependency ordering
After=network.target
[Service]
# Let processes take awhile to start up
TimeoutStartSec=0
Type=simple
RemainAfterExit=yes
RestartSec=10
Restart=always
Environment="SETTINGS=/home/app/.config/settings.cfg"
Environment="PYTHONPATH=/home/app/.local/lib/python3.6/site-packages"
WorkingDirectory=/home/app/app-site/app/
User=app
Group=app
PermissionsStartOnly=true
KillSignal=SIGQUIT
LimitMEMLOCK=infinity
LimitNOFILE=20480
LimitNPROC=8192
# Main process
ExecStart=/usr/bin/scl enable rh-python36 -- /home/app/.local/bin/pipenv run celery -A celery_service.celery worker --detach
[Install]
WantedBy=multi-user.target

How can I configure celery to run on startup of nginx?

I have celery running locally by just running celery -A proj -l info (although I don't even know if I should be using this command in production), and I want celery to run on my production web server every time nginx starts. The init system is systemd.
Create a service file like this, celery.service:
[Unit]
Description=celery service
After=network.target
[Service]
PIDFile=/run/celery/pid
User=celery
Group=celery
RuntimeDirectory=celery
WorkingDirectory=/path/to/project
# ExecStart needs an absolute path; adjust to wherever celery is installed
ExecStart=/usr/local/bin/celery -A proj -l info
ExecReload=/bin/kill -s HUP $MAINPID
ExecStop=/bin/kill -s TERM $MAINPID
Restart=on-abort
PrivateTmp=true
[Install]
WantedBy=multi-user.target
Move the file to /etc/systemd/system/, run systemctl daemon-reload, and enable it with systemctl enable celery.service; celery will then be started by systemd on boot.
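If you want celery tied to nginx specifically rather than just started at boot, one option (a sketch, assuming nginx runs as nginx.service) is to bind the units together so that starting nginx pulls in celery, and stopping or restarting nginx propagates to it:
[Unit]
Description=celery service
After=network.target nginx.service
PartOf=nginx.service
[Install]
WantedBy=nginx.service
After systemctl enable celery.service, systemd creates a nginx.service.wants/ symlink, so celery starts whenever nginx does.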

Reload systemd/gunicorn/flask worker threads sequentially for config changes

I need to force an app reload to pick up config changes. I'm using systemd to kick off gunicorn, which runs a Flask app.
I pick up the config changes from a /var/run/xx.conf file, which is watched by a systemd path unit, app.path:
[Path]
PathChanged=/var/run/app.conf
[Unit]
Description=app-restart
and a corresponding service unit:
[Unit]
Description=app-restart
After=network.target
[Service]
Type=oneshot
PIDFile=/run/app-restart/pid
User=root
Group=root
ExecStart=/usr/bin/app-reload.py
PrivateTmp=false
EnvironmentFile=-/etc/environment
[Install]
WantedBy=multi-user.target
The question is: how do I gracefully terminate each gunicorn Flask worker thread?
The app's systemd service uses:
[Unit]
Description=app gunicorn daemon
After=network.target
[Service]
PIDFile=/run/app/pid
User=ubuntu
Group=www-data
WorkingDirectory=/opt/app
ExecStart=/usr/local/bin/gunicorn --bind unix:/var/tmp/app.sock -m 007 --workers=2 -t 400 --backlog 2048 --log-config=/etc/app/log.cfg --log-level=DEBUG app
Restart=always
RestartSec=15
ExecReload=/bin/kill -s HUP $MAINPID
ExecStop=/bin/kill -s TERM $MAINPID
PrivateTmp=false
EnvironmentFile=-/etc/environment
[Install]
WantedBy=multi-user.target
so that I can kill off gunicorn worker threads and they will be restarted.
I would like to avoid killing off threads that are in the process of handling responses.
Ideally, after each response is handled, the thread would check whether it needs to quit. I know I can do this in each and every Flask API method, but is there a better way of transitioning all worker threads to the new config one at a time?
I don't want to systemctl restart app.service, as that kills off all threads and creates dead time where no workers are active.
I want each thread to terminate and reload independently so there are always some live workers.
Some options I've considered:
Get a list of app pids with app-reload.py and kill them off one at a time. Gunicorn will restart each as it dies. This may terminate an in-progress request.
After each event is handled, check for the existence of a file created by app-reload.py, and terminate if present.
I'm assuming there must be a way to migrate workers from one config to the other without creating a service dead spot.
RTM wins again.
http://docs.gunicorn.org/en/stable/signals.html
HUP: Reload the configuration, start the new worker processes with a
new configuration and gracefully shutdown older workers. If the
application is not preloaded (using the preload_app option), Gunicorn
will also load the new version of it.
So the end result is just this in the service triggered by app.path:
[Unit]
Description=app-restart
After=network.target
[Service]
Type=oneshot
User=root
Group=root
# systemd does not do shell substitution, so wrap the kill in a shell
ExecStart=/bin/sh -c '/bin/kill -s HUP $(cat /run/app/pid)'
PrivateTmp=false
Alternatively, one could rely on the fact that app.service already contains an ExecReload line, and have the oneshot service run:
ExecStart=/bin/systemctl reload app
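If you prefer to keep the reload logic in the app-reload.py script mentioned above rather than calling /bin/kill, the Python equivalent is short (a sketch; the pidfile path is taken from the unit above):
#!/usr/bin/env python3
# Send SIGHUP to the gunicorn master so it gracefully replaces its workers.
import os
import signal

with open('/run/app/pid') as f:
    pid = int(f.read().strip())
os.kill(pid, signal.SIGHUP)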

How to debug a Python script that segfaults when run as a systemd?

This is driving me nuts.
A Flask app works fine if I personally run uWSGI from the CLI:
uwsgi --emperor /etc/uwsgi/emperor.ini
but when I try to start it as a service with systemd, there is a segfault and the resulting coredump says almost nothing:
sudo systemctl start emperor.uwsgi
coredump:
[New LWP 7639]
Core was generated by `/usr/bin/uwsgi --ini website.ini'.
Program terminated with signal SIGSEGV, Segmentation fault.
#0 0x0000123c in ?? ()
So, I have no idea how to get more detailed information. This is not a script I'm running with python app.py, but a script being served by uWSGI.
I'm clueless and would appreciate any advice.
Thanks.
Edit 1 - systemd unit file:
[Unit]
Description=uWSGI Emperor
After=syslog.target
[Service]
ExecStart=/usr/bin/uwsgi --ini /etc/uwsgi/emperor.ini
ExecReload=/bin/kill -HUP $MAINPID
ExecStop=/bin/kill -INT $MAINPID
Restart=always
Type=notify
StandardError=syslog
NotifyAccess=all
KillSignal=SIGQUIT
[Install]
WantedBy=multi-user.target
If I run /usr/bin/uwsgi --ini /etc/uwsgi/emperor.ini manually, it works fine.
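One way to get a fuller backtrace (a sketch; this assumes systemd-coredump is installed and debug symbols for uWSGI are available) is to let systemd capture the crash and open it in gdb. First allow a full core dump in the [Service] section:
LimitCORE=infinity
Then, after reproducing the crash:
coredumpctl list /usr/bin/uwsgi
coredumpctl gdb     # opens the most recent matching core in gdb
bt                  # at the gdb prompt: full backtrace instead of the bare 0x0000123c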
