I've made a very simple Python program which extracts the latest news posted on some websites and sends them to me via Telegram. The program works perfectly when I launch the command below in the console:
/usr/bin/python3.9 /home/dietpi/news/news.py
However, when I try to automate it with systemd (to automatically restart it if there is any bug or the like), I noticed the service is stuck in the ExecStartPre step forever:
[Unit]
Description=News Service
Wants=network.target
After=network.target
[Service]
ExecStartPre=/bin/sleep 10
ExecStart=/usr/bin/python3.9 /home/dietpi/news/news.py
Restart=always
[Install]
WantedBy=multi-user.target
I put in the ExecStartPre command to let the Pi set up the network properly before launching the program (I noticed a failure occurs otherwise, as the program starts too quickly and generates an error).
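From what I've read, systemd also offers a network-online.target that expresses this kind of wait without a fixed sleep, so the [Unit] section could presumably use something like the two lines below instead, though I have not tested this on the Pi:
Wants=network-online.target
After=network-online.target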
When I reboot the Pi, here is what I see when I check the status of the services (using the command systemctl --type=service):
UNIT LOAD ACTIVE SUB JOB DESCRIPTION
news.service loaded activating start-pre start News Service
When I look more into detail on this service here is what I have (by using command: sudo systemctl status news.service):
● news.service - News Service
Loaded: loaded (/etc/systemd/system/news.service; enabled; vendor preset: enabled)
Active: activating (start-pre) since Fri 2022-02-04 17:03:58 GMT; 2s ago
Cntrl PID: 552 (sleep)
Tasks: 1 (limit: 4915)
CPU: 4ms
CGroup: /system.slice/news.service
└─552 /bin/sleep 10
Feb 04 17:03:58 DietPi systemd[1]: Starting News Service...
If I launch this command multiple times, I see the "activating" timer go up to 10s, then start again from 0s >>> which shows I am stuck in the ExecStartPre step :(
If you have any idea how to solve this issue, it would be much appreciated :)
Try to create your own sleep script:
sleep.py:
import time
import sys

if __name__ == '__main__':
    time.sleep(10)
    sys.exit()
In your systemd unit:
ExecStartPre=/usr/bin/python3.9 /home/dietpi/news/sleep.py
Personally, I prefer to use supervisord to launch my Python scripts as services:
/etc/supervisor/conf.d/news.conf:
[program:news]
command = /usr/bin/python3.9 news.py
directory = /home/dietpi/news/
user = dietpi
autostart = true
autorestart = true
stdout_logfile = /var/log/supervisor/news.log
redirect_stderr = true
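Once that config file is in place, supervisord should pick it up with something like the following (assuming a standard supervisor installation):
sudo supervisorctl reread
sudo supervisorctl update
sudo supervisorctl status news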
I am learning how to restart a Python script in case of error (following this tutorial, though with some small tweaks to filenames and the like). First things first, the needed files:
/home/myuser/Desktop/test/test.py:
from datetime import datetime
from time import sleep

path = "/home/dec13666/Desktop/test/log.txt"

while True:
    with open(path, "a") as f:
        now = datetime.now().strftime("%d/%m/%Y %H:%M:%S")
        f.write(now+"\n")
        f.close()
    sleep(1)
    raise Exception("Error Simulation!!!")
davidcustom.service:
[Unit]
Description=Python Script Made By Me
After=multi-user.target
[Service]
RestartSec=10
Restart=always
ExecStart=python3 /home/myuser/Desktop/test/test.py
[Install]
WantedBy=multi-user.target
Finally, the commands run:
sudo nano /etc/systemd/system/davidcustom.service
sudo systemctl daemon-reload
sudo systemctl enable davidcustom.service
sudo systemctl start davidcustom.service
sudo systemctl status davidcustom.service
The message I am getting:
● davidcustom.service - Python Script Made By Me
Loaded: loaded (/etc/systemd/system/davidcustom.service; enabled; vendor preset: enabled)
Active: activating (auto-restart) (Result: exit-code) since Sun 2022-09-04 14:12:12 EDT; 8s ago
Process: 3341 ExecStart=python3 /home/myuser/Desktop/test/test.py (code=exited, status=1/FAILURE)
Main PID: 3341 (code=exited, status=1/FAILURE)
CPU: 100ms
Notes:
When I run test.py manually, it works OK, but when that Python script is run from a service (as seen here), it generates that error.
I have tried setting User=myusername and Type=simple in the [Service] section of davidcustom.service, with no difference in the results.
What am I doing wrong?
ExecStart requires an absolute path to the executable; it doesn't search $PATH. So use
ExecStart=/usr/bin/python3 /home/myuser/Desktop/test/test.py
(assuming that's where python3 is installed -- you can use type python3 to get the actual location).
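For example, on a typical Debian-based system it would look roughly like this (the exact path varies):
$ type python3
python3 is /usr/bin/python3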
I had inadvertently added some wrong indentation in the original code (now corrected in the original post). Fixing that solved my issue; when running sudo systemctl status davidcustom.service, the service is now active:
● davidcustom.service - Python Script Made By Dave
Loaded: loaded (/etc/systemd/system/davidcustom.service; enabled; vendor preset: enabled)
Active: active (running) since Sun 2022-09-04 16:16:02 EDT; 23ms ago
Main PID: 4287 (python3)
Tasks: 1 (limit: 14215)
Memory: 3.0M
CPU: 16ms
CGroup: /system.slice/davidcustom.service
└─4287 python3 /home/dec13666/Desktop/test/test.py
Other suggestions which are worth keeping in mind, but were NOT necessary in my case (thanks @Barmar & @tdelaney):
Use an absolute path in ExecStart (easily obtained by running the which python command).
If you want the service to run as a specific User and/or Group, then explicitly set those parameters.
The two previous suggestions should be added to the [Service] section of your unit file:
[Service]
RestartSec=10
Restart=always
ExecStart=/usr/bin/python3 /home/myuser/Desktop/test/test.py
User=youruser
Group=yourgroup
Read the comments in the original post for other options.
Thanks.
So I'm trying to launch a tmux session from a bash script, which will then host a (Python) Discord bot. That bash script should in turn be launched from a systemd service so that the Discord bot always starts with the physical server itself.
It seems that it gets stuck somewhere for some reason... I've been looking for ages for a solution but haven't been able to find one.
Here is the system service output:
● ubuntubot.service - UbuntuBot
Loaded: loaded (/etc/systemd/system/ubuntubot.service; enabled; vendor preset: enabled)
Active: active (running) since Fri 2021-08-13 18:39:26 CEST; 1min 58s ago
Main PID: 27366 (tmux: server)
Tasks: 2 (limit: 19060)
Memory: 4.9M
CGroup: /system.slice/ubuntubot.service
├─27366 /usr/bin/tmux new-session -d -s ubuntubot
└─27367 -bash
Here is my service file:
[Unit]
Description=UbuntuBot
After=network.target
[Install]
WantedBy=multi-user.target
[Service]
Type=simple
ExecStart=/usr/bin/bash /home/flynn/Minecraft/CreativeServer/DiscordBot/launchbot.sh start
Restart=always
RestartSec=5
And this is my bash script:
#!/bin/bash
#This script launches the Ubuntubot in a different tmux screen
#Start tmux ubuntu-session:
/usr/bin/tmux new-session -d -s ubuntubot
#Start UbuntuBot
/usr/bin/tmux send-keys -t ubuntubot "python3 ~/Minecraft/CreativeServer/DiscordBot/Ubuntubot.py" Enter
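For reference, I can check on the bot manually by attaching to that session (and detach again with Ctrl-b d):
/usr/bin/tmux attach -t ubuntubot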
Maybe this is a horrible way to do it, I honestly have no idea... I just need some help because this is taking me way too long on my own :)
I am running a Python script over SSH on an Ubuntu 18.04.2 server.
When I use ssh to log in to the server and run the script, and then terminate the ssh session, the Python script also terminates as expected (I'm not using nohup, &, etc.). However, when I run the same script using Fabric and then terminate the local Fabric process, the Python process on the server gets reaped by systemd. This is what the systemd status looks like:
● session-219.scope - Session 219 of user root
Loaded: loaded (/run/systemd/transient/session-219.scope; transient)
Transient: yes
Active: active (abandoned) since Fri 2019-12-27 00:56:07 PST; 2min 55s ago
Tasks: 1
CGroup: /user.slice/user-0.slice/session-219.scope
└─6872 /root/peacock/bin/python3 -m src.main
Dec 27 00:56:07 master systemd[1]: Started Session 219 of user root.
Dec 27 00:57:52 master sshd[6783]: pam_unix(sshd:session): session closed for user root
Is there a way to prevent systemd from reaping the child process, similar to the behavior of ssh? And why does it only get reaped when using Fabric but not ssh directly?
More details:
The Python script is a simple Flask app. The gist of it is:
from flask import Flask

flask_app = Flask('app')

@flask_app.route('/')
def index():
    # ....

if __name__ == '__main__':
    flask_app.run(host='0.0.0.0')
The Fabric script is roughly as follows:
import fabric

server_conn = fabric.Connection('1.2.3.4')
with server_conn.cd('/root/peacock'):
    server_conn.run('/root/peacock/bin/python3 -m src.main')
If you need to run a process as a daemon on the remote box, I would suggest that you make it a systemd unit. This way you can control it with standard commands and access its logs like any other service on the system.
Your config could look like (/etc/systemd/system/peacock.service):
[Unit]
Description=Peacock systemd service.
[Service]
Type=simple
ExecStart=/root/peacock/bin/python3 -m src.main
[Install]
WantedBy=multi-user.target
Remember to run sudo chmod 644 /etc/systemd/system/peacock.service. Then your Fabric script would look like:
server_conn = fabric.Connection('1.2.3.4')
with server_conn.cd('/root/peacock'):
    server_conn.run('systemctl start peacock.service')
Later you can check the status of this service. You will also be able to access its logs with journalctl -u peacock.
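For example, from an interactive shell on the server, something like this should work once the unit file above is in place:
systemctl enable peacock.service
systemctl status peacock.service
journalctl -u peacock -f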
The essence:
I have created a daemon to manage some tasks on a remote platform.
It is written in python and accepts start, stop and restart arguments.
While trying to add it to systemd (so it would start on system startup and be stopped on shutdown, etc.) I encountered a problem:
systemd seems to see the daemon running, but I am not sure if it actually works, because restarting it or requesting its status returns an error:
[user@centos ~]# systemctl restart mydaemon
Failed to restart mydaemon.service: Unit mydaemon.service failed to load: No such file or directory.
[user@centos ~]# systemctl status mydaemon
● mydaemon.service
Loaded: not-found (Reason: No such file or directory)
Active: inactive (dead)
The specifics:
The code itself follows the well-known example by Sander Marechal with very few changes. By itself it works without any problems, and properly reacts to all accepted arguments. The pid is saved in /tmp/my-daemon.pid.
The systemd service file is in the user daemons directory: /usr/lib/systemd/user/mydaemon.service, and the code is as follows:
[Unit]
Description=The user daemon
[Service]
Type=forking
ExecStart=/usr/bin/python /home/frcr/mydaemon_v01.py start
ExecStop=/usr/bin/python /home/frcr/mydaemon_v01.py stop
RestartSec=5
TimeoutSec=60
RuntimeMaxSec=infinity
Restart=always
PIDFile=/tmp/my-daemon.pid
[Install]
WantedBy=multi-user.target
systemctl reports its status as active, but only if given the PID:
[user@centos ~]# systemctl status 9177
● session-481.scope - Session 481 of user user
Loaded: loaded
Drop-In: /run/systemd/system/session-481.scope.d
└─50-After-systemd-logind\x2eservice.conf, 50-After-systemd-user-sessions\x2eservice.conf, 50-Description.conf, 50-SendSIGHUP.conf, 50-Slice.conf
Active: active (running) since Tue 2016-05-17 06:24:51 EDT; 1h 43min ago
CGroup: /user.slice/user-0.slice/session-481.scope
├─8815 sshd: root@pts/0
├─8817 -bash
├─9177 python /home/user/mydaemon_v01.py start
└─9357 systemctl status 9177
I have seen a similar question here on stack overflow, but it doesn't seem to have the solution to my problem.
I assume I am missing something very obvious due to my sheer lack of experience with systemd, and I'd be extremely grateful if somebody could point it out for me or show me the right direction. Thanks in advance, and please forgive my mad English skillz.
Enabling the daemon with a full path name worked around the issue, but there is a better solution.
The issue was that the service file was in a user directory but was started as a system service. However, /usr/lib is not the right place to add new service files anyway; that directory is for files shipped as part of operating system packages. The correct directory for a new system service is /etc/systemd/system. See the related docs about systemd paths.
You still want to enable the service to make sure it gets loaded at boot time.
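A possible sequence, assuming the unit file from the question is simply moved into the system directory, would be:
sudo mv /usr/lib/systemd/user/mydaemon.service /etc/systemd/system/mydaemon.service
sudo systemctl daemon-reload
sudo systemctl enable mydaemon.service
sudo systemctl start mydaemon.service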
After some additional googling I found a solution: I had actually forgotten to enable the daemon with systemctl:
[root@centos ~]# systemctl enable /usr/lib/systemd/user/mydaemon.service
Created symlink from /etc/systemd/system/multi-user.target.wants/mydaemon.service to /usr/lib/systemd/user/mydaemon.service.
Created symlink from /etc/systemd/system/mydaemon.service to /usr/lib/systemd/user/mydaemon.service.
It is also worth mentioning that the absolute path is required.
The only thing left is to reload the systemd configuration:
[root@centos ~]# systemctl daemon-reload
After that the service is added, and
[root@centos ~]# systemctl start mydaemon
[root@centos ~]# systemctl restart mydaemon
[root@centos ~]# systemctl stop mydaemon
all work perfectly.
I have a Python service that uses a background scheduler to run different tasks. It keeps our database up to date with other APIs using HTTP GETs and POSTs.
We had this running on Heroku without trouble. We recently moved our production box to OVH. When I run it using
python runtasks.py
everything works fine. When I run it in the background using
python runtasks.py &
or using this systemd unit file
[Unit]
...
[Service]
Restart=always
ExecStart=/usr/bin/python /path/to/python/file/runtasks.py
[Install]
WantedBy=multi-user.target
The process runs without errors; however, it does not keep working for more than an hour.
I've used journalctl to get the process logs:
Aug 10 13:10:23 ns504338.ip-192-99-1.net bash[23763]:
WARNING:apscheduler.scheduler:Execution of job "Foobar.run
(trigger: interval[0:00:20], next run at: 2015-08-10 13:10:23 EDT)"
skipped: maxim
Aug 10 13:10:43 ns504338.ip-192-99-1.net bash[23763]:
WARNING:apscheduler.scheduler:Execution of job "Foobar.run
(trigger: interval[0:00:20], next run at: 2015-08-10 13:10:43 EDT)"
skipped: maxim
Aug 10 13:11:03 ns504338.ip-192-99-1.net bash[23763]:
WARNING:apscheduler.scheduler:Execution of job "Foobar.run
(trigger: interval[0:00:20], next run at: 2015-08-10 13:11:03 EDT)"
skipped: maxim
...
These warnings also appear, though less frequently, when it is running correctly, and they are mixed in with other log messages.
My current assumption is a missing environment variable or a relative path. I'm currently investigating; any help is much appreciated!
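If it does turn out to be the environment, one option I am considering is pinning the working directory and any required variables directly in the unit file, roughly like this (MY_VAR is just a placeholder):
[Service]
Restart=always
WorkingDirectory=/path/to/python/file
Environment=MY_VAR=value
ExecStart=/usr/bin/python /path/to/python/file/runtasks.py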