How to check whether or not a Python script is up?

I want to make sure my python script is always running, 24/7. It's on a Linux server. If the script crashes I'll restart it via cron.
Is there any way to check whether or not it's running?

Taken from this answer:
A bash script that starts your Python script and restarts it if it exits abnormally (with a non-zero status):
#!/bin/bash
until ./script.py; do
    echo "'script.py' exited with code $?. Restarting..." >&2
    sleep 1
done
Then just start the monitor script in the background:
nohup script_monitor.sh &
Edit for multiple scripts:
Monitor script:
cat script_monitor.sh
#!/bin/bash
until ./script1.py; do
    echo "'script1.py' exited with code $?. Restarting..." >&2
    sleep 1
done &
until ./script2.py; do
    echo "'script2.py' exited with code $?. Restarting..." >&2
    sleep 1
done &
Example scripts:
cat script1.py
#!/usr/bin/python
import time
while True:
    print('script1 running')
    time.sleep(3)
cat script2.py
#!/usr/bin/python
import time
while True:
    print('script2 running')
    time.sleep(3)
Then start the monitor script:
./script_monitor.sh
This starts one monitor script per python script in the background.

Try this, substituting your script's name:
ps aux | grep SCRIPT_NAME
Note that grep will also match its own command line; pgrep -f SCRIPT_NAME avoids that.

Create a script (say check_process.sh) which will:
1. Find the process id of your Python script using the ps command and save it in a variable, say pid.
2. Run an infinite loop. Inside it, search for your process; if found, sleep for 30 or 60 seconds and check again.
3. If the pid is not found, exit the loop and send a mail to your mail id saying that the process is not running (see the sketch below).
Now run check_process.sh with nohup so it keeps running in the background continuously.
I implemented it way back and remember it worked fine.
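For illustration, a minimal Python sketch of that same loop (the script name, interval, mail command, and address are all placeholders; the original answer used a shell script built on ps):
import subprocess
import time

SCRIPT = "myscript.py"   # placeholder: name of the script to watch
INTERVAL = 60            # seconds between checks

def is_running(name):
    # pgrep -f exits with status 0 if any process command line matches
    return subprocess.call(["pgrep", "-f", name],
                           stdout=subprocess.DEVNULL) == 0

while is_running(SCRIPT):
    time.sleep(INTERVAL)

# The loop exited, so the process is gone: send the alert mail.
subprocess.run(["mail", "-s", "process not running", "you@example.com"],
               input=b"myscript.py is no longer running\n")
Start it with nohup, as the answer suggests: nohup python3 check_process.py &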

You can use
runit
supervisor
monit
systemd (I think)
Do not hack this with a script

upstart, on Ubuntu, will monitor your process and restart it if it crashes. I believe systemd will do that too. No need to reinvent this.
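For reference, a minimal systemd service of that kind might look like this (the unit name and paths are hypothetical, not from the answer):
[Unit]
Description=Keep my Python script running

[Service]
ExecStart=/usr/bin/python3 /home/user/script.py
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
Saved as /etc/systemd/system/myscript.service, it would be enabled with systemctl enable --now myscript.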


What is necessary to daemonize a Python script?

I have a Python script that needs to run as a daemon on startup. The process detaches from the tty (and pdb), but the code doesn't run.
I've narrowed it down to a minimal example:
import daemon
from time import sleep
f1 = open('out.txt', 'a')
with daemon.DaemonContext():
    while 1:
        f1.write('this is a test')
        sleep(5)
I expect the script to keep running and add a line to out.txt every 5 seconds, but the script just detaches from the tty (or pdb) and ps -ax shows that the Python interpreter isn't running anymore. out.txt is created, but stays empty.
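A side note on that symptom, separate from the answer below: by default DaemonContext closes open files when it daemonizes, so the first write raises an exception inside the detached process, which would explain both the vanished interpreter and the empty out.txt. python-daemon documents a files_preserve option for this; a sketch of the example with it (plus an explicit flush, since the loop never exits and the buffer would otherwise never reach disk):
import daemon
from time import sleep

f1 = open('out.txt', 'a')
# files_preserve keeps f1 open across daemonization
with daemon.DaemonContext(files_preserve=[f1]):
    while True:
        f1.write('this is a test\n')
        f1.flush()   # the loop never exits, so flush each write explicitly
        sleep(5)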
You may want to use a process supervisor.
To simplify the process and have a portable solution that doesn't depend on, for example, systemd (Linux only), you could install immortal. In FreeBSD you just need to do:
pkg install immortal
Then create a test.yml with something like this:
cmd: sleep 3
And daemonize it with:
$ immortal -c test.yml
To check the status you could use immortalctl:
$ immortalctl
PID    Up    Down  Name  CMD
29993  0.0s        test  sleep 3
If you want to have it always up, even after rebooting, just move your yml file (in FreeBSD) to /usr/local/etc/immortal/your-script.yml; check more about immortaldir.
You can add more options, for example:
cmd: iostat 3
log:
file: /tmp/iostat.log
age: 10 # seconds
num: 7 # int
size: 1 # MegaBytes
require_cmd: test -f /tmp/foo
For more examples check: https://immortal.run/post/run.yml/
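Applied to the question's always-on script, the yml could be as simple as (path and log file hypothetical):
cmd: /usr/bin/python /path/to/your-script.py
log:
  file: /tmp/your-script.log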

Running program/function in background in Python

I'm trying to run a program/function (whichever is possible) from another Python program, such that the caller starts it and then closes.
Example: the Python script "parent.py" will be called with some input; that input data will be passed to a function/program "child", which will take around 30 minutes to complete. parent.py should close right after calling "child".
Is there any way to do that?
Right now I'm able to start the child, but the parent closes only after the child completes, which I want to avoid.
As I understand it, your goal is to start a background process in subprocess but not have the main program wait for it to finish. Here is an example program that does that:
$ cat script.py
import subprocess
subprocess.Popen("sleep 3; echo 'Done!';", shell=True)
Here is an example of that program in operation:
$ python script.py
$
$
$ Done!
As you can see, the shell script continued to run after python exited.
subprocess has many options and you will want to customize the subprocess call to your needs.
In some cases, a child process that lives after its parent exits may leave a zombie process. For instructions on how to avoid that, see here.
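Applied to the question's parent.py/child setup, a sketch might look like this (child.py and the log path are placeholders; start_new_session=True is a Python 3 Popen option that puts the child in its own session, so it keeps running after parent.py exits):
# parent.py
import subprocess
import sys

with open("child.log", "w") as log:                  # don't inherit the parent's terminal
    subprocess.Popen([sys.executable, "child.py"],   # the ~30-minute job
                     stdout=log, stderr=subprocess.STDOUT,
                     start_new_session=True)         # survives the parent's exit
# parent.py can now return and exit; child.py keeps running.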
The alternative: making subprocess wait
If you want the opposite to happen with python waiting for subprocess to complete, look at this program:
$ cat script.py
import subprocess
p = subprocess.Popen("sleep 3; echo 'Done!';", shell=True)
p.wait()
Here is an example of its output:
$ python script.py
Done!
$
$
As you can see, because we called wait, python waited until subprocess completed before exiting.
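As an aside (not part of the original answer): in Python 3.5+ this run-and-wait case is usually written with subprocess.run, which blocks until the command finishes:
import subprocess

# run() starts the command and waits for it to complete
subprocess.run("sleep 3; echo 'Done!';", shell=True)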
To run a function in the background of the calling process, the easiest solution is a thread. Note, though, that a thread lives inside the parent process and will not outlive it, so for the question's requirement that the parent close immediately, a separate process (as above) is the safer route. With the threading module:
import threading

def yourFunction(param):
    print('yourFunction was called')

threading.Thread(target=yourFunction, args=(param,)).start()

Bash script that starts python script doesn't stop itself

I'm unsure if this actually is the problem, but let me explain: I have a Python script that gets started by a bash script. The bash script's job is done at that point, but when I grep the ps aux output, the call is still present.
#!/bin/bash
export http_proxy='1.2.3.4:1234'
python -u /home/user/folder/myscript.py -some Parameters >> /folder/Logfile_stout.log 2>&1
If I run ps aux | grep python I get python -u /home/user/folder/myscript.py -some Parameters as a result. According to the logfile, the Python script closed properly. (The code to end the script is within the script itself.)
The script gets started every hour and I still see all the calls from the hours before.
Thanks in advance for your help, tips or advice!
The parent bash script will remain as long as the child (python script) is running.
If you start the Python script in the background (add & at the end of the python line), then the parent will exit.
#!/bin/bash
export http_proxy='1.2.3.4:1234'
python -u /home/user/folder/myscript.py -some Parameters >> /folder/Logfile_stout.log 2>&1 &
If you examine the process list (e.g. ps -elf), it will show the child (if still running). The child's PPID (parent PID) will be 1 (init's PID) instead of the parent's PID, because the parent doesn't exist any more.
It could eventually be a problem if your python script never exits.
You could make the parent script wait and kill the child, e.g. wait 30 seconds and kill the child if it is still present:
#!/bin/bash
export http_proxy='1.2.3.4:1234'
python -u /home/user/folder/myscript.py -some Parameters >> /folder/Logfile_stout.log 2>&1 &
sleep 30
jobs           # list background jobs; the python script is job %1
kill %1        # ask it to terminate (SIGTERM)
kill -9 %1     # force-kill it if it ignored SIGTERM
jobs           # confirm it is gone

Why won't this Python script run as a startup application in Ubuntu 12.04?

I've written this watchdog script to monitor VLC player and kill it when playback has stopped, because VLC continues to inhibit the power management daemon after playback. The script works: I can run it from the command line or through IDLE and it kills VLC when playback stops.
I've added many variations of the command to start the script to my Startup Applications as described here, but when I reboot, if it is running at all, it stops as soon as VLC starts. Restarting it from a terminal causes it to stay running and do what it is supposed to do. I don't know if this is a problem with the script or something peculiar about Ubuntu Startup Applications (although I'm leaning towards Ubuntu). Maybe something to do with permissions? (Although I did chmod +x.) Should I be executing some other commands to make sure DBus is up before I launch the script?
Part of me thinks that something isn't fully loaded when the script starts, so I tried sleeping before launching using the *nix sleep command, the X-GNOME-Autostart-Delay, and time.sleep(n) in the Python code. The pythonic way seems to have the best chance of success, while the *nix ways seem to only make startup take longer, and at the end of it I find that the process isn't even running. I'm using the python-setproctitle module to name the process so I can quickly see if it is running with a ps -e from a terminal. I'm out of ideas and about ready to just manually run the script whenever I reboot (although in principle the machine should do it for me because I told it to). Some variations of Startup Application command lines that I've tried are:
/path/to/script/vlc_watchdog.py
"/path/to/script/vlc_watchdog.py"
/path/to/script/vlc_watchdog.py &
"/path/to/script/vlc_watchdog.py &"
python /path/to/script/vlc_watchdog.py
python /path/to/script/vlc_watchdog.py &
"python /path/to/script/vlc_watchdog.py"
"python /path/to/script/vlc_watchdog.py &"
bash -c "/path/to/script/vlc_watchdog.py"
sleep 30 ; /path/to/script/vlc_watchdog.py
sleep 30 && /path/to/script/vlc_watchdog.py
etc...
Full script:
#!/usr/bin/env python
import time
time.sleep(30)

import dbus
import os
import subprocess
from subprocess import Popen, PIPE
import daemon
import setproctitle

setproctitle.setproctitle('VLC-Watchdog')

sleeptime = 5

def vlc_killer():
    bus = dbus.SessionBus()
    vlc_media_player_obj = bus.get_object("org.mpris.MediaPlayer2.vlc",
                                          "/org/mpris/MediaPlayer2")
    props_iface = dbus.Interface(vlc_media_player_obj,
                                 'org.freedesktop.DBus.Properties')
    pb_stat = props_iface.Get('org.mpris.MediaPlayer2.Player', 'PlaybackStatus')
    if pb_stat == 'Stopped':
        os.system("kill -9 $(pidof vlc)")
    else:
        time.sleep(sleeptime)

def vlc_is_running():
    ps = subprocess.Popen(['ps', '-e'], stdout=PIPE)
    out, err = ps.communicate()
    for line in out.splitlines():
        if 'vlc' in line:
            return True
    return False

def run():
    while True:
        if vlc_is_running():
            vlc_killer()
        else:
            time.sleep(sleeptime)

with daemon.DaemonContext():
    run()
In the shell script that starts your Python code (the one in the Ubuntu startup/initialization process), use something like:
#!/bin/sh
set -x
exec > /tmp/errors.out 2>&1
/path/to/script/vlc_watchdog.py
Then after things go awry again (that is, after another reboot), inspect /tmp/errors.out to see the error messages related to whatever happened. There should be a Python traceback in there, or at least a shell error.

attach existing process for success or fail commands?

In Bash we can run programs like this
./mybin && echo "OK"
and this
./mybin || echo "fail"
But how do we attach success or fail commands to an existing process?
Edit for explanation:
Now we are running ./mybin.
When mybin exits, with or without an error, nothing will happen; but while mybin is still running, how can I achieve the same effect as if I had started it with ./mybin && echo ok || echo fail?
A solution in both shell script and Python would be awesome, thanks!
But how do we attach success or fail commands to an existing, already running process?
Assuming you're referring to a command that is run in the background, you can use wait to wait for that command to finish and retrieve its return status. For example:
./command & # run in background
wait $! && echo "completed successfully"
Here I used $! to get the PID of the backgrounded process, but it can be the PID of any child process of the shell.
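Since the question also asks for a Python way: one sketch uses the third-party psutil package (an assumption, not mentioned in the answer above). For your own child process, wait() returns the real exit code; for an unrelated PID it can only block until the process disappears and then returns None, because POSIX exposes the exit status only to the parent:
import psutil  # third-party: pip install psutil

pid = 12345                        # hypothetical PID of the already-running ./mybin
ret = psutil.Process(pid).wait()   # block until that process exits
if ret == 0:
    print("ok")
elif ret is None:
    print("process exited, but its status is unavailable (not our child)")
else:
    print("fail")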
