I have a Python script that needs to run as a daemon on startup. The process detaches from the tty (and pdb), but the code doesn't run.
I've narrowed it down to a minimal example:
import daemon
from time import sleep

f1 = open('out.txt', 'a')
with daemon.DaemonContext():
    while True:
        f1.write('this is a test')
        sleep(5)
I expect the script to keep running and append a line to out.txt every 5 seconds, but it just detaches from the tty (or pdb), and ps -ax shows that the Python interpreter isn't running anymore. out.txt is created, but stays empty.
You may want to use a process supervisor.
To simplify the process and have a portable solution that doesn't depend on, for example, systemd (Linux only), you could install immortal. On FreeBSD you just need to do:
pkg install immortal
Then create a test.yml with something like this:
cmd: sleep 3
And daemonize it with:
$ immortal -c test.yml
To check the status you could use immortalctl:
$ immortalctl
PID    Up    Down  Name  CMD
29993  0.0s        test  sleep 3
If you want to have it always up, even after a reboot, just move your script's config (on FreeBSD) to /usr/local/etc/immortal/your-script.yml; see the immortaldir documentation for more.
You can add more options, for example:
cmd: iostat 3
log:
    file: /tmp/iostat.log
    age: 10  # seconds
    num: 7   # int
    size: 1  # MegaBytes
require_cmd: test -f /tmp/foo
For more examples check: https://immortal.run/post/run.yml/
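Applied to the original question, a run.yml for the looping script might look like the following sketch; the interpreter and script paths are assumptions:

```yaml
# Hypothetical config; adjust the interpreter and script paths to your setup
cmd: /usr/bin/python3 /home/user/myscript.py
log:
    file: /var/log/myscript.log
```

immortal then handles the daemonization and restarts, so the script itself no longer needs daemon.DaemonContext at all.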
I wrote a Python program and its Dockerfile:
import time
print("Begin")
time.sleep(100)
print("End")
The image was created, and it was run using docker run <image-id>. The behaviour that surprises me is that,
after giving the run command in the console, it waits for sleep(100) seconds and then prints "Begin" and "End" together.
Why are we not getting the intermediate results while running it?
Also, how can I write a streaming app (in Kafka or similar) in this manner if it won't send the data immediately after producing it?
When you run your Python script from the console, it displays Begin on stdout right away because stdout is a tty (interactive) and is flushed at the end of each line. But if you redirect stdout and stdin, like so: python /tmp/a.py < /dev/null | cat, the Python script will not detect a tty and will only flush when it completes.
If you run the same script from a Docker container, it does not have a tty by default; you have to explicitly ask for one with --tty, -t (allocate a pseudo-TTY):
docker run -t yourimage
Alternatively, if you do not want the container to run with a tty, you can force the flush to happen regardless by setting the PYTHONUNBUFFERED environment variable, by adding the -u option to the Python interpreter, or by modifying your script like so:
import sys
import time
print("Begin")
sys.stdout.flush()
time.sleep(100)
print("End")
or with the flush keyword argument of print (Python 3 only):
import time
print("Begin", flush=True)
time.sleep(100)
print("End")
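For the Docker case specifically, the environment-variable route can be baked into the image. This is a sketch with an assumed script path:

```dockerfile
FROM python:3
# Disable Python's output buffering so prints appear immediately
ENV PYTHONUNBUFFERED=1
COPY a.py /tmp/a.py
CMD ["python", "/tmp/a.py"]
```

With this, docker run prints "Begin" right away even without -t.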
When printing to stdout, the output is buffered and there is no guarantee it will be written immediately.
What is guaranteed is that when the file descriptor is closed the buffer will be flushed (this is why you get the output when the Docker container exits).
To make sure the output is flushed, add the following code after any important print:
import sys
sys.stdout.flush()
I want to make sure my python script is always running, 24/7. It's on a Linux server. If the script crashes I'll restart it via cron.
Is there any way to check whether or not it's running?
Taken from this answer:
A bash script which starts your python script and restarts it if it didn't exit normally :
#!/bin/bash
until ./script.py; do
    echo "'script.py' exited with code $?. Restarting..." >&2
    sleep 1
done
Then just start the monitor script in background:
nohup script_monitor.sh &
Edit for multiple scripts:
Monitor script:
cat script_monitor.sh
#!/bin/bash
until ./script1.py; do
    echo "'script1.py' exited with code $?. Restarting..." >&2
    sleep 1
done &
until ./script2.py; do
    echo "'script2.py' exited with code $?. Restarting..." >&2
    sleep 1
done &
example scripts:
cat script1.py
#!/usr/bin/env python3
import time
while True:
    print('script1 running')
    time.sleep(3)
cat script2.py
#!/usr/bin/env python3
import time
while True:
    print('script2 running')
    time.sleep(3)
Then start the monitor script:
./script_monitor.sh
This starts one monitor script per python script in the background.
Try this, with your script name filled in (the brackets stop grep from matching its own process):
ps aux | grep '[S]CRIPT_NAME'
Create a script (say check_process.sh) which will:
1. Find the process id of your python script using the ps command and save it in a variable, say pid.
2. Create an infinite loop. Inside it, search for your process. If found, sleep for 30 or 60 seconds and check again.
3. If the pid is not found, exit the loop and send a mail to your mail id saying that the process is not running.
Now start check_process.sh with nohup so it runs continuously in the background.
I implemented it way back and remember it worked fine.
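A minimal sketch of such a check_process.sh, assuming pgrep and a working mail setup are available; the script name and mail address are placeholders:

```shell
#!/bin/bash
SCRIPT_NAME="myscript.py"          # placeholder: the script to watch
MAIL_TO="admin@example.com"        # placeholder: where to send the alert

# Returns success if a process whose command line matches $1 exists
is_running() {
    pgrep -f "$1" > /dev/null
}

monitor() {
    while is_running "$SCRIPT_NAME"; do
        sleep 60                   # still alive; check again in a minute
    done
    # The process disappeared: notify by mail
    echo "$SCRIPT_NAME is not running" | mail -s "process down" "$MAIL_TO"
}

# Run it in the background, immune to hangups:
#   nohup ./check_process.sh &
```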
You can use
runit
supervisor
monit
systemd (I think)
Do not hack this with a script
upstart, on Ubuntu, will monitor your process and restart it if it crashes. I believe systemd will do that too. No need to reinvent this.
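For illustration, a minimal systemd service unit with automatic restarts might look like this; the unit name, paths, and interpreter are hypothetical:

```ini
[Unit]
Description=Keep my Python script running
After=network.target

[Service]
ExecStart=/usr/bin/python3 /home/user/myscript.py
Restart=always
RestartSec=1

[Install]
WantedBy=multi-user.target
```

Installed (e.g. as /etc/systemd/system/myscript.service) and enabled with systemctl enable --now myscript, systemd starts it at boot and restarts it whenever it exits.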
I'm trying to run a program or function (whichever is possible) from another Python program, which should start it and then close.
For example: the Python script parent.py will be called with some input; that input data will be passed to a function/program "child", which will take around 30 minutes to complete. parent.py should close after calling "child".
Is there any way to do that?
Right now I'm able to start the child, but the parent closes only after the child completes, which I want to avoid.
As I understand it, your goal is to start a background process in subprocess but not have the main program wait for it to finish. Here is an example program that does that:
$ cat script.py
import subprocess
subprocess.Popen("sleep 3; echo 'Done!';", shell=True)
Here is an example of that program in operation:
$ python script.py
$
$
$ Done!
As you can see, the shell script continued to run after python exited.
subprocess has many options and you will want to customize the subprocess call to your needs.
In some cases, a child process that lives after its parent exits may leave a zombie process. For instructions on how to avoid that, see here.
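As a hedged sketch (POSIX, Python 3): starting the child in its own session and detaching its standard streams is one common way to let it run on cleanly after the parent exits; "sleep 30" here stands in for the real long-running child:

```python
import subprocess

# "sleep 30" is a placeholder for the real long-running child command
child = subprocess.Popen(
    ["sleep", "30"],
    start_new_session=True,        # detach into a new session (POSIX only)
    stdin=subprocess.DEVNULL,      # don't hold on to the parent's streams
    stdout=subprocess.DEVNULL,
    stderr=subprocess.DEVNULL,
)
print("started child with pid", child.pid)
# the parent can now exit; the child keeps running on its own
```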
The alternative: making subprocess wait
If you want the opposite to happen with python waiting for subprocess to complete, look at this program:
$ cat script.py
import subprocess
p = subprocess.Popen("sleep 3; echo 'Done!';", shell=True)
p.wait()
Here is an example of its output:
$ python script.py
Done!
$
$
As you can see, because we called wait, python waited until subprocess completed before exiting.
To run a function in the background of the current process, start it in a thread. This is the easiest solution and works like a charm! (Note that a plain, non-daemon thread will keep the parent process alive until the function returns.)
import threading

def yourFunction(param):
    print("yourFunction was called")

threading.Thread(target=yourFunction, args=(param,)).start()
My Python script takes multiple directories as input from the user, and I want the user to input the directories only once; the program should then run continuously over the same directories, even after the system boots, without asking the user again. I want to use a Supervisor configuration to set this up. Any help?
Make a cron job or create a service init script (see the manual for how to write an init script under the current Ubuntu version).
Doing a cron job:
EDITOR=nano crontab -e
@reboot cd /home/user/place_where_script_is && python3 myscript.py param1 param2
There's no convenient way for the user to input data to a startup script, mainly because such scripts run in a root environment.
A cron job always runs in the environment where you invoked crontab -e (hopefully the user's environment), but even then
you cannot interact with it, because it runs in a separate shell.
Unix socket?
In your script, add a listening socket on a unix socket.
In your .bashrc (not sure where Ubuntu keeps its post-X startup scripts), invoke callMyScript.py, which connects to the unix socket and sends instructions there.
This way you can interact with the cronjob/service script.
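A minimal sketch of that idea with Python's standard socket module; the socket path and the one-message-per-connection protocol are assumptions:

```python
import os
import socket

SOCK_PATH = "/tmp/myscript.sock"   # hypothetical rendezvous point

def serve(sock_path, handler):
    """Daemon side: listen on a unix socket and hand each message to handler."""
    if os.path.exists(sock_path):
        os.remove(sock_path)       # clear a stale socket from a previous run
    server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    server.bind(sock_path)
    server.listen(1)
    while True:
        conn, _ = server.accept()
        with conn:
            data = conn.recv(1024)
            if data:
                handler(data.decode())

def send(sock_path, message):
    """callMyScript.py side: connect and send one instruction."""
    client = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    client.connect(sock_path)
    client.sendall(message.encode())
    client.close()
```

The daemon would run serve() in its main loop (or a thread), while the .bashrc helper just calls send().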
Here's how you keep track of whether a service is dead or not.
PID files are the key:
#!/usr/bin/python3
import errno
import os
from os import getpid, remove
from os.path import isfile

pidfile = '/var/run/MyApplication.pid'

def pid_exists(pid):
    """Check whether pid exists in the current process table."""
    if pid < 0:
        return False
    try:
        os.kill(pid, 0)
    except OSError as e:
        return e.errno == errno.EPERM
    else:
        return True

if isfile(pidfile):
    with open(pidfile) as fh:
        thepid = fh.read()
    pidnr = int(thepid)
    if pid_exists(pidnr):
        exit(1)  # The previous instance is still running
    else:
        remove(pidfile)  # Prev instance is dead, remove pidfile

# Create a pid-file with the active PID written in it
with open(pidfile, 'w') as fh:
    fh.write(str(getpid()))

## Your code goes here...

remove(pidfile)
This way you can convert your cron job to look like:
EDITOR=nano crontab -e
* * * * * cd /home/user/place_where_script_is && python3 myscript.py param1 param2
This will run the script each minute, and if the script is dead or was never started, it will be started again.
The same goes for service status-all myapp if you've written an init script: the init script checks the PID file (you'd have to write this check in your init script yourself, much like the Python code above) to see whether the process is dead.
You can add it to your crontab
crontab -e
Add the following line, which will get executed when your computer boots up:
@reboot python /path/to/your/script.py
Make sure you have at least one empty line at the end.
As to restarting from where your script left off, you have to program that into your application logic.
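One common sketch of that application logic is a small checkpoint file; the file name and state keys here are made up for illustration:

```python
import json
import os

STATE_FILE = "state.json"          # hypothetical checkpoint location

def load_state(path=STATE_FILE):
    """Return the last saved state, or a fresh one on first run."""
    if os.path.exists(path):
        with open(path) as fh:
            return json.load(fh)
    return {"last_processed": 0}

def save_state(state, path=STATE_FILE):
    """Persist the state after every unit of work, so a restart can resume."""
    with open(path, "w") as fh:
        json.dump(state, fh)
```

The main loop loads the state at startup, resumes from state["last_processed"], and calls save_state after each item.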
I've written this watchdog script to monitor VLC player and kill it when playback has stopped, because VLC continues to inhibit the power management daemon after playback. The script works: I can run it from the command line or through IDLE and it kills VLC when playback stops.

I've added many variations of the command to start the script to my Startup Applications, as described here, but when I reboot, if it is running at all, it stops as soon as VLC starts. Restarting it from a terminal causes it to stay running and do what it is supposed to do. I don't know if this is a problem with the script or something peculiar about Ubuntu Startup Applications (although I'm leaning towards Ubuntu). Maybe something to do with permissions? (Although I did chmod +x.) Should I be executing some other commands to make sure DBus is up before I launch the script?

Part of me thinks that something isn't fully loaded when the script starts, so I tried sleeping before launching using the *nix sleep command, the X-GNOME-Autostart-Delay, and time.sleep(n) in the Python code. The pythonic way seems to have the best chance of success; the *nix ways seem to only make startup take longer, and at the end of it I find that the process isn't even running. I'm using the python-setproctitle module to name the process so I can quickly see if it is running with ps -e from a terminal.

I'm out of ideas and about ready to just manually run the script whenever I reboot (although in principle I think that the machine should do it for me because I told it to). Some variations of Startup Application command lines that I've tried are:
/path/to/script/vlc_watchdog.py
"/path/to/script/vlc_watchdog.py"
/path/to/script/vlc_watchdog.py &
"/path/to/script/vlc_watchdog.py &"
python /path/to/script/vlc_watchdog.py
python /path/to/script/vlc_watchdog.py &
"python /path/to/script/vlc_watchdog.py"
"python /path/to/script/vlc_watchdog.py &"
bash -c "/path/to/script/vlc_watchdog.py"
sleep 30 ; /path/to/script/vlc_watchdog.py
sleep 30 && /path/to/script/vlc_watchdog.py
etc...
Full script:
#!/usr/bin/env python
import time
time.sleep(30)

import dbus
import os
import subprocess
from subprocess import Popen, PIPE
import daemon
import setproctitle

setproctitle.setproctitle('VLC-Watchdog')
sleeptime = 5

def vlc_killer():
    bus = dbus.SessionBus()
    vlc_media_player_obj = bus.get_object("org.mpris.MediaPlayer2.vlc", "/org/mpris/MediaPlayer2")
    props_iface = dbus.Interface(vlc_media_player_obj, 'org.freedesktop.DBus.Properties')
    pb_stat = props_iface.Get('org.mpris.MediaPlayer2.Player', 'PlaybackStatus')
    if pb_stat == 'Stopped':
        os.system("kill -9 $(pidof vlc)")
    else:
        time.sleep(sleeptime)

def vlc_is_running():
    ps = subprocess.Popen(['ps', '-e'], stdout=PIPE)
    out, err = ps.communicate()
    for line in out.splitlines():
        if 'vlc' in line:
            return True
    return False

def run():
    while True:
        if vlc_is_running():
            vlc_killer()
        else:
            time.sleep(sleeptime)

with daemon.DaemonContext():
    run()
In the shell script that starts your Python code (the one in the Ubuntu startup/initialization process), use something like:
#!/bin/sh
set -x
exec > /tmp/errors.out 2>&1
/path/to/script/vlc_watchdog.py
Then after things go awry again (that is, after another reboot), inspect /tmp/errors.out to see the error messages related to whatever happened. There should be a Python traceback in there, or at least a shell error.