I'm unsure if this actually is the problem, but let me explain: I have a Python script that gets started by a bash script. The bash script's job is done at that point, but when I grep the output of ps aux, the call is still present.
#!/bin/bash
export http_proxy='1.2.3.4:1234'
python -u /home/user/folder/myscript.py -some Parameters >> /folder/Logfile_stout.log 2>&1
If I run ps aux | grep python I get python -u /home/user/folder/myscript.py -some Parameters as a result. According to the logfile, the Python script closed properly. (The code to end the script is within the script itself.)
The script gets started every hour and I still see all the calls from the hours before.
Thanks in advance for your help, tips or advice!
The parent bash script will remain as long as the child (the Python script) is running.
If you start the Python script in the background (add & at the end of the python line), the parent will exit.
#!/bin/bash
export http_proxy='1.2.3.4:1234'
python -u /home/user/folder/myscript.py -some Parameters >> /folder/Logfile_stout.log 2>&1 &
If you examine the process list (e.g. ps -elf), it will show the child (if it is still running). The child's PPID (parent PID) will be 1 (the init process) instead of the parent's PID, because the parent no longer exists.
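For example, a quick way to confirm the re-parenting (1234 is a placeholder for the child's PID):
ps -o pid,ppid,cmd -p 1234   # the PPID column should show 1 once the parent has exited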
This could become a problem if your Python script never exits.
You could make the parent script wait and kill the child, e.g. wait 30 seconds and kill the child if it is still present:
#!/bin/bash
export http_proxy='1.2.3.4:1234'
python -u /home/user/folder/myscript.py -some Parameters >> /folder/Logfile_stout.log 2>&1 &
sleep 30
jobs        # list background jobs still attached to this shell
kill %1     # ask the child to terminate (SIGTERM)
kill -9 %1  # force-kill it in case it ignored the SIGTERM
jobs        # confirm the job is gone
I'm writing a stop routine for a start-up service script:
do_stop()
{
rm -f $PIDFILE
pkill -f $DAEMON || return 1
return 0
}
The problem is that pkill (same with killall) also matches the process representing the script itself and it basically terminates itself. How to fix that?
You can explicitly filter out the current PID from the results:
kill $(pgrep -f "$DAEMON" | grep -v "^$$\$")
To correctly use the -f flag, be sure to supply the whole path to the daemon rather than just a substring. That will prevent you from killing the script (and eliminate the need for the above grep) and also from killing all other system processes that happen to share the daemon's name.
pkill -f accepts a full-blown regex. So rather than pkill -f $DAEMON you should use:
pkill -f "^$DAEMON"
so that a process is killed only if its command line starts with the given daemon name.
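For example, assuming the daemon lives at the hypothetical path /usr/local/bin/progd:
# the anchored regex plus the full path matches only command lines starting with the daemon's path
pkill -f "^/usr/local/bin/progd"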
A better solution is to save the PID (process ID) of the process in a file when you start it. To stop the process, just read the file to get the process ID to be stopped/killed.
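A minimal sketch of that PID-file approach (the paths and the progd name are placeholders):
#!/bin/bash
PIDFILE=/var/run/progd.pid

# start: launch the daemon in the background and record its PID
/usr/local/bin/progd &
echo $! > "$PIDFILE"

# stop: read the recorded PID and kill exactly that process
kill "$(cat "$PIDFILE")" && rm -f "$PIDFILE"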
Judging by your question, you're not hard over on using pgrep and pkill, so here are some other options commonly used.
1) Use killproc from /etc/init.d/functions or /lib/lsb/init-functions (whichever is appropriate for your distribution and version of Linux). If you're writing a service script, you may already be including this file if you used one of the other services as an example.
Usage: killproc [-p pidfile] [ -d delay] {program} [-signal]
The main advantage of using this is that it sends SIGTERM, waits to see whether the process terminates, and sends SIGKILL only if necessary.
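For example (the pidfile path and program name are illustrative):
killproc -p /var/run/progd.pid /usr/local/bin/progd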
2) You can also use the secret sauce of killproc, which is to find the process IDs to kill using pidof, which has a -o option for excluding a particular process. The argument to -o can be $$, the current process ID, or %PPID, a special value that pidof interprets as the parent of the pidof call, i.e. the calling shell or script. Finally, if the daemon is a script, you'll need -x so that you're killing the script by its name rather than killing bash or python.
for pid in $(pidof -o %PPID -x progd); do
    kill -TERM "$pid"
done
You can see an example of this in the article Bash: How to check if your script is already running
I know how to check whether some Python process is running. I am trying to write a script that checks whether a Python script is running and, if it is not, reruns it.
What I have right now is:
import os
stream = os.popen("ps aux | grep combined.py")
output = stream.read()
print(output[0])
The problem is that I can't get the specific process ID this way, because output is one long string, not a dict where I could get the PID with output["PID"] to check whether a PID is present.
How would I implement such script?
In a bash script:
#!/bin/bash
pid=$(ps -ef | grep combined.py | grep -v grep | awk '{print $2}')
echo $pid
You can use crontab to run the bash script every few minutes and check whether the Python process is running.
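If you would rather do the whole check from Python, here is a minimal sketch built on pgrep -f (the script paths are placeholders):
import subprocess

# pgrep -f exits with 0 and prints matching PIDs if the process exists,
# and exits with 1 (no output) if nothing matches
result = subprocess.run(["pgrep", "-f", "combined.py"],
                        capture_output=True, text=True)

if result.returncode != 0:
    # not running: start it again in the background
    subprocess.Popen(["python", "/home/user/combined.py"])
else:
    print("already running with PID(s):", result.stdout.split())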
I want to make sure my python script is always running, 24/7. It's on a Linux server. If the script crashes I'll restart it via cron.
Is there any way to check whether or not it's running?
Taken from this answer:
A bash script which starts your Python script and restarts it if it doesn't exit normally:
#!/bin/bash
until ./script.py; do
    echo "'script.py' exited with code $?. Restarting..." >&2
    sleep 1
done
Then just start the monitor script in background:
nohup ./script_monitor.sh &
Edit for multiple scripts:
Monitor script:
cat script_monitor.sh
#!/bin/bash
until ./script1.py
do
    echo "'script1.py' exited with code $?. Restarting..." >&2
    sleep 1
done &
until ./script2.py
do
    echo "'script2.py' exited with code $?. Restarting..." >&2
    sleep 1
done &
Example scripts:
cat script1.py
#!/usr/bin/python
import time
while True:
    print 'script1 running'
    time.sleep(3)
cat script2.py
#!/usr/bin/python
import time
while True:
    print 'script2 running'
    time.sleep(3)
Then start the monitor script:
./script_monitor.sh
This starts one monitor script per python script in the background.
Try this, substituting your script's name (note that the grep command itself will also appear in the output; you can filter it out with grep -v grep):
ps aux | grep SCRIPT_NAME
Create a script (say check_process.sh) which will do the following (a sketch appears after these steps):
Find the process ID of your Python script by using the ps command.
Save it in a variable, say pid.
Create an infinite loop. Inside it, search for your process. If it is found, sleep for 30 or 60 seconds and check again.
If the PID is not found, exit the loop and send a mail to your address saying that the process is not running.
Now call check_process.sh with nohup so it runs in the background continuously.
I implemented it way back and remember it worked fine.
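A minimal sketch of such a check_process.sh, assuming pgrep and a working mail command are available (the script name and address are placeholders; pgrep is a shortcut for the ps-and-grep check described above):
#!/bin/bash
# check every 60 seconds that combined.py is still running
while true; do
    pid=$(pgrep -f combined.py)
    if [ -z "$pid" ]; then
        echo "combined.py is not running" | mail -s "process down" you@example.com
        break
    fi
    sleep 60
done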
You can use
runit
supervisor
monit
systemd
Do not hack this with a script
upstart, on older Ubuntu releases, will monitor your process and restart it if it crashes; on current releases systemd does the same. No need to reinvent this.
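For instance, a minimal systemd unit that restarts the script on crash could look like this (the paths and names are illustrative, not a drop-in file):
# /etc/systemd/system/myscript.service
[Unit]
Description=My Python script

[Service]
ExecStart=/usr/bin/python3 -u /home/user/myscript.py
Restart=on-failure

[Install]
WantedBy=multi-user.target
Enable and start it with systemctl enable --now myscript.service.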
I usually use:
nohup python -u myscript.py &> ./mylog.log & # or should I use nohup 2>&1 ? I never remember
to start a background Python process that I'd like to continue running even if I log out, and:
ps aux |grep python
# check for the relevant PID
kill <relevantPID>
It works, but it's annoying to do all these steps.
I've read about some methods in which you need to save the PID in some file, but that's even more hassle.
Is there a clean method to easily start / stop a Python script? like:
startpy myscript.py # will automatically continue running in
# background even if I log out
# two days later, even if I logged out / logged in again the meantime
stoppy myscript.py
Or could this long part nohup python -u myscript.py &> ./mylog.log & be written in the shebang of the script, such that I could start the script easily with ./myscript.py instead of writing the long nohup line?
Note : I'm looking for a one or two line solution, I don't want to have to write a dedicated systemd service for this operation.
As far as I know, there are just two (or maybe three or maybe four?) solutions to the problem of running background scripts on remote systems.
1) nohup
nohup python -u myscript.py > ./mylog.log 2>&1 &
1 bis) disown
Same as above, but slightly different: it actually removes the program from the shell's job list, preventing the SIGHUP from being sent.
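For example:
python -u myscript.py > mylog.log 2>&1 &
disown   # drop the most recent background job from the job table so no SIGHUP reaches it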
2) screen (or tmux as suggested by neared)
Here you will find a starting point for screen.
See this post for a great explanation of how background processes work. Another related post.
3) Bash
Another solution is to write two bash functions that do the job:
mynohup () {
    [[ "$1" = "" ]] && echo "usage: mynohup python_script" && return 0
    nohup python -u "$1" > "${1%.*}.log" 2>&1 < /dev/null &
}

mykill() {
    ps -ef | grep "$1" | grep -v grep | awk '{print $2}' | xargs kill
    echo "process $1 killed"
}
Just put the above functions in your ~/.bashrc or ~/.bash_profile and use them as normal bash commands.
Now you can do exactly what you asked for:
mynohup myscript.py # will automatically continue running in
# background even if I log out
# two days later, even if I logged out / logged in again the meantime
mykill myscript.py
4) Daemon
This daemon module is very useful:
python myscript.py start
python myscript.py stop
Do you mean logging in and out remotely (e.g. via SSH)? If so, a simple solution is to install tmux (a terminal multiplexer). It creates a server for terminals that run underneath it as clients. You open tmux with tmux, type in your command, press CONTROL+B, then D to 'detach' from tmux, and then type exit at the main terminal to log out. When you log back in, tmux and the processes running in it will still be running.
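The whole cycle, as commands (the session name is arbitrary):
tmux new -s mysession         # start a new named session
python -u myscript.py         # run your script inside it
# press CONTROL+B, then D to detach; the script keeps running
tmux attach -t mysession      # later, reattach to the session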
In Bash we can run programs like this
./mybin && echo "OK"
and this
./mybin || echo "fail"
But how do we attach success or fail commands to an existing process?
Edit for explanation:
Now we are running ./mybin.
When mybin exits, with or without an error, nothing happens; but while mybin is running,
how can I achieve the same effect as if I had started the process with ./mybin && echo ok || echo fail?
Solutions in both shell script and Python would be awesome, thanks!
But how do we attach success or fail commands to an existing, already running process?
Assuming you're referring to a command that is run in the background, you can use wait to wait for that command to finish and retrieve its return status. For example:
./command & # run in background
wait $! && echo "completed successfully"
Here I used $! to get the PID of the backgrounded process, but it can be the PID of any child process of the shell.
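If you start the process from Python in the first place, the equivalent is to wait on the child and inspect its exit status (a sketch; ./mybin stands for your binary):
import subprocess

# start the process, then wait for it and check the exit status,
# mirroring `./mybin && echo ok || echo fail` in the shell
proc = subprocess.Popen(["./mybin"])
if proc.wait() == 0:
    print("ok")
else:
    print("fail")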