attach success or fail commands to an existing process? - python

In Bash we can run programs like this:
./mybin && echo "OK"
and like this:
./mybin || echo "fail"
But how do we attach success or fail commands to an existing process?
Edit for explanation:
./mybin is already running. When mybin exits, with or without error, nothing will happen. But while mybin is running, how can I achieve the same effect as if I had started the process with ./mybin && echo ok || echo fail?
Answers for both shell scripting and Python would be awesome, thanks!

But how do we attach success or fail commands to an existing, already running process?
Assuming you're referring to a command that is run in the background, you can use wait to wait for that command to finish and retrieve its return status. For example:
./command & # run in background
wait $! && echo "completed successfully"
Here I used $! to get the PID of the backgrounded process, but it can be the PID of any child process of the shell.
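In Python, a minimal sketch of both cases (assuming ./mybin is in the current directory; note that a non-parent cannot read another process's exit status, so for a process you did not start the best you can do is poll for its disappearance):
import os
import subprocess
import time

# Case 1: we start the process ourselves, so we can read its exit status,
# exactly like ./mybin && echo ok || echo fail.
proc = subprocess.Popen(["./mybin"])
print("ok" if proc.wait() == 0 else "fail")

# Case 2: attach to an already running process by PID (12345 is a
# hypothetical PID). A non-parent cannot read the exit code, so we
# can only poll until the process disappears.
pid = 12345
while True:
    try:
        os.kill(pid, 0)  # signal 0 probes for existence, sends nothing
    except ProcessLookupError:
        print("process %d has exited (exit status unknown)" % pid)
        break
    except PermissionError:
        pass  # process exists but belongs to another user
    time.sleep(1)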

Related

Restart python script automatically even when it crashes in Linux

I have a python program that has to be running all the time. If for some reason it is stopped, I want to restart it automatically. I thought of having a cron job that runs every n seconds and checks that the program is running. My shell script looks like this:
#!/usr/bin/env bash
CM_COMMAND=`ps aux| grep abc| grep def| grep sudo`
LEN_COMMAND=${#CM_COMMAND}
if[["$LEN_COMMAND" -le "5"]]
then
echo "start the python program"
fi
exit
When I run this script I am getting the error: my_prog.sh: line 4: $'if[[118\r -le 5]]\r': command not found'
What is an alternative way of doing this, and what is the problem with my script?
The immediate problems with your script are the Windows line endings (note the \r in the error message; convert the file with dos2unix) and the missing spaces around the test, which must read if [[ "$LEN_COMMAND" -le "5" ]]. As an alternative, maybe this would be more robust?
1) save the PID of your process when you start it with:
{your_python_command} & echo $! >>/{some_folder}/your_app.pid
2) This script will check and restart if it can't find the PID..
#!/usr/bin/env bash
PID=`cat /{some_folder}/your_app.pid`
if ! ps -p $PID > /dev/null
then
    rm /{some_folder}/your_app.pid
    {your_python_command} & echo $! >>/{some_folder}/your_app.pid
fi
3) To add it to a cronjob:
crontab -e
choose your text editor and add this row at the end of the file:
*/1 * * * * /{your_path}/{your_script_name}
exit and save
(this will run the script every minute, check crontab manual to set your exact interval)
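If you prefer to do the check from Python rather than bash, here is a rough sketch of the same check-and-restart logic ({some_folder} and your_command remain placeholders, as above):
import os
import subprocess

PIDFILE = "/{some_folder}/your_app.pid"  # placeholder path, as above

def pid_alive(pid):
    try:
        os.kill(pid, 0)  # signal 0: existence check only
    except ProcessLookupError:
        return False
    except PermissionError:
        return True  # the process exists but belongs to another user
    return True

try:
    with open(PIDFILE) as f:
        pid = int(f.read().split()[0])
except (FileNotFoundError, ValueError):
    pid = None

if pid is None or not pid_alive(pid):
    proc = subprocess.Popen(["your_command"])  # placeholder command
    with open(PIDFILE, "w") as f:
        f.write(str(proc.pid))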
How about making it a service? A very clean solution, in my opinion.
For more information on how to do it, you can read this article.

How to kill python script with bash script

I run a bash script which starts a python script in the background:
#!/bin/bash
python test.py &
So how can I kill the script with a bash script as well?
I used the following command to kill it, but the output was no process found:
killall $(ps aux | grep test.py | grep -v grep | awk '{ print $1 }')
I checked the running processes with ps aux | less and found that the running script has the command python test.py.
Please assist, thank you!
Use the pkill command:
pkill -f test.py
or, a more fool-proof way, use pgrep to search for the actual process id:
kill $(pgrep -f 'python test.py')
Or, if more than one instance of the running program is identified and all of them need to be killed, use killall(1) on Linux and BSD:
killall test.py
(Note that killall matches the process name, so this form only works if the script is executed directly via its shebang; a script started as python test.py shows up with the process name python.)
You can use $! to get the PID of the last background command.
I would suggest something like the following, which also checks whether the process you want to run is already running:
#!/bin/bash
if [[ ! -e /tmp/test.py.pid ]]; then   # Check if the PID file already exists;
    python test.py &                   #+if it does, do not start another process.
    echo $! > /tmp/test.py.pid
else
    echo -n "ERROR: The process is already running with pid "
    cat /tmp/test.py.pid
    echo
fi
Then, when you want to kill it:
#!/bin/bash
if [[ -e /tmp/test.py.pid ]]; then   # If the file does not exist, the process
    kill `cat /tmp/test.py.pid`      #+is not running, so there is nothing
    rm /tmp/test.py.pid              #+to kill.
else
    echo "test.py is not running"
fi
Of course if the killing must take place some time after the command has been launched, you can put everything in the same script:
#!/bin/bash
python test.py &                    # This does not check whether the command
echo $! > /tmp/test.py.pid          #+has already been executed, and it would
                                    #+misbehave if more than one instance were
sleep <number_of_seconds_to_wait>   #+started, since the pid file would be
                                    #+overwritten.
if [[ -e /tmp/test.py.pid ]]; then
    kill `cat /tmp/test.py.pid`
else
    echo "test.py is not running"
fi
If you want to run more commands with the same name simultaneously and be able to kill them selectively, a small edit of the script is needed. Tell me, and I will try to help!
With something like this you can be sure you are killing exactly what you want to kill. Commands like pkill, or grepping the output of ps aux, can be risky.
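And since the question is tagged python, a sketch of the same pid-file kill done from Python (assuming /tmp/test.py.pid was written as above):
import os
import signal

PIDFILE = "/tmp/test.py.pid"

try:
    with open(PIDFILE) as f:
        pid = int(f.read().strip())
except FileNotFoundError:
    print("test.py is not running")
else:
    try:
        os.kill(pid, signal.SIGTERM)  # the same polite kill as plain `kill`
    except ProcessLookupError:
        print("stale pid file; the process is already gone")
    os.remove(PIDFILE)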
ps -ef | grep python
This will return the PID; you can then kill the process with:
sudo kill -9 <pid>
Example output of the ps command:
user 13035 4729 0 13:44 pts/10 00:00:00 python
(here 13035 is the PID)
Using bashisms:
#!/bin/bash
python test.py &
kill $!
$! is the PID of the last process started in the background. You can also save it in another variable if you start multiple scripts in the background.
killall python3
will terminate (with SIGTERM) any and all processes named python3 that are running, not just your script.

How to check whether or not a python script is up?

I want to make sure my python script is always running, 24/7. It's on a Linux server. If the script crashes I'll restart it via cron.
Is there any way to check whether or not it's running?
Taken from this answer:
A bash script which starts your python script and restarts it if it did not exit normally:
#!/bin/bash
until ./script.py; do
    echo "'script.py' exited with code $?. Restarting..." >&2
    sleep 1
done
Then just start the monitor script in the background:
nohup script_monitor.sh &
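The same restart-until-clean-exit loop can also be written in Python, if you would rather not depend on bash (a minimal sketch; it assumes script.py sits next to the monitor):
import subprocess
import sys
import time

# Restart script.py until it exits normally (exit code 0),
# mirroring the bash `until` loop above.
while True:
    rc = subprocess.call([sys.executable, "script.py"])
    if rc == 0:
        break
    print("'script.py' exited with code %d. Restarting..." % rc,
          file=sys.stderr)
    time.sleep(1)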
Edit for multiple scripts:
Monitor script:
cat script_monitor.sh
#!/bin/bash
until ./script1.py; do
    echo "'script1.py' exited with code $?. Restarting..." >&2
    sleep 1
done &
until ./script2.py; do
    echo "'script2.py' exited with code $?. Restarting..." >&2
    sleep 1
done &
Example scripts:
cat script1.py
#!/usr/bin/python
import time
while True:
    print('script1 running')
    time.sleep(3)
cat script2.py
#!/usr/bin/python
import time
while True:
    print('script2 running')
    time.sleep(3)
Then start the monitor script:
./script_monitor.sh
This starts one monitor script per python script in the background.
Try this, substituting your script's name:
ps aux | grep SCRIPT_NAME
Create a script (say check_process.sh) which will:
1) Find the process id for your python script using the ps command.
2) Save it in a variable, say pid.
3) Run an infinite loop. Inside it, search for your process; if found, sleep for 30 or 60 seconds and check again.
4) If the pid is not found, exit the loop and send a mail to your mail id saying that the process is not running.
Now run check_process.sh with nohup so it keeps running in the background continuously.
I implemented it way back and remember it worked fine.
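A rough Python sketch of that watchdog idea (the script name is a placeholder, and the mail step is stubbed out with a print):
import subprocess
import time

SCRIPT_NAME = "my_script.py"  # hypothetical name of the script to watch

def is_running(name):
    # pgrep -f matches against the full command line; exit code 0 means a hit
    return subprocess.call(["pgrep", "-f", name],
                           stdout=subprocess.DEVNULL) == 0

while True:
    if not is_running(SCRIPT_NAME):
        print("process is not running")  # replace with your mail notification
        break
    time.sleep(30)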
You can use
runit
supervisor
monit
systemd (I think)
Do not hack this with a script
upstart, on Ubuntu, will monitor your process and restart it if it crashes. I believe systemd will do that too. No need to reinvent this.

Starting/stopping a background Python process without nohup + ps aux grep + kill

I usually use:
nohup python -u myscript.py &> ./mylog.log & # or should I use nohup 2>&1 ? I never remember
to start a background Python process that I'd like to continue running even if I log out, and:
ps aux |grep python
# check for the relevant PID
kill <relevantPID>
It works, but it's annoying to do all these steps.
I've read some methods in which you need to save the PID in some file, but that's even more hassle.
Is there a clean method to easily start / stop a Python script? like:
startpy myscript.py # will automatically continue running in
# background even if I log out
# two days later, even if I logged out / logged in again the meantime
stoppy myscript.py
Or could this long part nohup python -u myscript.py &> ./mylog.log & be written in the shebang of the script, such that I could start the script easily with ./myscript.py instead of writing the long nohup line?
Note : I'm looking for a one or two line solution, I don't want to have to write a dedicated systemd service for this operation.
As far as I know, there are just two (or maybe three or maybe four?) solutions to the problem of running background scripts on remote systems.
1) nohup
nohup python -u myscript.py > ./mylog.log 2>&1 &
1 bis) disown
Same as above, but slightly different: it actually removes the program from the shell's job list, preventing the SIGHUP from being sent.
2) screen (or tmux as suggested by neared)
Here you will find a starting point for screen.
See this post for a great explanation of how background processes work. Another related post.
3) Bash
Another solution is to write two bash functions that do the job:
mynohup () {
    [[ "$1" = "" ]] && echo "usage: mynohup python_script" && return 0
    nohup python -u "$1" > "${1%.*}.log" 2>&1 < /dev/null &
}
mykill() {
    ps -ef | grep "$1" | grep -v grep | awk '{print $2}' | xargs kill
    echo "process $1 killed"
}
Just put the above functions in your ~/.bashrc or ~/.bash_profile and use them as normal bash commands.
Now you can do exactly what you told:
mynohup myscript.py # will automatically continue running in
# background even if I log out
# two days later, even if I logged out / logged in again the meantime
mykill myscript.py
4) Daemon
This daemon module is very useful:
python myscript.py start
python myscript.py stop
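For completeness, the detaching can also be done from Python itself with just the standard library (a sketch; start_new_session puts the child in its own session, so the terminal's SIGHUP never reaches it):
import subprocess

log = open("myscript.log", "a")
proc = subprocess.Popen(
    ["python", "-u", "myscript.py"],
    stdout=log,
    stderr=subprocess.STDOUT,
    stdin=subprocess.DEVNULL,
    start_new_session=True,  # setsid(): survives logout, like nohup
)
print("started with PID", proc.pid)  # keep this if you want to stop it later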
Do you mean logging in and out remotely (e.g. via SSH)? If so, a simple solution is to install tmux (terminal multiplexer). It creates a server for terminals that run underneath it as clients. You open tmux with tmux, type in your command, press CONTROL+B then D to 'detach' from tmux, and then type exit at the main terminal to log out. When you log back in, tmux and the processes running in it will still be running.

Bash script that starts python script doesn't stop itself

I'm unsure if this actually is the problem, but let me explain: I have a python script that gets started by a bash script. The bash script's job is done at that point, but when I grep ps aux the call is still present.
#!/bin/bash
export http_proxy='1.2.3.4:1234'
python -u /home/user/folder/myscript.py -some Parameters >> /folder/Logfile_stout.log 2>&1
If I run ps aux | grep python, I get python -u /home/user/folder/myscript.py -some Parameters as a result. According to the logfile, the python script closed properly. (The code to end the script is within the script itself.)
The script gets started every hour and I still see all the calls from the hours before.
Thanks in advance for your help, tips or advice!
The parent bash script will remain as long as the child (the python script) is running.
If you start the python script in the background (add & at the end of the python line), the parent will exit.
#!/bin/bash
export http_proxy='1.2.3.4:1234'
python -u /home/user/folder/myscript.py -some Parameters >> /folder/Logfile_stout.log 2>&1 &
If you examine the process list (e.g. with ps -elf), it will show the child (if it is still running). The child's PPID (parent PID) will be 1 (init) instead of the parent's PID, because the parent no longer exists.
It could eventually be a problem if your python script never exits.
You could make the parent script wait and kill the child, e.g. wait 30 seconds and kill the child if it is still present:
#!/bin/bash
export http_proxy='1.2.3.4:1234'
python -u /home/user/folder/myscript.py -some Parameters >> /folder/Logfile_stout.log 2>&1 &
sleep 30
jobs        # list the background job, if it is still running
kill %1     # ask it to terminate (SIGTERM)
kill -9 %1  # force-kill it (SIGKILL) if it ignored the SIGTERM
jobs
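The same wait-then-kill pattern can be written in Python instead of bash (a sketch reusing the 30-second grace period from above; the paths are the ones from the question):
import subprocess

log = open("/folder/Logfile_stout.log", "a")
proc = subprocess.Popen(
    ["python", "-u", "/home/user/folder/myscript.py", "-some", "Parameters"],
    stdout=log,
    stderr=subprocess.STDOUT,
)
try:
    proc.wait(timeout=30)   # give the script 30 seconds to finish
except subprocess.TimeoutExpired:
    proc.terminate()        # SIGTERM first, like `kill %1`
    try:
        proc.wait(timeout=5)
    except subprocess.TimeoutExpired:
        proc.kill()         # SIGKILL as a last resort, like `kill -9 %1`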
