I'm not familiar with bash at all, but I'm trying to make a pair of scripts that will detect:
1. if a program is running;
2. if a python/bash script is running.
My code for nº 1 is:
#!/bin/bash
X=$( pidof "$1" )            # PID(s) of the program, empty if it isn't running
if [ ${#X} -gt 0 ]
then
    echo "$1 has already $X been started"
else
    echo "$1 not started $X"
fi
Which works great, but won't detect scripts, so I made nº 2 with the change:
X=$( pgrep -f "$1" )
At first nº 2 seemed to be working, but even after I terminate the python script I still get:
WebsocketServer has 5 length and it's already 11919 started
If I do ps -ax, the PIDs of those processes are nowhere to be seen.
But if I write ps -ax | grep websocket:
11921 pts/4 S+ 0:00 grep --color=auto websocket
If I start the python script...
WebsocketServer has 11 length and it's already 11927 11935 started
What is happening? Am I somehow misusing the commands?
Edit: Forgot to mention that writing pgrep -f WebsocketServer in the terminal returns nothing, like it should.
The problem is that pgrep -f matches against the full command line, and the command line of your own checking script contains the name you pass in as $1, so pgrep -f finds the checking script itself.
Here's a trick you can try: split the name into two arguments, so the full name never appears as a single word on your own command line.
checkScriptAlive websocket Server
Then in the script, do:
target="$1$2"
x=$(pgrep -f "$target")
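Putting the pieces together, a minimal sketch of the whole checker could look like this (the script name checkScriptAlive and the exact messages are just illustrative):
#!/bin/bash
# usage: checkScriptAlive <first-half> <second-half>, e.g.: checkScriptAlive websocket Server
target="$1$2"                 # rejoin the halves; our own command line only contains them separated by a space
x=$( pgrep -f "$target" )
if [ -n "$x" ]
then
    echo "$target has already been started (PID(s): $x)"
else
    echo "$target not started"
fi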
Related
I'm writing a stop routine for a start-up service script:
do_stop()
{
    rm -f $PIDFILE
    pkill -f $DAEMON || return 1
    return 0
}
The problem is that pkill (same with killall) also matches the process of the script itself, so it basically terminates itself. How do I fix that?
You can explicitly filter out the current PID from the results:
kill $(pgrep -f "$DAEMON" | grep -v "^$$\$")
To correctly use the -f flag, be sure to supply the whole path to the daemon rather than just a substring. That will prevent you from killing the script (and eliminate the need for the above grep) and also from killing all other system processes that happen to share the daemon's name.
pkill -f accepts a full-blown regex, so rather than pkill -f $DAEMON you should use:
pkill -f "^$DAEMON"
This makes sure a process is killed only if its command line starts with the given daemon name.
A better solution would be to save the PID (process id) of the process in a file when you start it. To stop the process, just read the file to get the PID to be stopped/killed.
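A minimal sketch of that PID-file approach (the daemon command /usr/sbin/mydaemon and the path /var/run/mydaemon.pid are assumptions):
# start: remember the PID of the daemon we launch
/usr/sbin/mydaemon &
echo $! > /var/run/mydaemon.pid

# stop: read the PID back and signal exactly that process
do_stop()
{
    [ -f /var/run/mydaemon.pid ] || return 1
    kill "$(cat /var/run/mydaemon.pid)" && rm -f /var/run/mydaemon.pid
}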
Judging by your question, you're not set on using pgrep and pkill, so here are some other commonly used options.
1) Use killproc from /etc/init.d/functions or /lib/lsb/init-functions (whichever is appropriate for your distribution and version of Linux). If you're writing a service script, you may already be including this file if you used one of the other services as an example.
Usage: killproc [-p pidfile] [-d delay] {program} [-signal]
The main advantage to using this is that it sends SIGTERM, waits to see if the process terminates and sends SIGKILL only if necessary.
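For example, a do_stop() built on it might look like this sketch (assuming the Debian/LSB helper file and a daemon at /usr/sbin/progd; both paths are assumptions):
. /lib/lsb/init-functions        # provides killproc on LSB systems
do_stop()
{
    killproc -p "$PIDFILE" /usr/sbin/progd    # stop the daemon recorded in $PIDFILE
    rc=$?
    [ $rc -eq 0 ] && rm -f "$PIDFILE"
    return $rc
}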
2) You can also use the secret sauce of killproc, which is to find the process IDs to kill using pidof, which has a -o option for excluding a particular process. The argument for -o can be $$, the current process ID, or %PPID, a special token that pidof replaces with the PID of the calling script. Finally, if the daemon is a script, you'll need -x so that you're trying to kill the script by its name rather than killing bash or python.
for pid in $(pidof -o %PPID -x progd); do
    kill -TERM $pid
done
You can see an example of this in the article Bash: How to check if your script is already running
Is it possible to tag a python program run from the command line?
Context: Said command will be run with nohup in the background, and will be killed and restarted at midnight via cron. My intention is to pipe ps into egrep for said tag, grab the pid, and kill -9 before restarting.
Minimal, complete, and verifiable example:
Start a python web server:
$ nohup python -m http.server 8888 &
Add a tag to the command. Note that -tag is just my imagination at work... this is what I want:
$ nohup python -m http.server 8888 & -tag "ced72ca0-cd19-11ea-87d0-0242ac130003"
grep for tag:
$ ps aux | egrep "ced72ca0-cd19-11ea-87d0-0242ac130003"
...grab the pid from this, and kill -9
Because you're saying that you want to kill the processes through isolated cron jobs at midnight, I guess that the $! based solutions in the linked questions (like How to get the process ID to kill a nohup process?) are not an option for you.
In order to identify your HTTP server processes, your idea is to 'tag' them with a unique ID so the cron jobs will find them.
What you could do in your specific case is to make use of the fact that the listening TCP sockets are unique on your given machine, and retrieve the associated pid through netstat.
A bash script along the lines of:
#!/bin/bash
port=${1:-"8888"}
IP=${2:-"0.0.0.0"}
pid=$(netstat -antp 2>/dev/null | grep -E "^(\S+\s+){3}$IP:$port\s+\S+\s+LISTEN" | sed -E 's/^(\S+\s+){6}([0-9]+).*$/\2/')
[[ -n "$pid" ]] && kill -TERM $pid
... that you parameterize with IP and port through your cronjob.
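For instance, the midnight kill-and-restart could then be two crontab entries along these lines (the script path /usr/local/bin/kill_by_port.sh and the directory /opt/srv are made-up placeholders):
# crontab -e
0 0 * * * /usr/local/bin/kill_by_port.sh 8888 0.0.0.0
1 0 * * * cd /opt/srv && nohup python -m http.server 8888 >/dev/null 2>&1 &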
You can put the code in a file named ced72ca0-cd19-11ea-87d0-0242ac130003,
#!/bin/bash
python -m http.server
make it executable
chmod +x ced72ca0-cd19-11ea-87d0-0242ac130003
run it
nohup ced72ca0-cd19-11ea-87d0-0242ac130003 &
and then you can kill it
pkill ced72ca0-cd19-11ea-87d0-0242ac130003
or even using only the beginning of the filename
pkill ced
EDIT:
Because the new script doesn't use its arguments, you can run it with any argument(s), i.e. some tag/word:
nohup ced72ca0-cd19-11ea-87d0-0242ac130003 hello_world &
and then you can kill it using -f
pkill -f hello_world
or even using part of the word
pkill -f hello
pkill -f world
This way you can even use a normal name for the script and add the tag as an argument
nohup my_script ced72ca0-cd19-11ea-87d0-0242ac130003 &
and kill with -f
pkill -f ced72ca0-cd19-11ea-87d0-0242ac130003
or using only part of the word
pkill -f ced
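So for the midnight cron use case, the restart boils down to something like this sketch (the tag and the path /opt/srv/my_script are assumptions):
pkill -f ced72ca0-cd19-11ea-87d0-0242ac130003                                      # stop the tagged instance, if any
nohup /opt/srv/my_script ced72ca0-cd19-11ea-87d0-0242ac130003 >/dev/null 2>&1 &    # start it again with the same tag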
I have a Python script that runs in an infinite loop (it's a server).
I want to write an AppleScript that will start this script if it isn't started yet, and otherwise force-quit and restart it. This will make it easy for me to make changes to the server code while programming.
Currently I only know how to start it: do shell script "python server.py"
Note that AppleScript's do shell script starts the shell (/bin/sh) in the root directory (/) by default, so you should specify an explicit path to server.py
In the following examples I'll assume directory path ~/srv.
Here's the shell command:
pid=$(pgrep -fx 'python .*/server\.py'); [ "$pid" ] && kill -9 $pid; python ~/srv/server.py
As an AppleScript statement, wrapped in do shell script - note the \-escaped inner " and \ chars:
do shell script "pid=$(pgrep -fx 'python .*/server\\.py'); [ \"$pid\" ] && kill -9 $pid; python ~/srv/server.py"
pgrep -fx 'python .*/server\.py' uses pgrep to find your running command by matching the regex against the full command line (-f), requiring the whole command line to match (-x), and returns the PID (process ID), if any.
Note that I've used a more abstract regex to underscore the fact that pgrep (always) treats its search term as a regular expression.
To specify the full launch command line as the regex, use python ~/srv/server\.py - note the \-escaping of . for full robustness.
[ "$pid" ] && kill -9 $pid kills the process, if a PID was found ([ "$pid" ] is short for [ -n "$pid" ] and evaluates to true only if $pid is nonempty); -9 sends signal SIGKILL, which forcefully terminates the process.
python ~/srv/server.py then (re)starts your server.
In the shell, if you do ps aux | grep python\ server.py | head -n1, you'll get the ps line for the process running server.py. You can then extract its PID and use kill -9 to kill that process:
kill -9 `ps aux | grep python\ server.py | head -n1 | python -c 'import sys; print(sys.stdin.read().split()[1])'`
That'll kill it. All you have to do now is restart it:
python server.py
You can combine the two with &&:
kill -9 `ps aux | grep python\ server.py | head -n1 | python -c 'import sys; print(sys.stdin.read().split()[1])'` && python server.py
Of course, you already know how to put that in a do shell script!
I run a bash script with which I start a python script to run in the background:
#!/bin/bash
python test.py &
So how can I kill the script with a bash script as well?
I used the following command to kill it, but it outputs 'no process found':
killall $(ps aux | grep test.py | grep -v grep | awk '{ print $1 }')
I tried to check the running processes with ps aux | less and found that the running script has the command python test.py.
Please assist, thank you!
Use the pkill command:
pkill -f test.py
or, for a more fool-proof way, use pgrep to search for the actual process id:
kill $(pgrep -f 'python test.py')
Or, if more than one instance of the running program is identified and all of them need to be killed, use killall(1) on Linux and BSD:
killall test.py
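If you want to see what would be matched before killing anything, pgrep's -a flag (if your pgrep supports it) lists the PIDs together with their full command lines; the PID below is just an example:
$ pgrep -af 'python test.py'
12345 python test.py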
You can use $! to get the PID of the last background command.
I would suggest something similar to the following, which also checks whether the process you want to run is already running:
#!/bin/bash
# Check if the PID file already exists; if so, do not start another process.
if [[ ! -e /tmp/test.py.pid ]]; then
    python test.py &
    echo $! > /tmp/test.py.pid
else
    echo -n "ERROR: The process is already running with pid "
    cat /tmp/test.py.pid
    echo
fi
Then, when you want to kill it:
#!/bin/bash
# If the PID file does not exist, the process is not running,
# so there is nothing to kill.
if [[ -e /tmp/test.py.pid ]]; then
    kill $(cat /tmp/test.py.pid)
    rm /tmp/test.py.pid
else
    echo "test.py is not running"
fi
Of course if the killing must take place some time after the command has been launched, you can put everything in the same script:
#!/bin/bash
# Note: this does not check whether the command has already been executed,
# and it would have problems if more than one instance had been started,
# since the pid file would be overwritten.
python test.py &
echo $! > /tmp/test.py.pid

sleep <number_of_seconds_to_wait>

if [[ -e /tmp/test.py.pid ]]; then
    kill $(cat /tmp/test.py.pid)
else
    echo "test.py is not running"
fi
If you want to be able to run more commands with the same name simultaneously and be able to kill them selectively, a small edit of the script is needed (a sketch is shown below). Tell me, I will try to help you!
With something like this you are sure you are killing what you want to kill. Commands like pkill or grepping ps aux output can be risky.
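As a rough sketch of that "small edit" (the instance-name argument and the file naming are my own assumptions): key the PID file on a per-instance name, so several copies can run and be stopped independently.
#!/bin/bash
# start one named instance: ./start.sh <instance-name>
name=${1:?usage: start.sh <instance-name>}
pidfile="/tmp/test.py.$name.pid"
if [[ ! -e "$pidfile" ]]; then
    python test.py &
    echo $! > "$pidfile"
else
    echo "ERROR: instance $name is already running with pid $(cat "$pidfile")"
fi
# stopping works the same way: kill $(cat "/tmp/test.py.$name.pid") && rm "/tmp/test.py.$name.pid"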
ps -ef | grep python
It will return the PID; then kill the process with:
sudo kill -9 pid
e.g. output of the ps command:
user 13035 4729 0 13:44 pts/10 00:00:00 python (here 13035 is the PID)
With the use of bashisms:
#!/bin/bash
python test.py &
kill $!
$! is the PID of the last process started in background. You can also save it in another variable if you start multiple scripts in the background.
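For example (the script names one.py and two.py are placeholders):
python one.py & pid_one=$!       # remember the PID of the first background script
python two.py & pid_two=$!       # ...and of the second
kill "$pid_one" "$pid_two"       # later, kill them selectively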
killall python3
will interrupt any and all python3 scripts running.
I usually use:
nohup python -u myscript.py &> ./mylog.log & # or should I use nohup 2>&1 ? I never remember
to start a background Python process that I'd like to continue running even if I log out, and:
ps aux |grep python
# check for the relevant PID
kill <relevantPID>
It works but it's annoying to do all these steps.
I've read some methods in which you need to save the PID in some file, but that's even more hassle.
Is there a clean method to easily start / stop a Python script? like:
startpy myscript.py # will automatically continue running in
# background even if I log out
# two days later, even if I logged out / logged in again the meantime
stoppy myscript.py
Or could this long part nohup python -u myscript.py &> ./mylog.log & be written in the shebang of the script, such that I could start the script easily with ./myscript.py instead of writing the long nohup line?
Note : I'm looking for a one or two line solution, I don't want to have to write a dedicated systemd service for this operation.
As far as I know, there are just two (or maybe three or maybe four?) solutions to the problem of running background scripts on remote systems.
1) nohup
nohup python -u myscript.py > ./mylog.log 2>&1 &
1 bis) disown
Same as above, but slightly different because it actually removes the job from the shell's job list, preventing SIGHUP from being sent.
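A short sketch of the disown variant (same redirections as above):
python -u myscript.py > ./mylog.log 2>&1 &
disown      # drop the most recent background job from the shell's job list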
2) screen (or tmux as suggested by neared)
Here you will find a starting point for screen.
See this post for a great explanation of how background processes work. Another related post.
3) Bash
Another solution is to write two bash functions that do the job:
mynohup () {
    [[ "$1" = "" ]] && echo "usage: mynohup python_script" && return 0
    nohup python -u "$1" > "${1%.*}.log" 2>&1 < /dev/null &
}

mykill() {
    ps -ef | grep "$1" | grep -v grep | awk '{print $2}' | xargs kill
    echo "process $1 killed"
}
Just put the above functions in your ~/.bashrc or ~/.bash_profile and use them as normal bash commands.
Now you can do exactly what you asked:
mynohup myscript.py # will automatically continue running in
# background even if I log out
# two days later, even if I logged out / logged in again the meantime
mykill myscript.py
4) Daemon
This daemon module is very useful:
python myscript.py start
python myscript.py stop
Do you mean logging in and out remotely (e.g. via SSH)? If so, a simple solution is to install tmux (terminal multiplexer). It creates a server for terminals that run underneath it as clients. You open up tmux with tmux, type in your command, press CTRL+B then D to 'detach' from tmux, and then type exit at the main terminal to log out. When you log back in, tmux and the processes running in it will still be running.
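A quick session sketch (the session name myserver is just an example):
$ tmux new -s myserver           # start a named tmux session
$ python -u myscript.py          # run the server inside the session
# press CTRL+B, then D, to detach; log out as usual
$ tmux attach -t myserver        # after logging back in, reattach to the session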