Starting/stopping a background Python process without nohup + ps aux grep + kill - python

I usually use:
nohup python -u myscript.py &> ./mylog.log & # or should I use nohup 2>&1 ? I never remember
to start a background Python process that I'd like to continue running even if I log out, and:
ps aux |grep python
# check for the relevant PID
kill <relevantPID>
It works, but it's annoying to do all these steps.
I've read about methods in which you save the PID in a file, but that's even more hassle.
Is there a clean method to easily start / stop a Python script? Something like:
startpy myscript.py # will automatically continue running in
# background even if I log out
# two days later, even if I logged out / logged in again in the meantime
stoppy myscript.py
Or could this long part nohup python -u myscript.py &> ./mylog.log & be written in the shebang of the script, such that I could start the script easily with ./myscript.py instead of writing the long nohup line?
Note: I'm looking for a one- or two-line solution; I don't want to have to write a dedicated systemd service for this operation.

As far as I know, there are just two (or maybe three or maybe four?) solutions to the problem of running background scripts on remote systems.
1) nohup
nohup python -u myscript.py > ./mylog.log 2>&1 &
1 bis) disown
Same as above, but slightly different: disown actually removes the job from the shell's job list, preventing SIGHUP from being sent to it when you log out.
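A minimal sketch of this variant (here %+, the most recently started background job, is assumed to be your script):
python -u myscript.py > ./mylog.log 2>&1 &
disown %+    # remove the last background job from the job list so the shell's SIGHUP never reaches it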
2) screen (or tmux as suggested by neared)
Here you will find a starting point for screen.
See this post for a great explanation of how background processes work. Another related post.
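A typical screen workflow looks roughly like this (the session name mysession is just an example):
screen -S mysession        # start a named session
python -u myscript.py      # run the script inside it
# detach with Ctrl+A then D; the script keeps running after you log out
screen -r mysession        # reattach later to check on it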
3) Bash
Another solution is to write two bash functions that do the job:
mynohup () {
    [[ "$1" = "" ]] && echo "usage: mynohup python_script" && return 0
    nohup python -u "$1" > "${1%.*}.log" 2>&1 < /dev/null &
}

mykill() {
    ps -ef | grep "$1" | grep -v grep | awk '{print $2}' | xargs kill
    echo "process $1 killed"
}
Just put the above functions in your ~/.bashrc or ~/.bash_profile and use them as normal bash commands.
Now you can do exactly what you asked for:
mynohup myscript.py # will automatically continue running in
# background even if I log out
# two days later, even if I logged out / logged in again in the meantime
mykill myscript.py
4) Daemon
This daemon module is very useful:
python myscript.py start
python myscript.py stop

Do you mean log in and out remotely (e.g. via SSH)? If so, a simple solution is to install tmux (terminal multiplexer). It creates a server for terminals that run underneath it as clients. You open up tmux with tmux, type in your command, press CONTROL+B then D to 'detach' from tmux, and then type exit in the main terminal to log out. When you log back in, tmux and the processes running in it will still be running.
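In command form, the workflow is roughly (the session name work is just an example):
tmux new -s work           # start a named session
python -u myscript.py      # run the script inside tmux
# press Ctrl+B then D to detach; log out as usual
tmux attach -t work        # after logging back in, reattach to the session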

Related

Interact with docker container in the middle of a bash script execution [in that container]

I want to start a bunch of docker containers with the help of a Python script. I am using the subprocess library for that. Essentially, I am trying to run this docker command
docker = f"docker run -it --rm {env_vars} {hashes} {results} {script} {pipeline} --name {project} {CONTAINER_NAME}"
in a new terminal window.
Popen(f'xterm -T {project} -geometry 150x30+100+350 -e {docker}', shell=True)
# or
Popen(f'xfce4-terminal -T {project} --minimize {hold} -e="{docker}"', shell=True)
The container's CMD looks like this. It's a bash script that runs other scripts and functions in them.
CMD ["bash", "/run_pipeline.sh"]
What I am trying to do is run an interactive shell (bash) from one of these nested scripts at a specific place in case of a failure (i.e. when some condition is met), to be able to investigate the problem in the script, do something to fix it and continue execution (or just exit if I can't fix it).
if [ $? -ne 0 ]; then
    echo Investigate manually: "$REPO_NAME"
    bash
    if [ $? -ne 0 ]; then exit 33; fi
fi
I want to do this fully automatically so I don't have to manually keep track of what is going on with the script and execute docker attach... when needed, because I will be running multiple such containers simultaneously.
The problem is that this "rescue" bash process exits immediately and I don't know why. I think it's something to do with ttys and such, but I've tried a bunch of fiddling around with it and had no success.
I tried different combinations of -i, -t and -d on the docker command, tried to use docker attach... right after starting the container with -d, and also tried starting the python script directly from bash in a terminal (I am using PyCharm by default). I also tried the socat, screen, script and getty commands (in the nested bash script), but I don't know how to use them properly, so that didn't end well either. At this point I'm too confused to understand why it isn't working.
EDIT:
Adding a minimal NOT reproducible example (of what is not working) of how I am starting a container.
# ./Dockerfile
FROM debian:bookworm-slim
SHELL ["bash", "-c"]
CMD ["bash", "/run_pipeline.sh"]
# run 'docker build -t test .'
# ./small_example.py
from subprocess import Popen
if __name__ == '__main__':
    env_vars = f"-e REPO_NAME=test -e PROJECT=test_test"
    script = f'-v "$(pwd)"/run_pipeline.sh:/run_pipeline.sh:ro'
    docker = f"docker run -it --rm {env_vars} {script} --name test_name test"
    # Popen(f'xterm -T test -geometry 150x30+100+350 +hold -e "{docker}"', shell=True).wait()
    Popen(f'xfce4-terminal -T test --hold -e="{docker}"', shell=True).wait()
# ./run_pipeline.sh
# do some hard work
ls non/existent/path
if [ $? -ne 0 ]; then
    echo Investigate manually: "$REPO_NAME"
    bash
    if [ $? -ne 0 ]; then exit 33; fi
fi
It seems like the problem may be in the run_pipeline.sh script, but I don't want to upload it here; it's a bigger mess than what I described earlier. I will say, though, that I am trying to run this thing - https://github.com/IBM/D2A.
So I just wanted some advice on the tty stuff that I am probably missing.
Run the initial container detached, with input and a tty.
docker run -dit --rm {env_vars} {script} --name test_name test
Monitor the container logs for the output, then attach to it.
Here is a quick script example (without a tty in this case, only because the demo uses echo to provide input):
#!/bin/bash
docker run --name test_name -id debian \
bash -c 'echo start; sleep 10; echo "reading"; read var; echo "var=$var"'
while ! docker logs test_name | grep reading; do
    sleep 3
done
echo "attach input" | docker attach test_name
The complete output after it finishes:
$ docker logs test_name
start
reading
var=attach input
The whole process would be easier to control via the Docker Python SDK rather than having a layer of shell between Python and Docker.
As I said in a comment on Matt's answer, his solution does not work in my situation either. I think it's a problem with the script that I'm running - possibly because some of the many shell processes (https://imgur.com/a/JiPYGWd) are taking up the allocated tty, but I don't know for sure.
So I came up with my own workaround. I simply block the execution of the script by creating a named pipe and then reading from it.
if [ $? -ne 0 ]; then
    echo Investigate _make_ manually: "$REPO_NAME"
    mkfifo "/tmp/mypipe_$githash" && echo "/tmp/mypipe_$githash" && read -r res < "/tmp/mypipe_$githash"
    if [ $res -ne 0 ]; then exit 33; fi
fi
Then I just launch a terminal emulator and execute docker exec in it to start a new bash process. I do this with the help of the Docker Python SDK, monitoring the output of the container so I know when to launch the terminal.
def monitor_container_output(container):
    line = b''
    for log in container.logs(stream=True):
        if log == b'\n':
            print(line.decode())
            if b'mypipe_' in line:
                Popen(f'xfce4-terminal -T {container.name} -e="docker exec -it {container.name} bash"', shell=True).wait()
            line = b''
            continue
        line += log

client = docker.from_env()
container = client.containers.run(IMAGE_NAME, name=project, detach=True, stdin_open=True, tty=True,
                                  auto_remove=True, environment=env_vars, volumes=volumes)
monitor_container_output(container)
After I finish my investigation of the problem in that new bash process, I send a "status code of investigation" to tell the script to continue running or to exit.
echo 0 > "/tmp/mypipe_$githash"

How to tag a python command run in a bash shell

Is it possible to tag a python program run from the command line?
Context: Said command will be run with nohup in the background, and will be killed and restarted at midnight via cron. My intention is to pipe ps into egrep for said tag, grab the pid, and kill -9 before restarting.
minimal, complete, and verifiable example
Start a python web server:
$ nohup python -m http.server 8888 &
Add a tag to the command. Note that -tag is just my imagination at work.. this is what I want:
$ nohup python -m http.server 8888 & -tag "ced72ca0-cd19-11ea-87d0-0242ac130003"
grep for tag:
$ ps aux | egrep "ced72ca0-cd19-11ea-87d0-0242ac130003"
...grab the pid from this, and kill -9
Because you say you want to kill the processes through isolated cron jobs at midnight, I guess that the $!-based solutions in the linked questions (like How to get the process ID to kill a nohup process?) are not an option for you.
In order to identify your HTTP server processes, your idea is to 'tag' them with a unique ID so the cron jobs will find them.
What you could do in your specific case is to make use of the fact that the listening TCP sockets are unique on your given machine, and retrieve the associated pid through netstat.
A bash script along the lines of:
#!/bin/bash
port=${1:-"8888"}
IP=${2:-"0.0.0.0"}
pid=`netstat -antp 2>/dev/null | grep -E "^(\S+\s+){3}$IP:$port\s+\S+\s+LISTEN" | sed -E 's/^(\S+\s+){6}([0-9]+).*$/\2/'`
[[ -n "$pid" ]] && kill -TERM $pid
... that you parameterize with IP and port through your cronjob.
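For example, the cron entry could look something like this (the script path and schedule are only placeholders):
# kill the listener on 0.0.0.0:8888 every night at midnight
0 0 * * * /path/to/kill_by_port.sh 8888 0.0.0.0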
You can put the code in a file named ced72ca0-cd19-11ea-87d0-0242ac130003:
#!/bin/bash
python -m http.server
make it executable:
chmod +x ced72ca0-cd19-11ea-87d0-0242ac130003
run it:
nohup ./ced72ca0-cd19-11ea-87d0-0242ac130003 &
and then you can kill it
pkill ced72ca0-cd19-11ea-87d0-0242ac130003
or even using only beginning of filename
pkill ced
EDIT:
Because the new script doesn't use any arguments, you can run it with any argument(s), i.e. some tag/word:
nohup ./ced72ca0-cd19-11ea-87d0-0242ac130003 hello_world &
and then you can kill it using -f
pkill -f hello_world
or even using part of word
pkill -f hello
pkill -f world
This way you can even use a normal name for the script and add the tag as an argument:
nohup ./my_script ced72ca0-cd19-11ea-87d0-0242ac130003 &
and kill with -f
pkill -f ced72ca0-cd19-11ea-87d0-0242ac130003
or using only part of word
pkill -f ced

Restart python script automatically even when it crashes in Linux

I have a python program that has to be running all the time. If for some reason it stops, I want to restart it automatically. I thought of having a cron job that runs every n seconds and checks that the program is running. My shell script looks like this:
#!/usr/bin/env bash
CM_COMMAND=`ps aux| grep abc| grep def| grep sudo`
LEN_COMMAND=${#CM_COMMAND}
if[["$LEN_COMMAND" -le "5"]]
then
echo "start the python program"
fi
exit
When I run this script I am getting the error: my_prog.sh: line 4: $'if[[118\r -le 5]]\r': command not found'
What is the alternative of doing this and what is the problem with my script?
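For what it's worth, the error message itself already points at two separate problems: bash needs spaces around [[ ... ]] (and between if and the test), and the \r characters indicate Windows-style line endings in the file. A corrected sketch of that test might look like:
if [[ "$LEN_COMMAND" -le 5 ]]
then
    echo "start the python program"
fi
# strip the carriage returns first, e.g.: sed -i 's/\r$//' my_prog.sh  (or use dos2unix)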
Maybe this would be more robust?
1) save the PID of your process when you start it with:
{your_python_command} & echo $! >>/{some_folder}/your_app.pid
2) This script will check for the PID and restart the program if it can't find it:
#!/usr/bin/env bash
PID=`cat /{some_folder}/your_app.pid`
if ! ps -p $PID > /dev/null
then
    rm /{some_folder}/your_app.pid
    {your_python_command} & echo $! >>/{some_folder}/your_app.pid
fi
3) To add it to a cronjob:
crontab -e
choose your text editor and add this row at the end of the file:
*/1 * * * * /{your_path}/{your_script_name}
exit and save
(this will run the script every minute, check crontab manual to set your exact interval)
How about making it a service? A very clean solution, in my opinion.
For more information on how to do it, you can read this article.

How to kill python script with bash script

I run a bash script which starts a python script in the background:
#!/bin/bash
python test.py &
So how can I kill the script with a bash script as well?
I used the following command to kill it, but it outputs no process found:
killall $(ps aux | grep test.py | grep -v grep | awk '{ print $1 }')
I checked the running processes with ps aux | less and found that the running script appears with the command python test.py.
Please assist, thank you!
Use the pkill command, as in:
pkill -f test.py
(or) a more fool-proof way, using pgrep to search for the actual process id:
kill $(pgrep -f 'python test.py')
Or, if more than one instance of the running program is identified and all of them need to be killed, use killall(1) on Linux and BSD:
killall test.py
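Before killing anything, you can sanity-check what the -f pattern actually matches (a small precaution, not part of the original answer):
pgrep -af test.py    # list PIDs and full command lines that would be matched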
You can use $! to get the PID of the last background command.
I would suggest something similar to the following, which also checks whether the process you want to run is already running:
#!/bin/bash
if [[ ! -e /tmp/test.py.pid ]]; then   # check whether the pid file already exists,
    python test.py &                   # and if so do not start another process
    echo $! > /tmp/test.py.pid
else
    echo -n "ERROR: The process is already running with pid "
    cat /tmp/test.py.pid
    echo
fi
Then, when you want to kill it:
#!/bin/bash
if [[ -e /tmp/test.py.pid ]]; then   # if the pid file does not exist, the process
    kill `cat /tmp/test.py.pid`      # is not running; no point trying to kill it
    rm /tmp/test.py.pid
else
    echo "test.py is not running"
fi
Of course if the killing must take place some time after the command has been launched, you can put everything in the same script:
#!/bin/bash
python test.py &                     # this does not check whether the command has
echo $! > /tmp/test.py.pid           # already been started; if more than one instance
                                     # has been started, the pid file gets overwritten
sleep <number_of_seconds_to_wait>

if [[ -e /tmp/test.py.pid ]]; then
    kill `cat /tmp/test.py.pid`
else
    echo "test.py is not running"
fi
If you want to be able to run more commands with the same name simultaneously and kill them selectively, a small edit of the script is needed (one possible sketch is below). Tell me, I will try to help you!
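One possible small edit, sketched under the assumption that each instance is started with a distinct tag of your choosing (the tag argument and the pid file naming are purely illustrative):
#!/bin/bash
# start_tagged.sh <tag> - hypothetical helper: one pid file per instance
tag="${1:?usage: start_tagged.sh <tag>}"
python test.py &
echo $! > "/tmp/test.py.$tag.pid"
You could then kill a given instance with kill "$(cat /tmp/test.py.<tag>.pid)" and remove its pid file.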
With something like this you are sure you are killing what you want to kill. Commands like pkill or grepping the output of ps aux can be risky.
ps -ef | grep python
will return the pid; then kill the process with
sudo kill -9 pid
e.g. output of the ps command:
user 13035 4729 0 13:44 pts/10 00:00:00 python (here 13035 is the pid)
Using bashisms:
#!/bin/bash
python test.py &
kill $!
$! is the PID of the last process started in the background. You can also save it in another variable if you start multiple scripts in the background.
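For instance (a small sketch; the script names are arbitrary):
python test1.py & pid1=$!
python test2.py & pid2=$!
# stop them individually later
kill "$pid1"
kill "$pid2"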
killall python3
will terminate any and all running python3 processes, not just your script.

Bash - redirect stdout and stderr to files with background process

I've created a simple init script for an application I'm building. The start part of the script looks like this:
user="ec2-user"
name=`basename $0`
pid_file="/var/run/python_worker.pid"
stdout_log="/var/log/worker/worker.log"
stderr_log="/var/log/worker/worker.err"
get_pid() {
    cat "$pid_file"
}

is_running() {
    [ -f "$pid_file" ] && ps `get_pid` > /dev/null 2>&1
}

case "$1" in
    start)
        if is_running; then
            echo "Already started"
        else
            echo "Starting $name"
            cd /var/lib/worker
            . venv/bin/activate
            . /etc/profile.d/worker.sh
            python run.py >> "$stdout_log" 2>> "$stderr_log" &
            echo $! > "$pid_file"
            if ! is_running; then
                echo "Unable to start, see $stdout_log and $stderr_log"
                exit 1
            fi
            echo "$name running"
        fi
I'm having trouble with this line:
python run.py >> "$stdout_log" 2>> "$stderr_log" &
I want to start my application with this code and redirect the outputs to the files specified above. However, when I include the & to make it run in the background, nothing appears in the two log files. BUT, when I remove the & from this line, the log files get data. Why is this happening?
Obviously I need to run the command as a background process so that the shell doesn't block waiting for it.
I am also sure that the process is running when I use the &. I can find it with ps -aux:
root 11357 7.0 3.1 474832 31828 pts/1 Sl 21:22 0:00 python run.py
Anyone know what I'm doing wrong? :)
Anyone know what I'm doing wrong? :)
Short Answer:
Yes. Add -u to the python command and it should work.
python -u run.py >> "$stdout_log" 2>> "$stderr_log" &
Long Answer:
It's a buffering issue (from man python):
-u     Force stdin, stdout and stderr to be totally unbuffered. On systems where it matters, also put stdin, stdout and stderr in binary mode. Note that there is internal buffering in xreadlines(), readlines() and file-object iterators ("for line in sys.stdin") which is not influenced by this option. To work around this, you will want to use "sys.stdin.readline()" inside a "while 1:" loop.
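If you would rather not change the command's flags, setting the standard PYTHONUNBUFFERED environment variable has the same effect as -u, e.g. in the init script:
PYTHONUNBUFFERED=1 python run.py >> "$stdout_log" 2>> "$stderr_log" &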
