How to send signals from a backgrounded process using pexpect - python

Using python3/linux/bash:
gnr#localhost: cat my_script
#!/usr/bin/python3
import time, pexpect
p = pexpect.spawn('sleep 123')
p.sendintr()
time.sleep(1000)
This works fine when run as is (i.e. my_script starts a sleep 123 child process and then sends it a SIGINT, which kills sleep 123). However, when I background my_script as a grandchild process, it is no longer able to kill the sleep 123 command:
gnr#localhost: (my_script &> /dev/null &)
Anyone know what's going on here/how to change my_script or pexpect so it can still send SIGINT to its child process?
I'm thinking this has something to do with the backgrounding leaving the process without a controlling terminal, and maybe I need to create a new pty?
Update: Never figured out how to create a pty (though ssh'ing into localhost with the -t option worked) - ended up doing an os.fork() to background a child process rather than using (my_script &> /dev/null &), which works because (I'm guessing) the controlling terminal is not immediately closed.
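A minimal sketch of that workaround, with the details the update leaves out assumed (this is not the poster's exact code): the script forks itself into the background instead of being backgrounded by the shell, so the pexpect child is created while the controlling terminal is still available.
# Sketch of the fork-based backgrounding described in the update
# (assumed details). Run my_script in the foreground; the fork puts
# the real work in the background.
import os, time, pexpect

if os.fork() == 0:
    # child: inherits the controlling terminal, so pexpect behaves
    # as in the foreground case
    p = pexpect.spawn('sleep 123')
    p.sendintr()          # SIGINT reaches sleep 123 as expected
    time.sleep(1000)
# parent: falls through and returns to the shell immediately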

Are you sure the process isn't being killed? I would expect it to show <defunct> in the process list, as the process that spawned it is now sitting in a sleep and proper cleanup can't complete until that sleep finishes. <defunct> processes have been killed; their parents just haven't done the cleanup.
If you can somehow modify your code so that the parent actually goes through the normal processing and shuts down the child (spawn), then it should work. Although clumsy, this might work:
import time, pexpect, os

newpid = os.fork()
if newpid == 0:
    # child
    p = pexpect.spawn('sleep 123')
    p.sendintr()
else:
    # parent
    time.sleep(1000)
In this case we fork our own child, which handles the spawn and does the kill. Since our child isn't blocking on its own sleep, it exits gracefully, which includes properly cleaning up the process it killed. In the meantime, the main (parent) process is waiting on a sleep.
After your comment, it occurred to me that although I was placing my script in the background at the bash prompt, I wasn't doing it the same way you were.
I was using
(expecttest.py > /dev/null 2>&1 &)
This redirects stdout and stderr to /dev/null and puts the process in the background.
If I take your original code and, rather than calling sendintr(), call terminate(), it works even with your invocation from the command shell. It seems that sleep 123 doesn't respond to what sendintr() does in that case.
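A sketch of that variant, assuming this is all the answer changed: as I understand pexpect's implementation, terminate() delivers signals to the child's pid with os.kill(), whereas sendintr() writes the terminal's intr character to the pty, which depends on terminal state.
# Variant of the original my_script using terminate() instead of
# sendintr() (a sketch based on the answer above, not a verified fix)
import time, pexpect

p = pexpect.spawn('sleep 123')
p.terminate()        # signals the child via os.kill() rather than the pty
time.sleep(1000)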

Related

Killing child processes spawned by a subprocess in Python

I am writing a test for a system that, amongst other things, has to read a signal from a GPS device. To test that part without actually having a GPS device connected, I am using the gpsfake utility from the gpsd-clients package.
The problem I'm having is that when I try to kill the gpsfake process, there is always a leftover process that gets detached from the main python process after it exits.
During execution, this is what my process tree looks like:
systemd
└─ python
   ├─ gpsfake
   │  └─ gpsd
   ├─ socat    # these two processes are also spawned by
   └─ python   # the test and get terminated without problems
This is what it looks like after the main python process exits and I kill gpsfake using gpsfake.terminate():
systemd
└─ gpsfake   Running
   └─ gpsd   Sleeping
And this is what it looks like after the main python process exits and I kill gpsfake using gpsfake.send_signal(9):
systemd
└─ gpsd   Sleeping
The way my test is written is like this (simplified):
# I start a gpsfake subprocess
gpsfake = subprocess.Popen(['gpsfake', '-S', '-t', '--port', '2947', 'gps_data.nmea'], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
# I give it some time to load
time.sleep(1.5)
## Test goes here ##
# This line here for some reason does nothing besides making gpsd sleep while gpsfake keeps running
gpsfake.terminate()
# This line kills the gpsfake subprocess, but not gpsd, which is left sleeping
gpsfake.send_signal(9)
I have tried to kill all of the children that gpsfake may spawn by using psutil like this:
import os
import psutil

proc = psutil.Process(gpsfake.pid)
children = proc.children(recursive=True)
for child in children:
    os.kill(child.pid, 9)
But this only kills gpsfake and throws psutil.AccessDenied when trying to kill gpsd. I have also tried sending SIGINT (2) instead of SIGKILL (9) to gpsfake, because exiting via Ctrl-C (when running gpsfake normally in a shell) successfully kills all of its children without further interaction, but that hasn't worked either.
Amongst other things, I have also tried sending the signal to the PGID instead of the PID, but that also throws an AccessDenied exception.
At this point I don't know what else to try and I can't find any help online. Is there anything I can do to kill this child process? I can't just ignore it: if I run the test once with a working project it succeeds, but if I run it again right after the first one finishes, with the same project and the child process still alive, the second test fails because the project reads nothing from the GPS. So I need to find a solution to this.
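One pattern worth trying (my suggestion, not from the original post) is to start gpsfake in its own session so that everything it spawns shares a process group you can signal as a whole. Whether gpsd actually stays in that group, and whether it runs with privileges that would still produce AccessDenied, are assumptions to verify.
# Hypothetical sketch: give gpsfake its own process group, then signal
# the whole group. Assumes gpsd stays in the group and runs as the
# same user as the test.
import os, signal, subprocess

gpsfake = subprocess.Popen(
    ['gpsfake', '-S', '-t', '--port', '2947', 'gps_data.nmea'],
    stdout=subprocess.PIPE, stderr=subprocess.PIPE,
    start_new_session=True)   # new session => new process group

# ... test runs here ...

os.killpg(os.getpgid(gpsfake.pid), signal.SIGTERM)   # signal the whole group
gpsfake.wait()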

Python script called from a bash script not handling signals

Problem: When executed from the command line, a python script catches and handles SIGTERM signals as expected. However, if the script is called by a bash script, and the bash script then sends the signal to the python script, it does not handle the SIGTERM signal as expected.
The python script in question is extremely simple: it waits for a SIGTERM and then waits for a few seconds before exiting.
#!/usr/bin/env python3
import sys
import signal
import time

# signal handler
def sigterm_handler(signum, frame):
    time.sleep(5)
    print("dying")
    sys.exit()

# register the signal handler
signal.signal(signal.SIGTERM, sigterm_handler)

while True:
    time.sleep(1)
If this is called directly and the signal is then sent from the command line, i.e.
> ./sigterm_tester.py &
> kill -15 <PID>
the signal handling works normally (it waits 5 seconds, prints "dying" to stdout, and exits).
However, if it is instead called from a bash script, it no longer seems to catch the SIGTERM and instead exits immediately.
This simple bash script executes the python script and then kills its child (the python script). However, the termination occurs immediately instead of after a 5-second delay, and "dying" is never printed to stdout (or to a file when I tried redirecting stdout).
#!/bin/bash
./sigterm_tester.py &
child=$(pgrep -P $$)
kill -15 $child
while true; do
    sleep 1
done
Some additional information: I have also tested this with sh as well as bash, and the same behavior occurs. I have reproduced it on both macOS and Linux, and with both python2 and python3.
My question is: why does the behavior differ depending on how the program is called, and is there a way to ensure that the python program handles signals properly even when called from a bash script?
Summing up @Matt Walck's comments: in the bash script you were killing the python process right after invoking it, before it may have had enough time to register its SIGTERM handler. Adding a sleep command between the spawn and the kill backs the theory up.
#!/bin/bash
./sigterm_tester.py &
child=$(pgrep -P $$)
#DEBUGONLY
sleep 2
kill -15 $child
while true; do
    sleep 1
done

Check if subprocess called with shell=True is still running

I have a process which, for certain reasons, I must call with the following (please don't judge...):
process = subprocess.Popen("some_command &", shell=True, executable='/bin/bash')
some_command is supposed to terminate by itself when some external conditions are met.
How can I check when some_command has terminated?
process.poll()
always returns 0
A simple script to demonstrate my situation:
import subprocess

process = subprocess.Popen("sleep 5 &", shell=True, executable='/bin/bash')
while True:
    print(process.poll())
some_command & tells bash to run some_command in the background. This means that your shell launches some_command, then promptly exits, severing the tie between the running some_command and your Python process (some_command's parent process no longer exists after all). poll() is accurately reporting that bash itself finished running, exiting with status 0; it has no idea what may or may not be happening with some_command; that's bash's problem (and bash didn't care either).
If you want to be able to poll to check whether some_command is still running, don't background it with shell metacharacters; without the &, bash keeps running until some_command finishes, so the fact that bash is still alive indirectly tells you that some_command hasn't finished. It still runs in the background in the sense that it runs in parallel with your Python code; the Python process won't stall waiting on it unless you explicitly wait or communicate with process:
process = subprocess.Popen("some_command", shell=True, executable='/bin/bash')
Of course, unless some_command is a bash builtin, bash is just getting in the way here; as noted, subprocess.Popen always runs things in the background unless you explicitly ask it to wait, so you didn't need bash's help to background anything:
process = subprocess.Popen(["some_command"])
would get similar behavior, and actually let you examine the return code from some_command directly, with no intermediary bash process involved.
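For instance, a minimal polling loop against the list-form Popen (a sketch; sleep 5 stands in for some_command):
import subprocess
import time

# 'sleep 5' stands in for some_command; no shell, no '&'
process = subprocess.Popen(["sleep", "5"])

while process.poll() is None:    # None means still running
    time.sleep(0.5)

print("exited with return code", process.returncode)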

Subprocess reacts differently on SIGINT signals

b.py
import subprocess
import time

f = subprocess.Popen(['python', 'a.py'])
time.sleep(3000)
a.py
import time
time.sleep(1000)
Run python b.py and press Ctrl+C: both processes terminate.
However, if you instead send SIGINT to the parent process b.py with kill -2 xxxx, the child process a.py remains.
Ctrl-C at your terminal typically sends SIGINT to all processes in the foreground process group. Both your parent and your child process are in this process group.
For a more detailed explanation, see for example The TTY demystified or the more technical version by Kirk McKusick at Process Groups and Sessions
If you just kill the parent process, the child is left parentless and thus gets reparented to PID 1 (init). You can see this too in the output of ps. Since your subprocess never receives a signal, it simply continues running.
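If you want a kill of the parent to take the child down as well, one option (my sketch, not part of the answer above) is to have b.py forward the signal from its own handler:
# b.py, modified so an externally sent SIGINT is forwarded to the child
import signal
import subprocess
import sys
import time

child = subprocess.Popen(['python', 'a.py'])

def forward_sigint(signum, frame):
    child.send_signal(signal.SIGINT)   # pass the SIGINT on to a.py
    child.wait()
    sys.exit(1)

signal.signal(signal.SIGINT, forward_sigint)
time.sleep(3000)
Alternatively, from the shell you can signal the whole process group at once with kill -2 -<PGID> (note the leading minus on the PGID).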

Python signal handler reacts slowly under heavy load

I wrote a signal handler that restarts the script itself when I run:
kill -10 $PID
and I register the handler at the beginning of the script:
signal.signal(signal.SIGUSR1, restart_handler)
My script mainly does the following:
download some source code.
unzip it and do other stuff.
run os.system("bash -c 'make -j16 > log.txt'")
While the script is downloading source code, kill -10 triggers the restart handler quickly, as I expected.
However, once it starts make -j16, the same kill command takes a very long time to reach the signal handler
(the signal does not appear to be handled immediately, though kill -9 $PID still kills the script immediately).
How can I make my custom signal handler act as quickly as kill -9?
The picture below shows the pstree output during make -j16:
https://www.dropbox.com/s/rbfzn0p0f2p55xx/make.png
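No answer appears in this excerpt, but one plausible mitigation (my assumption about the cause, not a confirmed diagnosis): CPython only runs Python-level signal handlers between bytecode instructions, and os.system() blocks in C until make finishes; replacing it with subprocess lets the interpreter regain control during the wait, and since PEP 475 the wait is retried after pending handlers run.
# Hypothetical rework of the make step: run it via subprocess instead
# of os.system so pending Python signal handlers run during the wait.
import signal
import subprocess

def restart_handler(signum, frame):
    # placeholder for the poster's restart logic
    print("restarting...")

signal.signal(signal.SIGUSR1, restart_handler)

with open('log.txt', 'w') as log:
    proc = subprocess.Popen(['make', '-j16'], stdout=log)
    proc.wait()   # interruptible: the SIGUSR1 handler runs promptly here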
