Killing child processes spawned by a subprocess in Python

I am writing a test for a system that, amongst other things, has to read a signal from a GPS device. To be able to test that part without actually having a GPS device connected, I am using the gpsfake utility from the gpsd-clients package.
The problem I'm having is that when I try to kill the gpsfake process, there is always a leftover process that gets detached from the main Python process after it exits.
During execution, this is what my process tree looks like:
systemd
  python
    gpsfake
      gpsd
    socat    # these two processes are also spawned by
    python   # the test and get terminated without problems
This is what it looks like after the main Python process exits, when I kill gpsfake with gpsfake.terminate():
systemd
  gpsfake    Running
    gpsd     Sleeping
And this is what it looks like when I instead kill gpsfake with gpsfake.send_signal(9):
systemd
  gpsd    Sleeping
The way my test is written is like this (simplified):
# I start a gpsfake subprocess
gpsfake = subprocess.Popen(['gpsfake', '-S', '-t', '--port', '2947', 'gps_data.nmea'],
                           stdout=subprocess.PIPE, stderr=subprocess.PIPE)
# I give it some time to load
time.sleep(1.5)
## Test goes here ##
# For some reason this line does nothing besides making gpsd sleep while gpsfake keeps running
gpsfake.terminate()
# This line kills the gpsfake subprocess, but not gpsd, which is left sleeping
gpsfake.send_signal(9)
I have tried to kill all of the children that gpsfake may spawn by using psutil like this:
import os
import psutil

proc = psutil.Process(gpsfake.pid)
children = proc.children(recursive=True)
for child in children:
    os.kill(child.pid, 9)
But this only kills gpsfake and throws psutil.AccessDenied when trying to kill gpsd. I have also tried sending SIGINT (2) instead of SIGKILL (9) to gpsfake, because exiting via Ctrl-C (when running gpsfake normally in a shell) successfully kills all of its children without further interaction, but that hasn't worked either.
Among other things, I have also tried sending the signal to the PGID instead of the PID, but that also throws an AccessDenied exception.
At this point I don't know what else to try and I can't find any help online. Is there anything I can do to kill this child process? I can't just ignore it: if I run the test once with a working project, it succeeds, but if I run it again right after the first run finishes, with the same project and the leftover process still alive, the second test fails because the project reads nothing from the GPS. So I need to find a solution to this.
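One approach worth trying (a sketch, not taken from the question itself) is to start gpsfake in its own session with start_new_session=True, so that gpsfake, gpsd, and anything else it spawns share a process group the test owns, and then signal the whole group at once:

import os
import signal
import subprocess
import time

# Sketch: run gpsfake in its own session so it and its children (gpsd)
# form a process group that the test can signal as a unit.
gpsfake = subprocess.Popen(
    ['gpsfake', '-S', '-t', '--port', '2947', 'gps_data.nmea'],
    stdout=subprocess.PIPE, stderr=subprocess.PIPE,
    start_new_session=True,   # equivalent to preexec_fn=os.setsid
)
time.sleep(1.5)

## Test goes here ##

# SIGINT to the group mirrors Ctrl-C in a shell; a follow-up SIGKILL
# covers anything that ignores it.
pgid = os.getpgid(gpsfake.pid)
os.killpg(pgid, signal.SIGINT)
try:
    gpsfake.wait(timeout=5)
except subprocess.TimeoutExpired:
    os.killpg(pgid, signal.SIGKILL)

Note that if gpsd drops privileges to another user, signalling it directly will still raise a permission error; delivering SIGINT to gpsfake and letting it shut down its own children, the way Ctrl-C does, sidesteps that.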

Related

When a parent NodeJS process exits, causing child processes to exit, how is that communicated?

index.js
const childProcess = require("child_process");
childProcess.spawn("python", ["main.py"]);
main.py
import time

while True:
    time.sleep(1)
When running the NodeJS process with node index.js, it runs forever since the Python child process it spawns runs forever.
When the NodeJS process is ended by x-ing out of the Command Prompt, the Python process ends as well, which is desired, but how can you run some cleanup code in the Python code before it exits?
Previous attempts
Looking in the documentation for child_process.spawn for how this termination is communicated from parent to child, perhaps by a signal. Didn't find it.
In Python, using signal.signal(signal.SIGTERM, handler) (as well as signal.SIGINT). Didn't get the handler to run (though ending the NodeJS process with Ctrl-C instead of closing the window did get the SIGINT handler to run, even though I'm not explicitly forwarding input from the NodeJS process to the Python child process).
Lastly, though this reproducible example is also valid and much simpler, my real-life use case involves Electron, so in case that introduces a complication, or a solution, I figured I'd mention it.
Windows 10, NodeJS 12.7.0, Python 3.8.3
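One thing worth trying (a sketch; whether it fires depends on how Windows delivers the console event to the child, and closing a console window sends CTRL_CLOSE_EVENT, which Python does not reliably surface as a signal) is to register handlers for SIGINT and the Windows-only SIGBREAK alongside an atexit hook in main.py:

import atexit
import signal
import sys
import time

def cleanup():
    # hypothetical cleanup work
    print("cleaning up")

def handler(signum, frame):
    sys.exit(0)   # raising SystemExit runs the atexit handlers

atexit.register(cleanup)
signal.signal(signal.SIGINT, handler)
if hasattr(signal, "SIGBREAK"):   # Windows-only: Ctrl-Break and some console events
    signal.signal(signal.SIGBREAK, handler)

while True:
    time.sleep(1)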

Python: Exiting python.exe after Popen?

Small nagging issue:
I have a python script that is working as expected, except when I select a menu option to Popen another python script:
from subprocess import Popen, PIPE, STDOUT

myPath = r"c:\Python27\myScript.py"
cmd = r'c:\Python27\python.exe "{}"'.format(myPath)
py_process = Popen(cmd, stdout=PIPE, stdin=PIPE, stderr=STDOUT)
When I run that snippet (on Windows), the child process is kicked off in the background as expected, but when I attempt to exit the primary script while leaving the child process running in the background:
raise SystemExit
...an empty window "c:\python27\python.exe" remains. I've tried other EXIT methods with a similar result. Note: When I exit the primary script without running that snippet, the python window disappears as desired.
My goal is to leave no trace/window once the primary script is exited in all cases, but child process should remain running in background.
Any suggestions to accomplish this goal?
Thanks!
If you want to first communicate with the started process and then leave it alone to run further, you have a few options:
Handle SIGPIPE in your long-running process, do not die on it. Live without stdin after the launcher process exits.
Pass whatever you wanted using arguments, environment, or a temporary file.
If you want bidirectional communication, consider using a named pipe (man mkfifo) or a socket, or writing a proper server.
Make the long-running process fork after the initial bidirectional communication phase is done.
Note that forking does not create "a completely independent process" (that is what the python-daemon package does). In either case you should redirect the child's stdin/stdout/stderr to os.devnull to avoid it waiting for input and/or writing spurious output to the terminal.
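As a sketch of that last point (the paths are the asker's; the DETACHED_PROCESS value is the documented CreateProcess flag, also exposed as subprocess.DETACHED_PROCESS from Python 3.7):

import os
import subprocess

DETACHED_PROCESS = 0x00000008   # CreateProcess creation flag

# Detach the child and point its standard handles at the null device so
# the parent can exit without leaving a console window behind.
with open(os.devnull, 'r+b') as devnull:
    subprocess.Popen(
        [r"c:\Python27\python.exe", r"c:\Python27\myScript.py"],
        stdin=devnull, stdout=devnull, stderr=devnull,
        creationflags=DETACHED_PROCESS,
    )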

How to send signals from a backgrounded process using pexpect

Using python3/linux/bash:
gnr@localhost: cat my_script
#!/usr/bin/python3
import time, pexpect
p = pexpect.spawn('sleep 123')
p.sendintr()
time.sleep(1000)
This works fine when run as is (i.e. my_script starts a sleep 123 child process and then sends it a SIGINT, which kills sleep 123). However, when I background my_script as a grandchild process, it is no longer able to kill the sleep 123 command:
gnr@localhost: (my_script &> /dev/null &)
Anyone know what's going on here, or how to change my_script or pexpect so it can still send SIGINT to its child process?
I'm thinking this has something to do with the backgrounding causing there to be no controlling terminal, and maybe I need to create a new pty?
Update: Never figured out how to create a pty (though ssh'ing into localhost with a -t option worked) - ended up doing an os.fork() to background a child process rather than the (my_script &> /dev/null &) which works because (I'm guessing) the controlling terminal is not immediately closed.
Are you sure the process isn't being killed? I would expect it to show <defunct> in the process list, since the process that spawned it is now sitting in a sleep and proper cleanup can't complete until that sleep finishes. <defunct> processes have already been killed; their parents just haven't done the cleanup yet.
If you can somehow modify your code so that the parent actually goes through its normal processing and shuts down the child (spawn), it should work. Although clumsy, this might work:
import time, pexpect, os

newpid = os.fork()
if newpid == 0:
    # child: handles the spawn and sends the interrupt
    p = pexpect.spawn('sleep 123')
    p.sendintr()
else:
    # parent
    time.sleep(1000)
In this case we fork our own child, which handles the spawn and does the kill. Since our child isn't blocking on its own sleep, it exits gracefully, which includes properly cleaning up the process it killed. In the meantime, the main (parent) process is waiting on a sleep.
After your comment it occurred to me that although I was placing my script in the background at the bash prompt, I wasn't doing it the same way as you.
I was using
(expecttest.py > /dev/null 2>&1 &)
This redirects stdout and stderr to /dev/null and puts the process in the background.
If I take your original code and, rather than doing a sendintr, do a terminate, using your invocation from the command shell, it works. It seems that sleep 123 doesn't respond to what pexpect's sendintr is doing in that case.
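A minimal sketch of that variant: terminate() delivers signals with os.kill() rather than writing the interrupt character to the pty, so it does not depend on there being a controlling terminal.

#!/usr/bin/python3
import time, pexpect

p = pexpect.spawn('sleep 123')
p.terminate()   # signals the child's pid directly instead of sending the intr character
time.sleep(1000)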

Process() called from Pylons creates a fork

I'm trying to create a background process for some heavy calculations from the main Pylons process. Here's the code:
from multiprocessing import Process

p = Process(target=instance_process,
            args=(instance_tuple.instance, parent_pipe, child_pipe))
p.start()
The process is created and started, but it seems to be a fork of the main process: it is listening on the same port and the whole application hangs. What am I doing wrong?
Thanks in advance.
Process IS a fork. If you look through its implementation you'll find that Process.start() calls fork(). It does NOT, however, call any of the exec variants to change the execution context.
Still, this may have nothing to do with listening on the same port (unless the parent process is multi-threaded). At which point is the program hanging?
I know that when you try shutting down a Python program without terminating a child process created through multiprocessing, it will hang until the child process terminates.
This might be caused if, for instance, you do not close the pipe between the processes.
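A sketch of that pipe hygiene with a hypothetical worker (standard multiprocessing API): the parent closes its copy of the child's end right after start(), and joins the child so shutdown doesn't hang.

from multiprocessing import Pipe, Process

def instance_process(conn):
    # hypothetical heavy calculation
    conn.send('result')
    conn.close()

if __name__ == '__main__':
    parent_conn, child_conn = Pipe()
    p = Process(target=instance_process, args=(child_conn,))
    p.start()
    child_conn.close()        # parent drops its copy of the child end
    print(parent_conn.recv())
    p.join()                  # reap the child so exit doesn't block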

Python - Launch a Long Running Process from a Web App

I have a python web application that needs to launch a long running process. The catch is I don't want it to wait around for the process to finish. Just launch and finish.
I'm running on Windows XP, and the web app is running under IIS (if that matters).
So far I've tried Popen, but that didn't seem to work: it waited until the child process finished.
Ok, I finally figured this out! This seems to work:
from subprocess import Popen
from win32process import DETACHED_PROCESS

pid = Popen([r"C:\python24\python.exe", "long_run.py"],
            creationflags=DETACHED_PROCESS,
            shell=True).pid
print pid
print 'done'
# I can now close the console or anything I want and long_run.py continues!
Note: I added shell=True. Otherwise calling print in the child process gave me the error "IOError: [Errno 9] Bad file descriptor".
DETACHED_PROCESS is a Process Creation Flag that is passed to the underlying WINAPI CreateProcess function.
Instead of directly starting processes from your webapp, you could write jobs into a message queue. A separate service reads from the message queue and runs the jobs. Have a look at Celery, a Distributed Task Queue written in Python.
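A minimal sketch of that queue-based approach (hypothetical task name; it assumes a broker such as Redis is available):

# tasks.py
from celery import Celery

app = Celery('tasks', broker='redis://localhost:6379/0')

@app.task
def long_run():
    # the long-running work goes here
    pass

The web app then enqueues the job and returns immediately with long_run.delay(); the Celery worker process picks it up and runs it outside IIS entirely.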
This almost works (from here):
from subprocess import Popen
pid = Popen([r"C:\python24\python.exe", "long_run.py"]).pid
print pid
print 'done'
'done' will get printed right away. The problem is that the process above keeps running until long_run.py returns, and if I close the parent process it kills long_run.py's process too.
Surely there is some way to make a process completely independent of the parent process.
subprocess.Popen does that.
