I am invoking a subprocess via some Python code resembling the following:
proc = subprocess.Popen(["bash", "-ic", ". foo.bash && some_func_from_foo"])
One of the processes I am opening spawns a bunch of child processes (or subprocesses, or forked processes; I am not sure which), and one of those children dies. That death is expected behavior. However, I do not expect the Python program itself to be interrupted when this node dies, yet right after the child process dies the Python program is suspended and I see [1]+ Stopped rosrun my_package my_app.py.
Some more details: I am running a ROS launch file from the command line. When one of its nodes dies it gives the following sort of output:
terminate called after throwing an instance of 'std::runtime_error'
what(): ...
[my_node_name] process has died [pid 30816, exit code -6, ...]
And then the Python program is stopped and I am dropped back into bash (I have to run fg 1 to resume it).
Is there any way to prevent the Python process from being suspended when a subprocess spawns a child that dies?
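One likely culprit (an assumption, since the question doesn't show the full setup) is terminal job control: `bash -ic` starts an *interactive* shell, and an interactive shell that is not in the terminal's foreground process group can be stopped by SIGTTIN/SIGTTOU, taking the whole job with it. A minimal sketch of the workaround, dropping `-i` and giving the child its own session:

```python
import subprocess

# Stand-in command; the original was ". foo.bash && some_func_from_foo".
# Dropping -i avoids interactive-shell job control, and start_new_session=True
# puts the child in its own session so terminal stop signals cannot
# suspend the parent Python process.
proc = subprocess.Popen(
    ["bash", "-c", "echo child running"],
    start_new_session=True,
)
proc.wait()
```

If `foo.bash` genuinely needs an interactive shell (aliases, ~/.bashrc), only the `start_new_session=True` part applies.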
subprocess may throw an exception. Have you tried wrapping your code in a try/except block to handle the error?
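For completeness, a sketch of that suggestion. Note that Popen raises only when the child cannot be started at all; a child that dies later is reported through the exit code, and a *stopped* job raises nothing, so this may not cure the suspension itself:

```python
import subprocess

try:
    proc = subprocess.Popen(["bash", "-c", "exit 3"])
    returncode = proc.wait()   # a dying child shows up here, not as an exception
except OSError as exc:         # raised only if bash itself cannot be launched
    returncode = None
    print(f"failed to start subprocess: {exc}")
```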
Edit: I think this is a VS Code issue, as when the script is run from Sublime Text or PowerShell the started process does indeed remain running.
I have written a Python library that includes a function to launch an independent process. This is essentially the call (the partial is invoked in the surrounding function):
return partial(
    Popen,
    args=args,
    creationflags=DETACHED_PROCESS | CREATE_NEW_PROCESS_GROUP,
    close_fds=True,
)
Launching works fine, but when the process that launched the detached process exits, the launched process quits too.
How do I launch a process that does not exit when its parent process exits?
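For reference, a self-contained sketch of that factory. The flag values and the `make_launcher` name are illustrative, not from the question; on POSIX, `creationflags` must stay 0, so this path is Windows-only. It is also commonly necessary to redirect stdin/stdout/stderr (e.g. to `subprocess.DEVNULL`) so the child holds no inherited console handles tying it to the parent:

```python
from functools import partial
from subprocess import Popen

# Windows CreateProcess flag values, defined here so the sketch runs
# anywhere; on Windows, subprocess.DETACHED_PROCESS and
# subprocess.CREATE_NEW_PROCESS_GROUP provide the same constants.
DETACHED_PROCESS = 0x00000008
CREATE_NEW_PROCESS_GROUP = 0x00000200

def make_launcher(args):
    # Hypothetical name for the surrounding function from the question;
    # calling the returned partial starts the detached process.
    return partial(
        Popen,
        args,
        creationflags=DETACHED_PROCESS | CREATE_NEW_PROCESS_GROUP,
        close_fds=True,
    )
```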
I currently have a Python program that calls a MATLAB script as batch like so:
matlab = QProcess()
matlab.start('matlab -noFigureWindows -batch "cd(users/script_directory/); MyScript.m;"')
#^ command to start MATLAB batch process in CMD
The issue I'm running into is that once this batch process starts, there's no way to kill it. So if my Python app gets force-closed, the MATLAB script keeps running and causes all sorts of issues, meaning I need to kill the process on app close.
I'm calling the MATLAB script as a QProcess and get the following message when I force-close the Python app before the MATLAB script finishes executing:
QProcess: Destroyed while process ("matlab") is still running.
With this, how do I stop the batch MATLAB process? Pressing Ctrl-C in CMD sometimes kills the process, but I need something consistent for the Python side to work correctly.
Similarly, can I just have it 'force quit' or 'restart' batch MATLAB or anything along those lines to clear all running processes?
A brute-force way to kill it would be to kill any running MATLAB process via psutil (the process and system utilities library) at the start of your application:
import psutil

for process in psutil.process_iter():
    if process.name().lower() == 'matlab.exe':
        process.terminate()
I'm writing a Python program that runs under Mac OS and Linux, and I want to run some logic in a multiprocessing.Process. That logic will take a while, and I want it to continue running even after my program is finished and has exited. i.e., I want the main process to not wait for the auxiliary process to finish. I want the main process to exit as soon as it's finished.
I ran a few experiments, and it seems this behavior is the default when using subprocess, but I can't make it happen with multiprocessing.Process, even when I run set_start_method('spawn').
Do you know of a way to get multiprocessing.Process to behave this way?
Looks like starting a new process and then calling os.fork from it does the trick.
I am writing a Python C extension that contains multiple C pthreads. Eventually these threads are sent a SIGTERM so that they exit. When I step through the extension in GDB, the threads exit successfully and I return to the Python interpreter, where I can continue to run commands. The same code also works when run interactively in the interpreter.
However, when I run a Python file containing similar behavior, the entire program terminates after the signal is sent to the child thread.
I am confused as to how the signal propagates from the threads up to the program itself; any guidance is appreciated.
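Without the extension's code this is only a guess, but the symptom matches default signal disposition: a signal whose action is the default (and SIGTERM's default is terminate) kills the entire process no matter which thread it is delivered to. One hedged sketch is to block SIGTERM in the Python main thread before the C threads are created; threads created afterwards inherit the mask, and the extension's threads can then accept the signal explicitly with sigwait/pthread_sigmask on the C side:

```python
import signal

# Block SIGTERM from Python's main thread; threads created afterwards
# (including pthreads started by a C extension) inherit this mask, so an
# unhandled SIGTERM can no longer terminate the whole process.
signal.pthread_sigmask(signal.SIG_BLOCK, {signal.SIGTERM})
```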
I have an issue with interrupting a subprocess.Popen. This is the setup:
I have a Tkinter GUI that runs another Python script using Popen. This inner script (let's call it running_script) runs a shell script that executes a piece of C++ code, also using Popen, so the hierarchical structure looks like this:
GUI
\running_script
\shell-script
\c++
running_script works so that if it receives an interrupt, it sends SIGINT to shell-script. If I run the shell script with my piece of C++ code from running_script alone and press Ctrl+C, everything works like a charm. However, if the GUI runs running_script via Popen, the SIGINT is sent to running_script, which receives it properly and forwards the interrupt to shell-script; but instead of the inner process (the C++ code) terminating, shell-script terminates itself and the C++ process keeps running, as if it had been started in the background, which it was not. When I execute ps -xaf the tree looks like this:
GUI
\running_script
\shell-script <defunct>
c++
So to reiterate: when I run it without the GUI it works like a charm, but with the GUI it behaves as described above. I've tried sending shell-script a SIGTERM instead of SIGINT as well; the result is the same.
You could catch SIGINT in the shell script and make it send SIGINT to the C++ program, e.g.:
#!/bin/bash
./cpp_program &
procpid=$!
killit() {
    kill -SIGINT "$procpid"
}
trap killit SIGINT
......