I create a subprocess using subprocess.Popen() that runs for a long time. It is called from its own thread, and the thread is blocked until the subprocess completes/returns.
I want to be able to interrupt the subprocess so the process terminates when I want.
Any ideas?
I think you're looking for the Popen.terminate() or Popen.kill() methods. They were added in Python 2.6.
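For example, here is a minimal sketch (the command name is a placeholder): one thread blocks in wait() while the main thread calls terminate() on the same Popen object whenever it wants to interrupt the child.

import subprocess
import threading

proc = subprocess.Popen(["some-long-command"])  # placeholder command

def worker():
    proc.wait()  # this thread blocks until the child exits
    print("child finished with return code", proc.returncode)

t = threading.Thread(target=worker)
t.start()

# ... later, from the main thread, interrupt the child at any time:
proc.terminate()  # SIGTERM on POSIX, TerminateProcess on Windows
t.join()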
Related
If subprocess.call is invoked N times, I wonder whether N subprocesses will be created.
And when will the subprocess close? Should I kill it manually?
What about subprocess.Popen?
subprocess.call creates a new process every time it is called and waits for the command to complete. Because it waits for the command to finish, there is no real parallel processing.
subprocess.Popen also creates a process every time it gets called, but does not wait for the command to complete. You can use communicate to wait for the command to complete or call kill.
To sum it up: with subprocess.Popen you have more control over the process you created.
This is described in the Python documentation.
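A minimal sketch of the difference, using sleep as a stand-in command (POSIX):

import subprocess

# subprocess.call blocks until the command finishes and returns its exit code
rc = subprocess.call(["sleep", "2"])  # nothing below runs for 2 seconds
print("call returned", rc)

# subprocess.Popen returns immediately; the child runs in parallel
p = subprocess.Popen(["sleep", "2"])
print("Popen returned right away; child still running:", p.poll() is None)
p.wait()  # wait for it explicitly (or p.kill() to stop it early)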
Yes, a new process is spawned every time you call subprocess.call() or any of its relatives, including Popen(). Normally you do not need to kill the subprocesses explicitly; you just wait for them to exit.
I'm new to multiprocessing in Python. Consider the following function:
def do_something_parallel(self):
    result_operation1 = doit.main(A, B)
    do_something_else(C)
The point is that I want doit.main to run in another process and be non-blocking, so that the code in do_something_else runs immediately after the first call has been launched in the other process.
How can I do this using the Python subprocess module?
Is there a difference between spawning a subprocess and creating a new process in some other way? Why would we need child processes of another process?
Note: I do not want to use a multithreaded approach here.
EDIT: I wondered whether using the subprocess module and the multiprocessing module in the same function is prohibited?
The reason I want this is that I have two things to run: first an exe file, and second a function; each needs its own process.
If you want to run Python code in a separate process, you could use the multiprocessing module:
import multiprocessing

if __name__ == "__main__":
    multiprocessing.Process(target=doit.main, args=[A, B]).start()
    do_something_else()  # this runs immediately without waiting for main() to return
I wondered whether using the subprocess module and the multiprocessing module in the same function is prohibited?
No. You can use both subprocess and multiprocessing in the same function (moreover, multiprocessing may use subprocess to start its worker processes internally).
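As a minimal sketch of using both together (the executable path is a placeholder, and py_work stands in for the question's doit.main):

import multiprocessing
import subprocess

def py_work(a, b):
    # stand-in for the Python function (doit.main in the question)
    return a + b

if __name__ == "__main__":
    worker = multiprocessing.Process(target=py_work, args=(1, 2))
    worker.start()                           # Python function in its own process
    exe = subprocess.Popen(["program.exe"])  # external executable; path is a placeholder
    # ... both children now run in parallel with this process ...
    worker.join()
    exe.wait()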
The reason I want this is that I have two things to run: first an exe file, and second a function; each needs its own process.
You don't need multiprocessing to run an external command without blocking (it obviously runs in its own process); subprocess.Popen() is enough:
import subprocess
p = subprocess.Popen(['command', 'arg 1', 'arg 2'])
do_something_else() # this runs immediately without waiting for command to exit
p.wait() # this waits for the command to finish
subprocess.Popen is definitely what you want if the "worker" process is an executable. Threading is what you need when you need things to happen asynchronously, and multiprocessing is what you need to take advantage of multiple cores for improved performance (although you will likely find yourself also using threads at the same time, to handle the asynchronous output of multiple parallel processes).
The main limitation of multiprocessing is passing information. When a new process is spawned, an entire separate instance of the Python interpreter is started with its own independent memory allocation. As a result, variables changed by one process won't be changed for other processes; for that functionality you need shared-memory objects (also provided by the multiprocessing module).
One implementation I have done was a parent process that started several worker processes and passed each of them an input queue and an output queue. The function given to the child processes was a loop that did some calculations on inputs pulled from the input queue and pushed the results to the output queue. I then designated a special input that a child would recognize as a signal to end the loop and terminate; a sketch of this pattern follows.
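Here is a hedged sketch of that queue-and-sentinel pattern (the worker function and the squaring computation are illustrative, not from any particular library):

import multiprocessing

SENTINEL = None  # special input telling a worker to stop

def worker(in_q, out_q):
    # pull inputs, compute, push results, until the sentinel arrives
    while True:
        item = in_q.get()
        if item is SENTINEL:
            break
        out_q.put(item * item)  # stand-in for the real calculation

if __name__ == "__main__":
    in_q, out_q = multiprocessing.Queue(), multiprocessing.Queue()
    procs = [multiprocessing.Process(target=worker, args=(in_q, out_q))
             for _ in range(4)]
    for p in procs:
        p.start()
    for n in range(10):
        in_q.put(n)
    for _ in procs:
        in_q.put(SENTINEL)  # one stop signal per worker
    for p in procs:
        p.join()
    print(sorted(out_q.get() for _ in range(10)))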
On your edit: Popen will start the other process in parallel, as will multiprocessing. If you need the child process to communicate with the executable, be sure to pass the file stream handles to the child process somehow.
My application creates subprocesses. Usually, these processes run and terminate without any problems. Sometimes, however, they crash.
I am currently using the python subprocess module to create these subprocesses. I check if a subprocess crashed by invoking the Popen.poll() method. Unfortunately, since my debugger is activated at the time of a crash, polling doesn't return the expected output.
I'd like to be able to see the debugging window (not terminate it) and still be able to detect in the Python code whether a process has crashed.
Is there a way to do this?
When your debugger opens, the process isn't finished yet, and subprocess only knows whether a process is running or finished. So no, there is no way to do this via subprocess.
I found a workaround for this problem. I used the solution given in another question: Can the "Application Error" dialog box be disabled?
Items of consideration:
subprocess.check_output() for your child processes' return codes
psutil for process & child analysis (and much more)
the threading library, to monitor these child states in your script concurrently, once you've decided how you want to handle the crashing (if desired)
import psutil

myprocess = psutil.Process(process_id)  # you can find your process id in various ways of your choosing
for child in myprocess.children():
    print("Status of child process is: {0}".format(child.status()))
You can also use the threading library to load your subprocess into a separate thread, and then perform the above psutil analyses concurrently with your other process.
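A rough sketch of that combination, assuming a placeholder command:

import subprocess
import threading
import time

import psutil

def watch(proc):
    # poll the child's state via psutil until it exits
    try:
        ps = psutil.Process(proc.pid)
        while proc.poll() is None:
            print("child status:", ps.status())
            time.sleep(1)
    except psutil.NoSuchProcess:
        pass  # the child disappeared between checks
    print("child exited with return code", proc.returncode)

proc = subprocess.Popen(["somecommand"])  # placeholder command
threading.Thread(target=watch, args=(proc,), daemon=True).start()
# ... the main thread is free to do other work here ...
proc.wait()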
If you find more, let me know; it's no coincidence I've found this post.
In Python, is it possible to do a non-blocking system call without forking off a thread? i.e., can I avoid:
import os
import thread  # Python 2's low-level thread module

thread.start_new_thread(os.system, ('cmd',))
Use the subprocess module (Popen) and have the result written to a file. You can either wait for the subprocess to terminate, or proceed with other business and poll for the result in the file.
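A minimal sketch of that approach (the command and filename are placeholders):

import subprocess

# start the command without blocking, sending its output to a file
with open("result.txt", "w") as out:
    p = subprocess.Popen(["somecommand", "arg"], stdout=out)

# ... do other work here ...

if p.poll() is None:  # poll() returns None while the child is still running
    print("still running")
p.wait()  # or keep polling until it finishes

with open("result.txt") as f:
    print(f.read())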
I am writing a Python program that launches a subprocess (using Popen).
I am reading stdout of the subprocess, doing some filtering, and writing to stdout of the main process.
When I kill the main process (Ctrl-C), the subprocess keeps running.
How do I kill the subprocess too? The subprocess is likely to run a long time.
Context:
I'm launching only one subprocess at a time, and I'm filtering its stdout.
The user might decide to interrupt to try something else.
I'm new to python and I'm using windows, so please be gentle.
Windows doesn't have POSIX signals, so you can't use the signal module the way you would on Unix. However, you can still catch the KeyboardInterrupt exception that is raised when Ctrl-C is pressed.
Something like this should get you going:
import subprocess

try:
    child = subprocess.Popen(blah)
    child.wait()
except KeyboardInterrupt:
    child.terminate()
subprocess.Popen objects come with a kill() and a terminate() method (they differ in which signal is sent to the process).
signal.signal allows you to install signal handlers, in which you can call the child's kill method.
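For example, a sketch that installs a SIGTERM handler to forward termination to the child (the command is a placeholder; on Windows, signal handling is limited, so the KeyboardInterrupt approach above is more portable):

import signal
import subprocess
import sys

child = subprocess.Popen(["some-long-command"])  # placeholder command

def handler(signum, frame):
    child.terminate()  # or child.kill() for the harsher signal
    sys.exit(1)

signal.signal(signal.SIGTERM, handler)  # POSIX; limited on Windows
child.wait()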
You can use the Python atexit module.
For example:
import atexit

def killSubprocess():
    mySubprocess.kill()

atexit.register(killSubprocess)