I have a Python script that runs another Python program and then gathers results from the logs. The only problem is that I want it to run for a limited number of seconds, so I want to kill the process after, say, one minute.
How can I do this?
I'm running an external program with the command os.system("./test.py")
You need more control over your child process than os.system allows. The subprocess module, especially Popen and Popen objects, gives you enough control for managing child processes. For a timer, see the threading module in the standard library (for example, threading.Timer).
Check out the psutil module. It provides a cross-platform interface to retrieving information on all running processes, and allows you to kill processes also. (It can do more, but that's all you should need!)
Here's the basic idea of how you could use it:
import subprocess
import time

import psutil

# Launch './test.py' with Popen (instead of os.system, which blocks until
# the child exits) so its PID is available right away. psutil also has
# helpers (e.g. psutil.process_iter()) for finding a PID by name.
proc = subprocess.Popen(['./test.py'])
pid = proc.pid

time.sleep(60)

p = psutil.Process(pid)
p.kill()
#!/usr/bin/env python
import os
import subprocess
import time

process = subprocess.Popen(['yes'], stdout=open(os.devnull, 'w'))
time.sleep(60)
process.terminate()
I have a web server that needs to manage a separate multi-process subprocess (i.e. starting it and killing it).
For Unix-based systems, the following works fine:
# save the pid as `pid`
ps = subprocess.Popen(cmd, preexec_fn=os.setsid)
# elsewhere:
os.killpg(os.getpgid(pid), signal.SIGTERM)
I'm doing it this way (with os.setsid) because otherwise killing the process group would also kill the web server.
On Windows these os functions are not available -- so if I wanted to accomplish something similar on Windows, how would I do this?
I'm using Python 3.5.
This answer was provided by eryksun in a comment. I'm putting it here to highlight it for anyone else who runs into this problem.
Here is what he said:
You can create a new process group via ps = subprocess.Popen(cmd, creationflags=subprocess.CREATE_NEW_PROCESS_GROUP). The group ID is the process ID of the lead process. That said, it's only useful for processes in the tree that are attached to the same console (conhost.exe instance) as your process, if your process even has a console. In this case, you can send the group a Ctrl+Break via ps.send_signal(signal.CTRL_BREAK_EVENT). Processes shouldn't ignore Ctrl+Break. They should either exit gracefully or let the default handler execute, which calls ExitProcess(STATUS_CONTROL_C_EXIT)
I tried it like this and it succeeded:
process = Popen(args=shlex.split(command), shell=shell, cwd=cwd, stdout=PIPE, stderr=PIPE,
                creationflags=subprocess.CREATE_NEW_PROCESS_GROUP)
# ...
process.send_signal(signal.CTRL_BREAK_EVENT)
process.kill()
OS: Jessie
Python: 2.7
I want to use psutil to terminate my script I am currently executing. My problem is that I would like to kill it with the ID but I don't know how to get the pid of my script.
I know I can terminate processes by name, but I think that's not a pretty solution.
Has anyone an idea how to make this work?
I have set up my Pi with the PiCamera, a GUI and some sensors. I am using the cv2 library and the problem is that the windows won't close.
Therefore, I googled how to close them but there wasn't any solution I could use. Killing the process is ok for me.
EDIT:
import psutil

def on_terminate(proc):
    print("process {} terminated with exit code {}".format(proc, proc.returncode))

procs = psutil.Process().children()
for p in procs:
    p.terminate()
gone, still_alive = psutil.wait_procs(procs, timeout=3, callback=on_terminate)
for p in still_alive:
    p.kill()
I found this snippet in the documentation. How can I make this work with PIDs?
os.getpid() combined with the answers to "How to terminate process from Python using pid?" was the solution.
Currently I can list my processes with a simple Python script:
import os
os.system("Tasklist")
I would like to list all the threads associated with those processes, if any. The count of threads per process might be sufficient.
Could someone point me to where I might find this information?
Thank you.
You can use the psutil module (download here) for cross-platform process information delivery.
After installing it, use the following code to print the thread count for every running process. Note that in current psutil releases, the name and thread count are accessed via the methods proc.name() and proc.num_threads(), and print needs parentheses on Python 3:
import psutil

for proc in psutil.process_iter():
    print(proc.name() + ' [' + str(proc.num_threads()) + ' threads]')
I am trying to start a java process that is meant to take a long time, using python's subprocess module.
What I am actually doing is using the multiprocessing module to start a new Process, and within that process, using the subprocess module to run java -jar.
This works, but when I start the new process, the java process seems to replace the Python process started by multiprocessing.Process. I would like java to run as a child process, so that when the process started via multiprocessing.Process dies, the process running java dies too.
Is this possible?
Thanks.
Edit: here's some code to clarify my question:
from multiprocessing import Process
from subprocess import Popen

def run_task():
    # The command must be a list of separate arguments, not one string.
    pargs = ["java", "-jar", "app.jar"]
    p = Popen(pargs)
    p.communicate()[0]
    return p

while True:
    a = a_blocking_call()
    process = Process(target=run_task)
    process.start()
    if not a:
        break
I want the process running run_task to be killed along with the process running java when the process executing the while loop reaches the break line. Is this possible?
I think you should show some code; it's not clear how you are using subprocess and multiprocessing together.
From the documentation, it looks like subprocess should spawn, not replace, your Process-started process. Are you sure that isn't happening? A test case demonstrating the replacement would be helpful.
You may get some hints out of Detach a subprocess started using python multiprocessing module
My application creates subprocesses. Usually, these processes run and terminate without any problems. However, sometimes they crash.
I am currently using the python subprocess module to create these subprocesses. I check if a subprocess crashed by invoking the Popen.poll() method. Unfortunately, since my debugger is activated at the time of a crash, polling doesn't return the expected output.
I'd like to be able to see the debugging window (not terminate it) and still be able to detect from the Python code whether a process has crashed.
Is there a way to do this?
When your debugger opens, the process isn't finished yet - and subprocess only knows if a process is running or finished. So no, there is not a way to do this via subprocess.
I found a workaround for this problem. I used the solution given in another question Can the "Application Error" dialog box be disabled?
Items of consideration:
- subprocess.check_output() for your child processes' return codes
- psutil for process & child analysis (and much more)
- the threading library, to monitor these child states in your script once you've decided how you want to handle the crashing, if desired
import psutil

# You can find your process ID in various ways of your choosing.
myprocess = psutil.Process(process_id)
for child in myprocess.children():
    print("Status of child process is: {0}".format(child.status()))
You can also use the threading library to load your subprocess into a separate thread, and then perform the above psutil analyses concurrently with your other process.
If you find out more, let me know; it's no coincidence that I found this post.