I want to execute an external program in each thread of a multi-threaded Python program.
Let's say the maximum running time is set to 1 second. If the started process completes within 1 second, the main program captures its output for further processing. If it doesn't finish in 1 second, the main program just terminates it and starts another new process.
How do I implement this?
You could poll it periodically:
import subprocess, time

s = subprocess.Popen(['foo', 'args'])
timeout = 1
poll_period = 0.1
s.poll()
while s.returncode is None and timeout > 0:
    time.sleep(poll_period)
    timeout -= poll_period
    s.poll()

if s.returncode is None:
    s.kill()  # timed out (checking returncode rather than the clock avoids
              # killing a process that finished on the very last poll)
else:
    pass  # completed
You can then just put the above in a function and start it as a thread.
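For instance, a minimal sketch of such a wrapper thread (run_one is a made-up name, and ['foo', 'args'] is the placeholder command from above):

import subprocess
import threading
import time

def run_one(cmd, timeout=1.0, poll_period=0.1):
    # Start the child, then poll until it exits or the time budget runs out.
    proc = subprocess.Popen(cmd)
    while proc.poll() is None and timeout > 0:
        time.sleep(poll_period)
        timeout -= poll_period
    if proc.poll() is None:
        proc.kill()  # timed out

worker = threading.Thread(target=run_one, args=(['foo', 'args'],))
worker.start()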
This is the helper function I use:
def run_with_timeout(command, timeout):
    import errno
    import subprocess
    import time
    p = subprocess.Popen(command, shell=False, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    while timeout > 0:
        if p.poll() is not None:
            return p.communicate()
        time.sleep(0.1)
        timeout -= 0.1
    else:
        # while/else: we only fall through here when the timeout expires
        try:
            p.kill()
        except OSError as e:
            if e.errno != errno.ESRCH:  # ignore "no such process"
                raise
        return (None, None)
A nasty hack on Linux is to use the timeout program to run the command; see the sketch below. You may opt for a nicer all-Python solution, however.
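For illustration, a sketch of that hack, assuming GNU coreutils timeout(1) is on the PATH (foo is the placeholder command from above):

import subprocess

# timeout(1) runs the command and kills it after 1 second;
# it exits with status 124 when the time limit was reached.
proc = subprocess.Popen(['timeout', '1', 'foo', 'args'])
if proc.wait() == 124:
    print('timed out')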
Here is a solution using the pexpect module (I needed to capture the program's output before it ran into the timeout, which I did not manage to do with subprocess.Popen):

import pexpect

timeout = ...  # timeout in seconds
proc = pexpect.spawn('foo', ['args'], timeout=timeout)
result = proc.expect([pexpect.EOF, pexpect.TIMEOUT])
if result == 0:
    # program terminated by itself
    ...
else:
    # result is 1 here, we ran into the timeout
    ...
print "program's output:", proc.before
Is there any argument or option to set up a timeout for Python's subprocess.Popen method?
Something like this:

subprocess.Popen(['..'], ..., timeout=20) ?
I would advise taking a look at the Timer class in the threading module. I used it to implement a timeout for a Popen.
First, create a callback:
def timeout(p):
    if p.poll() is None:
        print 'Error: process taking too long to complete--terminating'
        p.kill()
Then open the process:
proc = Popen(...)

Then create a timer that will call the callback, passing the process to it:

t = threading.Timer(10.0, timeout, [proc])
t.start()

(Don't join the timer here; that would block the main thread for the full 10 seconds even when the process finishes early.)
Somewhere later in the program, you may want to add the line:
t.cancel()
Otherwise, the Python program will keep running until the timer has finished running.
EDIT: I was advised that there is a race condition: the subprocess p may terminate between the p.poll() and p.kill() calls. I believe the following code can fix that:

import errno

def timeout(p):
    if p.poll() is None:
        try:
            p.kill()
            print 'Error: process taking too long to complete--terminating'
        except OSError as e:
            if e.errno != errno.ESRCH:
                raise
Though you may want to clean the exception handling to specifically handle just the particular exception that occurs when the subprocess has already terminated normally.
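Putting the pieces together, a minimal end-to-end sketch of this approach (['sleep', '20'] is just a stand-in command):

import errno
import subprocess
import threading

def timeout(p):
    if p.poll() is None:
        try:
            p.kill()
        except OSError as e:
            if e.errno != errno.ESRCH:
                raise

proc = subprocess.Popen(['sleep', '20'])
t = threading.Timer(10.0, timeout, [proc])
t.start()
proc.wait()  # returns early if the timer kills the process
t.cancel()   # stop the timer if the process finished on its own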
subprocess.Popen doesn't block so you can do something like this:
import subprocess
import time

p = subprocess.Popen(['...'])
time.sleep(20)
if p.poll() is None:
    p.kill()
    print 'timed out'
else:
    print p.communicate()
It has a drawback in that you must always wait at least 20 seconds for it to finish.
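On Python 3.3+ you can avoid the fixed sleep with Popen.wait(timeout=...), which returns as soon as the process exits. A sketch, keeping the placeholder command from above:

import subprocess

p = subprocess.Popen(['...'])  # placeholder command
try:
    p.wait(timeout=20)  # returns as soon as the process exits
    print(p.communicate())
except subprocess.TimeoutExpired:
    p.kill()
    print('timed out')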
import subprocess, threading

class Command(object):
    def __init__(self, cmd):
        self.cmd = cmd
        self.process = None

    def run(self, timeout):
        def target():
            print 'Thread started'
            self.process = subprocess.Popen(self.cmd, shell=True)
            self.process.communicate()
            print 'Thread finished'

        thread = threading.Thread(target=target)
        thread.start()
        thread.join(timeout)
        if thread.is_alive():
            print 'Terminating process'
            self.process.terminate()
            thread.join()
        print self.process.returncode

command = Command("echo 'Process started'; sleep 2; echo 'Process finished'")
command.run(timeout=3)
command.run(timeout=1)
The output of this should be:
Thread started
Process started
Process finished
Thread finished
0
Thread started
Process started
Terminating process
Thread finished
-15
where it can be seen that, in the first execution, the process finished correctly (return code 0), while in the second one the process was terminated (return code -15).
I haven't tested on Windows; but, aside from updating the example command, I think it should work, since I haven't found anything in the documentation that says thread.join or process.terminate is not supported.
You could do
from twisted.internet import reactor, protocol, error, defer

class DyingProcessProtocol(protocol.ProcessProtocol):
    def __init__(self, timeout):
        self.timeout = timeout

    def connectionMade(self):
        @defer.inlineCallbacks
        def killIfAlive():
            try:
                yield self.transport.signalProcess('KILL')
            except error.ProcessExitedAlready:
                pass

        d = reactor.callLater(self.timeout, killIfAlive)

reactor.spawnProcess(DyingProcessProtocol(20), ...)
using Twisted's asynchronous process API.
A Python subprocess auto-timeout is not built in, so you're going to have to build your own.
This works for me on Ubuntu 12.10 running Python 2.7.3.
Put this in a file called test.py:
#!/usr/bin/python
import subprocess
import threading

class RunMyCmd(threading.Thread):
    def __init__(self, cmd, timeout):
        threading.Thread.__init__(self)
        self.cmd = cmd
        self.timeout = timeout

    def run(self):
        self.p = subprocess.Popen(self.cmd)
        self.p.wait()

    def run_the_process(self):
        self.start()
        self.join(self.timeout)
        if self.is_alive():
            self.p.terminate()  # if your process needs a kill -9 to make
                                # it go away, use self.p.kill() here instead
            self.join()

RunMyCmd(["sleep", "20"], 3).run_the_process()
Save it, and run it:
python test.py
The sleep 20 command takes 20 seconds to complete. If it doesn't terminate in 3 seconds (it won't), the process is terminated.

el@apollo:~$ python test.py
el@apollo:~$

There are three seconds between when the process is run and when it is terminated.
As of Python 3.3, there is also a timeout argument to the blocking helper functions in the subprocess module.
https://docs.python.org/3/library/subprocess.html
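For example, subprocess.run (added in Python 3.5) accepts the same timeout argument and raises subprocess.TimeoutExpired when it is exceeded; check_call and check_output accept it as well:

import subprocess

try:
    result = subprocess.run(['sleep', '20'], stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE, timeout=1)
    print(result.stdout)
except subprocess.TimeoutExpired:
    # run() kills the child and waits for it before re-raising
    print('process killed after 1 second')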
Unfortunately, there isn't such a solution built in. I managed to do this using a threaded timer that launched along with the process and killed it after the timeout, but I ran into some stale file descriptor issues because of zombie processes or some such.
No, there is no timeout. I guess what you are looking for is to kill the subprocess after some time. Since you are able to signal the subprocess, you should be able to kill it too.

A generic approach to sending a signal to a subprocess:

import os
import signal
import subprocess
import sys
import time

proc = subprocess.Popen([command])
time.sleep(1)
print 'signaling child'
sys.stdout.flush()
os.kill(proc.pid, signal.SIGUSR1)
You could use this mechanism to terminate the process after a timeout period.
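For example, a sketch of that mechanism as a timeout, polling and then signaling once the deadline passes (some_command is a placeholder):

import os
import signal
import subprocess
import time

proc = subprocess.Popen(['some_command'])  # placeholder command
deadline = time.time() + 10
while proc.poll() is None and time.time() < deadline:
    time.sleep(0.1)
if proc.poll() is None:
    os.kill(proc.pid, signal.SIGTERM)  # or SIGKILL if it ignores SIGTERM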
Yes, https://pypi.python.org/pypi/python-subprocess2 will extend the Popen module with two additional functions:

Popen.waitUpTo(timeout=seconds)

This will wait up to a certain number of seconds for the process to complete, otherwise return None.

Also:

Popen.waitOrTerminate

This will wait up to a point, and then call .terminate(), then .kill(), one or the other or some combination of both; see the docs for full details:
http://htmlpreview.github.io/?https://github.com/kata198/python-subprocess2/blob/master/doc/subprocess2.html
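Going only by the names listed above, usage presumably looks something like this; consult the linked documentation for the actual import path and signatures:

from subprocess2 import Popen  # import path is a guess; see the package docs

p = Popen(['sleep', '20'])
ret = p.waitUpTo(timeout=5)  # return code, or None if still running after 5s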
For Linux, you can use a signal. This is platform dependent, so another solution is required for Windows. It may work on a Mac, though.
def launch_cmd(cmd, timeout=0):
    '''Launch an external command

    It launches the program, redirecting the program's STDOUT
    to a communication pipe, and appends those responses to
    a list. Waits for the program to exit, then returns the
    output lines.

    Args:
        cmd: command line of the external program to launch
        timeout: time to wait for the command to complete, 0 for indefinitely
    Returns:
        A list of the response lines from the program
    '''
    import subprocess
    import signal

    class Alarm(Exception):
        pass

    def alarm_handler(signum, frame):
        raise Alarm

    lines = []
    if not launch_cmd.init:
        launch_cmd.init = True
        signal.signal(signal.SIGALRM, alarm_handler)
    p = subprocess.Popen(cmd, stdout=subprocess.PIPE)
    signal.alarm(timeout)  # timeout sec
    try:
        for line in p.stdout:
            lines.append(line.rstrip())
        p.wait()
        signal.alarm(0)  # disable alarm
    except Alarm:
        print "launch_cmd taking too long!"
        p.kill()
    return lines
launch_cmd.init = False
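Usage, for example (the ping command is just an illustration):

# Kill ping if it takes more than 2 seconds; collect whatever it printed.
for line in launch_cmd(['ping', '-c', '5', 'google.com'], timeout=2):
    print line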
I want to use subprocess.Popen to run a process, with the following requirements.
1. I want to pipe the stdout and stderr back to the caller of Popen as the process runs.
2. I want to kill the process after timeout seconds if it is still running.
I have come to the conclusion that a flaw in the subprocess API means it cannot fulfill these two requirements at the same time. Consider the following toy programs:
chatty.py
while True:
    print 'Hi'
silence.py
while True:
    pass
caller.py
import subprocess
import time

def go(command, timeout=60):
    proc = subprocess.Popen(command, shell=True,
                            stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    start = time.time()
    while proc.poll() is None:
        print proc.stdout.read(1024)  # <----- Line of interest
        if time.time() - start >= timeout:
            proc.kill()
            break
        else:
            time.sleep(1)
Consider the marked line above.
If it is included, go('python silence.py') will hang forever, not just for 60 seconds, because read is a blocking call that returns only once 1024 bytes or end of stream arrive, and neither ever does.
If it is commented out, go('python chatty.py') will print 'Hi' over and over, but how can the output be streamed back to the caller as it is generated? proc.communicate() blocks until end of stream.
I would be happy with a solution that replaces requirement (1) above with "In the case where a timeout did not occur, I want to get stdout and stderr once the algorithm finishes." Even this has been problematic. My implementation attempt is below.
speech.py
for i in xrange(0, 10000):
    print 'Hi'
caller2.py
import subprocess
import time

def go2(command, timeout=60):
    proc = subprocess.Popen(command, shell=True,
                            stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    start = time.time()
    while True:
        if proc.poll() is not None:
            print proc.communicate()
            break
        elif time.time() - start >= timeout:
            proc.kill()
            break
        else:
            time.sleep(1)
But even this still has problems. Even though python speech.py runs in just a couple of seconds, go2('python speech.py') takes the full 60 seconds. This is because the OS pipe buffer fills up: the print 'Hi' calls in speech.py block once the buffer is full, and nothing drains it until proc.communicate() is called after the process is killed. And since proc.stdout.read has the problem demonstrated before with silence.py, I'm really at a loss for how to get this working.
How can I get both the stdout and stderr and the timeout behavior?
The trick is to set up a side-band timer to kill the process. I wrote up a program halfway between chatty and silent:
import time
import sys

for i in range(10, 0, -1):
    print i
    time.sleep(1)
And then a program to kill it early:
import subprocess as subp
import threading
import signal

proc = subp.Popen(['python', 'longtime.py'], stdout=subp.PIPE,
                  stderr=subp.PIPE)
timer = threading.Timer(3, lambda proc: proc.send_signal(signal.SIGINT),
                        args=(proc,))
timer.start()
out, err = proc.communicate()
timer.cancel()
print proc.returncode
print out
print err
and its output was:
$ python killer.py
1
10
9
8
Traceback (most recent call last):
File "longtime.py", line 6, in <module>
time.sleep(1)
KeyboardInterrupt
Your timer could be made fancier, like trying increasingly harsh signals until the process completes, but you get the idea.
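For instance, an escalating callback might look like this (a sketch only; escalate is a made-up name):

import signal
import time

def escalate(proc):
    # Try progressively harsher signals until the process exits.
    for sig in (signal.SIGINT, signal.SIGTERM, signal.SIGKILL):
        if proc.poll() is not None:
            return
        proc.send_signal(sig)
        time.sleep(1)  # give the process a moment to shut down

It could then be passed to threading.Timer in place of the lambda above.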
I am trying to make a simple program that starts a child process which writes a string to a pipe while the parent process counts until it gets the string from the pipe. My problem, however, is that when the program runs, it either doesn't count or doesn't stop counting. I want to know how I can check whether the child process is still running and, depending on that, break out of the counting loop.
import os, time

pipein, pipeout = os.pipe()

def child(input, pipeout):
    time.sleep(2)
    msg = ('child got this %s' % input).encode()
    os.write(pipeout, msg)

input = input()
pid = os.fork()

if pid:
    i = 0
    while True:
        print(i)
        time.sleep(1)
        i += 1
        try:
            os.kill(pid, 0)
        except OSError:
            break
    line = os.read(pipein, 32)
    print(line)
else:
    child(input, pipeout)
You should use the subprocess module, and then you can call poll()
Use popen.poll(), explained here:

if popen.poll() is not None:
    # child process has terminated
[edit]:
"The only way to control the input and output streams and also retrieve the return codes is to use the subprocess module; these are only available on Unix."
Source
I want to use subprocess to run a program and I need to limit the execution time. For example, I want to kill it if it runs for more than 2 seconds.
For common programs, kill() works well. But if I try to run /usr/bin/time something, kill() can't really kill the program.
My code below doesn't seem to work well; the program keeps running.
import subprocess
import time

exec_proc = subprocess.Popen("/usr/bin/time -f \"%e\\n%M\" ./son > /dev/null",
                             stdout=subprocess.PIPE, stderr=subprocess.STDOUT,
                             shell=True)
max_time = 1
cur_time = 0.0
return_code = 0
while cur_time <= max_time:
    if exec_proc.poll() is not None:
        return_code = exec_proc.poll()
        break
    time.sleep(0.1)
    cur_time += 0.1
if cur_time > max_time:
    exec_proc.kill()
If you're using Python 2.6 or later, you can use the multiprocessing module.
from multiprocessing import Process

def f():
    # Stuff to run your process here
    pass

p = Process(target=f)
p.start()
p.join(timeout)
if p.is_alive():
    p.terminate()
Actually, multiprocessing is the wrong module for this task, since it only controls how long the worker process runs; you have no control over any children that process may spawn. As singularity suggests, using signal.alarm is the normal approach.
import signal
import subprocess

def handle_alarm(signum, frame):
    # If the alarm is triggered, we're still in the exec_proc.communicate()
    # call, so use exec_proc.kill() to end the process.
    frame.f_locals['self'].kill()

max_time = ...
stdout = stderr = None

signal.signal(signal.SIGALRM, handle_alarm)
exec_proc = subprocess.Popen(['time', 'ping', '-c', '5', 'google.com'],
                             stdin=None, stdout=subprocess.PIPE,
                             stderr=subprocess.STDOUT)
signal.alarm(max_time)
try:
    (stdout, stderr) = exec_proc.communicate()
except IOError:
    # process was killed due to exceeding the alarm
    pass
finally:
    signal.alarm(0)

# do stuff with stdout/stderr if they're not None
Do it like so on your command line:

perl -e 'alarm shift @ARGV; exec @ARGV' <timeout> <your_command>

This will run the command <your_command> and terminate it after <timeout> seconds.

A dummy example:

# set the timeout to 5, so that the command will be killed after 5 seconds
command = ['perl', '-e', 'alarm shift @ARGV; exec @ARGV', '5']
command += ['ping', 'www.google.com']
exec_proc = subprocess.Popen(command)

Or you can use signal.alarm() if you want it in Python, but it's the same idea.
I use os.kill(), but am not sure if it works on all OSes.
Pseudo code follows; see Doug Hellmann's page.

import os
import signal
import subprocess

proc = subprocess.Popen(['google-chrome'])
os.kill(proc.pid, signal.SIGUSR1)
I'm writing some code for testing multithreaded programs (student homework--likely buggy), and want to be able to detect when they deadlock. When running properly, the programs regularly produce output to stdout, so that makes it fairly straightforward: if no output for X seconds, kill it and report deadlock. Here's the function prototype:
def run_with_watchdog(command, timeout):
"""Run shell command, watching for output. If the program doesn't
produce any output for <timeout> seconds, kill it and return 1.
If the program ends successfully, return 0."""
I can write it myself, but it's a bit tricky to get right, so I would prefer to use existing code if possible. Anyone written something similar?
Ok, see solution below. The subprocess module might also be relevant if you're doing something similar.
You can use expect (Tcl) or pexpect (Python) to do this:

import pexpect

c = pexpect.spawn('your_command')
c.expect('expected_output_regular_expression', timeout=10)
Here's a very slightly tested, but seemingly working, solution:
import sys
import time
import pexpect
# From http://pypi.python.org/pypi/pexpect/

DEADLOCK = 1

def run_with_watchdog(shell_command, timeout):
    """Run <shell_command>, watching for output, and echoing it to stdout.
    If the program doesn't produce any output for <timeout> seconds,
    kill it and return 1. If the program ends successfully, return 0.
    Note: Assumes timeout is >> 1 second."""
    child = pexpect.spawn('/bin/bash', ["-c", shell_command])
    child.logfile_read = sys.stdout
    while True:
        try:
            child.read_nonblocking(1000, timeout)
        except pexpect.TIMEOUT:
            # Child seems deadlocked. Kill it, return 1.
            child.close(True)
            return DEADLOCK
        except pexpect.EOF:
            # Reached EOF, means child finished properly.
            return 0
        # Don't spin continuously.
        time.sleep(1)

if __name__ == "__main__":
    print "Running with timer..."
    ret = run_with_watchdog("./test-program < trace3.txt", 10)
    if ret == DEADLOCK:
        print "DEADLOCK!"
    else:
        print "Finished normally"
Another solution:
from threading import Timer

class Watchdog(Exception):
    def __init__(self, timeout, userHandler=None):  # timeout in seconds
        self.timeout = timeout
        self.handler = userHandler if userHandler is not None else self.defaultHandler
        self.timer = Timer(self.timeout, self.handler)
        self.timer.start()

    def reset(self):
        self.timer.cancel()
        self.timer = Timer(self.timeout, self.handler)
        self.timer.start()

    def stop(self):
        self.timer.cancel()

    def defaultHandler(self):
        raise self
Usage if you want to make sure a function finishes in less than x seconds:

watchdog = Watchdog(x)
try:
    ... do something that might hang ...
except Watchdog:
    ... handle watchdog error ...
watchdog.stop()
Usage if you regularly execute something and want to make sure it is executed at least every y seconds:
def myHandler():
    print "Watchdog expired"

watchdog = Watchdog(y, myHandler)

def doSomethingRegularly():
    ...
    watchdog.reset()
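For instance, the periodic pattern in action (poll_sensors is a hypothetical stand-in for the real work):

watchdog = Watchdog(5, myHandler)  # complain if a cycle takes over 5 seconds
while True:
    poll_sensors()    # hypothetical periodic work
    watchdog.reset()  # made it in time; restart the countdown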