Subprocess timeout failure - python

I want to use a timeout on a subprocess
from subprocess32 import check_output
output = check_output("sleep 30", shell=True, timeout=1)
Unfortunately, whilst this raises a timeout error, it does so after 30 seconds. It seems that check_output cannot interrupt the shell command.
What can I do on the Python side to stop this?
I suspect that subprocess32 fails to kill the timed out process.

check_output() with timeout is essentially:
with Popen(*popenargs, stdout=PIPE, **kwargs) as process:
    try:
        output, unused_err = process.communicate(inputdata, timeout=timeout)
    except TimeoutExpired:
        process.kill()
        output, unused_err = process.communicate()
        raise TimeoutExpired(process.args, timeout, output=output)
There are two issues:
process.kill() might not kill the whole process tree, see How to terminate a python subprocess launched with shell=True
.communicate() may wait for descendant processes, not just for the immediate child, see Python subprocess .check_call vs .check_output
Together, these lead to the behaviour that you observed: the TimeoutExpired is raised after a second and the shell is killed, but check_output() returns only after 30 seconds, once the grandchild sleep process exits.
To work around both issues, kill the whole process tree (all subprocesses that belong to the same process group):
#!/usr/bin/env python3
import os
import signal
from subprocess import Popen, PIPE, TimeoutExpired
from time import monotonic as timer

start = timer()
with Popen('sleep 30', shell=True, stdout=PIPE, preexec_fn=os.setsid) as process:
    try:
        output = process.communicate(timeout=1)[0]
    except TimeoutExpired:
        os.killpg(process.pid, signal.SIGINT)  # send signal to the process group
        output = process.communicate()[0]
print('Elapsed seconds: {:.2f}'.format(timer() - start))
Output
Elapsed seconds: 1.00
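On Python 3.2+, an equivalent spelling of the same workaround (my variant, not from the answer above) uses Popen's start_new_session=True parameter, which performs the setsid() in the child for you:
#!/usr/bin/env python3
import os
import signal
from subprocess import Popen, PIPE, TimeoutExpired

# start_new_session=True runs setsid() in the child, so the shell and
# everything it spawns share one process group we can signal together
with Popen('sleep 30', shell=True, stdout=PIPE, start_new_session=True) as process:
    try:
        output = process.communicate(timeout=1)[0]
    except TimeoutExpired:
        os.killpg(os.getpgid(process.pid), signal.SIGTERM)  # signal the whole group
        output = process.communicate()[0]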

Update for Python 3.6.
This still happens, but I have tested many combinations of the check_output, communicate and run methods, and I now have a clear picture of where the bug is and how to avoid it easily on Python 3.5 and Python 3.6.
My conclusion: it happens when you mix shell=True with PIPE on any of the stdout, stderr or stdin parameters (used in the Popen and run methods).
Be careful: check_output uses PIPE internally.
If you look at the code on Python 3.6, check_output is basically a call to run with stdout=PIPE: https://github.com/python/cpython/blob/ae011e00189d9083dd84c357718264e24fe77314/Lib/subprocess.py#L335
So, to solve @innisfree's problem on Python 3.5 or 3.6, just do this:
check_output(['sleep', '30'], timeout=1)
And for other cases, just avoid mixing shell=True and PIPE, keeping in mind that check_output uses PIPE.
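If you cannot avoid shell=True, another common workaround (a standard shell trick, not from the answer above) is to prefix the command with exec, so the shell replaces itself with the target program and kill() hits the right process:
from subprocess import check_output, TimeoutExpired

try:
    # exec replaces the shell with sleep, so there is no orphaned
    # grandchild holding the pipe open after the kill
    output = check_output('exec sleep 30', shell=True, timeout=1)
except TimeoutExpired:
    print('timed out after about 1 second, as expected')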

Related

Python Subprocess.run() does it auto close on completion?

I apologize if this is a dumb question; I am not very fluent in Python yet.
In regards to the Python Subprocess function...
I've seen that when you use sp = subprocess.Popen(...) people close/terminate it when it's finished running the command. Example:
sp = subprocess.Popen(['powershell.exe', '-ExecutionPolicy', 'Unrestricted', 'cp', '-r', 'ui', f'..\\{name}'], stdout=subprocess.PIPE, stderr=subprocess.PIPE, cwd='UI Boiler')
sp.wait()
sp.terminate()
However, my question is, do you need to close any subprocess.run() functions? Or do those processes close automatically once they are finished running their commands?
The project I am working on requires a lot of those to be run and I do not wish to have 10+ shells/powershells/processes open because I didn't close them.
Yes, on both the Windows and POSIX implementations, subprocess.run() and subprocess.call() block until completion (e.g. via Popen.wait() internally). Since these are blocking calls, they will not return until the process completes, so you should not need to do anything special to close the processes.
To wit, here are the relevant snippets from the subprocess source in cpython-3.10 (amended for brevity):
def call(*popenargs, timeout=None, **kwargs):
    """..."""
    with Popen(*popenargs, **kwargs) as p:
        try:
            return p.wait(timeout=timeout)
        except:  # Including KeyboardInterrupt, wait handled that.
            p.kill()
            raise

# ...

def run(*popenargs,
        input=None, capture_output=False, timeout=None, check=False, **kwargs):
    """..."""
    # ...
    with Popen(*popenargs, **kwargs) as process:
        # communicate() (as well as the with statement) will both wait() internally
        try:
            stdout, stderr = process.communicate(input, timeout=timeout)
        except TimeoutExpired as exc:
            process.kill()
            # ... additional handling here
            raise
        except:  # Including KeyboardInterrupt, communicate handled that.
            process.kill()
            # We don't call process.wait() as .__exit__ does that for us.
            raise
        retcode = process.poll()
        if check and retcode:
            raise CalledProcessError(retcode, process.args,
                                     output=stdout, stderr=stderr)
        return CompletedProcess(process.args, retcode, stdout, stderr)
If, however, you want more control over if and when the subprocess blocks, e.g. so that you can run other code on the same thread while the other process is running, then you should use sp = subprocess.Popen() directly.
As to the call to terminate(): note that it would always be a no-op in your example, where you wait first and then terminate without a catch. The reason is that terminate() never even gets to send a TERM signal to your subprocess, because wait() is a blocking call that does not return until the child process completes or throws an exception (e.g. on timeout). So if you are calling subprocesses that might hang and you want them to run in the background, e.g. so that you can terminate them yourself if they haven't completed after a certain amount of time, you will need to manage the subprocess yourself, and subprocess.run() is probably not suitable for your needs.
A note on terminate(): subprocess.run and subprocess.call both properly send a kill to an erroring or timed-out process automatically, so if that is all you need, you can stick with one of those. In fact, on Windows, kill() and terminate() are identical. On POSIX, a SIGKILL is sent if the subprocess throws or times out.
If, on POSIX, you would rather send a SIGTERM, so that the subprocess gets the opportunity to terminate gracefully or clean up, then again it is best to interact with the process object directly via Popen.
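A minimal sketch of that pattern (my example; the command is hypothetical): wait with a timeout, send SIGTERM first, and fall back to SIGKILL only if the process ignores it:
from subprocess import Popen, TimeoutExpired

proc = Popen(['my_long_command'])  # hypothetical long-running command
try:
    proc.wait(timeout=120)
except TimeoutExpired:
    proc.terminate()  # SIGTERM on POSIX: give it a chance to clean up
    try:
        proc.wait(timeout=10)
    except TimeoutExpired:
        proc.kill()   # SIGKILL: it ignored SIGTERM
        proc.wait()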

Kill children of Python subprocess after subprocess.TimeoutExpired is thrown

I am calling a shell script from within Python that spawns multiple child processes. I want to terminate that process and all of its children if it has not finished after two minutes.
Is there any way I can do that with subprocess.run or do I have to go back to using Popen? Since run is blocking, I am not able to save the pid somewhere to kill the children in an extra command. A short code example:
try:
    subprocess.run(["my_shell_script"], stderr=subprocess.STDOUT, timeout=120)
except subprocess.TimeoutExpired:
    print("Timeout during execution")
This problem was reported as a bug to the Python developers. It seems to happen specifically when stderr or stdout is redirected.
Here is a more correct version of @Tanu's code.
import subprocess as sp

try:
    proc = sp.Popen(['ls', '-l'], stdout=sp.PIPE, stderr=sp.PIPE)
    outs, errs = proc.communicate(timeout=120)
except sp.TimeoutExpired:
    proc.terminate()
Popen doesn't accept timeout as a parameter; it must be passed to communicate.
On POSIX OSes, terminate is more gentle than kill, in that it reduces the risk of creating zombie processes.
Quoting from the docs:
subprocess.run - This does not capture stdout or stderr by default. To do so, pass PIPE for the stdout and/or stderr arguments.
You don't have to use Popen() if you don't want to; the module also provides higher-level functions, such as call() and run(), that wrap Popen().
There are three 'file' streams: stdin for input, and stdout and stderr for output. The application decides what to write where; usually error and diagnostic information goes to stderr and the rest to stdout. To capture either of these outputs, pass the subprocess.PIPE argument so that the 'stream' is redirected into your program.
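For example (a minimal illustration of capturing both streams; ls is just a stand-in command):
import subprocess

# redirect both output streams into our program via PIPE;
# they come back on the CompletedProcess object
result = subprocess.run(['ls', '-l'],
                        stdout=subprocess.PIPE,
                        stderr=subprocess.PIPE,
                        universal_newlines=True)
print('stdout:', result.stdout)
print('stderr:', result.stderr)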
To kill the child process after the timeout:
import os
import signal
import subprocess

proc = subprocess.Popen(["ls", "-l"], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
try:
    outs, errs = proc.communicate(timeout=120)
except subprocess.TimeoutExpired:
    os.kill(proc.pid, signal.SIGTERM)
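Note that os.kill(proc.pid, ...) only signals the immediate child, not the children your shell script spawned. To kill the whole tree, the process-group technique shown earlier on this page applies; a sketch (assuming POSIX, with my_shell_script standing in for the script from the question):
import os
import signal
import subprocess

# start_new_session=True puts the script and all its children into a new
# process group whose id equals the child's pid
proc = subprocess.Popen(["my_shell_script"], stderr=subprocess.STDOUT,
                        start_new_session=True)
try:
    proc.wait(timeout=120)
except subprocess.TimeoutExpired:
    os.killpg(os.getpgid(proc.pid), signal.SIGTERM)  # signal the whole group
    proc.wait()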


How to run a process with timeout and still get stdout at runtime

The need:
Timeout after X seconds, and kill the process (and all the processes it opened) if the timeout is reached before the process ends gracefully.
Read ongoing output at runtime.
Work with processes that produce output, ones that don't, and ones that produce output and then stop producing it (e.g. get stuck).
Run on Windows.
Run on Python 3.5.2.
Python 3's subprocess module has timeout built in, and I've also implemented timeouts myself using a timer and using threads, but neither works with the output. Is readline() blocking or not? readlines() definitely waits for the process to end before spitting out all the output, which is not what I need (I need it ongoing).
I am close to switching to node.js :-(
I would use asyncio for this kind of task.
Read IO from the process as in this accepted answer:
How to stream stdout/stderr from a child process using asyncio, and obtain its exit code after?
(I don't want to fully copy it here)
Wrap it in a timeout:
async def killer(trans, timeout):
    await asyncio.sleep(timeout)
    trans.kill()
    print('killed!!')

trans, *other_stuff = loop.run_until_complete(
    loop.subprocess_exec(
        SubprocessProtocol, 'py', '-3', '-c', 'import time; time.sleep(6); print("Yay!")',
    )
)
asyncio.ensure_future(killer(trans, 5))  # 5 second timeout for the kill
loop.run_forever()
Have fun ...
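On Python 3.5+, a more self-contained variant of the same idea (my sketch, not taken from the linked answer) uses asyncio.create_subprocess_exec plus asyncio.wait_for; here the timeout applies per read, which also catches processes that go silent mid-run:
import asyncio
import sys

async def run_with_timeout(timeout):
    # spawn a child that prints three lines, then hangs (simulating a stuck process)
    proc = await asyncio.create_subprocess_exec(
        sys.executable, '-u', '-c',
        'import time\nfor _ in range(3): print("tick"); time.sleep(0.5)\ntime.sleep(60)',
        stdout=asyncio.subprocess.PIPE)
    try:
        while True:
            # wait for each line, but never longer than the timeout
            line = await asyncio.wait_for(proc.stdout.readline(), timeout)
            if not line:
                break  # EOF: the process ended gracefully
            print('got:', line.decode().rstrip())
    except asyncio.TimeoutError:
        proc.kill()  # no output within the timeout window
    await proc.wait()

# on Windows with Python 3.5, subprocess support requires a ProactorEventLoop
asyncio.get_event_loop().run_until_complete(run_with_timeout(2))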
Use the two Python scripts below.
Master.py uses Popen to start a new process and starts a watcher thread that will kill the process after 3.0 seconds.
The slave must call the flush method if there is no newline in the data written to stdout (on Windows, '\n' also causes a flush).
Be careful: the time module is not a high-precision timer.
The load time of the process can be longer than 3.0 seconds in extreme cases (e.g. reading an executable from a flash drive over USB 1.0).
Master.py
import subprocess, threading, time

def watcher(proc, delay):
    time.sleep(delay)
    proc.kill()

proc = subprocess.Popen('python Slave.py', stdout=subprocess.PIPE)
threading.Thread(target=watcher, args=(proc, 3.0)).start()
data = bytearray()
while True:
    chunk = proc.stdout.read(1)
    if not chunk:
        break
    data.extend(chunk)
print(data)
Slave.py
import time, sys
while True:
time.sleep(0.1)
sys.stdout.write('aaaa')
sys.stdout.flush()
On Python 3.7+, use subprocess.run() with capture_output=True and timeout=<your_timeout>. If the command doesn't return before <your_timeout> seconds pass, it will kill the process and raise a subprocess.TimeoutExpired exception, which has .stdout and .stderr properties:
import subprocess

try:
    result = subprocess.run(["sleep", "3"], timeout=2, capture_output=True)
except subprocess.TimeoutExpired as e:
    print("process timed out")
    print(e.stdout)
    print(e.stderr)
You might also want to pass text=True (or universal_newlines=True on Python <3.7) so that stdout and stderr are strs instead of bytes.
On older versions of Python you need to replace capture_output=True with stdout=subprocess.PIPE, stderr=subprocess.PIPE, in your call to subprocess.run() and the rest should be the same.
Edit: this isn't what you wanted because you need to wait for the process to terminate to read the output, but this is what I wanted when I came upon this question.

is there a way to start/stop linux processes with python?

I want to be able to start a process and then be able to kill it afterwards
Here's a little Python script that starts a process, checks whether it is running, waits a while, kills it, waits for it to terminate, then checks again. It uses the 'kill' command; version 2.6 of the subprocess module has a kill function, but this was written on 2.5.
import subprocess
import time
proc = subprocess.Popen(["sleep", "60"], shell=False)
print 'poll =', proc.poll(), '("None" means process not terminated yet)'
time.sleep(3)
subprocess.call(["kill", "-9", "%d" % proc.pid])
proc.wait()
print 'poll =', proc.poll()
The timed output shows that it was terminated after about 3 seconds, and not 60 as the call to sleep suggests.
$ time python prockill.py
poll = None ("None" means process not terminated yet)
poll = -9
real 0m3.082s
user 0m0.055s
sys 0m0.029s
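On Python 2.6+ and 3.x, the external 'kill' command is unnecessary, since Popen has terminate() and kill() methods built in; a minimal modern (Python 3) equivalent:
import subprocess
import time

proc = subprocess.Popen(["sleep", "60"])
print('poll =', proc.poll())  # None means not terminated yet
time.sleep(3)
proc.kill()   # sends SIGKILL on POSIX; no external 'kill' needed
proc.wait()
print('poll =', proc.poll())  # -9 on POSIX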
Have a look at the subprocess module.
You can also use low-level primitives like fork() via the os module.
http://docs.python.org/library/os.html#process-management
A simple function that uses the subprocess module:
import subprocess

def CMD(cmd):
    p = subprocess.Popen(cmd, shell=True,
                         stdin=subprocess.PIPE,
                         stdout=subprocess.PIPE,
                         stderr=subprocess.PIPE,
                         close_fds=False)
    return (p.stdin, p.stdout, p.stderr)
See the docs for the primitive fork() and for the subprocess, multiprocessing and threading modules.
If you need to interact with the subprocess at all, I recommend the pexpect module. You can send input to the process, receive (or "expect") output in return, and you can close the process (with force=True to send SIGKILL).
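A small illustrative sketch of that interaction (my example; pexpect is POSIX-only):
import pexpect

# spawn an interactive child under a pseudo-terminal and drive it
child = pexpect.spawn('python3')
child.expect('>>> ')            # wait for the interpreter prompt
child.sendline('print(6 * 7)')  # send it a command
child.expect('42')              # wait for the expected output
child.close(force=True)         # force=True escalates to SIGKILL if needed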
