I would like to launch a process, let it run for some time, and then read its output (the last part of the output is fine; I don't need everything). I tried the following code:
import subprocess
import time

def connect(interface):
    proc = subprocess.Popen(['mycommand', interface], stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE, universal_newlines=True)
    time.sleep(10)
    proc.terminate()
    output, err = proc.communicate()
    print(output)
Unfortunately, every time it gets stuck when reading the output. I also tried proc.stdout.read() instead of communicate(), but it didn't solve the problem.
What's the best way to handle output in this case?
Many thanks in advance!
After some research, I found that the issue came from the pipe buffer. As the subprocess module documentation indicates:
This will deadlock when using stdout=PIPE or stderr=PIPE and the child
process generates enough output to a pipe such that it blocks waiting
for the OS pipe buffer to accept more data. Use Popen.communicate()
when using pipes to avoid that.
There are two solutions:
Use the bufsize argument to set the buffer to a size large enough to store all the output generated during the time you wait.
Use readline() to read the output and drain the buffer while you wait. If you don't need the output, just discard it.
I chose the second approach; my code is as follows:
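Since Python 3.3 there is also a third option: communicate() itself accepts a timeout argument, which covers the "run for a while, then collect output" case without manual buffer management. A sketch, where the run_limited helper name is made up and echo stands in for the real command:

```python
import subprocess

def run_limited(cmd, seconds):
    """Run cmd, give it at most `seconds` to finish, return captured stdout."""
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE, universal_newlines=True)
    try:
        # communicate() drains both pipes, so the child can't block on a full buffer
        output, err = proc.communicate(timeout=seconds)
    except subprocess.TimeoutExpired:
        proc.terminate()
        # a second communicate() collects whatever was written before termination
        output, err = proc.communicate()
    return output
```

On timeout the process is terminated and whatever output it produced so far is still collected, which matches the "last output is enough" requirement.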
import subprocess
import time

def connect(interface):
    stdout = []
    timeout = 60
    start_time = time.time()
    proc = subprocess.Popen(['mycommand', interface], stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE, bufsize=1,
                            universal_newlines=True)
    while time.time() < start_time + timeout:
        line = proc.stdout.readline()
        stdout.append(line)
    proc.terminate()
    print(stdout)
I want to get the output of my subprocess. As it runs indefinitely I want to terminate it when certain conditions are fulfilled.
When I start the subprocess by using check_output, I get the output but no handle to terminate the process:
output = subprocess.check_output(cmd, shell=True)
When I start the subprocess by using Popen or run, I get a handle to terminate the process, but no output.
p = subprocess.Popen(cmd, shell=True, preexec_fn=os.setsid)
How can I get both?
When can you know that you've got the full process output? When the process terminates. So there's no need to terminate it manually: just wait for it to end, and using check_output is the way.
Now, if you want to wait for a given pattern to appear and then terminate, that's something else. Just read line by line, and if some pattern matches, break the loop and end the process:
import os
import subprocess

p = subprocess.Popen(cmd, shell=True, preexec_fn=os.setsid,
                     stdout=subprocess.PIPE)  # add stderr=subprocess.STDOUT to merge output & error
for line in p.stdout:
    if b"some string" in line:  # output is binary
        break
p.kill()  # or p.terminate()
You'll need to tell Popen you want to read the standard output, but it can be a little tricky:

import os
import subprocess

p = subprocess.Popen(cmd, shell=True, preexec_fn=os.setsid,
                     stdout=subprocess.PIPE)
while True:
    chunk = p.stdout.read(1024)  # this will block forever if there's nothing to read
    if not chunk:  # EOF: the child closed its stdout
        break
p.terminate()
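On POSIX, one way around that blocking read is to ask select whether the pipe has data before reading. A sketch; the stream_posix helper name is made up, and echo is just a stand-in command:

```python
import select
import subprocess

def stream_posix(cmd):
    """Read a child's stdout as it appears, without blocking forever (POSIX only)."""
    p = subprocess.Popen(cmd, stdout=subprocess.PIPE)
    captured = b''
    while True:
        # wait up to 1s for data; select returns early when the pipe is readable
        ready, _, _ = select.select([p.stdout], [], [], 1.0)
        if ready:
            chunk = p.stdout.read1(4096)  # reads only what is already available
            if not chunk:  # EOF: the child closed its end of the pipe
                break
            captured += chunk
        elif p.poll() is not None:
            break
    p.wait()
    return captured
```

select imposes the timeout, so the loop can notice a dead or silent child instead of hanging inside read().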
Try this:

import subprocess

p = subprocess.Popen("ls -a", shell=True, stdout=subprocess.PIPE,
                     stderr=subprocess.PIPE)
out = p.stdout.read()  # read() consumes the stream, so only call it once
err = p.stderr.read()
print(out)
if out or err:
    p.terminate()
I am using Python and its subprocess library to check output from calls using strace, along the lines of:

subprocess.check_output(["strace", str(processname)])

However, this only gives me the output after the called subprocess has already finished, which is very limiting for my use case.
I need a kind of "stream" or live-output from the process, so I need to read the output while the process is still running instead of only after it finished.
Is there a convenient way to achieve this using the subprocess library?
I'm thinking of some kind of poll every x seconds, but did not find any hints in the documentation on how to implement this.
Many thanks in advance.
As of Python 3.2 (when context manager support was added to Popen), I have found this to be the most straightforward way to continuously stream output from a subprocess:
import subprocess

def run(args):
    with subprocess.Popen(args, stdout=subprocess.PIPE,
                          stderr=subprocess.STDOUT) as process:
        for line in process.stdout:
            print(line.decode('utf8'))
Had some problems referencing the selected answer for streaming output from a test runner. The following worked better for me:
import subprocess
from time import sleep

def stream_process(process):
    go = process.poll() is None
    for line in process.stdout:
        print(line)
    return go

process = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE,
                           stderr=subprocess.STDOUT)
while stream_process(process):
    sleep(0.1)
According to the documentation:
Popen.poll()
Check if child process has terminated. Set and return returncode attribute.
So based on this you can:
import subprocess

process = subprocess.Popen('your_command_here', stdout=subprocess.PIPE)
while True:
    output = process.stdout.readline()
    if process.poll() is not None and output == '':
        break
    if output:
        print(output.strip())
retval = process.poll()
This will loop, reading stdout, and display the output in real time.
Note: this does not work as written in current versions of Python. For (at least) Python 3.8.5 and newer, you should replace output == '' with output == b'', because the pipe yields bytes unless you open it in text mode.
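Equivalently, the comparison problem goes away if the pipe is opened in text mode. A sketch: the read_live helper is made up, and echo merely stands in for a long-running command:

```python
import subprocess

def read_live(cmd):
    """Collect lines as they are produced; str comparisons work in text mode."""
    process = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                               universal_newlines=True)  # text mode: readline() returns str
    lines = []
    while True:
        output = process.stdout.readline()
        if output == '' and process.poll() is not None:
            break
        if output:
            lines.append(output.strip())
    return lines
```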
Recently I've been messing around with Popen. I spawned a process in a background that writes the output to a TemporaryFile:
import subprocess
import tempfile

f = tempfile.TemporaryFile()
p = subprocess.Popen(["gatttool"], stdin=subprocess.PIPE, stdout=f)
Now it works like this: I send a command to the process via stdin and read the temporary file a bit later. It's non-blocking, so I can perform other tasks in the meantime.
The problem is that gatttool sometimes generates some output by itself (for example, notifications), and I'm looking for a way to read this output without blocking on the temporary file.
My question:
1) Is it safe to read output from a TemporaryFile (50 lines) and hope that the subprocess gracefully waits for me to read that data, or will it terminate?
2) Is there an elegant way to create a callback function that will be called on every event on the TemporaryFile (instead of having a thread that runs every second and reads the data)?
Actually the resolution is very simple: create a pipe and use it as the gatttool output. The read end of that pipe goes to a thread that reads the output line by line, and each line is parsed. I checked it and it works. Please lock this question down.
import os
import subprocess

# Create a pipe. "gatt_in" is where gatttool will be dumping its output.
# We read that output from the other end of the pipe, "gatt_out".
gatt_out, gatt_in = os.pipe()
gatt_process = subprocess.Popen(["gatttool", "your parameters"],
                                stdin=subprocess.PIPE, stdout=gatt_in)
Now every time I want to send a command to gatttool, I do this:

gatt_process.stdin.write(b"some command\n")
gatt_process.stdin.flush()

The result of this command will appear in gatt_out. In my case, this is handled in another thread.
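The reader-thread part of that setup could be sketched like this. The watch_child helper is made up, and cat stands in for gatttool (it simply echoes stdin back to stdout, so the example is self-contained):

```python
import os
import subprocess
import threading

def watch_child(cmd, commands, handle_line):
    """Feed `commands` to the child's stdin while a background thread
    parses everything the child writes, line by line."""
    gatt_out, gatt_in = os.pipe()
    proc = subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=gatt_in)
    os.close(gatt_in)  # close the parent's copy; the child holds its own

    def reader():
        # EOF arrives once the child exits and all write ends are closed
        with os.fdopen(gatt_out, 'r') as pipe_out:
            for line in pipe_out:
                handle_line(line.rstrip('\n'))

    t = threading.Thread(target=reader, daemon=True)
    t.start()
    for c in commands:
        proc.stdin.write(c.encode() + b'\n')
    proc.stdin.close()
    proc.wait()
    t.join(timeout=5)
```

The callback fires for every line, including output the child produces on its own, which is the notification case asked about above.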
To provide input/get output from a child process, you could use subprocess.PIPE:
from subprocess import Popen, PIPE
p = Popen(['gatttool', 'arg 1', 'arg 2'], stdin=PIPE, stdout=PIPE)
# provide input
p.stdin.write(b'input data')
p.stdin.close()
# read output incrementally (in "real time")
for line in iter(p.stdout.readline, b''):
    print(line.decode(), end='')
p.stdout.close()
p.wait()
I'm using the following command to run a shell command (creating a subprocess):
cmd = "ls"
process = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, universal_newlines=True)
Then I want to get its return code when it's finished. Should I use wait() or poll()? It seems to me that wait() is equivalent to poll() wrapped in a busy wait, something like:

import time

while process.poll() is None:
    time.sleep(0.5)
I read that wait() can deadlock if the stdout/stderr buffer fills up. Could process.poll(), used as above, also deadlock? If so, should I use process.communicate() to solve the problem? Nowadays I only use process.communicate() when I'm interested in the subprocess's stdout/stderr.
Thanks in advance.
Yes: Popen.poll(), when used in a loop like that, will cause a deadlock if stdout is piped into your process and you aren't reading from it. If you don't pipe stdout, or you consistently read from it, neither poll() nor wait() will deadlock. Popen.communicate() will solve the deadlock in the cases where it would occur. However, if you just want to run a command, check its return code, and get its output, use subprocess.check_output, which wraps all of that.
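A minimal sketch of that last suggestion (echo is just a stand-in command):

```python
import subprocess

# check_output waits for the command to finish, raises CalledProcessError
# on a non-zero exit status, and returns the captured stdout
out = subprocess.check_output(['echo', 'done'], universal_newlines=True)
print(out.strip())  # → done
```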
I am executing this Python command:

proc = subprocess.Popen(cmd,
                        shell=False,
                        stdin=subprocess.PIPE,
                        stdout=subprocess.PIPE,
                        stderr=subprocess.PIPE,
                        )
After executing the command, I want to read stderr and stdout:

res = proc.stderr.read()

In res I expect either an error message or ''. But reading stderr takes forever: the call hangs and never returns a value. Some time back the same code was working fine, but I have no idea why it's not reading stderr now.
Any hint? Thanks.
Instead of explicitly calling stderr.read(), just call communicate() on the proc:

output, error = proc.communicate()
That way you would get the output and error by communicating with the process.
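For example, using ls on a path that shouldn't exist, purely for illustration:

```python
import subprocess

proc = subprocess.Popen(['ls', '/nonexistent_path_for_demo'],
                        stdin=subprocess.PIPE,
                        stdout=subprocess.PIPE,
                        stderr=subprocess.PIPE)
# communicate() reads both pipes concurrently, so neither can fill up and hang
output, error = proc.communicate()
print('stdout:', output)
print('stderr:', error)
```

Here the error message lands in error and output stays empty, without either read hanging.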