I'm using Python 2.7 and trying to check whether stderr reports any errors in the first 10 seconds of execution; if not, I want the process to continue. I'm not waiting for the process to end, since it runs a loop.
I have this code:
p = subprocess.Popen(
    ["python", os.getcwd() + "/start.py", "1hr", "30min", "0", "0"],
    stdout=subprocess.PIPE, stderr=subprocess.PIPE)
if len(p.stderr.readlines()) is not None:
    print p.stderr.readlines()
    return jsonify(status="failed",
                   error_details=p.stderr.readlines()
                   )
The problem is that the first readlines() call consumes the stream, so I cannot check the value and post the message afterwards; the print p.stderr.readlines() shows an empty list:
{
"error_details": [],
"status": "failed"
}
I don't want to display the output; I just want to run start.py and check for errors. If no errors turn up in 10 seconds or so, it means everything went well, and the loop started by the subprocess can continue in the background without me checking on it. Can I somehow continue, and if the p process fails even after an hour or so, execute a callback? Can I store stderr somewhere so I can reply with the message? Also, how do you suggest I implement that 10-second window for checking whether an error popped up? Using p.poll() does not return any error code, even if the file doesn't exist.
Of course you could store the result of readlines() so you don't consume the buffer, but your code has a more serious issue.
Using subprocess.PIPE for both stdout and stderr and then trying to read stderr can result in a deadlock.
Besides, in some cases the process can fail silently, with nothing written to stderr, so this isn't the proper way to get the return code.
The simplest (and safest) way to do this is to use communicate():
p = subprocess.Popen(
    ["python", os.getcwd() + "/start.py", "1hr", "30min", "0", "0"],
    stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = p.communicate()
rc = p.returncode  # communicate() has already waited for the process
if rc:  # non-zero return code
    return jsonify(status="failed",
                   error_details=err.decode().splitlines())
You get output and error read in two separate threads transparently (no deadlock possible), and you test the return code; if it is non-zero, an error has occurred. Split the lines of the decoded error output (Python 3) and return your error.
Note that splitlines() removes the line terminations (as opposed to readlines() on a stream).
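For example (Python 3 shown):
import io

print("err1\nerr2\n".splitlines())              # ['err1', 'err2'] - terminators removed
print(io.StringIO("err1\nerr2\n").readlines())  # ['err1\n', 'err2\n'] - terminators kept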
The approach above cannot handle the "wait 10 seconds, then test the error output" requirement. This can:
import time

p = subprocess.Popen(
    ["python", os.getcwd() + "/start.py", "1hr", "30min", "0", "0"],
    stdout=subprocess.DEVNULL,  # don't read stdout, avoids deadlock
    # stdout=devnull,  # Python 2, where devnull = open(os.devnull, 'w')
    stderr=subprocess.PIPE)
# now wait 10 seconds
time.sleep(10)
rc = p.poll()
if rc is not None and rc != 0:
    return jsonify(status="failed",
                   error_details=p.stderr.readlines()  # decode each line for Python 3
                   )
The approach above waits 10 seconds, then tests whether the process has ended with an error; if so, it reads the error output and returns it.
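For the "execute a callback if the process fails much later" part of the question, a minimal sketch using a watcher thread could look like this (my_failure_callback is a hypothetical function you would supply, and only one place should ever read p.stderr):
import threading

def watch(p, on_failure):
    rc = p.wait()  # blocks in this thread only, not in the caller
    if rc != 0:
        on_failure(rc, p.stderr.read())  # hypothetical callback signature

watcher = threading.Thread(target=watch, args=(p, my_failure_callback))
watcher.daemon = True  # don't let the watcher keep the program alive
watcher.start()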
Related
I'm trying to run "docker-compose pull" from inside a Python automation script and to incrementally display the same output that the Docker command would print if it were run directly from the shell. This command prints a line for each Docker image found in the system, incrementally updates each line with the image's download progress (a percentage), and replaces that percentage with "done" when the download has completed. I first tried getting the command output with subprocess.poll() and (blocking) readline() calls:
import shlex
import subprocess

def run(command, shell=False):
    p = subprocess.Popen(shlex.split(command), stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=shell)
    while True:
        # print one output line
        output_line = p.stdout.readline().decode('utf8')
        error_output_line = p.stderr.readline().decode('utf8')
        if output_line:
            print(output_line.strip())
        if error_output_line:
            print(error_output_line.strip())
        # check if process finished
        return_code = p.poll()
        if return_code is not None and output_line == '' and error_output_line == '':
            break
    if return_code > 0:
        print("%s failed, error code %d" % (command, return_code))

run("docker-compose pull")
The code gets stuck in the first (blocking) readline() call. Then I tried to do the same without blocking:
import select
import shlex
import subprocess
import sys
import time

def run(command, shell=False):
    p = subprocess.Popen(shlex.split(command), stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=shell)
    io_poller = select.poll()
    io_poller.register(p.stdout.fileno(), select.POLLIN)
    io_poller.register(p.stderr.fileno(), select.POLLIN)
    while True:
        # poll IO for output
        io_events_list = []
        while not io_events_list:
            time.sleep(1)
            io_events_list = io_poller.poll(0)
        # print new output
        for event in io_events_list:
            # must be tested because non-registered events (e.g. POLLHUP) can also be returned
            if event[1] & select.POLLIN:
                if event[0] == p.stdout.fileno():
                    output_str = p.stdout.read(1).decode('utf8')
                    print(output_str, end="")
                if event[0] == p.stderr.fileno():
                    error_output_str = p.stderr.read(1).decode('utf8')
                    print(error_output_str, end="")
        # check if process finished
        # when the subprocess finishes, io_poller.poll(0) returns a list with 2 select.POLLHUP events
        # (one for stdout, one for stderr) and does not enter the inner loop
        return_code = p.poll()
        if return_code is not None:
            break
    if return_code > 0:
        print("%s failed, error code %d" % (command, return_code))

run("docker-compose pull")
This works, but only the final lines (with "done" at the end) are printed to the screen, once all the Docker image downloads have completed.
Both methods work fine with a command that has simpler output, such as "ls". Maybe the problem is related to how this Docker command prints incrementally to the screen, overwriting already-written lines? Is there a safe way to incrementally show the exact output of a command in the command line when running it via a Python script?
EDIT: 2nd code block was corrected
Always open stdin as a pipe, and if you are not using it, close it immediately.
p.stdout.read() will block until the pipe is closed, so your polling code does nothing useful here. It needs modifications.
I suggest not using shell=True.
Instead of *.readline(), try *.read(1) and accumulate characters until you see "\n" (see the sketch below).
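Putting those suggestions together, a minimal sketch might look like this (untested against docker-compose; merging stderr into stdout with stderr=STDOUT is my own addition, to keep a single stream):
import shlex
import subprocess

p = subprocess.Popen(shlex.split("docker-compose pull"),
                     stdin=subprocess.PIPE,
                     stdout=subprocess.PIPE,
                     stderr=subprocess.STDOUT)
p.stdin.close()  # stdin is opened as a pipe but unused, so close it immediately
line = b""
while True:
    ch = p.stdout.read(1)  # read one byte at a time
    if ch == b"":  # EOF: the pipe was closed and drained, the process is gone
        break
    line += ch
    if ch == b"\n":
        print(line.decode("utf8"), end="")
        line = b""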
Of course you can do what you want in Python; the question is how. A child process might have different ideas about what its output should look like, and that's when trouble starts. For example, the process might explicitly want a terminal at the other end, not your process, or a lot of such simple nonsense. Buffering may also cause problems; you can try starting Python in unbuffered mode to check (/usr/bin/python -U).
If nothing works, then use the pexpect automation library instead of subprocess.
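For instance, a minimal pexpect sketch (assuming pexpect is installed; untested, but pexpect runs the child on a pseudo-terminal, which is often exactly what progress-printing tools like docker-compose want):
import pexpect

child = pexpect.spawn("docker-compose pull", encoding="utf-8", timeout=None)
while True:
    try:
        line = child.readline()  # returns '' at EOF
    except pexpect.EOF:
        break
    if not line:
        break
    print(line, end="")
child.close()
print("exit status:", child.exitstatus)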
I have found a solution, based on the first code block of my question:
import shlex
import subprocess

def run(command, shell=False):
    p = subprocess.Popen(shlex.split(command), stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=shell)
    while True:
        # read one char at a time
        output_line = p.stderr.read(1).decode("utf8")
        if output_line != "":
            print(output_line, end="")
        else:
            # check if process finished
            return_code = p.poll()
            if return_code is not None:
                if return_code > 0:
                    raise Exception("Command %s failed" % command)
                break
    return return_code
Notice that docker-compose uses stderr to print its progress instead of stdout. @Dalen has explained that some applications do this when they want their results to be pipeable somewhere (for instance into a file) while still being able to show their progress.
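A tiny illustration of that pattern (a sketch): run it once normally and once with stdout redirected to a file; the progress stays visible on the terminal either way, while the result remains pipeable.
import sys
import time

for i in range(3):
    sys.stderr.write("\rprogress: %d%%" % (i * 50))  # progress goes to stderr
    sys.stderr.flush()
    time.sleep(0.2)
sys.stderr.write("\rprogress: done\n")
print("final result")  # the result goes to stdout, so it can be piped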
I want to run a Python script (or any executable, for that matter) from a Python script and get the output in real time. I have followed many tutorials, and my current code looks like this:
import subprocess

with open("test2", "w") as f:
    f.write("""import time
print('start')
time.sleep(5)
print('done')""")

process = subprocess.Popen(['python3', "test2"], stdout=subprocess.PIPE)
while True:
    output = process.stdout.readline()
    if output == '' and process.poll() is not None:
        break
    if output:
        print(output.strip())
rc = process.poll()
The first bit just creates the file that will be run, for clarity's sake.
I have two problems with this code:
It does not give the output in real time; it waits until the process has finished.
It does not terminate the loop once the process has finished.
Any help would be very welcome.
EDIT: Thanks to @JohnAnderson for the fix to the first problem: replacing if output == '' and process.poll() is not None: with if output == b'' and process.poll() is not None:
Last night I set out to do this using a pipe:
import os
import subprocess

with open("test2", "w") as f:
    f.write("""import time
print('start')
time.sleep(2)
print('done')""")

(readend, writeend) = os.pipe()
p = subprocess.Popen(['python3', '-u', 'test2'], stdout=writeend, bufsize=0)
still_open = True
output = ""
output_buf = os.read(readend, 1).decode()
while output_buf:
    print(output_buf, end="")
    output += output_buf
    if still_open and p.poll() is not None:
        os.close(writeend)
        still_open = False
    output_buf = os.read(readend, 1).decode()
This forces buffering out of the picture and reads one character at a time (to make sure we do not block writes from the process once it has filled a buffer), closing the writing end when the process finishes so that the read catches EOF correctly. Having looked at subprocess, though, that turned out to be a bit of overkill. With PIPE you get most of that for free, and I ended up with the following, which seems to work fine (calling read as many times as necessary to keep emptying the pipe). With just this, and assuming the process finishes, you do not have to worry about polling it or making sure the write end of the pipe is closed to correctly detect EOF and get out of the loop:
p = subprocess.Popen(['python3', '-u', 'test2'],
                     stdout=subprocess.PIPE, bufsize=1,
                     universal_newlines=True)
output = ""
output_buf = p.stdout.readline()
while output_buf:
    print(output_buf, end="")
    output += output_buf
    output_buf = p.stdout.readline()
This is a bit less "real-time", as it is basically line-buffered.
Note: I've added -u to your Python call, as you also need to make sure your called process's buffering does not get in the way.
I'm using Popen to read a program's output and automatically fill in data, but running into an issue.
I have a program that prompts a user for output. It is a licensing mechanism for a proprietary product. It asks basic things like "Does the current date match XX-YY-ZZZZ?" or "Machine limit?"
The issue is that these prompts do not contain a newline character, so trying to use proc.stdout.read() doesn't seem to work.
When this application is executed manually, we can obviously read the text in order to respond to it, so if I understand things correctly, this is an indicator of the buffer being flushed. Is this correct?
How can I read stdout even though the prompt from the application does not have a newline in it?
def license_code(self, code):
    os.chdir(self.sql_dir)
    print('Current directory: {}'.format(os.getcwd()))
    proc = subprocess.Popen(
        shlex.split('bin/dlm -i'),
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
        stdin=subprocess.PIPE,
        bufsize=0
    )
    # Give it 1 second
    _now = datetime.datetime.now()
    _response = b''
    while (datetime.datetime.now() - _now).total_seconds() < 1:
        _len = len(_response)
        print('test')
        _response += proc.stdout.read()
        print(_response)
        if len(_response) != _len:
            _now = datetime.datetime.now()
    print(_response.decode())
The line containing proc.stdout.read() hangs and print(_response) is never reached.
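One likely explanation: proc.stdout.read() with no size argument reads until EOF, and EOF only arrives when the child exits. A sketch of a workaround that instead reads whatever bytes are currently available from the raw descriptor (assuming proc is the Popen object above; the prompt text tested here is hypothetical):
import os

chunk = os.read(proc.stdout.fileno(), 4096)  # blocks until at least 1 byte is available, not until EOF
print(chunk.decode())
if b'Machine limit' in chunk:  # hypothetical prompt text from the question
    proc.stdin.write(b'1\n')
    proc.stdin.flush()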
I'm using subprocess to run a script, get the output of the script on a pipe, and process that output.
I'm experiencing a weird problem where sometimes it reads to the end of the script's output and other times it does not.
I suspect this could be a problem with the buffer size; I've tried a few alternatives but haven't been successful yet.
import os
import subprocess

def main():
    x = subprocess.Popen('./autotest', bufsize=1, stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                         stderr=subprocess.PIPE, cwd='/home/vijay/run/bin', shell=True)
    with open("out.txt", 'wb') as f:
        for line in x.stdout:
            if 'Press \'q\' to quit scheduler' in line:
                print line.strip()
                f.write(line.strip())
                x.stdin.write('q')
                f.write('\n')
                x.stdin.close()
                x.stdout.flush()
                try:
                    x.stdout.read()
                except:
                    print 'Exception Occurred !!!'
                    os._exit(1)
            else:
                print line.strip()
                f.write(line.strip())
                f.write('\n')
                x.stdout.flush()

if __name__ == '__main__':
    main()
You should keep trying to read from stdout until the process terminates, not until stdout ends. Use poll() to check whether the process has terminated, and if not, try to read again.
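A rough sketch of that idea, reusing the './autotest' command from the question (Python 2, untested):
import subprocess

x = subprocess.Popen('./autotest', stdout=subprocess.PIPE, shell=True)
while x.poll() is None:  # None means the process is still running
    line = x.stdout.readline()
    if line:
        print line.strip()
# drain whatever is left in the pipe after termination
for line in x.stdout:
    print line.strip()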
From the subprocess manual (http://docs.python.org/library/subprocess.html):
Warning: Use communicate() rather than .stdin.write, .stdout.read or
.stderr.read to avoid deadlocks due to any of the other OS pipe
buffers filling up and blocking the child process.
This sounds like it may be the problem you are experiencing. For example, if stderr filled up, I believe that could cause the process to block, preventing it from producing further output on stdout.
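A sketch of what the communicate() approach would look like for the script above (Python 2; sending 'q' up front is an assumption, since communicate() writes the whole stdin payload once and then closes it):
import subprocess

x = subprocess.Popen('./autotest', stdin=subprocess.PIPE,
                     stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                     shell=True, cwd='/home/vijay/run/bin')
out, err = x.communicate(input='q')  # both pipes are drained concurrently, no deadlock
with open("out.txt", "w") as f:
    f.write(out)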
I'd like to use the subprocess module in the following way:
create a new process that potentially takes a long time to execute.
capture stdout (or stderr, or potentially both, either together or separately)
process data from the subprocess as it comes in, perhaps firing events on every line received (in wxPython, say) or simply printing them out for now.
I've created processes with Popen, but if I use communicate() the data comes at me all at once, once the process has terminated.
If I create a separate thread that does a blocking readline() of myprocess.stdout (using stdout=subprocess.PIPE), I don't get any lines with this method either until the process terminates, no matter what I set as bufsize.
Is there a way to deal with this that isn't horrendous, and works well on multiple platforms?
Update with code that appears not to work (on Windows, anyway):
import threading
import wx

class ThreadWorker(threading.Thread):
    def __init__(self, callable, *args, **kwargs):
        super(ThreadWorker, self).__init__()
        self.callable = callable
        self.args = args
        self.kwargs = kwargs
        self.setDaemon(True)

    def run(self):
        try:
            self.callable(*self.args, **self.kwargs)
        except wx.PyDeadObjectError:
            pass
        except Exception, e:
            print e

if __name__ == "__main__":
    import os
    from subprocess import Popen, PIPE

    def worker(pipe):
        while True:
            line = pipe.readline()
            if line == '':
                break
            else:
                print line

    proc = Popen("python subprocess_test.py", shell=True, stdin=PIPE, stdout=PIPE, stderr=PIPE)
    stdout_worker = ThreadWorker(worker, proc.stdout)
    stderr_worker = ThreadWorker(worker, proc.stderr)
    stdout_worker.start()
    stderr_worker.start()
    while True:
        pass
stdout will be buffered, so you won't get anything until that buffer is filled or the subprocess exits.
You can try flushing stdout from the sub-process, using stderr instead, or switching stdout to non-buffered mode.
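On the child side, flushing explicitly after each print is the simplest fix. A sketch of what subprocess_test.py might do (hypothetical contents, Python 2 to match the question):
import sys
import time

for i in range(5):
    print 'line %d' % i
    sys.stdout.flush()  # push the line through the pipe immediately
    time.sleep(1)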
It sounds like the issue might be the use of buffered output by the subprocess: if a relatively small amount of output is created, it could be buffered until the subprocess exits.
Here's what worked for me:
import subprocess

cmd = ["./tester_script.bash"]
p = subprocess.Popen(cmd, shell=False, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
while p.poll() is None:
    out = p.stdout.readline()
    do_something_with(out)  # the original snippet also passed an undefined 'err' here
In your case you could try to pass a reference to the sub-process to your Worker Thread, and do the polling inside the thread. I don't know how it will behave when two threads poll (and interact with) the same subprocess, but it may work.
Also note that the while p.poll() is None: is intended as-is. Do not replace it with while not p.poll(), since in Python 0 (the return code for successful termination) is also considered False.
I've been running into this problem as well. The problem occurs because you are trying to read stderr as well. If there are no errors, then trying to read from stderr would block.
On Windows, there is no easy way to poll() file descriptors (only Winsock sockets).
So a solution is not to try to read from stderr separately.
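One common way to do that (a sketch, not the original poster's code): merge stderr into stdout with stderr=STDOUT, so a single pipe carries both streams and nothing ever blocks on an empty stderr (Python 2, matching the question's code):
from subprocess import Popen, PIPE, STDOUT

proc = Popen("python subprocess_test.py", shell=True,
             stdout=PIPE, stderr=STDOUT)
for line in iter(proc.stdout.readline, ''):  # '' signals EOF
    print line.rstrip()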
Using pexpect (http://www.noah.org/wiki/Pexpect) with non-blocking readlines will resolve this problem. It stems from the fact that pipes are buffered: your app's output is getting buffered by the pipe, so you can't get to that output until the buffer fills or the process dies.
This seems to be a well-known Python limitation, see
PEP 3145 and maybe others.
Read one character at a time: http://blog.thelinuxkid.com/2013/06/get-python-subprocess-output-without.html
import contextlib
import subprocess

# Unix, Windows and old Macintosh end-of-line
newlines = ['\n', '\r\n', '\r']

def unbuffered(proc, stream='stdout'):
    stream = getattr(proc, stream)
    with contextlib.closing(stream):
        while True:
            out = []
            last = stream.read(1)
            # Don't loop forever
            if last == '' and proc.poll() is not None:
                break
            while last not in newlines:
                # Don't loop forever
                if last == '' and proc.poll() is not None:
                    break
                out.append(last)
                last = stream.read(1)
            out = ''.join(out)
            yield out

def example():
    cmd = ['ls', '-l', '/']
    proc = subprocess.Popen(
        cmd,
        stdout=subprocess.PIPE,
        stderr=subprocess.STDOUT,
        # Make all end-of-lines '\n'
        universal_newlines=True,
    )
    for line in unbuffered(proc):
        print line

example()
Using subprocess.Popen, I can run the .exe of one of my C# projects and redirect the output to my Python file. I am now able to print() all the information being output to the C# console (using Console.WriteLine()) to the Python console.
Python code:
from subprocess import Popen, PIPE, STDOUT

p = Popen('ConsoleDataImporter.exe', stdout=PIPE, stderr=STDOUT, shell=True)
while True:
    line = p.stdout.readline()
    print(line)
    if not line:
        break
This gets the console output of my .NET project line by line as it is created and breaks out of the enclosing while loop upon the project's termination. I'd imagine this would work for two Python files as well.
I've used the pexpect module for this; it seems to work OK: http://sourceforge.net/projects/pexpect/