I found a number of questions which look like mine, but none of them produced a solution I can use (the closest is: subprocess output to stdout and to PIPE)
The problem: I want to start a long-running process using subprocess. After the command has run I need to parse its stdout and stderr output.
Currently I do it as follows:
p = subprocess.Popen(command_list, stdout=subprocess.PIPE,
                     stderr=subprocess.PIPE)
out, error_msg = p.communicate()
print out + "\n\n" + error_msg
# next comes code in which I check out and error_msg
But the drawback of this method is that the user does not see the output of the process while it is running; the output is only printed at the end.
Is there a way to print the output while the command is running (as if I had run the command without stdout/stderr=subprocess.PIPE) and still have the output available via p.communicate at the end?
Note: I'm currently developing on Python 2.5 (an old software release which is tied to this Python version).
This snippet has helped me once in a similar situation:
process = subprocess.Popen(cmd, bufsize=1, universal_newlines=True,
                           stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
for line in iter(process.stdout.readline, ''):
    print line,
    sys.stdout.flush()  # please see comments regarding the necessity of this line
process.wait()
errcode = process.returncode
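Building on that, here is a minimal sketch for the original requirement (live output plus the full text for parsing afterwards), assuming the same command_list as in the question. Note that stderr is merged into stdout here, so the two streams can no longer be checked separately the way p.communicate() allowed:

import sys
import subprocess

# A sketch, not a drop-in replacement: echo each line as it arrives
# and keep a copy so it can be parsed once the process has finished.
p = subprocess.Popen(command_list, bufsize=1, universal_newlines=True,
                     stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
captured = []
for line in iter(p.stdout.readline, ''):
    print line,              # show progress to the user immediately
    sys.stdout.flush()
    captured.append(line)    # keep the line for later inspection
p.wait()
out = ''.join(captured)      # roughly what p.communicate() would have returned

If keeping stderr separate is essential, reading it in a second thread is the usual workaround.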
Related
I want to write a function that will execute multiple shell commands one at a time and print what the shell returns in real time.
I currently have the following code, which does not print the shell output (I am using Windows 10 and Python 3.6.2):
commands = ["foo", "foofoo"]
p = subprocess.Popen("cmd.exe", shell=True, stdin=subprocess.PIPE, \
stdout=subprocess.PIPE, stderr=subprocess.PIPE)
for command in commands:
p.stdin.write((command + "\n").encode("utf-8"))
p.stdin.close()
p.stdout.read()
How can I see what the shell returns in real time?
Edit: This question is not a duplicate of the first two links in the comments; they do not help with printing in real time.
It is possible to handle stdin and stdout in different threads. That way one thread can handle printing the output from stdout while another one writes new commands to stdin. However, since stdin and stdout are independent streams, I do not think this can guarantee the order between the streams. For the current example it seems to work as intended, though.
import subprocess
import threading

def stdout_printer(p):
    for line in p.stdout:
        print(line.rstrip())

commands = ["foo", "foofoo"]
p = subprocess.Popen("cmd.exe", stdin=subprocess.PIPE,
                     stdout=subprocess.PIPE, stderr=subprocess.STDOUT,
                     universal_newlines=True)
t = threading.Thread(target=stdout_printer, args=(p,))
t.start()
for command in commands:
    p.stdin.write(command + "\n")
    p.stdin.flush()
p.stdin.close()
t.join()
Also, note that I am reading and printing stdout line by line, which is normally OK, since it tends to be buffered and generated a line (or more) at a time. I guess it is possible to handle an unbuffered stdout stream (or e.g. stderr) character by character instead, if that is preferable.
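If character-by-character handling is preferred, a rough sketch of an alternative printer thread could look like the following (assuming the same process object p as above; read(1) returns an empty string once the stream closes):

def stdout_char_printer(p):
    # Pull one character at a time instead of one line at a time.
    for ch in iter(lambda: p.stdout.read(1), ''):
        print(ch, end='', flush=True)

# used exactly like stdout_printer above:
# t = threading.Thread(target=stdout_char_printer, args=(p,))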
I believe you need something like this:
commands = ["foo", "foofoo"]
p = subprocess.Popen("cmd.exe", shell=True, stdin=subprocess.PIPE, \
stdout=subprocess.PIPE, stderr=subprocess.PIPE)
for command in commands:
p.stdin.write((command + "\n").encode("utf-8"))
out, err = p.communicate()
print("{}".format(out))
print("{}".format(err))
Assuming you want control of the output in your Python code, you might need to do something like this:
import subprocess

def run_process(exe):
    'Define a function for running commands and capturing stdout line by line'
    p = subprocess.Popen(exe.split(), stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    return iter(p.stdout.readline, b'')

if __name__ == '__main__':
    commands = ["foo", "foofoo"]
    for command in commands:
        for line in run_process(command):
            print(line)
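Note that run_process yields raw bytes lines under Python 3, so print(line) will show a b'...' prefix; decoding first is one option (a sketch, assuming the commands emit UTF-8 output):

for command in commands:
    for line in run_process(command):
        # decode the raw bytes so the output reads as plain text
        print(line.decode("utf-8", errors="replace").rstrip())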
I want to execute a python subprocess in a new console. Once started, I want the user to be able to answer questions asked by this new process on stdin.
I tried the following code:
p = subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE, cwd=cwd, creationflags=subprocess.CREATE_NEW_CONSOLE)
(o, e) = p.communicate()
As soon as the subprocess asks for input on stdin the following error message is displayed:
EOFError: EOF when reading a line
Is this the right way to achieve it?
As I'm not really interested in the stdout/stderr redirection, I tried this instead:
subprocess.Popen(cmd, cwd=cwd, creationflags=subprocess.CREATE_NEW_CONSOLE)
It works fine now. I guess that redirecting the standard input/output streams is not compatible with creating a new console.
When I run a program in the console, I get some text output.
When I run the same program via Popen(...) with the same parameters, stdout and stderr are empty.
I tried everything I could think of, like shell=False and shell=True, setting stdout=subprocess.PIPE, doing an os.chdir() into the program's directory, trying p.wait() and p.communicate(), and passing the command as a list and as a string, but nothing works.
Example:
p = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, stdin=subprocess.PIPE)
out, err = p.communicate()
--> out and err are empty strings, but if I run this command in the console I get real output. The command is given with its full path, so it does not matter where it is started from.
My question is: are there mechanisms for programs to detect that they weren't run in a real console? If so, how can I work around them?
Or am I missing something?
(Python 2.7.8 x32 on Win7 x64)
from subprocess import Popen, STDOUT, PIPE

p = Popen(command, shell=True, stdout=PIPE, stderr=STDOUT, stdin=PIPE)
while p.poll() is None:
    print(p.stdout.read())
p.stdout.close()
p.stdin.close()
Try this and see if it makes any difference. Also make sure command is a string and not a list/tuple; shell=True, for whatever reason, works better with (or only with) strings.
Also note that shell=True will earn you criticism because it's insecure, etc.
Also, since you're skipping .communicate(), you'll need to keep reading stdout yourself; otherwise the buffer will fill up and you might hang both your process and the child process.
If this doesn't work, please provide more information, such as the command used and the expected output (at least the first few lines).
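For what it's worth, a common way to keep stdout drained without blocking the main thread is a small background reader thread. The following is only a sketch, reusing the command variable from the question:

import subprocess
import threading

def drain(pipe, sink):
    # Read lines until the pipe closes so the OS buffer never fills up.
    for line in iter(pipe.readline, b''):
        sink.append(line)
    pipe.close()

p = subprocess.Popen(command, shell=True, stdin=subprocess.PIPE,
                     stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
lines = []
t = threading.Thread(target=drain, args=(p.stdout, lines))
t.start()
p.stdin.close()
p.wait()
t.join()
print(b''.join(lines))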
I am calling an external program from within a Python script using subprocess. The external program produces a lot of output. I need to capture the output of this program. The current code is something like this:
process = subprocess.Popen('cmd.exe', shell=False, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=None)
process.stdin.write('gams "indus89.gms"\r\n')
while process.poll() != None:
    line = process.stdout.readline()
    print line
The error I am getting with this code is
The process tried to write to a nonexistent pipe.
If I use the following code:
process = subprocess.Popen('cmd.exe', shell=False, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=None)
process.stdin.write('gams "indus89.gms"\r\n')
o, e = process.communicate()
print o
then the output of the program is not captured.
How should I alter my code so that I can capture the output of the third party program while it runs?
Popen is overkill.
Try:
output = subprocess.check_output('gams "indus89.gms"\r\n', shell=True)
Hopefully that will work in your environment.
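If the output should also be visible while the program runs (rather than only collected at the end), a line-by-line loop over a Popen pipe is one alternative. This is only a sketch, assuming gams is on the PATH and is invoked directly instead of going through cmd.exe:

import subprocess

p = subprocess.Popen(['gams', 'indus89.gms'],
                     stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
captured = []
for line in iter(p.stdout.readline, b''):
    print line.rstrip()      # show the line as soon as it arrives
    captured.append(line)    # and keep it for later use
p.wait()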
I am executing the following Python command:
proc = subprocess.Popen(cmd,
                        shell=False,
                        stdin=subprocess.PIPE,
                        stdout=subprocess.PIPE,
                        stderr=subprocess.PIPE,
                        )
After executing the command I want to read stderr and stdout:
res = proc.stderr.read()
In res I am expecting either an error message or ' ',
but reading stderr takes forever; it hangs and never returns a value, whatever the result is.
Some time back the same code was working fine, but I have no idea why it is not reading stderr now.
Any hint? Thanks.
Instead of explicitly calling stderr.read(), just call communicate() on the proc.
output, error = proc.communicate()
That way you get both the output and the error by communicating with the process; communicate() reads both pipes together and waits for the process to exit, so neither pipe can fill up and block things the way a bare stderr.read() can while the child is still writing to stdout.
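A minimal sketch of that change, reusing the cmd from the question (only illustrative):

import subprocess

proc = subprocess.Popen(cmd,
                        shell=False,
                        stdin=subprocess.PIPE,
                        stdout=subprocess.PIPE,
                        stderr=subprocess.PIPE,
                        )
# communicate() drains stdout and stderr concurrently and waits for exit.
output, error = proc.communicate()
print(output)
print(error)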