I am using the following code to run a subprocess. The command 'cmd' might at times fail and I wish to save the stderr output to a variable for further examination.
from subprocess import Popen

def exec_subprocess(cmd):
    with open('f.txt', 'w') as f:
        p = Popen(cmd, stderr=f)
        p.wait()
Right now, as you can see, I am saving stderr to a file. Later I read the file's contents into a list with readlines(), which seems inefficient. What I would like instead is something like:
def exec_subprocess(cmd):
    err = []
    p = Popen(cmd, stderr=err)  # hypothetical; stderr does not actually accept a list
    p.wait()
    return err
How do I efficiently save stderr to a list?
You should use:
from subprocess import Popen, PIPE

p = Popen(cmd, stdout=PIPE, stderr=PIPE)
outs, errs = p.communicate()
if you want to assign the output of stderr to a variable.
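For example, here is a minimal sketch of the exec_subprocess function from the question rewritten this way (assuming cmd is a list of arguments):

from subprocess import Popen, PIPE

def exec_subprocess(cmd):
    # cmd is a list of arguments, e.g. ['ls', '-l']
    p = Popen(cmd, stdout=PIPE, stderr=PIPE)
    outs, errs = p.communicate()  # waits for the process to exit
    return errs.splitlines()      # stderr as a list of lines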
See the documentation for Popen.communicate in the subprocess module.
I'm trying to run a subprocess and watch its stdout until I find a desirable string.
This is my code:
from subprocess import Popen, PIPE

def waitForAppOutput(proc, word):
    for stdout_line in iter(proc.stdout.readline, b''):
        print stdout_line
        if word in stdout_line.rstrip():
            return

p = Popen(["./app.sh"], shell=True, stdin=PIPE, stdout=PIPE, stderr=PIPE)
waitForAppOutput(p, "done!")
The issue here is that for some reason waitForAppOutput stops printing stdout a few lines before "done!", which is the last line that should appear on stdout. I assume iter(proc.stdout.readline, b'') is blocking and readline is not able to read the last lines of stdout.
Any idea what the issue is here?
When you are invoking a command using a shell (shell=True), you should not be passing an array of strings but rather one single string.
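To illustrate the point (a minimal sketch, assuming an app.sh script exists; on POSIX systems, extra list elements combined with shell=True become parameters of the shell itself, not of your command):

from subprocess import Popen, PIPE

p = Popen("./app.sh arg1 arg2", shell=True, stdout=PIPE)  # right: one single string
# Popen(["./app.sh", "arg1"], shell=True, stdout=PIPE)    # misleading: "arg1" is handed
#                                                         # to the shell, not to app.sh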
Normally one should use the communicate method on the subprocess object returned by the Popen call to prevent potential deadlocks (which seems to be what you are experiencing). communicate returns a tuple (stdout, stderr) containing the stdout and stderr output strings:
from subprocess import Popen, PIPE

def waitForAppOutput(stdout_lines, word):
    for stdout_line in stdout_lines:
        print stdout_line
        if word in stdout_line.rstrip():
            return

p = Popen("./app.sh", shell=True, stdout=PIPE, stderr=PIPE, stdin=PIPE, universal_newlines=True)
expected_input = "command line 1\ncommand line 2\n"
stdout, stderr = p.communicate(expected_input)
stdout_lines = stdout.splitlines()
waitForAppOutput(stdout_lines, "done!")
The only issue arises if the output strings are large (whatever your definition of large might be), since it might be memory-inefficient or even prohibitive to read the entire output into memory. If this is your situation, I would try to avoid the deadlock by piping only stdout:
from subprocess import Popen, PIPE

def waitForAppOutput(proc, word):
    for stdout_line in iter(proc.stdout.readline, ''):
        print stdout_line
        if word in stdout_line.rstrip():
            return

p = Popen("./app.sh", shell=True, stdout=PIPE, stdin=PIPE, universal_newlines=True)
expected_input = "command line 1\ncommand line 2\n"
p.stdin.write(expected_input)
p.stdin.close()
waitForAppOutput(p, "done!")
for stdout_line in iter(p.stdout.readline, ''):
    pass  # read the rest of the output
p.wait()  # wait for termination
Update
Here is an example using both techniques that runs the Windows sort command to sort a bunch of input lines. This works particularly well both ways because the sort command does not start output until all the input has been read, so it's a very simple protocol in which deadlocking is easy to avoid. Try running this with USE_COMMUNICATE set alternately to True and False:
from subprocess import Popen, PIPE

USE_COMMUNICATE = False

p = Popen("sort", shell=True, stdout=PIPE, stdin=PIPE, universal_newlines=True)
expected_input = """q
w
e
r
t
y
u
i
o
p
"""
if USE_COMMUNICATE:
    stdout_text, stderr_text = p.communicate(expected_input)
    output = stdout_text.splitlines(True)  # keep newlines so both branches yield whole lines
else:
    p.stdin.write(expected_input)
    p.stdin.close()
    output = iter(p.stdout.readline, '')
for stdout_line in output:
    print stdout_line,
p.wait()  # wait for termination
Prints:
e
i
o
p
q
r
t
u
w
y
I have a run_cmd function which returns the output from a command that I give it:
import subprocess

def run_cmd(exe):
    p = subprocess.Popen(exe, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    return p.communicate()[0]
I can run commands like bcdedit /v, format, etc., but when I call vol C:, like:
run_cmd('vol C:')
I will get the error:
WindowsError: [Error 2] The system cannot find the file specified.
But if I run vol c: in a cmd window, it works.
So, what am I doing wrong? Thanks!
It's because subprocess.Popen by default expects an executable or a list representing the argv for the called process; in your case it will look for an executable actually called "vol C:" (and not an executable called vol.exe or similar). That is, unless you specify shell=True, which means the shell will be used to parse the command line:
import subprocess

def run_cmd(cmdline):
    p = subprocess.Popen(cmdline, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, shell=True)
    return p.communicate()[0]

run_cmd('vol C:')
Otherwise you have to supply the command line in list form:
import subprocess

def run_cmd(argv):
    p = subprocess.Popen(argv, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    return p.communicate()[0]

run_cmd(['vol', 'C:'])
Please make this small modification:
import subprocess

p = subprocess.Popen("vol c:", stdout=subprocess.PIPE, stderr=subprocess.STDOUT, shell=True)
print p.communicate()[0]
Output:
C:\Users\Administrator\Desktop>python chk.py
Volume in drive C has no label.
Volume Serial Number is 2A3D-7B34
I just want to do something like this:
>>> bar, err_value = subprocess.check_output("cat foo.txt", shell=True)
>>> print bar
Hello, world.
>>> print err_value
0
But I can't seem to be able to do it. I can get either the stdout or the error code (via .call), or maybe both, but only by using some kind of pipe. What am I missing here? The documentation is very sparse about this (to me) obvious functionality. Sorry if this is a simplistic question.
I take it that you want stdout, stderr, and the return code? In that case, you could do this:
import subprocess

PIPE = subprocess.PIPE
proc = subprocess.Popen(cmd, stdout=PIPE, stderr=PIPE)  # cmd is your command, e.g. ['cat', 'foo.txt']
output, err = proc.communicate()
errcode = proc.returncode
I ended up going with this. Thanks for your help!
def subprocess_output_and_error_code(cmd, shell=True):
    import subprocess
    PIPE = subprocess.PIPE
    STDOUT = subprocess.STDOUT
    proc = subprocess.Popen(cmd, stdout=PIPE, stderr=STDOUT, shell=shell)
    stdout, stderr = proc.communicate()  # stderr is None here, since it was merged into stdout
    err_code = proc.returncode
    return stdout, int(err_code)
subprocess.check_output returns a value (the output string) and does not return the exit status. Use the following form:
import subprocess

try:
    out = subprocess.check_output('cat foo.txt', shell=True)
    print out
except subprocess.CalledProcessError, e:
    print e
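If you also want the exit status and whatever the command printed before failing, note that the CalledProcessError object carries both (a minimal sketch, keeping the Python 2 style used above):

import subprocess

try:
    out = subprocess.check_output('cat foo.txt', shell=True)
    code = 0
except subprocess.CalledProcessError, e:
    out = e.output       # output captured before the failure
    code = e.returncode  # the non-zero exit status
print out
print code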
This is a follow-up to this question, but if I want to pass an argument to stdin of the subprocess, how can I get its output in real time? This is what I currently have; I also tried replacing Popen with call from the subprocess module, and that just leads to the script hanging.
from subprocess import Popen, PIPE, STDOUT
cmd = 'rsync --rsh=ssh -rv --files-from=- thisdir/ servername:folder/'
p = Popen(cmd.split(), stdout=PIPE, stdin=PIPE, stderr=STDOUT)
subfolders = '\n'.join(['subfolder1','subfolder2'])
output = p.communicate(input=subfolders)[0]
print output
In the former question, where I did not have to pass anything to stdin, I was advised to use p.stdout.readline; but there is no room there to pipe anything to stdin.
Addendum: This works for the transfer, but I only see the output at the end, and I would like to see the details of the transfer while it's happening.
In order to grab stdout from the subprocess in real time you need to decide exactly what behavior you want; specifically, you need to decide whether you want to deal with the output line-by-line or character-by-character, and whether you want to block while waiting for output or be able to do something else while waiting.
It looks like it will probably suffice for your case to read the output in line-buffered fashion, blocking until each complete line comes in, which means the convenience functions provided by subprocess are good enough:
import subprocess

p = subprocess.Popen(some_cmd, stdout=subprocess.PIPE)
# Grab stdout line by line as it becomes available. This will loop until
# p terminates.
while p.poll() is None:
    l = p.stdout.readline()  # This blocks until it receives a newline.
    print l
# When the subprocess terminates there might be unconsumed output
# that still needs to be processed.
print p.stdout.read()
If you need to write to the stdin of the process, just use another pipe:
p = subprocess.Popen(some_cmd, stdout=subprocess.PIPE, stdin=subprocess.PIPE)
# Send input to p.
p.stdin.write("some input\n")
p.stdin.flush()

# Now start grabbing output.
while p.poll() is None:
    l = p.stdout.readline()
    print l
print p.stdout.read()
Pace the other answer, there's no need to indirect through a file in order to pass input to the subprocess.
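For instance, the file-based indirection can be replaced by a single communicate call (a minimal sketch; some_cmd is the same placeholder used above and is assumed to read its input from stdin):

import subprocess

p = subprocess.Popen(some_cmd, stdout=subprocess.PIPE, stdin=subprocess.PIPE)
out, _ = p.communicate("some input\n")  # feeds stdin and reads stdout in one step
print out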
Something like this, I think:
from subprocess import Popen, PIPE, STDOUT

p = Popen('c:/python26/python printingTest.py', stdout=PIPE, stderr=PIPE)
for line in iter(p.stdout.readline, ''):
    print line
p.stdout.close()
Using an iterator will return live results, basically.
In order to send input to stdin, you would need something like this:
other_input = "some extra input stuff"
with open("to_input.txt", "w") as f:
    f.write(other_input)

# the open file handle takes the place of shell input redirection
p = Popen('c:/python26/python printingTest.py',
          stdin=open("to_input.txt"),
          stdout=PIPE,
          stderr=PIPE)
This would be similar to the shell command:
%prompt%> some_file.o < to_input.txt
See alp's answer for a better way of passing to stdin.
If you pass all your input before you start reading the output, and if by "real-time" you mean whenever the subprocess flushes its stdout buffer:
from subprocess import Popen, PIPE, STDOUT

cmd = 'rsync --rsh=ssh -rv --files-from=- thisdir/ servername:folder/'
p = Popen(cmd.split(), stdout=PIPE, stdin=PIPE, stderr=STDOUT, bufsize=1)
subfolders = '\n'.join(['subfolder1', 'subfolder2'])
p.stdin.write(subfolders)
p.stdin.close()  # signal eof
for line in iter(p.stdout.readline, ''):
    print line,  # do something with the output here
p.stdout.close()
rc = p.wait()
So I'm trying to move away from os.popen to subprocess.Popen, as recommended by the user guide. The only trouble I'm having is that I can't seem to find a way of making readlines() work.
So I used to be able to do
list = os.popen('ls -l').readlines()
But I can't do
list = subprocess.Popen(['ls','-l']).readlines()
import subprocess

ls = subprocess.Popen(['ls', '-l'], stdout=subprocess.PIPE)
out = ls.stdout.readlines()
or, if you want to read line-by-line (maybe the other process is more intensive than ls):
for ln in ls.stdout:
    print ln,  # or whatever you want to do with each line
With subprocess.Popen, use communicate to read and write data:
out, err = subprocess.Popen(['ls','-l'], stdout=subprocess.PIPE).communicate()
Then you can always split the string from the process's stdout with splitlines():
out = out.splitlines()
Making a system call that returns the stdout output as a string:
lines = subprocess.check_output(['ls', '-l']).splitlines()
or, with Popen directly:

lines = subprocess.Popen(['ls', '-l'], stdout=subprocess.PIPE).communicate()[0].splitlines()

Straight from help(subprocess).
A more detailed way of using subprocess:
import subprocess

# Set the command
command = "ls -l"

# Set up the Popen object
proc = subprocess.Popen(command,
                        shell=True,
                        stdin=subprocess.PIPE,
                        stdout=subprocess.PIPE,
                        stderr=subprocess.PIPE)

# Run the command and collect its output
stdout_value, stderr_value = proc.communicate()

# Once you have a valid response, split the returned output
if stdout_value:
    stdout_value = stdout_value.split()  # or splitlines() if you want whole lines