(Using python 3.2 currently)
I need to be able to:
Run a command using subprocess
Both stdout and stderr of that command need to be printed to the terminal in real time (it doesn't matter whether they both come out on stdout, on stderr, or a mix).
At the same time, I need a way to know if the command printed anything to stderr (and preferably what it printed).
I've played around with subprocess pipes, with strange pipe redirects in bash, and with tee, but so far haven't found anything that works. Is this something that's possible?
My solution:
import subprocess
process = subprocess.Popen("my command", shell=True,
                           stdout=None,               # print to terminal
                           stderr=subprocess.PIPE)
duplicator = subprocess.Popen("tee /dev/stderr", shell=True,  # duplicate input stream
                              stdin=process.stderr,
                              stdout=subprocess.PIPE,  # catch error stream of first process
                              stderr=None)             # print to terminal
error_stream = duplicator.stdout
print('error_stream.read() = ' + error_stream.read().decode())
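For comparison, here is a tee-free sketch of the same idea done entirely in Python: the child's stderr is read line by line, echoed to the terminal as it arrives, and kept for inspection. This is an alternative added for illustration, not part of the solution above, and "my command" is still just a placeholder.
import subprocess
import sys

process = subprocess.Popen("my command", shell=True,   # placeholder command
                           stdout=None,                # child stdout goes straight to the terminal
                           stderr=subprocess.PIPE)     # capture stderr so we can inspect it

captured = []
for line in process.stderr:          # lines arrive as the child writes them
    text = line.decode()
    sys.stderr.write(text)           # echo to the terminal in (near) real time
    captured.append(text)            # keep a copy so we know what was printed
process.wait()

if captured:
    print('the command wrote to stderr:')
    print(''.join(captured))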
Try something like this:
import os
cmd = 'for i in 1 2 3 4 5; do sleep 5; echo $i; done'
p = os.popen(cmd)
while True:
    output = p.readline()
    if not output:
        break
    print(output)
In Python 2 you can catch stderr easily as well by using os.popen3, like this:
i, o, err = os.popen3(cmd)
but there seems to be no such function in Python 3. If you can't find a way around this, try using subprocess.Popen directly, as described here: http://www.saltycrane.com/blog/2009/10/how-capture-stdout-in-real-time-python/
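For example, here is a minimal Python 3 sketch along those lines, reusing the shell command from above (reading stderr after the loop is only safe here because this particular command writes almost nothing to stderr; otherwise use communicate() or a reader thread):
import subprocess

cmd = 'for i in 1 2 3 4 5; do sleep 5; echo $i; done'
p = subprocess.Popen(cmd, shell=True,
                     stdout=subprocess.PIPE,
                     stderr=subprocess.PIPE)

for line in p.stdout:                 # each line is printed as soon as the command emits it
    print(line.decode(), end='')

p.wait()
err = p.stderr.read().decode()        # whatever the command wrote to stderr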
Related
I want to mimic the below using python subprocess:
cat /tmp/myscript.sh | sh
The /tmp/myscript.sh contains:
ls -l
sleep 5
pwd
Behaviour: stdout shows the result of "ls" immediately, and the result of "pwd" appears after 5 seconds.
What I have done is:
import subprocess
f = open("/tmp/myscript.sh", "rb")
p = subprocess.Popen("sh", shell=True, stdin=f,
stdout=subprocess.PIPE, stderr=subprocess.PIPE)
f.close()
p.stdout.read()
This waits until ALL the processing is done and shows the results all at once. The desired effect is to fill the stdout pipe in real time.
Note: This expectation may seem like nonsense, but it is a sample from a bigger, more complex situation which I cannot describe here.
Another note: I can't use p.communicate(). This whole thing is inside a select.select call, so I need stdout to be a pipe.
The problem is that when you don't give an argument to read(), it reads until EOF, which means it has to wait until the subprocess exits and the pipe is closed.
If you call it with a small argument, it will return immediately after it has read that many characters:
import subprocess
f = open("/tmp/myscript.sh", "rb")
p = subprocess.Popen("sh", shell=True, stdin=f,
                     stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                     encoding='utf-8')
f.close()
while True:
    c = p.stdout.read(1)
    if not c:
        break
    print(c, end='')
print()
Note that many programs buffer their output when stdout is connected to a pipe, so this might not solve the problem in every case. The shell doesn't buffer its own output, but ls probably does. Since ls produces all of its output at once, though, it isn't a problem here.
To solve the more general problem you may need to use a pty instead of a pipe. The pexpect library is useful for this.
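For what it's worth, here is a rough standard-library sketch of the pty route (Unix only; the inline command stands in for /tmp/myscript.sh):
import os
import pty
import subprocess

master, slave = pty.openpty()                      # pseudo-terminal pair
p = subprocess.Popen('ls -l; sleep 5; pwd', shell=True,
                     stdout=slave, stderr=slave,
                     close_fds=True)
os.close(slave)                                    # the parent only needs the master end

while True:
    try:
        data = os.read(master, 1024)               # returns as soon as the child writes anything
    except OSError:                                # raised (EIO) on Linux once the child exits
        break
    if not data:
        break
    print(data.decode(), end='')

p.wait()
os.close(master)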
How can I hide the return code after calling a command using subprocess?
from netaddr import IPNetwork
import subprocess
for ip in IPNetwork('1.1.1.1/19'):
    print subprocess.call(["host", str(ip)])
If I then pipe this to a file I get the host + ip but with a return code of 0 after each line.
subprocess.call returns the return code, and you are seeing it because you are printing it. The other output is being printed to stdout from the host program, since it isn't being redirected from the subprocess.call method.
If you want to capture the output of the process you are calling you should take a look at Popen's communicate method
Here is an example on how to capture stdout and stderr.
>>> proc = subprocess.Popen(['ls', '-alh', '/tmp/foo'],
... stdout=subprocess.PIPE,
... stderr=subprocess.PIPE,
... shell=False)
>>>
>>> stdout, stderr = proc.communicate()
>>> proc.returncode
0
Probably super dead by now, but figured I'd add my two cents since I was just looking into this.
I thought the same way, that I'd have to wrap the call in print. I was getting the command's standard output plus the 0 exit code (the return value of subprocess.call) after each line as well.
When I removed the call from the print statement, the exit code was no longer printed, but the command's standard output still was.
Try this:
from netaddr import IPNetwork
import subprocess
for ip in IPNetwork('1.1.1.1/19'):
    subprocess.call(["host", str(ip)])
I don't have the netaddr module installed so I can't tell you if it works or not.
Is there a way that I can execute a shell program from Python, which prints its output to the screen, and read its output to a variable without displaying anything on the screen?
This sounds a little bit confusing, so maybe I can explain it better by an example.
Let's say I have a program that prints something to the screen when executed
bash> ./my_prog
bash> "Hello World"
When I want to read the output into a variable in Python, I read that a good approach is to use the subprocess module like so:
my_var = subprocess.check_output("./my_prog", shell=True)
With this construct, I can get the program's output into my_var (here "Hello World"), however it is also printed to the screen when I run the Python script. Is there any way to suppress this? I couldn't find anything in the subprocess documentation, so maybe there is another module I could use for this purpose?
EDIT:
I just found out that commands.getoutput() lets me do this. But is there also a way to achieve similar effects in subprocess? Because I was planning to make a Python3 version at some point.
EDIT2: Particular Example
Excerpt from the python script:
oechem_utils_path = "/soft/linux64/openeye/examples/oechem-utilities/" \
    "openeye/toolkits/1.7.2.4/redhat-RHEL5-g++4.3-x64/examples/" \
    "oechem-utilities/"
rmsd_path = oechem_utils_path + "rmsd"
for file in lMol2:
    sReturn = subprocess.check_output(
        "{rmsd_exe} {rmsd_pars} -in {sIn} -ref {sRef}".format(
            rmsd_exe=sRmsdExe, rmsd_pars=sRmsdPars,
            sIn=file, sRef=sReference),
        shell=True)
    dRmsds[file] = sReturn
Screen output (note that not "everything" is printed to the screen, only a part of the output; if I use commands.getoutput, everything works just fine):
/soft/linux64/openeye/examples/oechem-utilities/openeye/toolkits/1.7.2.4/redhat-RHEL5-g++4.3-x64/examples/oechem-utilities/rmsd: mols in: 1 out: 0
/soft/linux64/openeye/examples/oechem-utilities/openeye/toolkits/1.7.2.4/redhat-RHEL5-g++4.3-x64/examples/oechem-utilities/rmsd: confs in: 1 out: 0
/soft/linux64/openeye/examples/oechem-utilities/openeye/toolkits/1.7.2.4/redhat-RHEL5-g++4.3-x64/examples/oechem-utilities/rmsd - RMSD utility [OEChem 1.7.2]
/soft/linux64/openeye/examples/oechem-utilities/openeye/toolkits/1.7.2.4/redhat-RHEL5-g++4.3-x64/examples/oechem-utilities/rmsd: mols in: 1 out: 0
/soft/linux64/openeye/examples/oechem-utilities/openeye/toolkits/1.7.2.4/redhat-RHEL5-g++4.3-x64/examples/oechem-utilities/rmsd: confs in: 1 out: 0
To add to Ryan Haining's answer, you can also handle stderr to make sure nothing is printed to the screen:
p = subprocess.Popen(command, shell=True, stdin=subprocess.PIPE, stderr=subprocess.STDOUT, stdout=subprocess.PIPE, close_fds=True)
out,err = p.communicate()
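Because stderr=subprocess.STDOUT folds the child's stderr into the same pipe as its stdout, err will simply be None here, and everything the program writes on either stream ends up in out rather than on the screen.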
If subprocess.check_output is not working for you, use a Popen object and a PIPE to capture the program's output in Python.
prog = subprocess.Popen('./myprog', shell=True, stdout=subprocess.PIPE)
output = prog.communicate()[0]
the .communicate() method will wait for a program to finish execution and then return a tuple of (stdout, stderr) which is why you'll want to take the [0] of that.
If you also want to capture stderr then add stderr=subprocess.PIPE to the creation of the Popen object.
If you wish to capture the output of prog while it is running, instead of waiting for it to finish, you can call line = prog.stdout.readline() to read one line at a time. Note that this call will block until a line becomes available if there isn't one yet.
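A minimal sketch of that line-by-line approach (./my_prog is just the placeholder program from the question):
import subprocess

prog = subprocess.Popen('./my_prog', shell=True, stdout=subprocess.PIPE)

lines = []
while True:
    line = prog.stdout.readline()    # blocks until a line is available or the pipe closes
    if not line:                     # empty result means EOF: the program has finished
        break
    lines.append(line.decode())      # collect silently instead of printing to the screen
prog.wait()

output = ''.join(lines)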
I always used subprocess.Popen, which normally doesn't print anything to the screen.
I was looking to implement a python script that called another script and captured its stdout. The called script will contain some input and output messages eg
print ("Line 1 of Text")
variable = raw_input("Input 1 :")
print "Line 2 of Text Input: ", vairable
The section of the code I'm running is
import subprocess
cmd='testfile.py'
p = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE)
so, se = p.communicate()
print(so)
The problem that is occurring is that the stdout is not printing until after the script has been executed. This leaves a blank prompt waiting for the user input. Is there a way to get stdout to print while the called script is still running?
Thanks,
There are two problems here.
Firstly, python is buffering output to stdout and you need to prevent this. You could insert a call to sys.stdout.flush() in testfile.py as Ilia Frenkel has suggested, or you could use python -u to execute testfile.py with unbuffered I/O. (See the other stack overflow question that Ilia linked to.)
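For illustration, testfile.py with the flush calls added might look roughly like this (a sketch of the called script, not the poster's actual file):
import sys

print "Line 1 of Text"
sys.stdout.flush()                        # push the text out even though stdout is a pipe
variable = raw_input("Input 1 :")
print "Line 2 of Text Input: ", variable
sys.stdout.flush()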
Secondly, you need a way of asynchronously reading data from the sub-process and then, when it is ready for input, printing the data you've read so that the prompt for the user appears. For this, it would be very helpful to have an asynchronous version of the subprocess module.
I downloaded the asynchronous subprocess and re-wrote your script to use it, along with using python -u to get unbuffered I/O:
import async_subprocess as subprocess
cmd = ['python', '-u', 'testfile.py']
p = subprocess.Popen(cmd, stdout=subprocess.PIPE)
so = p.asyncread()
print so,
(so, se) = p.communicate()
print so
When I run this script using python -u I get the following results:
$ python -u script.py
Line 1 of Text
Input 1:
and the script pauses, waiting for input. This is the desired result.
If I then type something (e.g. "Hullo") I get the following:
$ python -u script.py
Line 1 of Text
Input 1:Hullo
Line 2 of Text Input: Hullo
You don't really need to capture its stdout: just have the child program print out its stuff and quit, instead of feeding the output into your parent program and printing it there. If you need variable output, just use a function instead.
But anyways, that's not what you asked.
I actually got this from another stackoverflow question:
import subprocess, sys
cmd = 'testfile.py'
p = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, stdin=subprocess.PIPE)
while True:
    out = p.stdout.read(20)
    if out == '' and p.poll() is not None:
        break
    if out != '':
        sys.stdout.write(out)
        sys.stdout.flush()
First it opens up your process; then it continually reads the output from p and prints it onto the screen using sys.stdout.write. The part that makes this all work is sys.stdout.flush(), which continually "flushes out" the output of the program.
I'm trying to run a shell command from within a Python script which needs to do several things:
1. The shell command is 'hspice tran.deck >! tran.lis'
2. The script should wait for the shell command to complete before proceeding
3. I need to check the return code from the command and
4. Capture STDOUT if it completed successfully else capture STDERR
I went through the subprocess module and tried out a couple of things but couldn't find a way to do all of the above.
- with subprocess.call() I could check the return code but not capture the output.
- with subprocess.check_output() I could capture the output but not the code.
- with subprocess.Popen() and Popen.communicate(), I could capture STDOUT and STDERR but not the return code.
I'm not sure how to use Popen.wait() or the returncode attribute. I also couldn't get Popen to accept '>!' or '|' as arguments.
Can someone please point me in the right direction? I'm using Python 2.7.1
EDIT: Got things working with the following code
process = subprocess.Popen('ls | tee out.txt', shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = process.communicate()
if process.returncode == 0:
    print out
else:
    print err
Also, should I use a process.wait() after the process = line or does it wait by default?
Just use .returncode after .communicate(). Also, tell Popen that what you're trying to run is a shell command, rather than a raw command line:
p = subprocess.Popen('ls | tee out.txt', shell=True, ...)
p.communicate()
print p.returncode
From the docs:
Popen.returncode
The child return code, set by poll() and wait() (and indirectly by communicate()). A None value indicates that the process hasn’t terminated yet.
A negative value -N indicates that the child was terminated by signal N (Unix only).
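For example, a child killed by SIGTERM shows up as a negative return code (a small sketch using sleep as a stand-in command):
import subprocess

p = subprocess.Popen('sleep 30', shell=True)
p.terminate()                  # send SIGTERM to the child
p.wait()
print p.returncode             # -15 on Unix, i.e. terminated by signal 15 (SIGTERM)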
Here is an example of how to interact with the shell:
>>> process = subprocess.Popen(['/bin/bash'], shell=False, stdin=subprocess.PIPE, stdout=subprocess.PIPE)
>>> process.stdin.write('echo it works!\n')
>>> process.stdout.readline()
'it works!\n'
>>> process.stdin.write('date\n')
>>> process.stdout.readline()
'wto, 13 mar 2012, 17:25:35 CET\n'
>>>
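As an aside, on Python 3 those pipes are byte streams by default, so the same interaction needs either bytes or text mode; a rough sketch of the text-mode variant:
import subprocess

process = subprocess.Popen(['/bin/bash'], stdin=subprocess.PIPE,
                           stdout=subprocess.PIPE,
                           universal_newlines=True,   # text mode: write/read str instead of bytes
                           bufsize=1)                 # line buffered
process.stdin.write('echo it works!\n')
process.stdin.flush()                                 # make sure bash actually sees the line
print(process.stdout.readline())                      # it works!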