Is there a way to execute a shell program from Python that normally prints its output to the screen, and read that output into a variable without anything being displayed on the screen?
This sounds a little confusing, so maybe I can explain it better with an example.
Let's say I have a program that prints something to the screen when executed:
bash> ./my_prog
Hello World
When I want to read the output into a variable in Python, I read that a good approach is to use the subprocess module like so:
my_var = subprocess.check_output("./my_prog", shell=True)
With this construct, I can get the program's output into my_var (here "Hello World"); however, it is also printed to the screen when I run the Python script. Is there any way to suppress this? I couldn't find anything in the subprocess documentation, so maybe there is another module I could use for this purpose?
EDIT:
I just found out that commands.getoutput() lets me do this. But is there also a way to achieve the same effect with subprocess? I am planning to make a Python 3 version at some point, and the commands module was removed in Python 3.
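For reference, Python 3 moved this helper into the subprocess module itself, so a minimal sketch of the Python 3 equivalent (assuming the same ./my_prog) would be:
import subprocess

# subprocess.getoutput is the Python 3 replacement for commands.getoutput:
# it runs the command through the shell and returns the captured output
# (stdout and stderr combined) as a string, printing nothing to the terminal.
output = subprocess.getoutput("./my_prog")
print(repr(output))  # e.g. 'Hello World'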
EDIT2: Particular Example
Excerpt from the python script:
oechem_utils_path = "/soft/linux64/openeye/examples/oechem-utilities/" \
    "openeye/toolkits/1.7.2.4/redhat-RHEL5-g++4.3-x64/examples/" \
    "oechem-utilities/"
rmsd_path = oechem_utils_path + "rmsd"
for file in lMol2:
    sReturn = subprocess.check_output(
        "{rmsd_exe} {rmsd_pars} -in {sIn} -ref {sRef}".format(
            rmsd_exe=sRmsdExe, rmsd_pars=sRmsdPars, sIn=file, sRef=sReference),
        shell=True)
    dRmsds[file] = sReturn
Screen output (note that not everything is printed to the screen, only part of the output; if I use commands.getoutput, everything works just fine):
/soft/linux64/openeye/examples/oechem-utilities/openeye/toolkits/1.7.2.4/redhat-RHEL5-g++4.3-x64/examples/oechem-utilities/rmsd: mols in: 1 out: 0
/soft/linux64/openeye/examples/oechem-utilities/openeye/toolkits/1.7.2.4/redhat-RHEL5-g++4.3-x64/examples/oechem-utilities/rmsd: confs in: 1 out: 0
/soft/linux64/openeye/examples/oechem-utilities/openeye/toolkits/1.7.2.4/redhat-RHEL5-g++4.3-x64/examples/oechem-utilities/rmsd - RMSD utility [OEChem 1.7.2]
/soft/linux64/openeye/examples/oechem-utilities/openeye/toolkits/1.7.2.4/redhat-RHEL5-g++4.3-x64/examples/oechem-utilities/rmsd: mols in: 1 out: 0
/soft/linux64/openeye/examples/oechem-utilities/openeye/toolkits/1.7.2.4/redhat-RHEL5-g++4.3-x64/examples/oechem-utilities/rmsd: confs in: 1 out: 0
To add to Ryan Haining's answer, you can also redirect stderr to make sure nothing is printed to the screen:
p = subprocess.Popen(command, shell=True, stdin=subprocess.PIPE,
                     stderr=subprocess.STDOUT, stdout=subprocess.PIPE,
                     close_fds=True)
out, err = p.communicate()
Note that stderr=subprocess.STDOUT merges the error output into out, so err will be None here.
If subprocess.check_output is not working for you, use a Popen object and a PIPE to capture the program's output in Python.
prog = subprocess.Popen('./myprog', shell=True, stdout=subprocess.PIPE)
output = prog.communicate()[0]
The .communicate() method will wait for the program to finish and then return a tuple of (stdout, stderr), which is why you want element [0] of it.
If you also want to capture stderr then add stderr=subprocess.PIPE to the creation of the Popen object.
If you wish to capture the output of prog while it is running instead of waiting for it to finish, you can call line = prog.stdout.readline() to read one line at a time. Note that this call blocks until a line is available.
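For illustration, a minimal sketch of that line-by-line pattern (reusing ./myprog from above):
import subprocess

prog = subprocess.Popen('./myprog', shell=True, stdout=subprocess.PIPE)
for line in prog.stdout:           # yields each line as the program produces it
    print(line.decode().rstrip())  # pipe contents are bytes unless text mode is requested
prog.wait()                        # reap the process once the stream is exhausted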
I have always used subprocess.Popen, which prints nothing to the screen by default.
Related
I would like to be able to run a subprocess from Python code, both seeing the output in real time and, once the process is finished, having the output in a variable.
Right now I do one of two things:
1) Run the subprocess using subprocess.call: I get the output in real time, but at the end I don't have it in a variable (and I want to parse it and extract values from it)
2) Run the subprocess using subprocess.check_output: I have the output in a variable, but if I want to see it I have to print it "manually"
Is there a way to get both things together?
I hope that is clear; I can add my code if needed.
Thanks!
EDIT:
This is my current code
I added an optional timeout parameter (default value 1200) and I also deal with shell (for some reason, commands that work on Linux do not work on Windows unless I pass shell=True). The "mode" parameter is the one I use to distinguish the cases where I want the output in real time and don't need to parse it from the other cases.
I was wondering if there is a cleaner and better way to achieve the same results.
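For illustration, a rough sketch of one way to get both behaviours at once: echo each line as it arrives and keep a copy for later parsing (the function name run_and_capture is made up, not the poster's actual code):
import subprocess

def run_and_capture(cmd, shell=False):
    proc = subprocess.Popen(cmd, shell=shell,
                            stdout=subprocess.PIPE,
                            stderr=subprocess.STDOUT,   # merge stderr so one loop sees everything
                            universal_newlines=True)    # text mode instead of bytes
    lines = []
    for line in proc.stdout:
        print(line, end='')   # real-time echo to the terminal
        lines.append(line)    # accumulate for later parsing
    proc.wait()
    return ''.join(lines)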
Assuming you are trying to run some command your_command, you can use the following:
some_proc = subprocess.Popen(['your_command'], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
Passing stdout=subprocess.PIPE captures the process's standard output instead of letting it go to the terminal. Afterwards, you can read the output; communicate() drains both pipes at once, which avoids a deadlock since stderr is piped as well:
store_in_var, _ = some_proc.communicate()
Now you can parse your store_in_var (note it is bytes; call .decode() if you need str).
import subprocess
from subprocess import PIPE
comd = input('command here : ')
comds = comd.split(' ')
f = subprocess.run(comds, stdout=PIPE, stderr=PIPE)  # no shell=True when passing an argument list
result = f.stdout.decode()
errors = f.stderr.decode()
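Note that splitting on single spaces breaks commands with quoted arguments; shlex.split understands shell-style quoting. A small variation of the same idea:
import shlex
import subprocess
from subprocess import PIPE

comd = input('command here : ')
f = subprocess.run(shlex.split(comd), stdout=PIPE, stderr=PIPE)  # handles quoted args
result = f.stdout.decode()
errors = f.stderr.decode()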
I am trying to retrieve some information from a Perl script using Python and subprocess:
command = ["perl","script.perl","arg1.txt","<","arg2.txt"]
print " ".join(command)
p = subprocess.Popen(command,stdout=subprocess.PIPE,shell=True)
text = p.stdout.read()
The join statement simply prints the command as I would enter it in the terminal, to double-check the command; that one always works. But within Python, it hangs at the subprocess.Popen() call (at p = ...).
I also tried several other methods such as call() but to no avail.
It only outputs one line of text, so I don't know how that could be the problem.
There's no need to involve the shell if you only want a simple input redirection. Open the file in Python, and pass the file handle to Popen via the stdin argument.
with open("arg2.txt") as infile:
command = ["perl", "script.perl", "arg1.txt"]
p = subprocess.Popen(command, stdout=subprocess.PIPE, stdin=infile)
text = p.stdout.read()
or
command = "perl script.perl arg1.txt < arg2.txt"
p = subprocess.Popen(command, stdout=subprocess.PIPE, shell=True)
text = p.stdout.read()
With a list and shell=True, it's not clear to me why the call to perl blocks. When I try something like
subprocess.call("cat < .bashrc".split(), shell=True)
it blocks as if it is still trying to read from the inherited standard input. If I provide it with input using
subprocess.call("cat < .bashrc".split(), shell=True, stdin=open("/dev/null"))
the call returns immediately. In either case, it appears that cat is ignoring its further arguments.
(Using python 3.2 currently)
I need to be able to:
Run a command using subprocess
Both stdout and stderr of that command need to be printed to the terminal in real time (it doesn't matter whether they both come out on stdout or on stderr)
At the same time, I need a way to know if the command printed anything to stderr (and preferably what it printed).
I've played around with subprocess pipes, as well as strange pipe redirects in bash and using tee, but so far haven't found anything that works. Is this something that's possible?
My solution:
import subprocess
process = subprocess.Popen("my command", shell=True,
stdout=None, # print to terminal
stderr=subprocess.PIPE)
duplicator = subprocess.Popen("tee /dev/stderr", shell=True, # duplicate input stream
stdin=process.stderr,
stdout=subprocess.PIPE, # catch error stream of first process
stderr=None) # print to terminal
error_stream = duplicator.stdout
print('error_stream.read() = ' + error_stream.read().decode())
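As an aside, a tee-free sketch of the same idea that doesn't depend on /dev/stderr existing: read the child's stderr pipe yourself, echo it, and keep a copy ("my command" is the same placeholder as above):
import subprocess
import sys

process = subprocess.Popen("my command", shell=True,
                           stdout=None,              # stdout still goes straight to the terminal
                           stderr=subprocess.PIPE)
captured = []
for line in process.stderr:          # read stderr as the child produces it
    sys.stderr.write(line.decode())  # echo it to the terminal...
    captured.append(line)            # ...and remember it
process.wait()
had_errors = bool(captured)          # True if the command printed anything to stderr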
Try something like this:
import os
cmd = 'for i in 1 2 3 4 5; do sleep 5; echo $i; done'
p = os.popen(cmd)
while True:
    output = p.readline()
    if not output:
        break
    print(output)
In Python 2 you can catch stderr easily as well by using os.popen3 like this:
i, o, err = os.popen3(cmd)
but there is no such function in Python 3. If you don't find a way around this, try using subprocess.Popen directly, as described here: http://www.saltycrane.com/blog/2009/10/how-capture-stdout-in-real-time-python/
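For example, a minimal Python 3 sketch along the lines of that article, reusing the command from above:
import subprocess

cmd = 'for i in 1 2 3 4 5; do sleep 5; echo $i; done'
p = subprocess.Popen(cmd, shell=True,
                     stdout=subprocess.PIPE,
                     stderr=subprocess.STDOUT,   # fold stderr into the same stream
                     universal_newlines=True)    # text mode
for line in p.stdout:
    print(line, end='')   # appears once every 5 seconds
p.wait()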
I was looking to implement a Python script that calls another script and captures its stdout. The called script contains some input and output messages, e.g.
print ("Line 1 of Text")
variable = raw_input("Input 1 :")
print "Line 2 of Text Input: ", variable
The section of the code I'm running is
import subprocess
cmd='testfile.py'
p = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE)
so, se = p.communicate()
print(so)
The problem that is occurring is that the stdout is not printing until after the script has been executed. This leaves a blank prompt waiting for the user input. Is there a way to get stdout to print while the called script is still running?
Thanks,
There are two problems here.
Firstly, Python is buffering output to stdout and you need to prevent this. You could insert a call to sys.stdout.flush() in testfile.py, as Ilia Frenkel suggested, or you could use python -u to execute testfile.py with unbuffered I/O. (See the other Stack Overflow question that Ilia linked to.)
Secondly, you need a way of asynchronously reading data from the sub-process and then, when it is ready for input, printing what you have read so that the prompt appears for the user. For this, an asynchronous version of the subprocess module is very helpful.
I downloaded the asynchronous subprocess and re-wrote your script to use it, along with using python -u to get unbuffered I/O:
import async_subprocess as subprocess
cmd = ['python', '-u', 'testfile.py']
p = subprocess.Popen(cmd, stdout=subprocess.PIPE)
so = p.asyncread()
print so,
(so, se) = p.communicate()
print so
When I run this script using python -u I get the following results:
$ python -u script.py
Line 1 of Text
Input 1:
and the script pauses, waiting for input. This is the desired result.
If I then type something (e.g. "Hullo") I get the following:
$ python -u script.py
Line 1 of Text
Input 1:Hullo
Line 2 of Text Input: Hullo
You don't really need to capture its stdout: just have the child program print its output and quit, instead of feeding the output into your parent program and printing it there. If you need variable output, just use a function instead.
But anyways, that's not what you asked.
I actually got this from another stackoverflow question:
import subprocess, sys
cmd='testfile.py'
p = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, stdin=subprocess.PIPE)
while True:
    out = p.stdout.read(20)
    if out == '' and p.poll() is not None:
        break
    if out != '':
        sys.stdout.write(out)
        sys.stdout.flush()
First, it opens your process; then it continually reads the output from p and prints it onto the screen using sys.stdout.write. The part that makes this all work in real time is sys.stdout.flush(), which continually flushes the program's output.
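Under Python 3 the pipe yields bytes rather than str, so the same loop needs b'' comparisons and a decode before writing; a sketch:
import subprocess, sys

p = subprocess.Popen('testfile.py', shell=True,
                     stdout=subprocess.PIPE, stdin=subprocess.PIPE)
while True:
    out = p.stdout.read(20)                   # bytes in Python 3
    if out == b'' and p.poll() is not None:
        break
    if out != b'':
        sys.stdout.write(out.decode())
        sys.stdout.flush()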
In a script, I want to run a .exe with some command-line parameters such as "-a", and then redirect its standard output to a file. How can I implement that?
You can redirect directly to a file using subprocess.
import subprocess
with open('output.txt', 'w') as output_f:
    p = subprocess.Popen('Text/to/execute with-arg',
                         stdout=output_f,
                         stderr=output_f)
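For a concrete (hypothetical) invocation matching the question's .exe with the -a flag:
import subprocess

with open('output.txt', 'w') as output_f:
    p = subprocess.Popen(['myprog.exe', '-a'],   # hypothetical executable name
                         stdout=output_f,
                         stderr=output_f)
    p.wait()   # let the program finish before the file is closed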
Easiest is os.system("the.exe -a >thefile.txt"), but there are many other ways, for example with the subprocess module in the standard library.
You can do something like this
e.g. to read output of ls -l (or any other command)
p = subprocess.Popen(["ls","-l"],stdout=subprocess.PIPE)
print p.stdout.read() # or put it in a file
You can do a similar thing for stderr/stdin (see the sketch below),
but as Alex mentioned, if you just want it in a file, just redirect the command's output to a file.
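For instance, a small sketch that captures stderr separately (the path /no/such/path is just an example that makes ls complain):
import subprocess

p = subprocess.Popen(["ls", "-l", "/no/such/path"],
                     stdout=subprocess.PIPE,
                     stderr=subprocess.PIPE)
out, err = p.communicate()   # drains both pipes without deadlocking
print(err.decode())          # the message ls wrote to stderr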
If you just want to run the executable and wait for the results, Anurag's solution is probably the best. I needed to respond to each line of output as it arrived, and found the following worked:
1) Create an object with a write(text) method. Redirect stdout to it (sys.stdout = obj). In your write method, deal with the output as it arrives.
2) Run a method in a separate thread with something like the following code:
p = subprocess.Popen('Text/to/execute with-arg', stdout=subprocess.PIPE,
stderr=subprocess.PIPE, shell=False)
while p.poll() is None:
    print p.stdout.readline().strip()
Because you've redirected stdout, PIPE will send the output to your write method line by line. If you're not certain you're going to get line breaks, read(amount) works too, I believe.
3) Remember to redirect stdout back to the default: sys.stdout = sys.__stdout__. (A sketch of steps 1 and 3 follows below.)
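Putting steps 1 and 3 together, a rough sketch (the class name LineHandler is made up):
import sys

class LineHandler(object):
    def write(self, text):
        # deal with the output as it arrives; here we just forward it
        sys.__stdout__.write(text)
    def flush(self):
        sys.__stdout__.flush()

sys.stdout = LineHandler()      # step 1: redirect stdout to the object
print("this goes through LineHandler")
sys.stdout = sys.__stdout__     # step 3: restore the default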
Although the title (.exe) suggests this is a Windows problem, I have to share that the accepted answer (subprocess.Popen() with stdout/stderr arguments) didn't work for me on Mac OS X (10.8) with Python 2.7.
I had to use subprocess.check_output() (Python 2.7 and above) to make it work. Example:
import subprocess
cmd = 'ls -l'
out = subprocess.check_output(cmd, shell=True)
with open('my.log', 'w') as f:
    f.writelines(out)
Note that this solution writes all the accumulated output out only when the program finishes. If you want to monitor the log file during the run, you may want to try something else; in my own case, I only cared about the end result.
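If monitoring during the run matters, a sketch that appends each line to the log as it arrives (same ls -l example as above):
import subprocess

cmd = 'ls -l'
with open('my.log', 'w') as f:
    p = subprocess.Popen(cmd, shell=True,
                         stdout=subprocess.PIPE,
                         stderr=subprocess.STDOUT,
                         universal_newlines=True)
    for line in p.stdout:
        f.write(line)
        f.flush()   # make each line visible to tail -f immediately
    p.wait()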