I wrote a simple Python script for my application and predefined some frequently used commands, like make, etc.
I've written a function for running system commands (Linux):
import subprocess

def runCommand(commandLine):
    print('############## Running command: ' + commandLine)
    p = subprocess.Popen(commandLine, shell=True, stdout=subprocess.PIPE)
    print(p.stdout.read().decode('utf-8'))
Everything works well except for a few things:
I'm using cmake, and its output is colored. Is there any chance of preserving the colors in the output?
I can see the output only after the process has finished. For example, make runs for a long time, but I see its output only after the full compilation. How can I read it asynchronously?
I'm not sure about colors, but here's how to poll the subprocess's stdout one line at a time:
import subprocess

proc = subprocess.Popen('cmake', shell=True, stdout=subprocess.PIPE)
while proc.poll() is None:
    output = proc.stdout.readline()
    print(output.decode('utf-8'), end='')
Don't forget to read from stderr as well, as I'm sure cmake will emit information there.
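One simple option (my suggestion, not part of this answer) is to merge stderr into the same pipe, so the read loop above sees both streams:
import subprocess

# Merge stderr into stdout so a single read loop picks up both streams
proc = subprocess.Popen('cmake', shell=True,
                        stdout=subprocess.PIPE, stderr=subprocess.STDOUT)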
You're not getting color because cmake detects whether its stdout is a terminal; if it isn't, cmake doesn't color its own output. Some programs give you an option to force colored output. Unfortunately cmake does not, so you're out of luck there, unless you want to patch cmake yourself.
Lots of programs do this, for example grep:
# grep test test.txt
test
^
|
|------- this word is red
Now pipe it to cat:
# grep test test.txt | cat
test
^
|
|------- no longer red
grep has the option --color=always to force color:
# grep test test.txt --color=always | cat
test
^
|
|------- red again
Regarding how to get the output of your process before it finishes, it should be possible by replacing:
p.stdout.read()
with:
for line in p.stdout:
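As a minimal sketch, this is my assumption of how that looks wired into the question's runCommand:
import subprocess

def runCommand(commandLine):
    print('############## Running command: ' + commandLine)
    p = subprocess.Popen(commandLine, shell=True, stdout=subprocess.PIPE)
    for line in p.stdout:                 # yields output lines as they arrive
        print(line.decode('utf-8'), end='')
    p.wait()                              # wait for the process to finish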
Regarding how to save colored output, there isn't anything special about that. For example, if the raw output is saved to a file, then the next time cat <logfile> is executed the console will interpret the escape sequences and display the colors as expected.
For CMake specifically, you can force color output using the option CLICOLOR_FORCE=1:
import shlex
import subprocess

command = 'make CLICOLOR_FORCE=1'
args = shlex.split(command)
proc = subprocess.Popen(args, stdout=subprocess.PIPE)
Then print as in the accepted answer:
while proc.poll() is None:
    output = proc.stdout.readline()
    print(output.decode('utf-8'))
If you decode to utf-8 before printing, you should see colored output.
If you print the result as a byte literal (i.e. without decoding), you should see the escape sequences for the various colors.
Consider trying the option universal_newlines=True:
proc = subprocess.Popen(args, stdout=subprocess.PIPE, universal_newlines=True)
This causes the call to proc.stdout.readline() to return a string instead of a byte literal so you can/must skip the call to decode().
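For example, a short sketch combining this option with the polling loop above (args as defined in the earlier snippet):
proc = subprocess.Popen(args, stdout=subprocess.PIPE, universal_newlines=True)
while proc.poll() is None:
    line = proc.stdout.readline()   # returns str, not bytes
    print(line, end='')             # no decode() needed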
To get output asynchronously, do something like the approach in this recipe: http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/440554
Not sure if you can capture the coloured output. If the escape colour codes come through, you may be able to.
Worth noting here is the use of the script command to act as a pseudo-terminal, so the child is detected as a tty rather than a redirected (pipe) file descriptor; see:
bash command preserve color when piping
Works like a charm...
As per the example in the question, simply let script execute cmake:
import subprocess

# util-linux `script`: -c runs the given command; /dev/null discards the typescript file
proc = subprocess.Popen('script -c cmake /dev/null', shell=True, stdout=subprocess.PIPE)
while proc.poll() is None:
    output = proc.stdout.readline()
    print(output.decode('utf-8'), end='')
This tricks cmake into thinking it's being executed from a terminal and will produce the ANSI candy you're after.
nJoy!
Related
I keep trying to run this piece of code, but every time I print, I seem to get nothing compared to the output I see when running the script directly.
p = subprocess.run(["powershell.exe", "C://Users//xxxxx//Documents//betterNetstatOut.ps1"], shell=True, capture_output=True, text=True)
output = p.stdout
print(output)
My PowerShell command is a very basic println at this point:
Write-Output 'Hello world'
but running print on my output seems to return an empty string. I also tried running subprocess.Popen() and subprocess.call(), and they all seem to return an empty string instead of 'Hello World'. Eventually, I would like to parse many lines and move them to a dataframe, but I am stuck on this one line first.
Your PowerShell command likely produced only stderr output, which is why you saw no output given that you only printed p.stdout - also printing p.stderr would surface any stderr output (which typically contains error messages).
Assuming your script file path is correct, the likeliest explanation for receiving only stderr output is that your effective PowerShell execution policy prevents execution of scripts (.ps1 files), which you can bypass with -ExecutionPolicy Bypass in a call to the Windows PowerShell CLI, powershell.exe.
Additionally:
There's no need for double slashes (//) in your script path; while it still works, / is sufficient.
It's better to use the -File parameter rather than -Command (which is implied) for invoking scripts via the PowerShell CLI - see this answer for more information.
For a predictable execution environment and to avoid unnecessary overhead from loading profiles, using -NoProfile is advisable.
You don't need shell=True, which, due to calling via cmd.exe, only slows your command down.
To put it all together:
import subprocess

p = subprocess.run(
    ["powershell.exe",
     "-NoProfile",
     "-ExecutionPolicy", "Bypass",
     "-File", "C:/Users/xxxxx/Documents/betterNetstatOut.ps1"],
    capture_output=True, text=True
)
print('--- stdout --')
print(p.stdout)
print('--- stderr --')
print(p.stderr)
import subprocess

p = subprocess.run(
    ["powershell.exe", "-ExecutionPolicy", "Bypass",
     "-File", "C:/Users/xxxxx/Documents/betterNetstatOut.ps1"],
    capture_output=True, text=True
)
print(p.stdout)
I want to execute the bash command
'/bin/echo </verbosegc> >> /tmp/jruby.log'
in Python using Popen. The code does not raise any exception, but no change is made to jruby.log after execution. The Python code is shown below.
>>> command='/bin/echo </verbosegc> >> '+fullpath
>>> command
'/bin/echo </verbosegc> >> /tmp/jruby.log'
>>> process = subprocess.Popen(command.split(), stdout=subprocess.PIPE, stderr=subprocess.PIPE, close_fds=True)
>>> output= process.communicate()[0]
>>> output
'</verbosegc> >> /tmp/jruby.log\n'
I also printed process.pid and then checked the pid using ps -ef | grep pid. The result shows that the process has finished.
Just pass a file object if you want to append the output to a file; you cannot redirect to a file unless you set shell=True:
command = ['/bin/echo', '</verbosegc>']
with open('/tmp/jruby.log', 'a') as f:
    subprocess.check_call(command, stdout=f, stderr=subprocess.STDOUT)
The first argument to subprocess.Popen is the array ['/bin/echo', '</verbosegc>', '>>', '/tmp/jruby.log']. When the first argument to subprocess.Popen is an array, it does not launch a shell to run the command, and the shell is what's responsible for interpreting >> /tmp/jruby.log to mean "write output to jruby.log".
In order to make the >> redirection work in this command, you'll need to pass command directly to subprocess.Popen() without splitting it into a list, and set shell=True so a shell actually interprets the redirection. You'll also need to quote the first argument (or else the shell will interpret the "<" and ">" characters in ways you don't want):
command = '/bin/echo "</verbosegc>" >> /tmp/jruby.log'
process = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, close_fds=True)
Consider the following:
command = ['printf "%s\n" "$1" >>"$2"',  # shell script to execute
           '',                           # $0 in the shell
           '</verbosegc>',               # $1
           '/tmp/jruby.log']             # $2
subprocess.Popen(command, shell=True)
The first argument is a shell script referring to $1 and $2, which are in turn passed as separate arguments. Keeping data separate from code, rather than trying to substitute the former into the latter, is a precaution against shell injection (think of this as an analog to SQL injection).
Of course, don't actually do anything like this in Python -- the native primitives for file IO are far more appropriate.
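For instance, a minimal pure-Python sketch of what this command actually does (my illustration, not from the original answer):
# Append the line directly; no subprocess or shell involved
with open('/tmp/jruby.log', 'a') as f:
    f.write('</verbosegc>\n')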
Have you tried without splitting the command and using shell=True? My usual format is:
process = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, shell=True)
output = process.stdout.read() # or .readlines()
Is there a way that I can execute a shell program from Python, which prints its output to the screen, and read its output to a variable without displaying anything on the screen?
This sounds a little bit confusing, so maybe I can explain it better by an example.
Let's say I have a program that prints something to the screen when executed
bash> ./my_prog
bash> "Hello World"
When I want to read the output into a variable in Python, I read that a good approach is to use the subprocess module like so:
my_var = subprocess.check_output("./my_prog", shell=True)
With this construct, I can get the program's output into my_var (here "Hello World"); however, it is also printed to the screen when I run the Python script. Is there any way to suppress this? I couldn't find anything in the subprocess documentation, so maybe there is another module I could use for this purpose?
EDIT:
I just found out that commands.getoutput() lets me do this. But is there also a way to achieve similar effects in subprocess? Because I was planning to make a Python3 version at some point.
EDIT2: Particular Example
Excerpt from the Python script:
oechem_utils_path = "/soft/linux64/openeye/examples/oechem-utilities/"\
    "openeye/toolkits/1.7.2.4/redhat-RHEL5-g++4.3-x64/examples/"\
    "oechem-utilities/"
rmsd_path = oechem_utils_path + "rmsd"
for file in lMol2:
    sReturn = subprocess.check_output(
        "{rmsd_exe} {rmsd_pars} -in {sIn} -ref {sRef}".format(
            rmsd_exe=sRmsdExe, rmsd_pars=sRmsdPars, sIn=file, sRef=sReference),
        shell=True)
    dRmsds[file] = sReturn
Screen output (note that not "everything" is printed to the screen, only a part of the output; if I use commands.getoutput, everything works just fine):
/soft/linux64/openeye/examples/oechem-utilities/openeye/toolkits/1.7.2.4/redhat-RHEL5-g++4.3-x64/examples/oechem-utilities/rmsd: mols in: 1 out: 0
/soft/linux64/openeye/examples/oechem-utilities/openeye/toolkits/1.7.2.4/redhat-RHEL5-g++4.3-x64/examples/oechem-utilities/rmsd: confs in: 1 out: 0
/soft/linux64/openeye/examples/oechem-utilities/openeye/toolkits/1.7.2.4/redhat-RHEL5-g++4.3-x64/examples/oechem-utilities/rmsd - RMSD utility [OEChem 1.7.2]
/soft/linux64/openeye/examples/oechem-utilities/openeye/toolkits/1.7.2.4/redhat-RHEL5-g++4.3-x64/examples/oechem-utilities/rmsd: mols in: 1 out: 0
/soft/linux64/openeye/examples/oechem-utilities/openeye/toolkits/1.7.2.4/redhat-RHEL5-g++4.3-x64/examples/oechem-utilities/rmsd: confs in: 1 out: 0
To add to Ryan Haining's answer, you can also handle stderr to make sure nothing is printed to the screen:
p = subprocess.Popen(command, shell=True, stdin=subprocess.PIPE, stderr=subprocess.STDOUT, stdout=subprocess.PIPE, close_fds=True)
out, err = p.communicate()
If subprocess.check_output is not working for you, use a Popen object and a PIPE to capture the program's output in Python.
prog = subprocess.Popen('./myprog', shell=True, stdout=subprocess.PIPE)
output = prog.communicate()[0]
The .communicate() method will wait for the program to finish execution and then return a tuple of (stdout, stderr), which is why you'll want to take the [0] of that.
If you also want to capture stderr then add stderr=subprocess.PIPE to the creation of the Popen object.
If you wish to capture the output of prog while it is running instead of waiting for it to finish, you can call line = prog.stdout.readline() to read one line at a time. Note that this will hang if there are no lines available until there is one.
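Here's a small sketch of that pattern; handle_line is a hypothetical placeholder for your own per-line processing:
import subprocess

prog = subprocess.Popen('./myprog', shell=True, stdout=subprocess.PIPE)
for line in iter(prog.stdout.readline, b''):   # b'' signals end of stream
    handle_line(line)                          # hypothetical per-line handler
prog.wait()                                    # reap the child process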
I always used subprocess.Popen, which normally shows no output.
I was looking to implement a Python script that calls another script and captures its stdout. The called script contains some input and output messages, e.g.:
print("Line 1 of Text")
variable = raw_input("Input 1 :")
print "Line 2 of Text Input: ", variable
The section of the code I'm running is
import subprocess
cmd='testfile.py'
p = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE)
so, se = p.communicate()
print(so)
The problem that is occurring is that the stdout is not printing until after the script has been executed. This leaves a blank prompt waiting for the user input. Is there a way to get stdout to print while the called script is still running?
Thanks,
There are two problems here.
Firstly, Python is buffering output to stdout, and you need to prevent this. You could insert a call to sys.stdout.flush() in testfile.py as Ilia Frenkel has suggested, or you could use python -u to execute testfile.py with unbuffered I/O. (See the other Stack Overflow question that Ilia linked to.)
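For illustration, a sketch of what testfile.py might look like with the first fix applied (Python 2, matching the question's raw_input; the flush call is the only addition):
import sys

print("Line 1 of Text")
sys.stdout.flush()                  # push the buffered line out before blocking on input
variable = raw_input("Input 1 :")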
Secondly, you need a way of asynchronously reading data from the sub-process and then, when it is ready for input, printing the data you've read so that the prompt for the user appears. For this, it would be very helpful to have an asynchronous version of the subprocess module.
I downloaded the asynchronous subprocess and re-wrote your script to use it, along with using python -u to get unbuffered I/O:
import async_subprocess as subprocess
cmd = ['python', '-u', 'testfile.py']
p = subprocess.Popen(cmd, stdout=subprocess.PIPE)
so = p.asyncread()
print so,
(so, se) = p.communicate()
print so
When I run this script using python -u I get the following results:
$ python -u script.py
Line 1 of Text
Input 1:
and the script pauses, waiting for input. This is the desired result.
If I then type something (e.g. "Hullo") I get the following:
$ python -u script.py
Line 1 of Text
Input 1:Hullo
Line 2 of Text Input: Hullo
You don't really need to capture its stdout; just have the child program print out its stuff and quit, instead of feeding the output into your parent program and printing it there. If you need variable output, just use a function instead.
But anyways, that's not what you asked.
I actually got this from another Stack Overflow question:
import subprocess, sys

cmd = 'testfile.py'
p = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, stdin=subprocess.PIPE)
while True:
    out = p.stdout.read(20)
    if out == '' and p.poll() is not None:
        break
    if out != '':
        sys.stdout.write(out)
        sys.stdout.flush()
First, it opens up your process: then it continually reads the output from p and prints it onto the screen using sys.stdout.write. The part that makes this all work is sys.stdout.flush(), which will continually "flush out" the output of the program.
I have a problem piping a simple subprocess.Popen.
Code:
import subprocess

cmd = 'cat file | sort -g -k3 | head -20 | cut -f2,3'
p = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE)
for line in p.stdout:
    print(line.decode().strip())
Output for a file ~1000 lines long:
...
sort: write failed: standard output: Broken pipe
sort: write error
Output for a file >241 lines long:
...
sort: fflush failed: standard output: Broken pipe
sort: write error
Output for a file <241 lines long is fine.
I have been reading the docs and googling like mad, but there is something fundamental about the subprocess module that I'm missing ... maybe to do with buffers. I've tried p.stdout.flush() and playing with the buffer size and p.wait(). I've tried to reproduce this with commands like 'sleep 20; cat moderatefile', but this seems to run without error.
From the recipes on subprocess docs:
# To replace shell pipeline like output=`dmesg | grep hda`
p1 = Popen(["dmesg"], stdout=PIPE)
p2 = Popen(["grep", "hda"], stdin=p1.stdout, stdout=PIPE)
p1.stdout.close()  # Allow p1 to receive a SIGPIPE if p2 exits.
output = p2.communicate()[0]
This is because you shouldn't use "shell pipes" in the command passed to subprocess.Popen; you should use subprocess.PIPE, like this:
from subprocess import Popen, PIPE

p1 = Popen(['cat', 'file'], stdout=PIPE)
p2 = Popen(['sort', '-g', '-k', '3'], stdin=p1.stdout, stdout=PIPE)
p1.stdout.close()  # let p1 receive SIGPIPE if p2 exits first
p3 = Popen(['head', '-20'], stdin=p2.stdout, stdout=PIPE)
p2.stdout.close()
p4 = Popen(['cut', '-f2,3'], stdin=p3.stdout, stdout=PIPE)
p3.stdout.close()
final_output = p4.stdout.read()
But I have to say that what you're trying to do could be done in pure Python instead of calling a bunch of shell commands.
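As a rough sketch of that idea (my own illustration; it assumes tab-separated columns with numeric values in column 3, since cut -f splits on tabs):
# Pure-Python version of: sort -g -k3 | head -20 | cut -f2,3
with open('file') as f:
    rows = [line.rstrip('\n').split('\t') for line in f]
rows.sort(key=lambda r: float(r[2]))    # sort -g -k3: general-numeric sort on column 3
for r in rows[:20]:                     # head -20
    print('\t'.join((r[1], r[2])))      # cut -f2,3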
I have been having the same error. I even put the pipe in a bash script and executed that instead of the pipe in Python; from Python it would get the broken pipe error, from bash it wouldn't.
It seems to me that perhaps the last command prior to the head is throwing an error, as its (the sort's) stdout is closed. Python must be picking up on this, whereas with the shell the error is silent. I've changed my code to consume the entire input and the error went away.
It would also make sense that smaller files work, as the pipe probably buffers the entire output before head exits. This would explain the breaks on larger files.
e.g., instead of a 'head -1' (in my case I only wanted the first line), I used awk 'NR == 1'.
There are probably better ways of doing this depending on where the 'head -X' occurs in the pipe.
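For example, a sketch of that workaround applied to the question's pipeline (my own adaptation, assuming the first 20 lines are wanted, as with head -20):
import subprocess

# awk 'NR <= 20' keeps the first 20 lines but still reads the whole stream,
# so sort never writes into a closed pipe
cmd = "cat file | sort -g -k3 | awk 'NR <= 20' | cut -f2,3"
p = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE)
print(p.communicate()[0].decode())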
You don't need shell=True. Don't invoke the shell. This is how I would do it:
# here cmd must be an argument list, e.g. ['sort', '-g', '-k3', 'file']
p = subprocess.Popen(cmd, stdout=subprocess.PIPE)
stdout_value = p.communicate()[0]  # the output
See if you face the problem about the buffer after using this?
Try using communicate() rather than reading directly from stdout.
The Python docs say this:
"Warning: Use communicate() rather than .stdin.write, .stdout.read or .stderr.read to avoid deadlocks due to any of the other OS pipe buffers filling up and blocking the child process."
http://docs.python.org/library/subprocess.html#subprocess.Popen.stdout
p = subprocess.Popen(cmd, stdout=subprocess.PIPE)
output = p.communicate()[0]
for line in output.splitlines():
    pass  # do stuff with each line