Quick and simple: I'm calling a process with subprocess.Popen which will run continuously in cmd/terminal. I want to call a function if a certain string of text is displayed in cmd/terminal. How would I do this? I'm rather new to Python, so the simplest solution will probably be the best one.
Thank you very much!
The output you are looking for is either coming from stdout or stderr. Let's assume it's coming from stdout. (If it is coming from stderr the solution is analogous; just change stdout to stderr, below.)
import subprocess

PIPE = subprocess.PIPE
# text=True makes readline() return str, so the '' sentinel terminates the
# loop (on Python < 3.7, use universal_newlines=True instead)
proc = subprocess.Popen(['ls'], stdout=PIPE, text=True)
for line in iter(proc.stdout.readline, ''):
    if line.startswith('a'):
        print(line)
Replace line.startswith('a') with whatever condition is appropriate, and (of course) replace ['ls'] with whatever command you desire.
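For the "call a function when a certain string appears" part of the question, a minimal sketch along the same lines (your_command, the trigger string, and on_match are all placeholders to adapt):
import subprocess

def on_match(line):
    # Placeholder: do whatever should happen when the text appears
    print("matched:", line.rstrip())

proc = subprocess.Popen(['your_command'], stdout=subprocess.PIPE, text=True)
for line in iter(proc.stdout.readline, ''):
    if 'certain string' in line:
        on_match(line)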
The only problem with @unutbu's code (this doesn't apply to your case; it's more of a general point) is that it blocks your application on every readline call, so you cannot do anything else in the meantime.
If you want a non-blocking solution, you should create a file, open it once in write mode and once in read mode, pass the writer to Popen, and read from the reader. Reading from the reader will always be non-blocking.
import io
import subprocess
import time

command = ['ls']  # whatever command you want to run
filename = 'temp.file'
with io.open(filename, 'wb') as writer, io.open(filename, 'rb', 1) as reader:
    process = subprocess.Popen(command, stdout=writer)
    # Do whatever you want in your code
    time.sleep(0.5)
    data = reader.read()  # non-blocking read
    # Do more stuff
    data = reader.read()  # another non-blocking read...
    # ...
And so on...
This doesn't apply to your particular case, in which @unutbu's solution works perfectly well. I added it just for completeness' sake...
I want to read the content of a file which was written by a different function.
from subprocess import *
import os

def compile():
    f = open("reddy.txt", "w+")
    # I have even tried "with open" but it is not working; it works with r+
    # but then it appends to the file.
    p = Popen("gcc -c rahul.c", stdout=f, shell=True, stderr=STDOUT)
    f.close()

def run():
    p1 = Popen("gcc -o r.exe rahul.c", stdout=PIPE, shell=True, stderr=PIPE)
    p2 = Popen("r.exe", stdout=PIPE, shell=True, stderr=PIPE)
    print(p2.stdout.read())
    p2.kill()

compile()
f1 = open("reddy.txt", "w+")
first_char = f1.readline()  # unable to read here ....!!!!!!
print(first_char)
#run()
first_char should contain the first line of reddy.txt, but it is showing up empty.
You are assuming that Popen finishes the process, but it doesn't; Popen merely starts a process. Unless the compilation is extremely fast, it's quite likely that reddy.txt will be empty at the point where you try to read it.
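A minimal sketch of just that fix, keeping the structure of the posted code (wait for the compiler before reading):
from subprocess import Popen, STDOUT

f = open("reddy.txt", "w+")
p = Popen("gcc -c rahul.c", stdout=f, shell=True, stderr=STDOUT)
p.wait()  # block until the compiler has actually finished
f.close()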
Rather than managing the wait by hand, though: with Python 3.5+ you want subprocess.run(), which blocks until the command completes.
# Don't import *
from subprocess import run as s_run, PIPE, STDOUT

# Remove unused import
#import os

def compile():
    # Use a context manager
    with open("reddy.txt", "w+") as f:
        # For style points, avoid shell=True
        s_run(["gcc", "-c", "rahul.c"], stdout=f, stderr=STDOUT,
              # Check that the compilation actually succeeds
              check=True)

def run():
    compile()  # use the function we just defined instead of repeating yourself
    p2 = s_run(["r.exe"], stdout=PIPE, stderr=PIPE,
               # Check that the process succeeds
               check=True,
               # Decode output from bytes() to str()
               universal_newlines=True)
    print(p2.stdout)

compile()
# Open file for reading, not writing!
with open("reddy.txt", "r") as f1:
    first_char = f1.readline()
print(first_char)
(I adapted the run() function along the same lines, though it's not being used in any of the code you posted.)
first_char is misleadingly named; readline() will read an entire line. If you want just the first byte, try
first_char = f1.read(1)
If you need to be compatible with older Python versions, try check_output or check_call instead of run. If you are on 3.7+ you can use text=True instead of the older and slightly misleadingly named universal_newlines=True.
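For example, a sketch of the compile step for older Pythons using check_call (available since 2.5):
from subprocess import check_call, STDOUT

def compile():
    with open("reddy.txt", "w+") as f:
        # Raises CalledProcessError if gcc exits with a non-zero status
        check_call(["gcc", "-c", "rahul.c"], stdout=f, stderr=STDOUT)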
For more details about the changes I made, maybe see also this.
If you have a look at the documentation on open, you can see that when you use w to open a file, it will first truncate that file's contents. That means there will be no output to read, exactly as you describe.
Since you only want to read the file you should use r in the open statement:
f1 = open("reddy.txt", "r")
I've a snippet like this:
import sys

my_string = "foo bar"

def print_string(fd=sys.stdout):
    print(my_string, file=fd)
How do I get to pipe the output of print_string to a pager, say, less?
I'm aware of using subprocess.Popen with stdin=PIPE, and then using proc.communicate(), but then I can only write my_string directly, not redirect from an existing descriptor.
Admittedly a bit silly, but I tried the below; I'm not surprised that it doesn't work:
proc = subprocess.Popen("less -".split(), stdin=sys.stdout)
print_string()
proc.wait()
Git commands seem to do the same thing, effectively: they pipe their output through a pager; I was trying to achieve a similar effect.
Less needs to read from the "real" stdin to get key presses; otherwise it can't react to user input. Instead, you can create a temporary file and let less read that:
import subprocess
import tempfile

with tempfile.NamedTemporaryFile("w") as f:
    f.write("hello world!")
    f.flush()  # flush, or otherwise the content might not be
               # written yet when less tries to read it
    p = subprocess.Popen(["/usr/bin/less", f.name])
    p.wait()
This might have security consequences; best to read the documentation on tempfile before using it for anything security-sensitive.
I'm also not sure how git does it or if there is a better way, but it worked in my short tests.
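For what it's worth, a sketch of an alternative that skips the temporary file: most less implementations read keystrokes from /dev/tty when their stdin is a pipe (which, as far as I know, is roughly how git's pager works), so piping the content in directly may also do the job:
import subprocess

# less takes the content from the pipe and (on most implementations)
# its key presses from /dev/tty, so it stays interactive.
p = subprocess.Popen(["less"], stdin=subprocess.PIPE, text=True)
p.stdin.write("hello world!\n")
p.stdin.close()
p.wait()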
Is there a way in Python to do the equivalent of the UNIX command line tee? I'm doing a typical fork/exec pattern, and I'd like the stdout from the child to appear in both a log file and on the stdout of the parent simultaneously without requiring any buffering.
In this Python code, for instance, the stdout of the child ends up in the log file, but not on the stdout of the parent.
import os

pid = os.fork()
logFile = open(path, "w")  # path defined elsewhere
if pid == 0:
    os.dup2(logFile.fileno(), 1)
    os.execv(cmd, [cmd])  # execv also needs an argument list
edit: I do not wish to use the subprocess module. I'm doing some complicated stuff with the child process that requires me to call fork manually.
Here you have a working solution without using the subprocess module. Although, you could use subprocess for the tee process while still using the exec* function family for your custom subprocess (just use stdin=subprocess.PIPE and then duplicate that descriptor onto your stdout); a sketch of that variant appears after the note below.
import os, time, sys

pr, pw = os.pipe()
pid = os.fork()
if pid == 0:
    os.close(pw)
    os.dup2(pr, sys.stdin.fileno())
    os.close(pr)
    os.execv('/usr/bin/tee', ['tee', 'log.txt'])
else:
    os.close(pr)
    os.dup2(pw, sys.stdout.fileno())
    os.close(pw)
    pid2 = os.fork()
    if pid2 == 0:
        # Replace with your custom process call
        os.execv('/usr/bin/yes', ['yes'])
    else:
        try:
            while True:
                time.sleep(1)
        except KeyboardInterrupt:
            pass
Note that the tee command, internally, does the same thing as Ben suggested in his answer: reading input and looping over the output file descriptors while writing to them. It may be more efficient because of the optimized C implementation, but you have the overhead of the extra pipes (I don't know for sure which solution is more efficient, but in my opinion reassigning a custom file-like object to stdout is the more elegant solution).
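For completeness, a minimal sketch of the subprocess-based variant mentioned at the top (tee managed by subprocess, the custom child still exec'd by hand; 'yes' is a stand-in as above):
import os
import subprocess
import sys

# Let subprocess manage tee, then point our stdout (fd 1) at tee's stdin.
tee = subprocess.Popen(['tee', 'log.txt'], stdin=subprocess.PIPE)
os.dup2(tee.stdin.fileno(), sys.stdout.fileno())

pid = os.fork()
if pid == 0:
    # Replace with your custom process call
    os.execv('/usr/bin/yes', ['yes'])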
Some more resources:
How do I duplicate sys.stdout to a log file in python?
http://www.shallowsky.com/blog/programming/python-tee.html
In the following, SOMEPATH is the path to the child executable, in a format suitable for subprocess.Popen (see its docs).
import sys, subprocess

f = open('logfile.txt', 'w')
# text=True so read() returns str, and the '' comparisons below work
proc = subprocess.Popen(SOMEPATH, stdout=subprocess.PIPE, text=True)
while True:
    out = proc.stdout.read(1)
    if out == '' and proc.poll() is not None:
        break
    if out != '':
        # CR workaround since chars are read one by one, and Windows interprets
        # both CR and LF as end of lines. Linux only has LF.
        if out != '\r':
            f.write(out)
        sys.stdout.write(out)
        sys.stdout.flush()
Would an approach like this do what you want?
import sys

class Log(object):
    def __init__(self, filename, mode, buffering):
        self.filename = filename
        self.mode = mode
        self.handle = open(filename, mode, buffering)

    def write(self, thing):
        self.handle.write(thing)
        sys.stdout.write(thing)
You'd probably need to implement more of the file interface for this to be really useful (and I've left out proper defaults for mode and buffering, should you want them). You could then do all your writes in the child process to an instance of Log. Or, if you wanted to be really magic, and you're sure you implement enough of the file interface that things won't fall over and die, you could potentially assign sys.stdout to be an instance of this class. Then I think any means of writing to stdout, including print, would go via the log class.
Edit to add: Obviously, if you assign to sys.stdout you will have to do something else in the write method to echo the output to the real stdout! I think you could use sys.__stdout__ for that.
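A minimal sketch of that idea, assuming the Log class above but echoing to sys.__stdout__ so the redirection doesn't recurse:
import sys

class Log(object):
    def __init__(self, filename, mode="w", buffering=1):
        self.handle = open(filename, mode, buffering)

    def write(self, thing):
        self.handle.write(thing)
        sys.__stdout__.write(thing)  # the original stdout, not the redirected one

    def flush(self):
        self.handle.flush()
        sys.__stdout__.flush()

sys.stdout = Log("logfile.txt")
print("this goes to both the log file and the terminal")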
Oh, you. I had a decent answer all prettied-up before I saw the last line of your example: execv(). Well, poop. The original idea was replacing each child process' stdout with an instance of this blog post's tee class, and split the stream into the original stdout, and the log file:
http://www.shallowsky.com/blog/programming/python-tee.html
But, since you're using execv(), the child process' tee instance would just get clobbered, so that won't work.
Unfortunately for you, there is no "out of the box" solution to your problem that I can find. The closest thing would be to spawn the actual tee program in a subprocess; if you wanted to be more cross-platform, you could fork a simple Python substitute.
First thing to know when coding a tee substitute: tee really is a simple program. In all the true C implementations I've seen, it's not much more complicated than this:
while((character = read()) != EOF) {
/* Write to all of the output streams in here, then write to stdout. */
}
Unfortunately, you can't just join two streams together. That would be really useful (so that the input of one stream would automatically be forwarded out of another), but we've no such luxury without coding it ourselves. So, Eli and I are going to have very similar answers. The difference is that, in my answer, the Python 'tee' is going to run in a separate process, via a pipe; that way, the parent thread is still useful!
(Remember: copy the blog post's tee class, too.)
import os

# Open it for writing in binary mode.
logFile = open("bar", "wb")

# Verbose names, but I wanted to get the point across.
# These are file descriptors, i.e. integers.
parentSideOfPipe, childSideOfPipe = os.pipe()

# 'Tee' subprocess.
pid = os.fork()
if pid == 0:
    os.close(childSideOfPipe)  # we only read in this process
    while True:
        char = os.read(parentSideOfPipe, 1)
        if not char:  # EOF: every write end has been closed
            break
        logFile.write(char)
        os.write(1, char)
    os._exit(0)

# Actual command
pid = os.fork()
if pid == 0:
    os.dup2(childSideOfPipe, 1)
    os.execv(cmd, [cmd])  # execv needs the argument list too; cmd as in the question
I'm sorry if that's not what you wanted, but it's the best solution I can find.
Good luck with the rest of your project!
The first obvious answer is to fork an actual tee process, but that is probably not ideal.
The tee code (from coreutils) merely reads each line and writes it to each file in turn (effectively buffering).
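A rough sketch of that loop in Python (the log.txt name is a placeholder):
import sys

def tee(infile, *files):
    # Read each line and write it to every output in turn.
    for line in infile:
        for f in files:
            f.write(line)
            f.flush()

with open("log.txt", "w") as log:
    tee(sys.stdin, log, sys.stdout)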
I have an app that reads in stuff from stdin and returns, after a newline, results to stdout.
A simple (stupid) example:
$ app
Expand[(x+1)^2]<CR>
x^2 + 2*x + 1
100 - 4<CR>
96
Opening and closing the app requires a lot of initialization and clean-up (it's an interface to a Computer Algebra System), so I want to keep this to a minimum.
I want to open a pipe in Python to this process, write strings to its stdin and read out the results from stdout. Popen.communicate() doesn't work for this, as it closes the file handle, which would require reopening the pipe.
I've tried something along the lines of this related question: Communicate multiple times with a process without breaking the pipe?, but I'm not sure how to wait for the output. It is also difficult to know a priori how long the app will take to process the input at hand, so I don't want to make any assumptions. I guess most of my confusion comes from this question: Non-blocking read on a subprocess.PIPE in python, where it is stated that mixing high- and low-level functions is not a good idea.
EDIT:
Sorry that I didn't give any code before, got interrupted. This is what I've tried so far and it seems to work, I'm just worried that something goes wrong unnoticed:
from subprocess import Popen, PIPE

pipe = Popen(["MathPipe"], stdin=PIPE, stdout=PIPE)
expressions = ["Expand[(x+1)^2]", "Integrate[Sin[x], {x,0,2*Pi}]"]  # ...
for expr in expressions:
    pipe.stdin.write(expr)
    while True:
        line = pipe.stdout.readline()
        if line != '':
            print(line)
        # output of MathPipe is always terminated by ';'
        if ";" in line:
            break
Potential problems with this?
Using subprocess, you can't do this reliably. You might want to look at using the pexpect library. That won't work on Windows; if you're on Windows, try winpexpect.
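A rough sketch of the pexpect approach, reusing the MathPipe command and the ';' terminator from your example:
import pexpect

child = pexpect.spawn("MathPipe")
for expr in ["Expand[(x+1)^2]", "Integrate[Sin[x], {x,0,2*Pi}]"]:
    child.sendline(expr)
    child.expect(";")  # block until the terminator shows up
    print(child.before.decode())  # everything the app printed before the ';'
child.close()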
Also, if you're trying to do mathematical stuff in Python, check out SAGE. They do a lot of work on interfacing with other open-source maths software, so there's a chance they've already done what you're trying to.
Perhaps you could pass stdin=subprocess.PIPE as an argument to subprocess.Popen. This will make the process' stdin available as a general file-like object:
import sys, subprocess

# text=True so we can write str rather than bytes (Python 3.7+)
proc = subprocess.Popen(["mathematica <args>"], stdin=subprocess.PIPE,
                        stdout=sys.stdout, shell=True, text=True)
proc.stdin.write("Expand[ (x-1)^2 ]\n")  # Write whatever to the process
proc.stdin.flush()   # Ensure nothing is left in the buffer
proc.terminate()     # Kill the process
This directs the subprocess' output directly to your python process' stdout. If you need to read the output and do some editing first, that is possible as well. Check out http://docs.python.org/library/subprocess.html#popen-objects.
I have some data that I would like to gzip, uuencode and then print to standard out. What I basically have is:
import subprocess

compressor = subprocess.Popen("gzip", stdin=subprocess.PIPE, stdout=subprocess.PIPE)
encoder = subprocess.Popen(["uuencode", "dummy"], stdin=compressor.stdout)
The way I feed data to the compressor is through compressor.stdin.write(stuff).
What I really need to do is to send an EOF to the compressor, and I have no idea how to do it.
At some point, I tried compressor.stdin.close() but that doesn't work -- it works well when the compressor writes to a file directly, but in the case above, the process doesn't terminate and stalls on compressor.wait().
Suggestions? In this case, gzip is an example and I really need to do something with piping the output of one process to another.
Note: The data I need to compress won't fit in memory, so communicate isn't really a good option here. Also, if I just run
compressor.communicate("Testing")
after the 2 lines above, it still hangs with the error
File "/usr/lib/python2.4/subprocess.py", line 1041, in communicate
rlist, wlist, xlist = select.select(read_set, write_set, [])
I suspect the issue is with the order in which you open the pipes. uuencode is funny in that it will whine when you launch it if there's no incoming pipe set up in just the right way (try launching the darn thing on its own in a Popen call to see the explosion, with just PIPE as the stdin and stdout).
Try this:
from subprocess import Popen, PIPE

encoder = Popen(["uuencode", "dummy"], stdin=PIPE, stdout=PIPE)
compressor = Popen("gzip", stdin=PIPE, stdout=encoder.stdin)
compressor.communicate(b"UUencode me please")
encoded_text = encoder.communicate()[0]
print(encoded_text.decode("ascii"))
begin 644 dummy
F'XL(`%]^L$D``PL-3<U+SD])5<A-52C(24TL3#4`;2O+"!(`````
`
end
You are right, btw... there is no way to send a generic EOF down a pipe. After all, each program really defines its own EOF. The way to do it is to close the pipe, as you were trying to do.
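Since your real data won't fit in memory, here is a rough sketch of the streaming version: write in chunks, then close the pipe instead of calling communicate() on the compressor:
from subprocess import Popen, PIPE

encoder = Popen(["uuencode", "dummy"], stdin=PIPE, stdout=PIPE)
compressor = Popen("gzip", stdin=PIPE, stdout=encoder.stdin)

for chunk in [b"stream", b"of", b"chunks"]:  # substitute your data source
    compressor.stdin.write(chunk)
compressor.stdin.close()  # this close is the "EOF" gzip is waiting for
compressor.wait()
encoder.stdin.close()     # drop our copy of the write end so uuencode sees EOF too
print(encoder.stdout.read().decode("ascii"))
encoder.wait()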
EDIT: I should be clearer about uuencode. As a shell program, its default behaviour is to expect console input. If you run it without a "live" incoming pipe, it will block waiting for console input. By opening the encoder second, before you had sent material down the compressor pipe, the encoder was blocking, waiting for you to start typing. Jerub was right in that there was something blocking.
This is not the sort of thing you should be doing directly in Python; there are eccentricities in how these things work that make it a much better idea to do this with a shell. If you can just use subprocess.Popen("foo | bar", shell=True), then all the better.
What might be happening is that gzip has not been able to output all of its input yet, and the process will not exit until its stdout writes have finished.
You can look at what system call a process is blocking on if you use strace. Use ps auxwf to discover which process is the gzip process, then use strace -p $pidnum to see what system calls it is performing. Note that stdin is FD 0 and stdout is FD 1; you will probably see it reading from or writing to those file descriptors.
If you just want to compress and don't need the file wrappers, consider using the zlib module:
import zlib

compressed = zlib.compress(b"text")    # compress() wants bytes
original = zlib.decompress(compressed)
Any reason why the shell=True and Unix pipes suggestion won't work?
from subprocess import Popen, PIPE

pipes = Popen("gzip | uuencode dummy", stdin=PIPE, stdout=PIPE, shell=True)
for i in range(1, 100):
    pipes.stdin.write(b"some data")
pipes.stdin.close()
print(pipes.stdout.read().decode("ascii"))
Seems to work.