I have a snippet like this:
import sys

my_string = "foo bar"

def print_string(fd=sys.stdout):
    print(my_string, file=fd)
How do I pipe the output of print_string to a pager, say, less?
I'm aware of using subprocess.Popen with stdin=PIPE and then calling proc.communicate(), but that only lets me write my_string directly; I can't redirect from an existing descriptor.
It's a bit silly, but I tried the following; I'm not surprised that it doesn't work:
proc = subprocess.Popen("less -".split(), stdin=sys.stdout)
print_string()
proc.wait()
Git commands effectively do the same thing: they pipe their output through a pager. I was trying to achieve a similar effect.
Less needs to read from the "real" stdin to get key presses; otherwise it can't react to user input. Instead, you can create a temporary file and let less read that:
import subprocess
import tempfile
with tempfile.NamedTemporaryFile("w") as f:
    f.write("hello world!")
    f.flush()  # flush, or otherwise the content might not
               # be written yet when less tries to read
    p = subprocess.Popen(["/usr/bin/less", f.name])
    p.wait()
This might have security consequences; it's best to read the documentation on tempfile before using it for anything security-sensitive.
I'm also not sure whether this is how git does it, or if there is a better way, but it worked in my short tests.
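To tie this back to the question's print_string, here is a minimal sketch (same tempfile approach, Unix assumed) that reuses the function's fd parameter instead of writing the string directly:

import subprocess
import sys
import tempfile

my_string = "foo bar"

def print_string(fd=sys.stdout):
    print(my_string, file=fd)

with tempfile.NamedTemporaryFile("w") as f:
    print_string(fd=f)  # redirect through the existing fd parameter
    f.flush()           # make sure the content is written before less reads it
    subprocess.Popen(["/usr/bin/less", f.name]).wait()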
As an example, I am trying to imitate the behaviour of the following set of commands in bash:
mkfifo named_pipe
/challenge/embryoio_level103 < named_pipe &
cat > named_pipe
In Python I have tried the following commands:
import os
import subprocess as sp
os.mkfifo("named_pipe", 0777)  # equivalent to mkfifo in bash
fw = open("named_pipe", 'w')
# at this point the system hangs...
My idea was to use subprocess.Popen and redirect stdout to fw,
then open named_pipe for reading and give it as input to cat (still using Popen).
I know it is a simple (and rather stupid) example, but I can't manage to make it work.
How would you implement such a simple scenario?
Hello fellow pwn college user! I just solved this level :)
open(path, mode) is what blocks execution here. There are many similar Stack Overflow Q&As, but I'll reiterate: a pipe will not pass data until both ends are opened, so open() on one end blocks until the other end is opened too, which is why the process hangs (only one end was opened).
If you want to open without blocking, you can do so on certain operating systems (Unix works; Windows doesn't, as far as I'm aware) using os.open with the os.O_NONBLOCK flag. I don't know all the consequences, but be cautious with nonblocking opens: you may try reading prematurely and there will be nothing to read (possibly leading to an error, etc.).
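A minimal sketch of such a nonblocking open, assuming the FIFO named_pipe from the question already exists:

import os

# O_NONBLOCK makes open() return immediately instead of waiting for a writer
fd = os.open("named_pipe", os.O_RDONLY | os.O_NONBLOCK)
try:
    data = os.read(fd, 1024)  # may return b'' if no writer has sent anything yet
finally:
    os.close(fd)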
Also, note that the integer literal 0777 is a syntax error in Python 3, so I assume you mean 0o777 (max permissions), where the 0o prefix indicates octal. The default for os.mkfifo is 0o666, which is identical to 0o777 except for the execute bits, which are useless anyway because pipes cannot be executed. Also be aware that these permissions might not all be granted: when trying to set 0o666, the actual permissions may end up as 0o644 (as in my case). I believe this is due to the umask, which can be changed and exists for security purposes, but more info can be found elsewhere.
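As a quick illustration of the umask effect (just a sketch; clearing the umask globally may not be what you want in real code):

import os

old = os.umask(0)               # clear the umask so the requested bits stick
os.mkfifo("named_pipe", 0o666)  # now actually 0o666, not 0o644
os.umask(old)                   # restore the previous umask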
For the blocking case, you can use the multiprocessing package like so:
import os
import subprocess as sp
from multiprocessing import Process

path = 'named_pipe'
os.mkfifo(path)

def read():
    sp.run("cat", stdin=open(path, "r"))  # blocks until the writer end opens

def write():
    sp.run(["echo", "hello world"], stdout=open(path, "w"))

if __name__ == "__main__":
    p_read = Process(target=read)
    p_write = Process(target=write)
    p_read.start()
    p_write.start()
    p_read.join()
    p_write.join()
    os.remove(path)
output:
hello world
Quick and simple: I'm calling a process with subprocess.Popen which will run continuously in cmd/terminal. I want to call a function if a certain string of text shows up in the cmd/terminal output. How would I do this? I'm rather new to Python, so the simplest solution will probably be the best one.
Thank you very much!
The output you are looking for is either coming from stdout or stderr. Let's assume it's coming from stdout. (If it is coming from stderr the solution is analogous; just change stdout to stderr, below.)
import subprocess

PIPE = subprocess.PIPE
proc = subprocess.Popen(['ls'], stdout=PIPE, text=True)  # text=True so lines are str, not bytes
for line in iter(proc.stdout.readline, ''):
    if line.startswith('a'):
        print(line, end='')
Replace line.startswith('a') with whatever condition is appropriate, and (of course) replace ['ls'] with whatever command you desire.
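Since the question asks about calling a function on a match, here is a small sketch of that pattern (the ping command and the trigger string are just placeholders):

import subprocess

def on_match(line):
    print("matched:", line, end="")

proc = subprocess.Popen(["ping", "-c", "3", "localhost"],
                        stdout=subprocess.PIPE, text=True)
for line in proc.stdout:
    if "bytes from" in line:  # your trigger string here
        on_match(line)
proc.wait()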
The only problem with @unutbu's code (though that doesn't apply to your case; it's a more general point) is that it blocks your application on every readline call, so you cannot do anything else in the meantime.
If you want a nonblocking solution, you should create a file, open it in write mode and in read mode, pass the writer to Popen, and read from the reader. Reading from the reader will always be non-blocking.
import io
import subprocess

command = ['your_long_running_command']  # placeholder: whatever you are running

filename = 'temp.file'
with io.open(filename, 'wb') as writer, io.open(filename, 'rb', 1) as reader:
    process = subprocess.Popen(command, stdout=writer)
    # Do whatever you want in your code
    data = reader.read()  # non-blocking read
    # Do more stuff
    data = reader.read()  # another non-blocking read...
    # ...
And so on...
This doesn't apply to your particular case, in which @unutbu's solution works perfectly well; I added it just for completeness' sake...
I would like to process a file line by line. However, I need to sort it first, which I normally do by piping:
sort --key=1,2 data | ./script.py
What's the best way to call sort from within Python? Searching online, I see that subprocess or the sh module might be possibilities. I don't want to read the file into memory and sort it in Python, as the data is very big.
It's easy: use subprocess.Popen to run sort and read its stdout to get your data.
import subprocess

myfile = 'data'
sort = subprocess.Popen(['sort', '--key=1,2', myfile],
                        stdout=subprocess.PIPE, text=True)
for line in sort.stdout:
    your_code_here(line)  # process each sorted line
sort.wait()
assert sort.returncode == 0, 'sort failed'
I think this page will answer your question.
The answer I prefer, from @Eli Courtwright, is (all quoted verbatim):
Here's a summary of the ways to call external programs and the advantages and disadvantages of each:
os.system("some_command with args") passes the command and arguments to your system's shell. This is nice because you can actually run multiple commands at once in this manner and set up pipes and input/output redirection. For example,
os.system("some_command < input_file | another_command > output_file")
However, while this is convenient, you have to manually handle the escaping of shell characters such as spaces, etc. On the other hand, this also lets you run commands which are simply shell commands and not actually external programs.
http://docs.python.org/lib/os-process.html
stream = os.popen("some_command with args") will do the same thing as os.system except that it gives you a file-like object that you can use to access standard input/output for that process. There are 3 other variants of popen that all handle the i/o slightly differently. If you pass everything as a string, then your command is passed to the shell; if you pass them as a list then you don't need to worry about escaping anything.
http://docs.python.org/lib/os-newstreams.html
The Popen class of the subprocess module. This is intended as a replacement for os.popen but has the downside of being slightly more complicated by virtue of being so comprehensive. For example, you'd say
print Popen("echo Hello World", stdout=PIPE, shell=True).stdout.read()
instead of
print os.popen("echo Hello World").read()
but it is nice to have all of the options there in one unified class instead of 4 different popen functions.
http://docs.python.org/lib/node528.html
The call function from the subprocess module. This is basically just like the Popen class and takes all of the same arguments, but it simply waits until the command completes and gives you the return code. For example:
return_code = call("echo Hello World", shell=True)
http://docs.python.org/lib/node529.html
The os module also has all of the fork/exec/spawn functions that you'd have in a C program, but I don't recommend using them directly.
The subprocess module should probably be what you use.
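For what it's worth, on modern Python (3.7+) most of the cases above are usually written with subprocess.run; a quick sketch:

import subprocess

result = subprocess.run(["echo", "Hello World"], capture_output=True, text=True)
print(result.stdout)      # 'Hello World\n'
print(result.returncode)  # 0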
I believe sort will read all the data into memory, so I'm not sure you will gain anything, but you can use shell=True in subprocess and a pipeline:
>>> subprocess.check_output("ls", shell=True)
'1\na\na.cpp\nA.java\na.php\nerase_no_module.cpp\nerase_no_module.cpp~\nWeatherSTADFork.cpp\n'
>>> subprocess.check_output("ls | grep j", shell=True)
'A.java\n'
Warning
Invoking the system shell with shell=True can be a security hazard if combined with untrusted input. See the warning under Frequently Used Arguments for details.
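If you want the same pipeline without shell=True (and thus without that warning), you can wire the pipe up yourself; a sketch:

import subprocess

ls = subprocess.Popen(["ls"], stdout=subprocess.PIPE)
out = subprocess.check_output(["grep", "j"], stdin=ls.stdout)
ls.stdout.close()  # let ls receive SIGPIPE if grep exits early
ls.wait()
print(out)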
I have an app that reads stuff from stdin and, after a newline, returns results to stdout.
A simple (stupid) example:
$ app
Expand[(x+1)^2]<CR>
x^2 + 2*x + 1
100 - 4<CR>
96
Opening and closing the app requires a lot of initialization and clean-up (it's an interface to a Computer Algebra System), so I want to keep this to a minimum.
I want to open a pipe in Python to this process, write strings to its stdin, and read the results from its stdout. Popen.communicate() doesn't work for this, as it closes the file handle, requiring the pipe to be reopened.
I've tried something along the lines of this related question:
Communicate multiple times with a process without breaking the pipe?, but I'm not sure how to wait for the output. It is also difficult to know a priori how long the app will take to process the input at hand, so I don't want to make any assumptions. I guess most of my confusion comes from this question: Non-blocking read on a subprocess.PIPE in python, where it is stated that mixing high- and low-level functions is not a good idea.
EDIT:
Sorry that I didn't give any code before; I got interrupted. This is what I've tried so far, and it seems to work; I'm just worried that something could go wrong unnoticed:
from subprocess import Popen, PIPE

pipe = Popen(["MathPipe"], stdin=PIPE, stdout=PIPE, text=True)
expressions = ["Expand[(x+1)^2]", "Integrate[Sin[x], {x,0,2*Pi}]"]  # ...

for expr in expressions:
    pipe.stdin.write(expr + "\n")  # the app expects a newline after each input
    pipe.stdin.flush()             # push it past the pipe buffer
    while True:
        line = pipe.stdout.readline()
        if line != '':
            print(line, end='')
        # output of MathPipe is always terminated by ';'
        if ";" in line:
            break
Potential problems with this?
Using subprocess, you can't do this reliably. You might want to look at using the pexpect library instead. That won't work on Windows; if you're on Windows, try winpexpect.
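A minimal pexpect sketch, reusing the MathPipe command and the ';' terminator from the question above (both assumed from the question, not tested against the real app):

import pexpect  # Unix only; pip install pexpect

child = pexpect.spawn("MathPipe", encoding="utf-8")
child.sendline("Expand[(x+1)^2]")
child.expect(";")    # block until the app prints its output terminator
print(child.before)  # everything the app printed before the ';'
child.close()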
Also, if you're trying to do mathematical stuff in Python, check out SAGE. They do a lot of work on interfacing with other open-source maths software, so there's a chance they've already done what you're trying to do.
Perhaps you could pass stdin=subprocess.PIPE as an argument to subprocess.Popen. This will make the process' stdin available as a general file-like object:
import subprocess
import sys

proc = subprocess.Popen("mathematica <args>", stdin=subprocess.PIPE,
                        stdout=sys.stdout, shell=True, text=True)
proc.stdin.write("Expand[ (x-1)^2 ]\n")  # write whatever to the process
proc.stdin.flush()                       # ensure nothing is left in the buffer
proc.terminate()                         # kill the process
This directs the subprocess' output directly to your python process' stdout. If you need to read the output and do some editing first, that is possible as well. Check out http://docs.python.org/library/subprocess.html#popen-objects.
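For instance, a sketch of capturing the output for editing first (mathematica here just stands in for whatever command you actually run):

import subprocess

proc = subprocess.Popen(["mathematica"], stdin=subprocess.PIPE,
                        stdout=subprocess.PIPE, text=True)
out, _ = proc.communicate("Expand[ (x-1)^2 ]\n")
print(out.strip())  # inspect or edit the captured output before printing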
I want my program to write to stdout by default, but to offer the option of writing to a file instead. Should I create my own print function that tests whether an output file was given, or is there a better way? That seems inefficient to me, since every way I can think of adds an if test to every print call. I know this probably doesn't matter in the long run, at least for this script, but I'm just trying to learn good habits.
Just write to standard out using print. If the user wants to redirect the output to a file they can do that:
python foo.py > output.txt
Write to a file object, and when the program starts either have that object point to sys.stdout or to a file specified by the user.
Mark Byers' answer is more unix-like, where most command line tools just use stdin and stdout and let the user do redirection as they see fit.
No, you don't need to create a separate print function. In Python 2.6 you have this syntax:
# suppose f is an open file
print >> f, "hello"
# now sys.stdout is a file too
print >> sys.stdout, "hello"
In Python 3.x:
print("hello", file=f)
# or
print("hello", file=sys.stdout)
So you really don't have to differentiate between files and stdout. They are the same.
A toy example, which outputs "hello" the way you want:
#!/usr/bin/env python3
import sys
def produce_output(fobj):
print("hello", file=fobj)
# this can also be
# fobj.write("hello\n")
if __name__=="__main__":
if len(sys.argv) > 2:
print("Too many arguments", file=sys.stderr)
exit(1)
f = open(argv[1], "a") if len(argv)==2 else sys.stdout
produce_output(f)
Note that the printing procedure is abstracted away from whether it is working with stdout or a file.
I recommend using the logging module and logging.handlers: streams, output files, etc.
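A minimal sketch of that idea (the logger name and the command-line convention are just assumptions for illustration):

import logging
import sys

logger = logging.getLogger("myapp")
logger.setLevel(logging.INFO)

# pick the handler once at startup: a file if one was given, else stdout
if len(sys.argv) == 2:
    handler = logging.FileHandler(sys.argv[1])
else:
    handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(logging.Formatter("%(message)s"))
logger.addHandler(handler)

logger.info("hello")  # same call either way; the handler decides where it goes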
If you are using the subprocess module, then based on an option you take from your command line, you can have its stdout point to an open file object. This way, you can redirect the output to a file from within the program:
import subprocess

with open('somefile', 'w') as f:
    proc = subprocess.Popen(['myprog'], stdout=f, stderr=subprocess.PIPE)
    out, err = proc.communicate()  # out is None here, since stdout goes to the file
print('output redirected to somefile')
My reaction would be to output to a temp file, then either dump that to stdout or move it to where they requested.
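A quick sketch of that approach (the file-handling details are assumptions for illustration, not tested against any particular program):

import os
import shutil
import sys
import tempfile

tmp = tempfile.NamedTemporaryFile("w", delete=False)
tmp.write("hello\n")
tmp.close()

if len(sys.argv) == 2:
    shutil.move(tmp.name, sys.argv[1])  # move it where the user requested
else:
    with open(tmp.name) as f:
        sys.stdout.write(f.read())      # default: dump to stdout
    os.remove(tmp.name)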