Communicating with a running process - python

We have:
A Python based server (A)
A running command-line application (on the same Linux machine) which is able to read stdin, computes something and provides the output into stdout (B)
What is the best (most elegant) way to send input from (A) to the stdin of (B), and wait for an answer from (B), i.e. read its stdout?

If you spawn (B) using Python's subprocess module from the standard library, you can set up (B)'s stdin and stdout as byte buffers readable and writable by (A).
from subprocess import Popen, PIPE

b = Popen(["b.exe"], stdin=PIPE, stdout=PIPE)
b.stdin.write(b"OHAI\n")  # the pipes are binary by default, so write bytes
b.stdin.flush()           # make sure the data actually reaches (B)
print(b.stdout.readline())
For your given example, it's easiest to use communicate, as that takes care to avoid deadlocks for you:
b = Popen(["b.exe"], stdin=PIPE, stdout=PIPE)
b_out = b.communicate(b"OHAI\n")[0]
print(b_out)
http://docs.python.org/release/3.1.3/library/subprocess.html
http://docs.python.org/release/3.1.3/library/subprocess.html#subprocess.Popen.communicate
If there's a lot of two-way communication, you should take care to avoid deadlocks caused by full pipe buffers. If your communication pattern runs into this type of problem, you should consider using socket communication instead.
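If (B) stays alive and answers each request with one line, a simple request/response loop also works, as long as you flush after each write. A minimal sketch, assuming such a line-based protocol (the ./b path and the example inputs are placeholders):
from subprocess import Popen, PIPE

# Minimal sketch, assuming B answers every input line with exactly one
# output line (hypothetical protocol and executable path).
b = Popen(["./b"], stdin=PIPE, stdout=PIPE, universal_newlines=True, bufsize=1)
for request in ["OHAI", "KTHXBYE"]:
    b.stdin.write(request + "\n")
    b.stdin.flush()                      # make sure B sees the line now
    print(b.stdout.readline().rstrip())  # one answer per request
b.stdin.close()
b.wait()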

As @Deestan pointed out, the subprocess module is an elegant and proven option. We use subprocess a lot when we have to run commands from Python.
Our use mostly involves running a command, often an in-house tool, and capturing its output. Our wrapper for running such commands looks like this:
import subprocess

def _run_command(_args, input=(), withShell=False):
    """
    Pass args as a list, like ['echo', 'hello'].
    Waits for completion and returns the
    tuple (returncode, stdout, stderr).
    """
    p = subprocess.Popen(_args, shell=withShell,
                         stdin=subprocess.PIPE,
                         stdout=subprocess.PIPE,
                         stderr=subprocess.PIPE)
    for chunk in input:
        p.stdin.write(chunk)
    stdout, stderr = p.communicate()
    return p.returncode, stdout, stderr
_, op, er = _run_command(['cat'], ["this", "is", "for", "testing"])
value = "".join(op)
print value
_, op, er = _run_command(['ls', '/tmp'])
value = "".join(op)
print value
If your input to B is minimal, then subprocess is a good fit.

Related

python subprocess.Popen - write to stderr

I have a C program (I'm not the author) that reads from stderr. I call it using subprocess.Popen as below. Is there any way to write to the stderr of the subprocess?
proc = subprocess.Popen(['./std.bin'],stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
Yes, maybe, but you should be aware that writing to the standard output or standard error of a subprocess is unusual. The vast majority of processes only write to these streams, and almost none actually try to read from them (because in almost all cases there's nothing to read).
What you could try is to open a socket and supply that as the stderr argument.
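A sketch of that idea on POSIX (not tested against your binary): create a socketpair, hand the child's end to Popen as its stderr, and write into the parent's end.
import socket
import subprocess

# Sketch only: the child's fd 2 becomes one end of a socketpair and we
# write into the other end (POSIX only).
parent_sock, child_sock = socket.socketpair()
proc = subprocess.Popen(['./std.bin'], stderr=child_sock.fileno())
child_sock.close()               # the child keeps its own copy open
parent_sock.sendall(b'hello\n')  # arrives on the child's stderr fd
proc.wait()
parent_sock.close()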
What you most probably want to do is the opposite: read from the stderr of the subprocess (the subprocess writes, you read). That can be done by setting it to subprocess.PIPE and then accessing the stderr attribute of the subprocess:
proc = subprocess.Popen(['./std.bin'], stderr=subprocess.PIPE)
for l in proc.stderr:
    print(l)
Note that you can specify more than one of stdin, stdout and stderr as subprocess.PIPE. This does not mean that they will be connected to the same pipe (subprocess.PIPE is not an actual file, just a placeholder indicating that a pipe should be created). If you do this, however, you should take care to avoid deadlocks, for example by using the communicate method (you can inspect the source of the subprocess module to see what communicate does if you want to do it yourself; a rough sketch follows below).
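A rough do-it-yourself sketch of what communicate does: drain stdout and stderr on background threads so neither pipe can fill up while you feed stdin (the input bytes and the ./std.bin path are just placeholders).
import threading
from subprocess import Popen, PIPE

# Sketch: reader threads drain both output pipes to avoid deadlocks,
# roughly what communicate() does internally.
proc = Popen(['./std.bin'], stdin=PIPE, stdout=PIPE, stderr=PIPE)

captured = {}
def drain(name, pipe):
    captured[name] = pipe.read()  # blocks until the child closes its end
    pipe.close()

threads = [threading.Thread(target=drain, args=(name, pipe))
           for name, pipe in (('stdout', proc.stdout), ('stderr', proc.stderr))]
for t in threads:
    t.start()
proc.stdin.write(b'abc')
proc.stdin.close()
for t in threads:
    t.join()
proc.wait()
print(captured['stdout'], captured['stderr'])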
If the child process reads from stderr (note: normally stderr is opened for output):
#!/usr/bin/env python
"""Read from *stderr*, write to *stdout* reversed bytes."""
import os
os.write(1, os.read(2, 512)[::-1])
then you could provide a pseudo-tty (so that all streams point to the same place), to work with the child as if it were a normal subprocess:
#!/usr/bin/env python
import sys
import pexpect # $ pip install pexpect
child = pexpect.spawnu(sys.executable, ['child.py'])
child.sendline('abc') # write to the child
child.expect(pexpect.EOF)
print(repr(child.before))
child.close()
Output
u'abc\r\n\r\ncba'
You could also use subprocess + pty.openpty() instead of pexpect.
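A sketch of that alternative, assuming the same child.py as above (all three streams point at the slave end of the pseudo-tty):
#!/usr/bin/env python
import os
import pty
import sys
from subprocess import Popen

# Sketch: the child's stdin, stdout and stderr all point at the slave end
# of a pseudo-terminal; we talk to it through the master end.
master, slave = pty.openpty()
p = Popen([sys.executable, 'child.py'],
          stdin=slave, stdout=slave, stderr=slave, close_fds=True)
os.close(slave)
os.write(master, b'abc\n')   # write to the child (it reads its fd 2)
print(os.read(master, 512))  # echoed input, then (possibly later) the reply
p.wait()
os.close(master)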
Or you could write code specific to the weird stderr behavior:
#!/usr/bin/env python
import os
import sys
from subprocess import Popen, PIPE
r, w = os.pipe()
p = Popen([sys.executable, 'child.py'], stderr=r, stdout=PIPE,
          universal_newlines=True)
os.close(r)
os.write(w, b'abc') # write to subprocess' stderr
os.close(w)
print(repr(p.communicate()[0]))
Output
'cba'
for line in proc.stderr:
    sys.stdout.write(line)
This writes the stderr of the subprocess to your own stdout. Hope it answers your question.

Proper way to close all files after subprocess Popen and communicate

We are having some problems with the dreaded "too many open files" on our Ubuntu Linux machine running a Python Twisted application. In many places in our program, we are using subprocess Popen, something like this:
process = Popen('ifconfig ' + iface, shell=True, stdin=PIPE, stdout=PIPE, stderr=STDOUT, close_fds=True)
output = process.stdout.read()
while in other places we use subprocess communicate:
process = subprocess.Popen(['/usr/bin/env', 'python', self._get_script_path(script_name)],
                           stdin=subprocess.PIPE,
                           stdout=subprocess.PIPE,
                           close_fds=True)
out, err = process.communicate(data)
What exactly do I need to do in both cases in order to close any open file descriptors? Python documentation is not clear on this. From what I gather (which could be wrong) both communicate() and wait() will indeed clean up any open fds on their own. But what about Popen? Do I need to close stdin, stdout, and stderr explicitly after calling Popen if I don't call communicate or wait?
According to the source of the subprocess module (link), if you call communicate you should not need to close the stdout and stderr pipes.
Otherwise I would try:
process.stdout.close()
process.stderr.close()
after you are done using the process object.
For instance, when you call .read() directly:
output = process.stdout.read()
process.stdout.close()
Look in the above module source for how communicate() is defined and you'll see that it closes each pipe after it reads from it, so that is what you should also do.
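On Python 3.2+ you can also let Popen handle this by using it as a context manager; the with block closes the pipes and waits for the process on exit. A minimal sketch, using the ifconfig example from the question (the interface name is a placeholder):
from subprocess import Popen, PIPE, STDOUT

iface = 'eth0'  # placeholder interface name
# The with block closes stdin/stdout/stderr and waits for the process,
# so no file descriptors are leaked (Python 3.2+).
with Popen(['ifconfig', iface], stdout=PIPE, stderr=STDOUT) as process:
    output = process.stdout.read()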
If you're using Twisted, don't use subprocess. If you were using spawnProcess instead, you wouldn't need to deal with annoying resource-management problems like this.
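A minimal sketch of the spawnProcess route (the ifconfig path and interface are placeholders); Twisted delivers the output to a ProcessProtocol instead of leaving you a pipe to close:
from twisted.internet import protocol, reactor

class CollectOutput(protocol.ProcessProtocol):
    """Collect the child's stdout and stop the reactor when it exits."""
    def __init__(self):
        self.chunks = []
    def outReceived(self, data):
        self.chunks.append(data)
    def processEnded(self, reason):
        print(b''.join(self.chunks).decode())
        reactor.stop()

# argv[0] is passed explicitly; the executable path and interface are placeholders.
reactor.spawnProcess(CollectOutput(), '/sbin/ifconfig', ['ifconfig', 'eth0'])
reactor.run()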

How do subprocess.Popen pipes work in Python?

I am fully aware of the existence of related questions, but all of them are very fuzzy and fail to clearly explain what is going on every step of the way. The examples provided are often not tested and provide no information on how to adapt them to different scenarios. Here are the questions as well as Python's documentation:
How do I use subprocess.Popen to connect multiple processes by pipes?
link several Popen commands with pipes
https://docs.python.org/2/library/subprocess.html#popen-objects
This is essentially what I am trying to achieve, but in Python:
curl http://asdf.com/89asdf.gif | convert -resize 80x80 - - | icat -k -
Here is what I have after hours of Frankensteining together bits and parts of the aforementioned answers:
import requests
import subprocess
from subprocess import Popen, PIPE, STDOUT
url = 'http://asdf.com/89asdf.gif'
img = requests.get(url)
p1 = subprocess.Popen(['convert', '-resize', '80x80', '-', '-'], stdin=subprocess.PIPE, stdout=subprocess.PIPE)
p2 = subprocess.Popen(['icat', '-k', '-'], stdin=p1.stdout, stdout=subprocess.PIPE)
p1.communicate(img.content)
print p2.stdout
Here is my code again, this time improved based on your answers:
import requests
from subprocess import Popen, PIPE, STDOUT
url = 'http://asdf.com/89asdf.gif'
img = requests.get(url)
p1 = Popen(['convert', '-resize', '80x80', '-', '-'], stdin=PIPE, stdout=PIPE)
p2 = Popen(['icat', '-k', '-'], stdin=p1.stdout, stdout=PIPE)
p1.stdin.write(img.content)
p1.stdin.close()
p1.stdout.close()
output = p2.communicate()[0]
print output
Note: icat outputs images in 256-color capable terminals. I already managed to print its output successfully in another question. A typical image looks like this in a terminal once processed by icat:
Please correct me where I am wrong, but this is my current understanding:
p1.communicate(img.content): This sets p1's STDIN.
p1's STDIN is subprocess.PIPE, which is what p1.communicate(img.content) provides.
p2's STDIN is p1's STDOUT, which is the image resized by convert.
I then print the p2's STDOUT, which should be the image with ASCII colors provided by icat.
Questions:
I have seen both stdin=subprocess.PIPE and stdin=PIPE. What is the difference?
Why is the first STDIN provided after the other subprocesses are defined?
Could someone explain what is actually going on in the process and what is wrong with my code? This whole piping business must be simpler than it seems, and I'd love to understand how it works.
Q1. Whether you write subprocess.PIPE or just PIPE, you are referencing the same symbol: PIPE from the subprocess module. The following are identical:
# Version 1
import subprocess
proc = subprocess.Popen(['ls'], stdout=subprocess.PIPE)
output = proc.communicate()[0]
# Version 2
from subprocess import PIPE, Popen
proc = Popen(['ls'], stdout=PIPE)
output = proc.communicate()[0]
Q2. What communicate() is actually doing is sending input into p1's STDIN stream. Although the subprocesses are indeed alive and running when the Popen constructors are called, in your particular case the convert utility seems like it won't do anything until it actually receives content via STDIN. If it were a less interactive command (like ls for example) then it wouldn't wait until communicate() to do anything.
UPDATED CONTENT
In your case, instead of using communicate to send input into p1, try instead the following:
p1.stdin.write(img.content)
p1.stdin.close()
p1.stdout.close() # Protect against the busted pipe condition when p2 finishes before p1
output = p2.communicate()[0]
print output
See if you have better luck with that.
Q1. There is no difference; they are both subprocess.PIPE.
Q2. You don't have to do it that way; you could just as easily do
p1.stdin.write(img.content)
(in fact, that's exactly what that part of communicate does).
communicate is used because it blocks until the process finishes (in this case, presumably so its stdout can be made available to p2.stdin), whereas write simply writes to the pipe and continues to the next line of Python in the file.

Printing to stdout in subprocess

I have a script which runs a subprocess as follows:
child_process = subprocess.Popen(["python", testset['dir'] + testname, \
output_spec_file, plugin_directory],\
stderr=subprocess.PIPE, stdout=subprocess.PIPE)
In that process, I am trying to insert print statements but they are not appearing on stdout. I tried using sys.stdout.write() in that subprocess and then sys.stdout.read() right after child_process, but it is not capturing the output.
I am new to Python and I haven't gotten to that level of complexity in Python. I am actually working low level in C and there are some Python test scripts and I'm not sure how to print out from the subprocess.
Any suggestions?
sys.stdout.read (and write) are for standard input/output of the current process (not the subprocess). If you want to write to stdin of the child process, you need to use:
child_process.stdin.write("this goes to child") #Popen(..., stdin=subprocess.PIPE)
and similarly for reading from the child's stdout stream:
child_process = subprocess.Popen( ... , stdout=subprocess.PIPE)
data = child_process.stdout.read()  # this is the data that comes back
Of course, it is generally more idiomatic to use:
stdoutdata, stderrdata = child_process.communicate(stdindata)
(taking care to pass subprocess.PIPE to the Popen constructor where appropriate) provided that your input data can be passed all at once.
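Putting that together for your snippet, a sketch (assuming the variables from your Popen call are in scope):
import subprocess

# Sketch: capture everything the child script prints, all at once.
child_process = subprocess.Popen(
    ["python", testset['dir'] + testname, output_spec_file, plugin_directory],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdoutdata, stderrdata = child_process.communicate()
print(stdoutdata)  # this now contains the child's print output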

Getting shell output with Python?

I have a shell script that gets whois info for domains, and outputs taken or available to the shell depending on the domain.
I'd like to execute the script, and be able to read this value inside my Python script.
I've been playing around with subprocess.call but can't figure out how to get the output.
e.g.,
subprocess.call('myscript www.google.com', shell=True)
will output taken to the shell.
subprocess.call() does not give you the output, only the return code. For the output, you should use subprocess.check_output() instead. These are friendly wrappers around Popen, which you could also use directly.
For more details, see: http://docs.python.org/library/subprocess.html
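For example (a sketch, assuming myscript is on your PATH and prints taken or available):
import subprocess

# check_output returns the command's stdout as bytes and raises
# CalledProcessError if the command exits with a non-zero status.
output = subprocess.check_output(['myscript', 'www.google.com'])
print(output.strip())  # e.g. b'taken' or b'available'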
Manually using stdin and stdout with Popen was such a common pattern that it has been abstracted into a very useful method in the subprocess module: communicate
Example:
p = subprocess.Popen(['myscript', 'www.google.com'], stdin=subprocess.PIPE, stdout=subprocess.PIPE)
(stdoutdata, stderrdata) = p.communicate(input="myinputstring")
# all done!
import subprocess as sp
p = sp.Popen(["/usr/bin/svn", "update"], stdin=sp.PIPE, stdout=sp.PIPE, close_fds=True)
(stdout, stdin) = (p.stdout, p.stdin)
data = stdout.readline()
while data:
    # Do stuff with data, linewise.
    data = stdout.readline()
stdout.close()
stdin.close()
This is the idiom I use; obviously, in this case I was updating an svn repository.
Try subprocess.check_output.
