How to pass subprocess control to regular stdin after using a pipe? - python

What I'd like to do is, in Python, programmatically send a few initial commands via stdin to a process, and then hand input over to the user to let them control the program afterward. The Python program should simply wait until the subprocess exits as a result of user input. In essence, I want to do something along the lines of:
import subprocess
import sys

p = subprocess.Popen(['cat'], stdin=subprocess.PIPE)
# Send initial commands.
p.stdin.write(b"three\ninitial\ncommands\n")
p.stdin.flush()
# Give control over to the user...
# ...although stdin can't simply be reassigned
# after the fact like this, it seems.
p.stdin = sys.stdin
# Wait for the subprocess to finish.
p.wait()
How can I pass stdin back to the user (not using raw_input, since I need the user's input to take effect on every keypress, not just after pressing Enter)?

Unfortunately, there is no standard way to splice your own stdin onto some other process's stdin for the duration of that process: once you have chosen to write to the process's stdin at all, you have to read from your own stdin and forward the data yourself.
That is, you can do this:
proc = subprocess.Popen(...) # no stdin=
and the process will inherit your stdin; or you can do this:
proc = subprocess.Popen(..., stdin=subprocess.PIPE, ...)
and then you supply the stdin to that process. But once you have chosen to supply any of its stdin, you supply all of its stdin, even if that means you have to read your own stdin.
Linux offers a splice system call (see the man pages at man7.org and linux.die.net, the Wikipedia article, and the question "linux pipe data from file descriptor into a fifo"), but your best bet is probably a background thread to copy the data, as sketched below.
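Here is a minimal sketch of that background-thread approach, reusing the cat child from the question. Note that the terminal still line-buffers input unless you put it into cbreak mode (e.g. with tty.setcbreak), so true per-keypress delivery needs that extra step:

import subprocess
import sys
import threading

p = subprocess.Popen(['cat'], stdin=subprocess.PIPE)
# Send initial commands.
p.stdin.write(b"three\ninitial\ncommands\n")
p.stdin.flush()

def forward_stdin():
    # Copy our stdin to the child's stdin, in chunks as they arrive.
    while True:
        data = sys.stdin.buffer.read1(1024)
        if not data:                  # EOF on our own stdin
            p.stdin.close()
            break
        try:
            p.stdin.write(data)
            p.stdin.flush()
        except BrokenPipeError:       # the child already exited
            break

threading.Thread(target=forward_stdin, daemon=True).start()
p.wait()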

So searching for this same thing, at least in my case, the pexpect library takes care of this:
https://pexpect.readthedocs.io/en/stable/
import pexpect

p = pexpect.spawn("ssh myhost")
p.sendline("some_line")
p.interact()
As the name suggests, you can automate a lot of the interaction before handing control over to the user.
Note that in your case you may want an output filter; see:
Using expect() and interact() simultaneously in pexpect

Related

How can I handle user input for subprocesses run in parallel in Python?

I have a Python helper function to run grunt commands in parallel, using Popen to handle the subprocesses. The purpose is communication over the CLI. The problem starts when any user input is required for all those processes, e.g. a file path, a password, or a 'yes/no' decision:
Enter file path: Enter file path: Enter file path: Enter file path: Enter file path: Enter file path: Enter file path:
Everything up-to-date
Grunt task completed successfully.
The user provides input once, one process completes successfully, and all the others never finish executing.
Code:
from subprocess import check_output, Popen
from time import sleep
import tempfile

def run_grunt_parallel(grunt_commands):
    return_code = 0
    commands = []
    for command in grunt_commands:
        with tempfile.NamedTemporaryFile(delete=False) as f:
            app = get_grunt_application_name(' '.join(command))
            commands.append({'app': app, 'process': Popen(command, stdout=f)})
    while len(commands):
        sleep(5)
        next_round = []
        for command in commands:
            rc = command['process'].poll()
            if rc is None:
                next_round.append(command)
            elif rc != 0:
                return_code = rc
        commands = next_round
    return return_code
Is there a way to make sure that user can provide all necessary input for each process?
What you want is almost (if not entirely) impossible. But if you can recognize prompts in a prefix-free fashion (and, if it varies, can tell from each prompt how many lines of input it expects), you should be able to manage it.
Run each process with two-way unbuffered pipes:
Popen(command, stdin=subprocess.PIPE,
      stdout=f, stderr=subprocess.PIPE, bufsize=0)
(Well-behaved programs prompt on standard error. Yours seem to do so, since you showed the prompts despite the stdout=f; if they don’t do so reliably, you get to read that from a pipe as well, search for prompts in it, and copy it to a file yourself.)
Unix
Set all pipes non-blocking. Then use select to monitor the stderr pipes for all processes. (You might try selectors instead.) Buffer what you read separately for each process until you recognize a prompt from one. Then display the prompt (identifying the source process) and accept input from the user—if the output between prompts fits in the pipe buffers, this won’t slow the background work down. Put that user input in a buffer associated with that process, and add its stdin pipe to the select.
When a stdin pipe shows ready, write to it, and remove it from the set if you finish doing so. When a read from a pipe returns EOF, join the corresponding process (or do so in a SIGCHLD handler if you worry that a process might close its end early).
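A minimal sketch of that loop, assuming a single fixed prompt string; the prompt text and commands here are illustrative, and the stdin write is done inline rather than through the full select-on-stdin machinery described above:

import os
import select
import subprocess

PROMPT = b"Enter file path: "          # hypothetical prompt text

procs = [subprocess.Popen(cmd, stdin=subprocess.PIPE,
                          stderr=subprocess.PIPE, bufsize=0)
         for cmd in (["./job1.sh"], ["./job2.sh"])]   # hypothetical commands

for p in procs:
    os.set_blocking(p.stderr.fileno(), False)

by_fd = {p.stderr.fileno(): (p, bytearray()) for p in procs}
while by_fd:
    ready, _, _ = select.select(list(by_fd), [], [])
    for fd in ready:
        p, buf = by_fd[fd]
        chunk = os.read(fd, 4096)
        if not chunk:                  # EOF: join the finished process
            del by_fd[fd]
            p.wait()
            continue
        buf += chunk
        if buf.endswith(PROMPT):
            # Inline write: this can block if the stdin pipe buffer is
            # full, which is what the select-on-stdin refinement avoids.
            answer = input("[pid %d] %s" % (p.pid, PROMPT.decode()))
            p.stdin.write(answer.encode() + b"\n")
            buf.clear()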
Windows
Unless you have a select emulation available that supports pipes, you’ll have to use threads—one for each process, or one for each pipe if a process might produce output after writing a prompt and before reading the response. Then use a Queue to post prompts as messages to the main thread, which can then use (for example) another per-process Queue to send the user input back to the thread (or its writing buddy).
This works on any threading-supporting platform and has the potential advantage of not relying on pipe buffers to avoid stalling talkative processes.
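A sketch of the thread-per-process variant with Queues (again with an illustrative fixed prompt; all names here are hypothetical):

import queue
import subprocess
import threading

PROMPT = b"Enter file path: "          # hypothetical prompt text

def pump(proc, prompts, answers):
    # Read the child's stderr a byte at a time, post each prompt to the
    # main thread, and write back whatever answer it sends us.
    buf = bytearray()
    while True:
        ch = proc.stderr.read(1)
        if not ch:                     # EOF: the process is finishing
            prompts.put((proc, None))
            return
        buf += ch
        if buf.endswith(PROMPT):
            prompts.put((proc, bytes(buf)))
            proc.stdin.write(answers.get())
            buf.clear()

prompts = queue.Queue()
answers = {}
for cmd in (["job1.bat"], ["job2.bat"]):   # hypothetical commands
    p = subprocess.Popen(cmd, stdin=subprocess.PIPE,
                         stderr=subprocess.PIPE, bufsize=0)
    answers[p] = queue.Queue()
    threading.Thread(target=pump, args=(p, prompts, answers[p]),
                     daemon=True).start()

live = len(answers)
while live:
    proc, text = prompts.get()
    if text is None:
        proc.wait()
        live -= 1
    else:
        answers[proc].put(input(text.decode()).encode() + b"\n")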

How to obtain output from external progam and put it into a variable in Python

I am still fairly new to the Python world and know this should be an easy question to answer. I have this section of a script in Python that calls a script in Perl. This Perl script is a SOAP service that fetches data from a web page. Everything works great and outputs what I want, but after a bit of trial and error I am confused as to how I can capture the data in a Python variable rather than just outputting it to the screen as it does now.
Any pointers appreciated!
Thank you,
Pablo
# SOAP SERVICE
# Fetch the perl script that will request the user's email.
# This service will return a name, email, and certificate.
import subprocess

var = "soap.pl"
pipe = subprocess.Popen(["perl", "./soap.pl", var], stdin=subprocess.PIPE)
pipe.stdin.write(var)
print "\n"
pipe.stdin.close()
I am not sure what your code aims to do (with var in particular), but here are the basics.
There is the subprocess.check_output() function for this
import subprocess
out = subprocess.check_output(['ls', '-l'])
print out
If your Python is before 2.7, use Popen with the communicate() method
import subprocess
proc = subprocess.Popen(['ls', '-l'], stdout=subprocess.PIPE)
out, err = proc.communicate()
print out
You can instead iterate over proc.stdout, but it appears that you want all output in one variable.
In both cases you provide the program's arguments in the list.
Or add stdin if needed
proc = subprocess.Popen(['perl', 'script.pl', 'arg'],
                        stdin=subprocess.PIPE,
                        stdout=subprocess.PIPE)
The purpose of stdin = subprocess.PIPE is to be able to feed the STDIN of the process that is started, as it runs. Then you would do proc.stdin.write(string) and this writes to the invoked program's STDIN. That program generally waits on its STDIN and after you send a newline it gets everything written to it (since the last newline) and runs relevant processing.
If you simply need to pass parameters/arguments to the script at its invocation then that generally doesn't need nor involve its STDIN.
Since Python 3.5 the recommended method is subprocess.run(), with a very similar full signature, and similar operation, to that of the Popen constructor.
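For example (a sketch; capture_output and text need Python 3.7+, and the argument value is illustrative):

import subprocess

result = subprocess.run(['perl', './soap.pl', 'someone@example.com'],
                        capture_output=True, text=True)
out = result.stdout
print(out)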

Displaying output of shell commands with shared environments

Is there any way to display the output of a shell command in Python, as the command runs?
I have the following code to send commands to a specific shell (in this case, /bin/tcsh):
import subprocess
import select

cmd = subprocess.Popen(['/bin/tcsh'], stdin=subprocess.PIPE, stdout=subprocess.PIPE)
poll = select.poll()
poll.register(cmd.stdout.fileno(), select.POLLIN)

# The list "commands" holds a list of shell commands
for command in commands:
    cmd.stdin.write(command)
    # Must include this to ensure data is passed to the child process
    cmd.stdin.flush()
    ready = poll.poll()
    if ready:
        result = cmd.stdout.readline()
        print result
Also, I got the code above from this thread, but I am not sure I understand how the polling mechanism works.
What exactly is registered above?
Why do I need the variable ready if I don't pass any timeout to poll.poll()?
Yes, it is entirely possible to display the output of a shell command as the command runs. There are two requirements:
1) The command must flush its output.
Many programs buffer their output differently according to whether the output is connected to a terminal, a pipe, or a file. If they are connected to a pipe, they might write their output in much bigger chunks much less often. For each program that you execute, consult its documentation. Some versions of /bin/cat, for example, have the -u switch.
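If the program relies on stdio's default buffering, the GNU coreutils stdbuf wrapper can sometimes force line-buffered output from the outside (a sketch; the program name is illustrative):

import subprocess

cmd = subprocess.Popen(['stdbuf', '-oL', './talkative_program'],
                       stdout=subprocess.PIPE)
for line in iter(cmd.stdout.readline, b''):
    print(line)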
2) You must read it piecemeal, and not all at once.
Your program must be structured to read one piece at a time from the output stream. This means that you ought not do any of these, each of which reads the entire stream in one go:
cmd.stdout.read()
for i in cmd.stdout:
cmd.stdout.readlines()
But instead, you could do one of these:
while not_dead_yet:
    line = cmd.stdout.readline()

for line in iter(cmd.stdout.readline, b''):
    pass
Now, for your three specific questions:
Is there any way to display the output of a shell command in Python, as the command runs?
Yes, but only if the command you are running outputs as it runs and doesn't save it up for the end.
What exactly is registered above?
The file descriptor which, when read, makes available the output of the subprocess.
Why do I need the variable ready if I don't pass any timeout to poll.poll()?
You don't. You also don't need the poll(). It is possible, if your commands list is fairly large, that you might need to poll() both the stdin and stdout streams to avoid a deadlock. But if your commands list is fairly modest (less than 5 kbytes), then you will be OK just writing them at the beginning.
Here is one possible solution:
#! /usr/bin/python
import subprocess
import select

# Critical: all of this must fit inside ONE pipe() buffer
commands = ['echo Start\n', 'date\n', 'sleep 10\n', 'date\n', 'exit\n']

cmd = subprocess.Popen(['/bin/tcsh'], stdin=subprocess.PIPE, stdout=subprocess.PIPE)
for command in commands:
    cmd.stdin.write(command)
    # Must include this to ensure data is passed to the child process
    cmd.stdin.flush()

for line in iter(cmd.stdout.readline, b''):
    print line

How to communicate with command line program using python?

import subprocess
import sys
proc = subprocess.Popen(["program.exe"], stdin=subprocess.PIPE) #the cmd program opens
proc.communicate(input="filename.txt") #here the filename should be entered (runs)
#then the program asks to enter a number:
proc.communicate(input="1") #(the cmd stops here and nothing is passed)
proc.communicate(input="2") # (same not passing anything)
How do I pass input to and communicate with the command-line program using Python?
Thanks. (Using the Windows platform.)
The docs on communicate() explain this:
Interact with process: Send data to stdin. Read data from stdout and
stderr, until end-of-file is reached. Wait for process to terminate.
Once the input has been sent, communicate() blocks until the program finishes executing; it also closes the pipes when it is done, so it can only be called once per process. In your example, the program waits for more input after you send "1", but Python waits for it to exit before it gets to the next line, so the whole thing deadlocks.
If you want to read and write a lot interchangeably, make pipes to stdin/stdout and write/read to/from them.
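A minimal sketch of that, reusing the question's program.exe and assuming it writes line-delimited replies:

import subprocess

proc = subprocess.Popen(["program.exe"],
                        stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                        bufsize=0)
proc.stdin.write(b"filename.txt\n")   # answer the filename prompt
proc.stdin.flush()
print(proc.stdout.readline())         # read the program's response
proc.stdin.write(b"1\n")              # answer the number prompt
proc.stdin.flush()
proc.stdin.write(b"2\n")
proc.stdin.flush()
proc.stdin.close()
proc.wait()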

Python Popen, closing streams and multiple processes

I have some data that I would like to gzip, uuencode and then print to standard out. What I basically have is:
from subprocess import Popen, PIPE

compressor = Popen("gzip", stdin=PIPE, stdout=PIPE)
encoder = Popen(["uuencode", "dummy"], stdin=compressor.stdout)
The way I feed data to the compressor is through compressor.stdin.write(stuff).
What I really need to do is to send an EOF to the compressor, and I have no idea how to do it.
At some point, I tried compressor.stdin.close() but that doesn't work -- it works well when the compressor writes to a file directly, but in the case above, the process doesn't terminate and stalls on compressor.wait().
Suggestions? In this case, gzip is an example and I really need to do something with piping the output of one process to another.
Note: The data I need to compress won't fit in memory, so communicate isn't really a good option here. Also, if I just run
compressor.communicate("Testing")
after the two lines above, it still hangs, with the traceback showing
File "/usr/lib/python2.4/subprocess.py", line 1041, in communicate
rlist, wlist, xlist = select.select(read_set, write_set, [])
I suspect the issue is with the order in which you open the pipes. uuencode is funny in that it will whine when you launch it if there's no incoming pipe hooked up in just the right way (try launching the darn thing on its own in a Popen call, with just PIPE as the stdin and stdout, to see the explosion).
Try this:
encoder = Popen(["uuencode", "dummy"], stdin=PIPE, stdout=PIPE)
compressor = Popen("gzip", stdin=PIPE, stdout=encoder.stdin)
compressor.communicate("UUencode me please")
encoded_text = encoder.communicate()[0]
print encoded_text
begin 644 dummy
F'XL(`%]^L$D``PL-3<U+SD])5<A-52C(24TL3#4`;2O+"!(`````
`
end
You are right, btw... there is no way to send a generic EOF down a pipe. After all, each program really defines its own EOF. The way to do it is to close the pipe, as you were trying to do.
EDIT: I should be clearer about uuencode. As a shell program, its default behaviour is to expect console input. If you run it without a "live" incoming pipe, it will block waiting for console input. By opening the encoder second, before you had sent material down the compressor pipe, the encoder was blocking, waiting for you to start typing. Jerub was right that something was blocking.
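For reference, the pipeline idiom from the subprocess documentation combines both points: close the parent's copy of the intermediate pipe after wiring it up, then close stdin to deliver the "EOF" (a sketch using the question's gzip/uuencode pair):

from subprocess import Popen, PIPE

compressor = Popen(["gzip"], stdin=PIPE, stdout=PIPE)
encoder = Popen(["uuencode", "dummy"], stdin=compressor.stdout, stdout=PIPE)
compressor.stdout.close()   # let gzip see SIGPIPE if uuencode exits early
compressor.stdin.write(b"stream of data too large for communicate()")
compressor.stdin.close()    # closing the pipe is the "EOF" for gzip
print(encoder.communicate()[0])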
This is not the sort of thing you should be doing directly in Python; there are eccentricities regarding how things work that make it a much better idea to do this with a shell. If you can just use subprocess.Popen("foo | bar", shell=True), then all the better.
What might be happening is that gzip has not been able to write all of its output yet, and the process will not exit until its stdout writes have finished.
You can look at what system call a process is blocking on if you use strace. Use ps auxwf to discover which process is the gzip process, then use strace -p $pidnum to see what system call it is performing. Note that stdin is FD 0 and stdout is FD 1, you will probably see it reading or writing on those file descriptors.
If you just want to compress and don't need the file wrappers, consider using the zlib module:
import zlib
compressed = zlib.compress("text")
Any reason why the shell=True and Unix pipes suggestion won't work?
from subprocess import *

pipes = Popen("gzip | uuencode dummy", stdin=PIPE, stdout=PIPE, shell=True)
for i in range(1, 100):
    pipes.stdin.write("some data")
pipes.stdin.close()
print pipes.stdout.read()
seems to work
