Repeatedly interacting with program using subprocess - python

I'm trying to run a program that requires successive interactions (I have to answer with strings: '0' or '1') from within my python script.
My code:
from subprocess import Popen, PIPE
command = ['program', '-arg1', 'path/file_to_arg1']
p = Popen(command, stdin=PIPE, stdout=PIPE)
p.communicate('0'.encode())
The last two lines work for the first interaction, but after that the program prints all the following questions on the screen without waiting for their respective inputs. I basically need to answer the first question, wait until the program deals with it and prints the second question, then answer the second question, and so on.
Any ideas?
Thanks!
PS: I'm using Python 3.3.4

The subprocess module is designed for one-shot interactions: write to a process, read the result, and then stop. Doing an ongoing back-and-forth with a Unix process, where you keep taking turns reading and writing, is surprisingly hard to get right. I recommend using a library built for the task instead of rewriting all the necessary logic from scratch.
There is a classic library named Expect, which works well for interacting with a child process, and a Python implementation of it named Pexpect (read the docs here). I recommend using Pexpect, or a similar library.
Pexpect works like this:
import pexpect

# spawn a subprocess,
# then wait for expected output from the child process,
# and send additional commands to the child.
child = pexpect.spawnu('ftp ftp.openbsd.org')
child.expect('(?i)name .*: ')
child.sendline('anonymous')
child.expect('(?i)password')
child.sendline('pexpect@sourceforge.net')
child.expect('ftp> ')
child.sendline('cd /pub/OpenBSD/3.7/packages/i386')
child.expect('ftp> ')
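Applied to the question above, a minimal sketch might look like this (the prompt pattern and the sequence of answers are assumptions; match them to whatever your program actually prints):
import pexpect

child = pexpect.spawnu('program -arg1 path/file_to_arg1')
for answer in ['0', '1', '0']:    # one answer per question, in order (placeholder values)
    child.expect('question')      # hypothetical prompt text printed by the program
    child.sendline(answer)
child.expect(pexpect.EOF)         # wait for the program to finish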

For catching stdout in real time from a subprocess, see this thread: catching stdout in realtime from subprocess
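The core idea from that thread is to read the pipe on a background thread so the main thread never blocks; a minimal Python 3 sketch (the command is a placeholder):
import subprocess
import threading

def drain(pipe):
    # print each line of output as soon as it arrives
    for line in iter(pipe.readline, b''):
        print(line.decode(), end='')
    pipe.close()

p = subprocess.Popen(['some_command'], stdout=subprocess.PIPE)  # placeholder command
t = threading.Thread(target=drain, args=[p.stdout])
t.start()
p.wait()
t.join()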

Related

Shell hangs after killing subprocess

I know there are a bunch of similar questions on SO like this one or this one and maybe a couple more, but none of them seem to apply in my particular situation. My lack of understanding on how subprocess.Popen() works doesn't help either.
What I want to achieve is: launch a subprocess (a command line radio player) that also outputs data to the terminal and can also receive input -- wait for a while -- terminate the subprocess -- exit the shell. I am running Python 2.7 on OSX 10.9.
Case 1.
This launches the radio player (but audio only!), terminates the process, exits.
import subprocess
import time
p = subprocess.Popen(['/bin/bash', '-c', 'mplayer http://173.239.76.147:8090'],
                     stdin=subprocess.PIPE, stdout=subprocess.PIPE, shell=False,
                     stderr=subprocess.STDOUT)
time.sleep(5)
p.kill()
Case 2.
This launches the radio player, outputs information like radio name, song, bitrate, etc., and also accepts input. It terminates the subprocess, but it never exits the shell and the terminal becomes unusable, even after using Ctrl-C.
p = subprocess.Popen(['/bin/bash', '-c', 'mplayer http://173.239.76.147:8090'],
                     shell=False)
time.sleep(5)
p.kill()
Any ideas on how to do it? I was even thinking of opening a slave shell for the subprocess if there is no other choice (of course, that is also something I don't have a clue about). Thanks!
It seems that mplayer uses the curses library, and when kill()ing or terminate()ing it, for some reason it doesn't clean up the library state correctly.
To restore the terminal state you can use the reset command.
Demo:
import subprocess, time
p = subprocess.Popen(['mplayer', 'http://173.239.76.147:8090'])
time.sleep(5)
p.terminate()
p.wait() # important!
subprocess.Popen(['reset']).wait()
print('Hello, World!')
In principle it should be possible to use stty sane too, but it doesn't work well for me.
As Sebastian points out, there was a missing wait() call in the above code (now added). With this wait() call and using terminate() the terminal doesn't get messed up (and so there shouldn't be any need for reset).
Without the wait() I sometimes do have problems with mixed output between the Python process and mplayer.
Also, a solution specific to mplayer, as pointed out by Sebastian, is to send a q to the stdin of mplayer to quit it.
I leave the code that uses reset because it works with any program that uses the curses library, whether it correctly tears down the library or not, and thus it might be useful in other situations where a clean exit isn't possible.
What I want to achieve is: launch a subprocess (a command line radio player) that also outputs data to the terminal and can also receive input -- wait for a while -- terminate the subprocess -- exit the shell. I am running Python 2.7 on OSX 10.9.
On my system, mplayer accepts keyboard commands e.g., q to stop playing and quit:
#!/usr/bin/env python
import shlex
import time
from subprocess import Popen, PIPE
cmd = shlex.split("mplayer http://www.swissradio.ch/streams/6034.m3u")
p = Popen(cmd, stdin=PIPE)
time.sleep(5)
p.communicate(b'q')
It starts mplayer tuned to public domain classical; waits 5 seconds; asks mplayer to quit and waits for it to exit. The output is going to terminal (the same place where the python script's output goes).
I've also tried p.kill(), p.terminate(), and p.send_signal(signal.SIGINT) (Ctrl + C). p.kill() creates the impression that the process hangs. A possible explanation: p.kill() leaves some pipes open, e.g., if stdout=PIPE then your Python script might hang at p.stdout.read(); it kills the parent mplayer process, but there might be a child process that holds the pipes open. Nothing hangs with p.terminate() or p.send_signal(signal.SIGINT) -- mplayer exits in an orderly manner. None of the variants I've tried require reset.
How should I go about having both input from Python and from the keyboard? Do I need two different subprocesses, and how do I redirect the keyboard input to the PIPE?
It would be much simpler just to drop stdin=PIPE and call p.terminate(); p.wait() instead of p.communicate(b'q').
If you want to keep stdin=PIPE then the general principle is: read from sys.stdin, write to p.stdin until the timeout happens. Given that mplayer expects one-letter commands, you need to be able to read one character at a time from sys.stdin. The write part is easy: p.stdin.write(c) (set bufsize=0 to avoid buffering on the Python side; mplayer doesn't buffer its stdin, so you don't need to worry about that end).
You don't need two different subprocesses. To implement timeout, you could use threading.Timer(5, p.stdin.write, [b'q']).start() or select.select on sys.stdin with timeout.
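A rough sketch of the select.select variant (Python 2, matching the question; POSIX only; the terminal is put into cbreak mode so keystrokes are delivered one character at a time):
import select
import sys
import termios
import time
import tty
from subprocess import Popen, PIPE

old = termios.tcgetattr(sys.stdin)
tty.setcbreak(sys.stdin)                      # deliver keystrokes immediately, unbuffered
try:
    p = Popen(['mplayer', 'http://173.239.76.147:8090'], stdin=PIPE, bufsize=0)
    end = time.time() + 5
    while p.poll() is None and time.time() < end:
        # wait for a key press, but no longer than the remaining time
        ready, _, _ = select.select([sys.stdin], [], [], end - time.time())
        if ready:
            p.stdin.write(sys.stdin.read(1))  # forward one keystroke to mplayer
    if p.poll() is None:
        p.stdin.write(b'q')                   # timeout reached: ask mplayer to quit
        p.wait()
finally:
    termios.tcsetattr(sys.stdin, termios.TCSADRAIN, old)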
I guess something using the good old raw_input has nothing to do with it, right?
raw_input() is not suitable for mplayer because it reads full lines, but mplayer expects one character at a time.

Use python subprocess module like a command line simulator

I am writing a test framework in Python for a command line application. The application will create directories, call other shell scripts in the current directory, and write output to stdout.
I am trying to treat the {Python subprocess, command line} combo as equivalent to {Selenium, browser}: the first component drives the second and checks whether the output is as expected. I am facing the following problems:
The Popen construct takes a command and returns after that command is completed. What I want is a live handle to the process so I can run further commands + verifications and finally close the shell once done.
I am okay with writing some infrastructure code for achieving this, since we have a lot of command line applications that need testing like this.
Here is a sample code that I am running
p = subprocess.Popen("/bin/bash", cwd = test_dir)
p.communicate(input = "hostname") --> I expect the hostname to be printed out
p.communicate(input = "time") --> I expect current time to be printed out
but the process hangs, or maybe I am doing something wrong. Also, how do I "grab" the output of that subprocess so I can assert that something exists?
subprocess.Popen allows you to continue execution after starting a process. Popen objects expose wait(), poll(), and many other methods for communicating with a child process while it is running. Isn't that what you need?
See Popen constructor and Popen objects description for details.
Here is a small example that runs a shell on Unix systems and executes a command:
from subprocess import Popen, PIPE
p = Popen(['/bin/sh'], stdout=PIPE, stderr=PIPE, stdin=PIPE)
sout, serr = p.communicate('ls\n')
print 'OUT:'
print sout
print 'ERR:'
print serr
UPD: communicate() waits for process termination. If you do not need that, you may use the appropriate pipes directly, though that usually gives you rather ugly code.
UPD2: You updated the question. Yes, you cannot call communicate twice for a single process. You may either give all commands you need to execute in a single call to communicate and check the whole output, or work with pipes (Popen.stdin, Popen.stdout, Popen.stderr). If possible, I strongly recommend the first solution (using communicate).
Otherwise you will have to put a command on its input and wait some time for the desired output. What you need is a non-blocking read, to avoid hanging when there is nothing to read. Here is a recipe for emulating non-blocking mode on pipes using threads. The code is ugly and strangely complicated for such a trivial purpose, but that's how it's done.
Another option could be using p.stdout.fileno() for select.select() call, but that won't work on Windows (on Windows select operates only on objects originating from WinSock). You may consider it if you are not on Windows.
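A condensed version of that thread-based recipe (Python 2, to match the answer; /bin/sh stands in for your application):
from subprocess import Popen, PIPE
from threading import Thread
from Queue import Queue, Empty  # named queue on Python 3

def enqueue_output(pipe, q):
    # move lines from the child's stdout into a queue as they appear
    for line in iter(pipe.readline, ''):
        q.put(line)
    pipe.close()

p = Popen(['/bin/sh'], stdin=PIPE, stdout=PIPE)
q = Queue()
t = Thread(target=enqueue_output, args=(p.stdout, q))
t.daemon = True
t.start()

p.stdin.write('hostname\n')
p.stdin.flush()
try:
    print q.get(timeout=2)  # first line of output, if any arrived in time
except Empty:
    print 'no output yet'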
Instead of using plain subprocess you might find the Python sh library very useful:
http://amoffat.github.com/sh/
Here is an example how to build in an asynchronous interaction loop with sh:
http://amoffat.github.com/sh/tutorials/2-interacting_with_processes.html
Another (old) library for solving this problem is pexpect:
http://www.noah.org/wiki/pexpect

Open second Python console

Is there a way to open a second Python console and let the new console run while the original console keeps going, so that when the new console finishes it sends its data back, in the form of a variable, to the original console?
Perhaps you should look into the multiprocessing or threading modules. These help you spawn off child processes from your original (parent) program.
The subprocess module might be what you are looking for. However, the catch is that you get the output of the process only after the entire program finishes. This means that if the program you are trying to run runs forever, you will not be able to see its output until it quits (either by being forced to or by terminating on its own).
An example of how you would assign the output to a variable would be:
output, error = your_process.communicate()
The output part is what you would be using (based on your question), while error holds whatever the process wrote to stderr, which is useful when something goes wrong. If you are not looking to capture errors, you can simply assign it to _.
Also note that if you are passing command-line arguments, I would suggest using the shlex module for splitting your string into arguments (you can use a regular string such as var = "mypythonprogram.py argument1 argument2", call arguments = shlex.split(var), and then supply that as the arguments for the subprocess).
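For example (the script name and its arguments are made up):
import shlex
from subprocess import Popen, PIPE

var = "python mypythonprogram.py argument1 argument2"
arguments = shlex.split(var)  # ['python', 'mypythonprogram.py', 'argument1', 'argument2']
process = Popen(arguments, stdout=PIPE, stderr=PIPE)
output, error = process.communicate()  # blocks until the program finishes
print(output)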
Another option, if you don't need to interact with the program, would be using threads; there are many questions on Stack Overflow about them, as well as plenty of documentation, both official and on other websites all over the internet.
Read up on Python multiprocessing. It has examples of exchanging objects between processes.
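A minimal sketch of that idea: run a function in a child process and hand its result back to the parent through a Queue:
import multiprocessing

def worker(q):
    # do the work in the child process, then send the result back
    q.put(sum(range(10)))

if __name__ == '__main__':
    q = multiprocessing.Queue()
    p = multiprocessing.Process(target=worker, args=(q,))
    p.start()
    result = q.get()  # the parent keeps running and picks up the value here
    p.join()
    print(result)     # 45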
It sounds like you want to do something like what the multiprocessing library offers. Without more info, all I can do is point you to the docs.
for instance:
http://www.ibm.com/developerworks/aix/library/au-multiprocessing/
or
http://docs.python.org/library/multiprocessing.html
If you are on Windows, you can use the win32console module to open a second console for your thread or subprocess output.
Here is some sample code:
import win32console
import multiprocessing

def child(queue):
    win32console.FreeConsole()   # free the child process from the main console
    win32console.AllocConsole()  # create a new console; all of the child's input and output goes there
    while True:
        # print any output produced by the main script and passed over the queue
        print(queue.get())

if __name__ == "__main__":
    queue = multiprocessing.Queue()
    multiprocessing.Process(target=child, args=[queue]).start()
    while True:
        print("Hello World")
        # send the string above to the child, which prints it in its own console
        queue.put("Hello to subprocess console")
        # ...and whatever else you want to do in your main process
You can also do this with threading. You have to use the queue module if you want the queue functionality, as the threading module doesn't provide one.
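A rough threaded equivalent (note that threads share the one console, so there is no separate window here):
import threading
import time
try:
    import queue           # Python 3
except ImportError:
    import Queue as queue  # Python 2

q = queue.Queue()

def consumer():
    while True:
        print(q.get())  # blocks until the main thread puts something

t = threading.Thread(target=consumer)
t.daemon = True
t.start()
for i in range(3):
    q.put("Hello to consumer thread")
time.sleep(1)  # give the daemon thread a moment to print before exiting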
Here is the win32console module documentation

question about pexpect in python

I tried both pexpect and subprocess.Popen from Python to call an external long-running background process (this process uses sockets to communicate with external applications), with the following details.
subprocess.Popen(launchcmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
This works fine. I do not need to do anything else. However, because I have to get the output immediately, I chose pexpect to avoid the pipe buffering problem.
obj = pexpect.spawn(launchcmd, timeout=None)
After launching the external process, I use a separate thread to readline() the output of the launched process "obj", and everything is OK.
obj = pexpect.spawn(launchcmd, timeout=None)
After launching the external process, I did nothing further, i.e., just left it there. Although I can find the launched process using the "ps -e" command, it seems blocked and cannot communicate over sockets with other applications.
OK. To be more specific, here is some sample code to formulate my question.
import subprocess
import pexpect

t = 1
while True:
    if t == 1:
        background_process = "./XXX.out"
        launchcmd = [background_process]
        #---option 3--------
        p = pexpect.spawn(launchcmd, timeout=None)  # process launched, problem with sockets
        #---option 1--------
        p = subprocess.Popen(launchcmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)  # process launched, everything fine
        t = 0
Could anyone tell me what's wrong with the 3rd option? And if it is due to the fact that I did not use a separate thread to handle the output, why does the 1st option work with subprocess.Popen? I suspect there is something wrong with how pexpect launches a process that uses sockets, but I am not sure, especially considering that option 2 works well.
I think that you are making this too complicated.
Yes, it is a good idea to use a pty instead of a pipe to communicate with the background process, because most applications recognize tty/pty devices and switch to unbuffered (or at least line-buffered) output.
But why pexpect? Just use Python's pty module. First call openpty to get some file handles, and then use Popen to spawn the process. Example code is found in the following question (the answer with the green checkmark): Python Run a daemon sub-process & read stdout
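The basic shape of that approach (POSIX only; ./XXX.out stands in for the background process from the question):
import os
import pty
from subprocess import Popen

master, slave = pty.openpty()          # master is our end, slave becomes the child's stdout
p = Popen(['./XXX.out'], stdout=slave, stderr=slave, close_fds=True)
os.close(slave)                        # only the child needs the slave end now
while True:
    try:
        data = os.read(master, 1024)   # arrives line-buffered because the child sees a tty
    except OSError:                    # some platforms raise EIO instead of returning b''
        break
    if not data:
        break
    print(data)
p.wait()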

Executing multiple commands using Popen.stdin

I'd like to execute multiple commands in a standalone application launched from a Python script, using pipes. The only way I could reliably pass commands to the program's stdin was Popen.communicate, but it closes the program after the command gets executed. If I use Popen.stdin.write, the command executes only one time out of five or so; it does not work reliably. What am I doing wrong?
To elaborate a bit:
I have an application that listens to stdin for commands and executes them line by line.
I'd like to be able to run the application and pass various commands to it, based on the user's interaction with a GUI.
This is a simple test example:
from subprocess import Popen, PIPE
command = "anApplication"
process = Popen(command, shell=False, stderr=None, stdin=PIPE)
process.stdin.write("doSomething1\n")
process.stdin.flush()
process.stdin.write("doSomething2\n")
process.stdin.flush()
I'd expect to see the result of both commands, but I don't get any response. (If I execute one of the process.stdin.write lines multiple times, it occasionally works.)
And if I execute:
process.communicate("doSomething1")
it works perfectly but the application terminates.
If I understand your problem correctly, you want to interact (i.e. send commands and read the responses) with a console application.
If so, you may want to check an Expect-like library, like pexpect for Python: http://pexpect.sourceforge.net
It will make your life easier, because it will take care of synchronization, the problem that ddaa also describes. See also:
http://www.noah.org/wiki/Pexpect#Q:_Why_not_just_use_a_pipe_.28popen.28.29.29.3F
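Sketched against the example in the question (the 'done' pattern is an assumption about what anApplication prints after each command; adjust it to the real output):
import pexpect

child = pexpect.spawn('anApplication')
child.sendline('doSomething1')
child.expect('done')  # hypothetical per-command response to synchronize on
child.sendline('doSomething2')
child.expect('done')
child.close()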
The real issue here is whether the application is buffering its output, and, if it is, whether there's anything you can do to stop it. Presumably, when the user generates a command and clicks a button on your GUI, you want to see the output from that command before requiring the user to enter the next.
Unfortunately, there's nothing you can do on the client side of subprocess.Popen to ensure that when you have passed the application a command, the application flushes all of its output to the final destination. You can call flush() on your end all you like, but if the application doesn't do the same, and you can't make it, then you are doomed to looking for workarounds.
Your code in the question should work as is. If it doesn't, then either your actual code is different (e.g., you might use stdout=PIPE, which may change the child's buffering behavior) or it might indicate a bug in the child application itself, such as the read-ahead bug in Python 2, i.e., your input is sent correctly by the parent process but gets stuck in the child's internal input buffer.
The following works on my Ubuntu machine:
#!/usr/bin/env python
import time
from subprocess import Popen, PIPE
LINE_BUFFERED = 1
#NOTE: the first argument is a list
p = Popen(['cat'], bufsize=LINE_BUFFERED, stdin=PIPE,
          universal_newlines=True)
with p.stdin:
    for cmd in ["doSomething1\n", "doSomethingElse\n"]:
        time.sleep(1)  # a delay to see that the commands appear one by one
        p.stdin.write(cmd)
        p.stdin.flush()  # use explicit flush() to work around
                         # buffering bugs on some Python versions
rc = p.wait()
It sounds like your application handles input from a pipe differently from terminal input, most likely by buffering it. This means it won't see all of the commands you send until you close the pipe.
So the approach I would suggest is just to do this:
process.stdin.write("command1\n")
process.stdin.write("command2\n")
process.stdin.write("command3\n")
process.stdin.close()
It doesn't sound like your Python program is reading output from the application, so it shouldn't matter if you send the commands all at once like that.
