Executing multiple commands using Popen.stdin - python

I'd like to execute multiple commands in a standalone application launched from a Python script, using pipes. The only way I could reliably pass the commands to the stdin of the program was using Popen.communicate, but it closes the program after the command gets executed. If I use Popen.stdin.write, the command executes only one time out of five or so; it does not work reliably. What am I doing wrong?
To elaborate a bit :
I have an application that listens to stdin for commands and executes them line by line.
I'd like to be able to run the application and pass various commands to it, based on the user's interaction with a GUI.
This is a simple test example:
import os, string
from subprocess import Popen, PIPE
command = "anApplication"
process = Popen(command, shell=False, stderr=None, stdin=PIPE)
process.stdin.write("doSomething1\n")
process.stdin.flush()
process.stdin.write("doSomething2\n")
process.stdin.flush()
I'd expect to see the result of both commands but I don't get any response. (If I execute one of the stdin.write lines multiple times, it occasionally works.)
And if I execute:
process.communicate("doSomething1")
it works perfectly but the application terminates.

If I understand your problem correctly, you want to interact (i.e. send commands and read the responses) with a console application.
If so, you may want to check an Expect-like library, like pexpect for Python: http://pexpect.sourceforge.net
It will make your life easier, because it takes care of the synchronization problem that ddaa also describes. See also:
http://www.noah.org/wiki/Pexpect#Q:_Why_not_just_use_a_pipe_.28popen.28.29.29.3F
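For example, a rough pexpect sketch might look like this (the "> " prompt pattern and the command names are placeholders; substitute whatever your application actually prints between commands):
import pexpect

# 'anApplication' and the '> ' prompt are placeholders for your program
child = pexpect.spawn('anApplication')
child.expect('> ')              # wait for the application's prompt
child.sendline('doSomething1')  # send the first command
child.expect('> ')              # wait until it has been processed
print(child.before)             # everything printed by doSomething1
child.sendline('doSomething2')
child.expect('> ')
print(child.before)
child.close()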

The real issue here is whether the application is buffering its output, and if it is, whether there's anything you can do to stop it. Presumably when the user generates a command and clicks a button on your GUI you want to see the output from that command before you require the user to enter the next.
Unfortunately there's nothing you can do on the client side of subprocess.Popen to ensure that, when you have passed the application a command, the application flushes all of its output to the final destination. You can call flush() all you like, but if the application doesn't do the same, and you can't make it, then you are stuck looking for workarounds.
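If the application itself cannot be changed, one Linux-only workaround is to launch it under a tool such as stdbuf (GNU coreutils), which asks the C library to line-buffer the child's stdout. This is only a sketch and only helps if the application relies on default stdio buffering; "anApplication" is the placeholder name from the question:
from subprocess import Popen, PIPE

# stdbuf -oL requests line-buffered stdout for the child (Linux, GNU coreutils)
p = Popen(['stdbuf', '-oL', 'anApplication'], stdin=PIPE, stdout=PIPE,
          universal_newlines=True)
p.stdin.write("doSomething1\n")
p.stdin.flush()
print(p.stdout.readline())  # should arrive as soon as the child prints it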

Your code in the question should work as is. If it doesn't, then either your actual code is different (e.g., you might use stdout=PIPE, which may change the child's buffering behavior) or it might indicate a bug in the child application itself, such as the read-ahead bug in Python 2, i.e., your input is sent correctly by the parent process but it gets stuck in the child's internal input buffer.
The following works on my Ubuntu machine:
#!/usr/bin/env python
import time
from subprocess import Popen, PIPE

LINE_BUFFERED = 1
# NOTE: the first argument is a list
p = Popen(['cat'], bufsize=LINE_BUFFERED, stdin=PIPE,
          universal_newlines=True)
with p.stdin:
    for cmd in ["doSomething1\n", "doSomethingElse\n"]:
        time.sleep(1)  # a delay to see that the commands appear one by one
        p.stdin.write(cmd)
        p.stdin.flush()  # use explicit flush() to work around
                         # buffering bugs on some Python versions
rc = p.wait()

It sounds like your application is treating input from a pipe in a strange way. This means it won't get all of the commands you send until you close the pipe.
So the approach I would suggest is just to do this:
process.stdin.write("command1\n")
process.stdin.write("command2\n")
process.stdin.write("command3\n")
process.stdin.close()
It doesn't sound like your Python program is reading output from the application, so it shouldn't matter if you send the commands all at once like that.
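If you later decide you do want the output as well, a sketch of the same idea is to read everything after closing stdin (this assumes the application exits when it reaches end-of-file on its input):
from subprocess import Popen, PIPE

process = Popen(["anApplication"], stdin=PIPE, stdout=PIPE,
                universal_newlines=True)
process.stdin.write("command1\n")
process.stdin.write("command2\n")
process.stdin.write("command3\n")
process.stdin.close()           # end-of-file tells the application to finish
output = process.stdout.read()  # everything it printed before exiting
process.wait()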

Related

Send and receive data multiple times to subprocess (Python)

Issue
I am communicating with a terminal application (xfoil) and I want to isolate the stdout corresponding to each stdin.
This question is also more general, as I wish to know why I can't open an application with subprocess and then use its stdin and stdout successively (or rather, how I could do it).
What I can do now
As of now, I can send instructions to Xfoil using process.communicate, which retrieves the entire stdout.
import subprocess
xfoil = subprocess.Popen('path_to_xfoil.exe', stdin=subprocess.PIPE,
                         stdout=subprocess.PIPE, stderr=subprocess.PIPE)
[output, _] = xfoil.communicate(input=instructions)
What I want to achieve
Instead of having to deal with the entire stdout, I wish to isolate each set of instructions (stdin) and results (stdout).
Something along the lines of:
output1 = process.communicate(input=instructions1)
output2 = process.communicate(input=instructions2)
output3 = process.communicate(input=instructions3)
...
I need the process to stay open (which is not the case with communicate).
What I have attempted
Communicate multiple times with a process without breaking the pipe? is probably the way to go; however, it does not explain clearly how to read the output, and the following piece of code simply freezes, probably because I have no idea when read should stop.
xfoil.stdin.write(instructions1)
xfoil.stdout.read() # never passes this line
xfoil.stdin.write(instructions2)
xfoil.stdout.read()
Non-blocking read on a subprocess.PIPE in python seemed a good path as well, however it only takes care of output.
Or perhaps I need to use the os module as in ipc - communicate multiple times with a subprocess in Python ?
Thank you for your help
PS: I read a tiny bit about fcntl but I need the code to work on both Linux and Windows.
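For reference, here is a minimal sketch of the reader-thread idea from the linked non-blocking-read question; it should work on both Linux and Windows. The instruction strings are placeholders, and the half-second timeout is an arbitrary guess at how long xfoil needs before its output goes quiet:
import subprocess
from threading import Thread
try:
    from queue import Queue, Empty   # Python 3
except ImportError:
    from Queue import Queue, Empty   # Python 2

def enqueue_output(pipe, queue):
    # background thread: forward every line so the main thread never blocks
    for line in iter(pipe.readline, ''):
        queue.put(line)

xfoil = subprocess.Popen('path_to_xfoil.exe', stdin=subprocess.PIPE,
                         stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                         universal_newlines=True)
q = Queue()
t = Thread(target=enqueue_output, args=(xfoil.stdout, q))
t.daemon = True
t.start()

def send(instructions, timeout=0.5):
    # write one block of instructions, then collect output until it goes quiet
    xfoil.stdin.write(instructions)
    xfoil.stdin.flush()
    lines = []
    while True:
        try:
            lines.append(q.get(timeout=timeout))
        except Empty:
            return ''.join(lines)

output1 = send("first block of instructions\n")   # placeholder commands
output2 = send("second block of instructions\n")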

Shell hangs after killing subprocess

I know there are a bunch of similar questions on SO like this one or this one and maybe a couple more, but none of them seem to apply in my particular situation. My lack of understanding on how subprocess.Popen() works doesn't help either.
What I want to achieve is: launch a subprocess (a command line radio player) that also outputs data to the terminal and can also receive input -- wait for a while -- terminate the subprocess -- exit the shell. I am running Python 2.7 on OS X 10.9.
Case 1.
This launches the radio player (but audio only!), terminates the process, exits.
import subprocess
import time
p = subprocess.Popen(['/bin/bash', '-c', 'mplayer http://173.239.76.147:8090'],
                     stdin=subprocess.PIPE, stdout=subprocess.PIPE, shell=False,
                     stderr=subprocess.STDOUT)
time.sleep(5)
p.kill()
Case 2.
This launches the radio player, outputs information like radio name, song, bitrate, etc. and also accepts input. It terminates the subprocess, but it never exits the shell and the terminal becomes unusable even after using Ctrl-C.
p = subprocess.Popen(['/bin/bash', '-c', 'mplayer http://173.239.76.147:8090'],
                     shell=False)
time.sleep(5)
p.kill()
Any ideas on how to do it? I was even thinking at the possibility of opening a slave-shell for the subprocess if there is no other choice (of course it is also something that I don't have a clue about). Thanks!
It seems like mplayer uses the curses library, and when kill()ing or terminate()ing it, for some reason it doesn't clean up the library state correctly.
To restore the terminal state you can use the reset command.
Demo:
import subprocess, time
p = subprocess.Popen(['mplayer', 'http://173.239.76.147:8090'])
time.sleep(5)
p.terminate()
p.wait() # important!
subprocess.Popen(['reset']).wait()
print('Hello, World!')
In principle it should be possible to use stty sane too, but it doesn't work well for me.
As Sebastian points out, there was a missing wait() call in the above code (now added). With this wait() call and using terminate() the terminal doesn't get messed up (and so there shouldn't be any need for reset).
Without the wait() I sometimes do have problems of mixed output between the python process and mplayer.
Also, a solution specific to mplayer, as pointed out by Sebastian, is to send a q to the stdin of mplayer to quit it.
I leave the code that uses reset because it works with any program that uses the curses library, whether it correctly tears down the library or not, and thus it might be useful in other situations where a clean exit isn't possible.
What I want to achieve is: launch a subprocess (a command line radio player) that also outputs data to the terminal and can also receive input -- wait for a while -- terminate the subprocess -- exit the shell. I am running Python 2.7 on OS X 10.9.
On my system, mplayer accepts keyboard commands e.g., q to stop playing and quit:
#!/usr/bin/env python
import shlex
import time
from subprocess import Popen, PIPE
cmd = shlex.split("mplayer http://www.swissradio.ch/streams/6034.m3u")
p = Popen(cmd, stdin=PIPE)
time.sleep(5)
p.communicate(b'q')
It starts mplayer tuned to public domain classical; waits 5 seconds; asks mplayer to quit and waits for it to exit. The output is going to terminal (the same place where the python script's output goes).
I've also tried p.kill(), p.terminate(), p.send_signal(signal.SIGINT) (Ctrl + C). p.kill() creates the impression that the process hangs. Possible explanation: p.kill() leaves some pipes open e.g., if stdout=PIPE then your Python script might hang at p.stdout.read() i.e., it kills the parent mplayer process but there might be a child process that holds the pipes open. Nothing hangs with p.terminate(), p.send_signal(signal.SIGINT) -- mplayer exits in an orderly manner. None of the variants I've tried require reset.
how should I go about having both input from Python and keyboard? Do I need two different subprocesses and how to redirect the keyboard input to PIPE?
It would be much simpler just to drop stdin=PIPE and call p.terminate(); p.wait() instead of p.communicate(b'q').
If you want to keep stdin=PIPE then the general principle is: read from sys.stdin, write to p.stdin until the timeout happens. Given that mplayer expects one-letter commands, you need to be able to read one character at a time from sys.stdin. The write part is easy: p.stdin.write(c) (set bufsize=0 to avoid buffering on the Python side; mplayer doesn't buffer its stdin, so you don't need to worry about it).
You don't need two different subprocesses. To implement the timeout, you could use threading.Timer(5, p.stdin.write, [b'q']).start() or select.select on sys.stdin with a timeout.
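A minimal sketch of the Timer variant (keyboard forwarding left out; mplayer and the stream URL are taken from the question):
#!/usr/bin/env python
import threading
from subprocess import Popen, PIPE

p = Popen(['mplayer', 'http://173.239.76.147:8090'], stdin=PIPE, bufsize=0)
# after 5 seconds, write 'q' to mplayer's stdin to ask it to quit
threading.Timer(5, p.stdin.write, [b'q']).start()
p.wait()  # the main thread just waits for mplayer to exit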
I guess something using the good old raw_input has nothing to do with it, or?
raw_input() is not suitable for mplayer because it reads full lines, but mplayer expects one character at a time.

Use python subprocess module like a command line simulator

I am writing a test framework in Python for a command line application. The application will create directories, call other shell scripts in the current directory, and write its output to stdout.
I am trying to treat the {Python subprocess, command line} combo as equivalent to {Selenium, browser}. The first component drives the second and checks whether the output is as expected. I am facing the following problems:
The Popen construct takes a command and returns only after that command is completed. What I want is a live handle to the process so I can run further commands + verifications and finally close the shell once done.
I am okay with writing some infrastructure code for achieving this, since we have a lot of command line applications that need testing like this.
Here is a sample code that I am running
p = subprocess.Popen("/bin/bash", cwd = test_dir)
p.communicate(input = "hostname") --> I expect the hostname to be printed out
p.communicate(input = "time") --> I expect current time to be printed out
but the process hangs, or maybe I am doing something wrong. Also, how do I "grab" the output of that subprocess so I can assert that something exists?
subprocess.Popen allows you to continue execution after starting a process. Popen objects expose wait(), poll() and many other methods for communicating with a child process while it is running. Isn't that what you need?
See Popen constructor and Popen objects description for details.
Here is a small example that runs a shell (/bin/sh) on Unix systems and executes a command:
from subprocess import Popen, PIPE
p = Popen(['/bin/sh'], stdout=PIPE, stderr=PIPE, stdin=PIPE)
sout, serr = p.communicate('ls\n')
print 'OUT:'
print sout
print 'ERR:'
print serr
UPD: communicate() waits for process termination. If you do not need that, you may use the appropriate pipes directly, though that usually gives you rather ugly code.
UPD2: You updated the question. Yes, you cannot call communicate twice for a single process. You may either give all commands you need to execute in a single call to communicate and check the whole output, or work with pipes (Popen.stdin, Popen.stdout, Popen.stderr). If possible, I strongly recommend the first solution (using communicate).
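A sketch of that first option, in the same Python 2 style as the example above (note that bash's time builtin reports on stderr):
from subprocess import Popen, PIPE

p = Popen(['/bin/bash'], stdin=PIPE, stdout=PIPE, stderr=PIPE)
# every command in one write; the shell exits once its stdin is closed
out, err = p.communicate('hostname\ntime\n')
print 'OUT:', out   # the hostname shows up here
print 'ERR:', err   # bash's time builtin reports on stderr
Then it is one string to assert against, rather than per-command output.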
If a single communicate() call is not enough, you will have to write a command to the process's input and wait some time for the desired output. What you need is a non-blocking read, to avoid hanging when there is nothing to read. Here is a recipe for emulating non-blocking reads on pipes using threads. The code is ugly and strangely complicated for such a trivial purpose, but that's how it's done.
Another option could be using p.stdout.fileno() in a select.select() call, but that won't work on Windows (on Windows, select operates only on objects originating from WinSock). You may consider it if you are not on Windows.
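A sketch of the select-based variant (Unix only); the one-second timeout is an arbitrary guess at how long to wait for more output after each command:
import os
import select
from subprocess import Popen, PIPE

p = Popen(['/bin/sh'], stdin=PIPE, stdout=PIPE, stderr=PIPE)
p.stdin.write('hostname\n')
p.stdin.flush()

output = ''
while True:
    # wait up to one second for the shell to produce more output
    ready, _, _ = select.select([p.stdout], [], [], 1.0)
    if not ready:
        break                                  # quiet for a second: assume done
    chunk = os.read(p.stdout.fileno(), 4096)   # read straight from the fd
    if not chunk:
        break                                  # pipe closed
    output += chunk
print 'OUT:', output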
Instead of using plain subprocess, you might find the Python sh library very useful:
http://amoffat.github.com/sh/
Here is an example of how to build an asynchronous interaction loop with sh:
http://amoffat.github.com/sh/tutorials/2-interacting_with_processes.html
Another (old) library for solving this problem is pexpect:
http://www.noah.org/wiki/pexpect

Python Printing StdOut As It Received

I'm trying to wrap a simple (Windows) command line tool in a PyQt GUI app that I am writing. The problem I have is that the command line tool writes its progress to stdout (it's a server reset command, so you get "Attempting to stop" and "Restarting" type output).
What I am trying to do is capture the output so I can display it as part of my app. I assumed it would be quite simple to do something like the following :
import os
import subprocess as sub
cmd = "COMMAND LINE APP NAME -ARGS"
proc = sub.Popen(cmd, shell=True, stdout=sub.PIPE).stdout
while 1:
    line = proc.readline()
    if not line:
        break
    print line
This partially works, in that I do get the contents of stdout, but instead of receiving the progress messages as they are sent, I get everything in one go when the command line application exits and flushes stdout.
Is there a simple answer?
Interactive communication through stdin/stdout is a common problem.
You're in luck though, with PyQt you can use QProcess, as described here:
http://diotavelli.net/PyQtWiki/Capturing_Output_from_a_Process
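A minimal sketch of the QProcess route, assuming PyQt4 (as on the linked page) and that your GUI's event loop is already running; "ping" stands in for your command line tool:
from PyQt4.QtCore import QProcess

class ToolRunner(object):
    def __init__(self, parent=None):
        self.proc = QProcess(parent)
        # fires every time the tool prints something, while it is still running
        self.proc.readyReadStandardOutput.connect(self.on_output)
        self.proc.start("ping", ["google.com"])   # placeholder for your tool

    def on_output(self):
        text = str(self.proc.readAllStandardOutput())
        print text   # or append it to a text widget in your GUI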
Do I understand the question?
I believe you're running something like "echo first; sleep 60; echo second" and you want to see "first" well ahead of "second", but they both get spat out at the same time.
The reason you're having issues is that the operating system stores the output of processes in a buffer in its memory. It will only take the trouble of sending the output to your program if the buffer has filled or the other program has ended. So we need to dig into the OS and figure out how to tell it "Hey, gimme that!". This is generally known as asynchronous or non-blocking mode.
Luckily someone has done the hard work for us.
This guy has added send() and recv() methods to the Python built-in Popen class.
It also looks like he fixed the bugs that people found in the comments.
Try it out:
http://code.activestate.com/recipes/440554/

Logging output of external program with (wx)python

I'm writing a GUI for using the Oracle exp/imp commands and starting SQL scripts through sqlplus. The subprocess module makes it easy to launch the commands, but I need some additional functionality. I want to get rid of the command prompt when using my wxPython GUI, but I still need a way to show the output of the exp/imp commands.
I already tried these two methods:
command = "exp userid=user/pwd#nsn file=dump.dmp"
process = subprocess.Popen(command, stdout=subprocess.PIPE)
output = process.communicate()[0]
process = subprocess.Popen(command, stdout=subprocess.PIPE)
process.wait()
output = process.stdout.read()
Through one of these methods (I forget which one) I really did get the output of exp/imp, but only after the command finished, which is quite worthless to me, as I need frequent updates during these potentially long-running operations. And sqlplus caused even more problems, as sqlplus usually wants some input when an error occurs. When this happens, Python waits for the process to finish but the user can't see the prompt, so you don't know how long to wait or what to do...
What I'd like to have is a wrapper that outputs everything I would see on the standard command line. I want to log this to a file and show it inside a wxPython control.
I also tried the code from this page: http://code.activestate.com/recipes/440554/
but this can't read the output either.
The OutputWrapper from this answer doesn't work either: How can I capture all exceptions from a wxPython application?
Any help would be appreciated!
EDIT:
The subprocesses don't seem to flush their output. I already tried it with .readline().
My tool has to run on Windows and Unix, so pexpect is no solution if there's no Windows version. And using cx_Oracle would be extreme overkill, as I would have to rebuild the whole functionality of exp, imp and sqlplus.
The solution is to use a list for your command:
command = ["exp", "userid=user/pwd#nsn", "file=dump.dmp"]
process = subprocess.Popen(command, stdout=subprocess.PIPE)
Then you read process.stdout on a line-by-line basis:
line = process.stdout.readline()
That way you can update the GUI without waiting, provided the subprocess you are running (exp) flushes its output. It is possible that the output is buffered, in which case you won't see anything until the output buffer is full. If that is the case, then you are probably out of luck.
If you're on Linux, check out pexpect. It does exactly what you want.
If you need to work on Windows, maybe you should bite the bullet and use Python bindings to Oracle, such as cx_Oracle, instead of running CL stuff via subprocess.
Are these solutions able to capture stderr as well? I see you have the stdout= option above; how do you make sure to get stderr as well? Another question: is there a way to use import logging / import logging.handlers to capture the command's stdout/stderr? It would be interesting to be able to use the logger with its built-in formatters/rotators, etc.
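One way to do both, sketched here, is to merge stderr into stdout and feed every line to the logging module; the file name and format are arbitrary, and whether lines arrive promptly still depends on the child flushing its output:
import logging
import logging.handlers
import subprocess

logger = logging.getLogger("exp")
logger.setLevel(logging.INFO)
# rotating log file: 1 MB per file, keep 5 old copies
handler = logging.handlers.RotatingFileHandler("exp.log",
                                               maxBytes=1024 * 1024,
                                               backupCount=5)
handler.setFormatter(logging.Formatter("%(asctime)s %(message)s"))
logger.addHandler(handler)

command = ["exp", "userid=user/pwd#nsn", "file=dump.dmp"]
process = subprocess.Popen(command,
                           stdout=subprocess.PIPE,
                           stderr=subprocess.STDOUT)  # stderr merged into stdout
for line in iter(process.stdout.readline, ''):
    logger.info(line.rstrip())
process.wait()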
Try this:
import subprocess
command = "ping google.com"
process = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE)
output = process.stdout
while 1:
    line = output.readline()
    if not line:
        break
    print line,
