Send command and exit using python pty pseudo terminal process - python

Using the Python pty module, I want to send some commands to the terminal emulator, using a function as stdin (as the pty module wants), and then force quit. I thought about something like:
import pty

cmnds = ['exit\n', 'ls -al\n']
# Commands to send. I try 'exit' as the last command, but it doesn't work.

def r(fd):
    if cmnds:
        # It seems the sent commands ('ls -al\n') are not being executed
        return cmnds.pop()
    else:
        # Can I quit here? Can I return EOF?
        pass

pty.spawn('/bin/sh', r)
Thank you

Firstly, the pty module does not allow you to communicate with the terminal emulator Python is running in. Instead, it allows Python to pretend to be a terminal emulator.
Looking at the source-code of pty.spawn(), it looks like it is designed to let a spawned process take over Python's stdin and stdout while it runs, which is not what you want.
If you just want to spawn a shell, send commands to it, and read the output, you probably want Python's subprocess module (in particular, if there's just one command you want to run, the subprocess.Popen class' .communicate() method will be helpful).
If you really, really need the sub-process to be running in a pty instead of a pipe, you can use os.openpty() to allocate a master and a slave file descriptor. Use the slave file descriptor as the subprocess' stdin and stdout, then write your commands to the master file descriptor and read the responses back from it.
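A minimal sketch of that openpty approach (assuming a POSIX system; the two commands are just examples):

import os
import subprocess

# Allocate a pseudo-terminal pair; the child gets the slave end as its tty.
master, slave = os.openpty()
p = subprocess.Popen(['/bin/sh'], stdin=slave, stdout=slave, stderr=slave)
os.close(slave)  # only the child needs the slave end now

os.write(master, b'ls -al\n')
os.write(master, b'exit\n')

# Read back everything the shell prints until it exits.
output = b''
while True:
    try:
        chunk = os.read(master, 1024)
    except OSError:  # many platforms raise EIO once the child closes the pty
        break
    if not chunk:
        break
    output += chunk
p.wait()
print(output.decode())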

Related

Interact with an interactive shell script on python

I have an interactive shell application on Windows.
I would like to write a Python script that will send commands to that shell application and read back responses.
However I want to do it interactively, i.e. I want the shell application to keep running as long as the Python script is.
I have tried
self.m_process = subprocess.Popen(path_to_shell_app, shell=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE, universal_newlines=True)
and then using stdin and stdout to send and receive data.
It seems that the shell application is being opened but I can't communicate with it.
What am I doing wrong?
There is a module that was built just for that: pexpect. To use, import pexpect, and then use process = pexpect.spawn(myprogram) to create new subprocesses, and use process.expect(mystring) or process.expect_exact(mystring) to search for prompts, or to get responses. process.send(myinput) and process.sendline(myinput) are used for sending information to the subprocess.
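For example, a minimal pexpect session might look like this (the program name and the '> ' prompt are placeholders; adjust them to your application):

import pexpect

process = pexpect.spawn('myprogram')   # hypothetical interactive program
process.expect_exact('> ')             # wait for the application's prompt
process.sendline('status')             # send a command
process.expect_exact('> ')             # wait for the next prompt
print(process.before)                  # output printed before that prompt
process.sendline('quit')
process.close()

Note that classic pexpect is Unix-only; on Windows you would need one of its variants.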
Next you should use communicate
stdout, stderr = self.m_process.communicate(input=your_input_here)
From the subprocess module documentation
Popen.communicate(input=None)
Interact with process: Send data to stdin. Read data from stdout and stderr, until end-of-file is reached.
Wait for process to terminate. The optional input argument should be a
string to be sent to the child process, or None, if no data should be
sent to the child.
communicate() returns a tuple (stdoutdata, stderrdata).
Note that if you want to send data to the process’s stdin, you need to
create the Popen object with stdin=PIPE. Similarly, to get anything
other than None in the result tuple, you need to give stdout=PIPE
and/or stderr=PIPE too.
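Putting that together for this case, a minimal sketch (path_to_shell_app is the placeholder from the question):

import subprocess

p = subprocess.Popen(path_to_shell_app, shell=True,
                     stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                     stderr=subprocess.PIPE, universal_newlines=True)

# communicate() sends the input, closes stdin and waits for the process
# to exit, so it can only be called once per process.
stdout, stderr = p.communicate(input='some command\n')
print(stdout)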

subprocess stdin PIPE does not return until program terminates

I have been trying to troubleshoot subprocess.PIPE with subprocesses with no luck.
I'm trying to pass commands to an always running process and receive the results without having to close/open the process each time.
Here is the main launching code:
launcher.py:
import subprocess
import time
command = ['python', 'listener.py']
process = subprocess.Popen(
    command, bufsize=0,
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.STDOUT
)
# simulates sending a new command every 10 seconds
for x in range(1, 10):
    process.stdin.write(b'print\r\n')
    process.stdin.flush()
    time.sleep(10)
listener.py:
import sys
file = open('log.txt', 'w+')
while True:
    file.write(sys.stdin.read(1))
file.close()
This is simplified to show the relevant pieces. In the end I'll have threads listening on stdout and stderr, but for now I'm trying to troubleshoot the basics.
What I expect to happen: for each loop in launcher.py, the file.write() in listener.py would write.
What happens instead: everything writes when the loop closes and the program terminates, or I SIGTERM / CTRL-C the script.
I'm running this on Windows 8, Python 3.4.
It's almost as if stdin buffers until the process closes and then it passes through. I have bufsize=0 set, and I'm flushing, so that doesn't make sense to me. I thought either one or the other would be sufficient.
The subprocess is running in a different process, so the sleep in launcher should have no impact on the subprocess.
Does anyone have any ideas why this is blocking?
Update 1:
The same behaviour is also seen with the following code run from the console (python.exe stdinreader.py)
That is, when you type into the console while the program is running, nothing is written to the file.
stdinreader.py:
import sys
import os
file = open('log.txt', 'w+b')
while True:
    file.write(sys.stdin.read(1))
file.close()
Adding a file.flush() just before file.write() solves this, but that doesn't help me with the subprocess because I don't have control of how subprocess flushes (which would be my return subprocess.PIPE). Maybe if I reinitialize that PIPE with open('wb') it will not buffer. I will try.
Update 2:
I seem to have isolated this problem to the subprocess being called, which is not flushing after its writes to stdout.
Is there anything I can do to force a flush on the stdout PIPE between parent and child without modifying the subprocess? The subprocess is magick.exe (ImageMagick 7) running with args ['-script', '-']. From the point of view of the subprocess it has a stdout object of <_io.TextIOWrapper name='' mode='w' encoding='cp1252'>. I guess the subprocess will just open the default stdout objects on initialization and we can't really control whether it buffers or not.
The strange thing is that passing the child the normal sys.stdout object instead of subprocess.PIPE does not require the subprocess to .flush() after write.
Programs run differently depending on whether they are run from the console or through a pipe. If the console (a Python process can check with sys.stdin.isatty()), stdout data is line buffered and you see data promptly. If a pipe, stdout data is block buffered and you only see data when quite a bit has piled up or the program flushes the pipe.
When you want to grab program output, you have to use a pipe, and then the program runs in block-buffered mode. On Linux, you can trick programs by creating a fake console (pseudo-tty, pty, ...). The pty module, pexpect and others do that.
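A rough sketch of that pty trick on Linux ('some_chatty_program' is a placeholder for the block-buffering child):

import os
import subprocess

master, slave = os.openpty()
p = subprocess.Popen(['some_chatty_program'], stdout=slave, stderr=slave)
os.close(slave)

try:
    while True:
        data = os.read(master, 1024)  # now arrives line by line
        if not data:
            break
        print(data.decode(), end='')
except OSError:
    pass  # the pty raises EIO once the child closes its end
p.wait()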
On Windows, I don't know of any way to get it to work. If you control the program being run, have it flush often. Otherwise, glare futilely at the Windows logo. You can even mention the problem on your next blind date if you want it to end early. But I can't think of anything more.
(if somebody knows of a fix, I'd like to hear it. I've seen some code that tries to open a Windows console and screen scrape it, but those solutions keep losing data. It should work if there is a loopback char device out there somewhere).
The problem was that the subprocess being called was not flushing after writing to stdout. Thanks to J.F. and tdelaney for pointing me in the right direction. I have raised this with the developer here: http://www.imagemagick.org/discourse-server/viewtopic.php?f=2&t=26276&p=115545#p115545
There doesn't appear to be a work-around for this in Windows other than to alter the subprocess source. Perhaps if you redirected the output of the subprocess to a NamedTemporaryFile that might work, but I have not tested it and I think it would be locked in Windows so only one of the parent and child could open it at once. Not insurmountable but annoying. There might also be a way to exec the application through the unixutils port of stdbuf or something similar, as J.F. suggested here: Python C program subprocess hangs at "for line in iter"
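An untested sketch of that temporary-file idea (per the caveat above, Windows file locking may get in the way; the stdin line is a placeholder):

import subprocess
import tempfile
import time

out = tempfile.NamedTemporaryFile(delete=False)
p = subprocess.Popen(['magick.exe', '-script', '-'],
                     stdin=subprocess.PIPE, stdout=out)
p.stdin.write(b'... some script command ...\n')  # placeholder input
p.stdin.flush()
time.sleep(1)

# Try to read what the child has written so far through a second handle;
# this is exactly the part that may fail due to locking on Windows.
with open(out.name, 'rb') as f:
    print(f.read())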
If you have access to the source code of the subprocess you're calling you can always recompile it with buffering disabled. It's simple to disable buffering on stdout in C:
setbuf(stdout, NULL);
or set per-line buffering instead of block buffering:
setvbuf(stdout, (char *) NULL, _IOLBF, 0);
See also: Python C program subprocess hangs at "for line in iter"
Hope this helps someone else down the road.
Can you try to close the pipe at the end of listener.py? I think that is the issue.

Use python subprocess module like a command line simulator

I am writing a test framework in Python for a command line application. The application will create directories, call other shell scripts in the current directory, and write output to stdout.
I am trying to treat the {Python-SubProcess, CommandLine} combo as equivalent to {Selenium, Browser}. The first component drives the second and checks whether the output is as expected. I am facing the following problems:
The Popen construct takes a command and returns after that command is completed. What I want is a live handle to the process so I can run further commands + verifications and finally close the shell once done.
I am okay with writing some infrastructure code for achieving this since we have a lot of command line applications that need testing like this.
Here is some sample code that I am running:
p = subprocess.Popen("/bin/bash", cwd = test_dir)
p.communicate(input = "hostname") --> I expect the hostname to be printed out
p.communicate(input = "time") --> I expect current time to be printed out
but the process hangs, or maybe I am doing something wrong. Also, how do I "grab" the output of that subprocess so I can assert that something exists?
subprocess.Popen allows you to continue execution after starting a process. Popen objects expose wait(), poll() and many other methods for communicating with a child process while it is running. Isn't that what you need?
See Popen constructor and Popen objects description for details.
Here is a small example that runs a shell on Unix systems and executes a command:
from subprocess import Popen, PIPE
p = Popen(['/bin/sh'], stdout=PIPE, stderr=PIPE, stdin=PIPE)
sout, serr = p.communicate('ls\n')
print 'OUT:'
print sout
print 'ERR:'
print serr
UPD: communicate() waits for process termination. If you do not need that, you may use the appropriate pipes directly, though that usually gives you rather ugly code.
UPD2: You updated the question. Yes, you cannot call communicate twice for a single process. You may either give all commands you need to execute in a single call to communicate and check the whole output, or work with pipes (Popen.stdin, Popen.stdout, Popen.stderr). If possible, I strongly recommend the first solution (using communicate).
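For instance, a sketch of the single-communicate approach, batching all commands into one input string (the two commands are just examples):

from subprocess import Popen, PIPE

p = Popen(['/bin/bash'], stdin=PIPE, stdout=PIPE, stderr=PIPE,
          universal_newlines=True)

# All commands go in at once; communicate() then closes stdin (which
# makes the shell exit) and collects the complete output.
out, err = p.communicate('hostname\ndate\n')
print(out)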
Otherwise you will have to put a command to the input and wait for some time for the desired output. What you need is a non-blocking read to avoid hanging when there is nothing to read. Here is a recipe for emulating non-blocking mode on pipes using threads. The code is ugly and strangely complicated for such a trivial purpose, but that's how it's done.
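The core of that recipe is a reader thread feeding a queue that the main loop can poll without blocking; a condensed sketch:

import subprocess
import threading
import queue

def enqueue_output(pipe, q):
    for line in iter(pipe.readline, ''):
        q.put(line)
    pipe.close()

p = subprocess.Popen(['/bin/bash'], stdin=subprocess.PIPE,
                     stdout=subprocess.PIPE, universal_newlines=True)
q = queue.Queue()
threading.Thread(target=enqueue_output, args=(p.stdout, q), daemon=True).start()

p.stdin.write('hostname\n')
p.stdin.flush()
try:
    print('got:', q.get(timeout=2))  # give up after 2 seconds
except queue.Empty:
    print('no output yet')
p.stdin.close()
p.wait()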
Another option could be using p.stdout.fileno() for select.select() call, but that won't work on Windows (on Windows select operates only on objects originating from WinSock). You may consider it if you are not on Windows.
Instead of using plain subprocess you might find the Python sh library very useful:
http://amoffat.github.com/sh/
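A tiny taste of sh's basic style (assuming the library is installed; commands become Python functions):

import sh

# Run 'ls -al' and get its output back as a string.
print(sh.ls('-al'))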
Here is an example of how to build an asynchronous interaction loop with sh:
http://amoffat.github.com/sh/tutorials/2-interacting_with_processes.html
Another (old) library for solving this problem is pexpect:
http://www.noah.org/wiki/pexpect

Preserving bash redirection in a python subprocess

To begin with, I am only allowed to use Python 2.4.4.
I need to write a process controller in Python which launches various subprocesses and monitors how they affect the environment. Each of these subprocesses is itself a Python script.
When executed from the unix shell, the command lines look something like this:
python myscript arg1 arg2 arg3 >output.log 2>err.log &
I am not interested in the input or the output; Python does not need to process it. The Python program only needs to know
1) The pid of each process
2) Whether each process is running.
And the processes run continuously.
I have tried reading in the output and just sending it out to a file again, but then I run into issues with readline not being asynchronous, for which there are several answers, many of them very complex.
How can I formulate a Python subprocess call that preserves the bash redirection operations?
Thanks
If I understand your question correctly, it sounds like what you are looking for here is to be able to launch a list of scripts with the output redirected to files. In that case, launch each of your tasks something like this:
task = subprocess.Popen(['python', 'myscript', 'arg1', 'arg2', 'arg3'],
                        stdout=open('output.log', 'w'), stderr=open('err.log', 'w'))
Doing this means that the subprocess's stdout and stderr are redirected to files that the monitoring process opened, but the monitoring process does not have to be involved in copying data around. You can also redirect the subprocess stdins as well, if needed.
Note that you'll likely want to handle error cases and such, which aren't handled in this example.
You can use existing file descriptors as the stdout/stderr arguments to subprocess.Popen. This should be equivalent to running with redirection from bash. That redirection is implemented with dup2(2) after fork, and the output never touches your program. You can probably also pass open('/dev/null') as a file object.
Alternatively you can redirect the stdout/stderr of your controller program and pass None as stdout/stderr. Children will then print to your controller's stdout/stderr without the data passing through Python itself. This works because the children inherit the stdin/stdout descriptors of the controller, which were redirected by bash at launch time.
The subprocess module is good.
You can also do this on *ix with os.fork() and a periodic os.waitpid() with os.WNOHANG.
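A bare-bones sketch of that approach (POSIX only; the script name and log paths echo the question):

import os
import time

pid = os.fork()
if pid == 0:
    # Child: redirect stdout/stderr to files, then become the script.
    os.dup2(os.open('output.log', os.O_WRONLY | os.O_CREAT), 1)
    os.dup2(os.open('err.log', os.O_WRONLY | os.O_CREAT), 2)
    os.execvp('python', ['python', 'myscript', 'arg1', 'arg2', 'arg3'])

# Parent: poll the child without blocking.
while True:
    done, status = os.waitpid(pid, os.WNOHANG)
    if done:
        print('child %d exited with status %d' % (pid, status))
        break
    time.sleep(1)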

Blocking writing to stdout

I'm writing a Python script that will use subprocesses. The main idea is to have one parent script that runs specialised child scripts, which e.g. run other programs or do some stuff on their own. There are pipes between the parent script and the subprocesses. I use them to check whether a subprocess is still responding, by sending some characters on a regular basis and checking the response. The problem is that when the subprocess prints anything on screen (i.e. writes to stdout or stderr), the pipes are broken and everything crashes. So my main question is whether it is possible to block writing to std* in the subprocess, so that only legitimate responses written to the pipe would be possible? I have already tried Stop a function from writing to stdout but without any success.
Also, other ideas for communication between parent and subprocess are welcome (except file-based pipes). However, the subprocesses must be used.
I strongly believe that you do not just have to accept "that when the subprocess prints anything on screen (i.e. writes to stdout or stderr), the pipes are broken and everything crashes". You can solve this problem. Then you do not need to "block" the subprocesses from writing to standard streams.
Make proper use of all the power of the subprocess module. First of all, connect a subprocess.PIPE to each of the standard streams of a subprocess:
p = subprocess.Popen(
    [executable, arg1, arg2],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE)
Run the subprocess and interact with it through those pipes:
stdout, stderr = p.communicate(input="command")
If communicate() is not flexible enough (if you need to monitor several subprocesses at the same time and/or if the stdin data to a certain subprocess depends on its output in response to a previous command) you can directly interact with the p.stdout, p.stderr, p.stdin attributes. In this case, you will likely have to build your own monitoring loop and make use of p.poll() and/or p.returncode. Controlling the subprocesses can also be realized via p.send_signal().
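A skeleton of the resulting monitoring loop might look like this, continuing with the p created above (the b'ping'/b'pong' probe is a placeholder for your own protocol):

import signal
import time

while p.poll() is None:              # None means the child is still alive
    p.stdin.write(b'ping\n')
    p.stdin.flush()
    reply = p.stdout.readline()      # note: blocks until the child answers
    if reply.strip() != b'pong':
        p.send_signal(signal.SIGTERM)
        break
    time.sleep(1)
print('child exited with', p.returncode)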
You can pass a function to subprocess.Popen that is executed prior to executing the requested program:
import os

def close_std():
    os.close(0)
    os.close(1)
    os.close(2)

p = subprocess.Popen(cmd, preexec_fn=close_std)
Note the use of low-level os.close; closing sys.std* will only have effect in the forked Python process. Also, be aware that if your underlying programs are Python scripts, they may die due to an exception when they try to write to closed file descriptors.
