Interact with an interactive shell script from Python - python

I have an interactive shell application on Windows.
I would like to write a Python script that sends commands to that shell application and reads back the responses.
However, I want to do it interactively, i.e. I want the shell application to keep running as long as the Python script does.
I have tried
self.m_process = subprocess.Popen(path_to_shell_app, shell=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE, universal_newlines=True)
and then using stdin and stdout to send and receive data.
It seems that the shell application is being opened, but I can't communicate with it.
What am I doing wrong?

There is a module that was built just for that: pexpect. To use it, import pexpect, then create the subprocess with process = pexpect.spawn(myprogram). Use process.expect(mystring) or process.expect_exact(mystring) to wait for prompts or responses, and process.send(myinput) or process.sendline(myinput) to send input to the subprocess.
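A minimal sketch of that flow, driving an interactive Python interpreter as a stand-in for the asker's shell application. Note that pexpect.spawn is Unix-only; on Windows (as in the question) you would need something like pexpect.popen_spawn.PopenSpawn instead:

```python
import pexpect

# Spawn an interactive interpreter; any interactive console program
# with a recognizable prompt would work the same way.
child = pexpect.spawn("python3 -i", encoding="utf-8", timeout=10)
child.expect(">>> ")        # wait for the interpreter's prompt
child.sendline("1 + 1")     # send a command to the running process
child.expect(">>> ")        # wait for the next prompt
print(child.before)         # everything printed before that prompt
child.sendline("exit()")
child.close()
```

The key point is that the child stays alive between expect/sendline calls, which is exactly the interactive back-and-forth the question asks for.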

Next, you should use communicate():
stdout, stderr = self.m_process.communicate(input=your_input_here)
From the subprocess module documentation:
Popen.communicate(input=None)
Interact with process: Send data to stdin. Read data from stdout and stderr, until end-of-file is reached.
Wait for process to terminate. The optional input argument should be a
string to be sent to the child process, or None, if no data should be
sent to the child.
communicate() returns a tuple (stdoutdata, stderrdata).
Note that if you want to send data to the process’s stdin, you need to
create the Popen object with stdin=PIPE. Similarly, to get anything
other than None in the result tuple, you need to give stdout=PIPE
and/or stderr=PIPE too.
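As a runnable sketch (using a small Python one-liner in place of the asker's shell application), and with the caveat the docs hint at: communicate() is a single exchange, since it closes stdin and waits for the child to exit, so it does not keep an interactive session alive:

```python
import subprocess

# One-shot exchange: communicate() sends the input, reads until EOF,
# and waits for the child to terminate.
proc = subprocess.Popen(
    ["python3", "-c", "print(input().upper())"],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    universal_newlines=True,
)
stdout, stderr = proc.communicate(input="hello\n")
print(stdout)  # HELLO
```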

Related

Python subprocess: Start a process with a `stdin` that doesn't close

I'd like to start a new process using the subprocess module. I'd like the stdin to be a stream that never sends anything but also doesn't terminate. Basically the same kind of input stream that a program would get if I were to launch it in the shell and not type anything, ever. Is that possible?
Pass stdin=subprocess.PIPE to the Popen constructor, and then avoid doing anything -- like calling communicate() -- which would later close that FIFO.
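A small sketch of that idea, using a Python one-liner as the child: its stdin is a pipe we never write to or close, so a read on it blocks indefinitely rather than hitting end-of-file, which is exactly the "silent, never-ending" input stream described:

```python
import subprocess
import time

# The child blocks reading stdin; since we neither write nor close the
# pipe, it never sees EOF and never produces its output.
proc = subprocess.Popen(
    ["python3", "-c", "import sys; sys.stdin.read(); print('eof')"],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
)
time.sleep(0.5)
alive = proc.poll()     # None: still running, blocked on its read
print(alive)
proc.stdin.close()      # only closing the pipe finally delivers EOF
proc.wait()
```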

Python send keystrokes to exe file

I need to send keystrokes to an exe using Python. I can run the exe from Python with subprocess:
import subprocess
subprocess.Popen('myfile.exe', stdout=subprocess.PIPE)
but how can I keep the connection open and keep sending keys? I don't want to read back from the exe, just send some keystrokes. Any suggestions?
Use stdin=subprocess.PIPE and Popen.communicate():
Interact with process: Send data to stdin. Read data from stdout and
stderr, until end-of-file is reached. Wait for process to terminate.
The optional input argument should be data to be sent to the child
process, or None, if no data should be sent to the child. If streams
were opened in text mode, input must be a string. Otherwise, it must
be bytes.
communicate() returns a tuple (stdout_data, stderr_data). The data
will be strings if streams were opened in text mode; otherwise, bytes.
Note that if you want to send data to the process’s stdin, you need to
create the Popen object with stdin=PIPE. Similarly, to get anything
other than None in the result tuple, you need to give stdout=PIPE
and/or stderr=PIPE too.
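Since communicate() can only be called once, repeatedly sending "keystrokes" means writing to proc.stdin directly and flushing after each write. A sketch, with a small Python child standing in for the asker's myfile.exe so it runs anywhere:

```python
import subprocess

# The child just consumes lines from stdin; each write below plays the
# role of one batch of keystrokes.
proc = subprocess.Popen(
    ["python3", "-c",
     "import sys\nfor line in sys.stdin: print('got', line.strip())"],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    universal_newlines=True,
)
for key in ["a", "b", "quit"]:
    proc.stdin.write(key + "\n")
    proc.stdin.flush()          # flush so the child sees it promptly
out, _ = proc.communicate()     # finally close stdin and collect output
print(out)
```

Note that this only sends data over the child's stdin pipe; a GUI program that reads real keyboard events rather than stdin would need a different mechanism entirely.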

What is the difference if I don't use stdout=subprocess.PIPE in subprocess.Popen()?

I recently noticed that in Python, subprocess.Popen() has an argument:
stdout=None (default)
I have also seen people using stdout=subprocess.PIPE.
What is the difference? Which one should I use?
Another question: why can't the wait() function sometimes wait until the process is really done? I used:
a = sp.Popen(....,shell=True)
a.wait()
a2 = sp.Popen(...,shell=True)
a2.wait()
Sometimes the a2 command is executed before command a is done.
stdout=None means the stdout handle of the process is inherited directly from the parent; in simpler terms, it basically means the output gets printed to the console (the same applies to stderr).
Then you have the option stderr=STDOUT, which redirects stderr to stdout, meaning the output of stdout and stderr is forwarded to the same file handle.
If you set stdout=PIPE, Python will redirect the data from the process to a new file handle, which can be accessed through p.stdout (p being a Popen object). You would use this to capture the output of the process, or, in the case of stdin, to send data (constantly) to stdin.
But mostly you want to use p.communicate, which allows you to send data to the process once (if you need to) and returns the complete stderr and stdout once the process has completed!
One more interesting fact: you can pass any file object to stdin/stderr/stdout, e.g. also a file opened with open() (the object has to provide a fileno() method).
As for your wait problem: this should not be the case! As a workaround you could use p.poll() to check whether the process has exited. What is the return value of the wait call?
Furthermore, you should avoid shell=True, especially if you pass user input as the first argument; this could be used by a malicious user to exploit your program! It also launches a shell process, which means additional overhead. Of course, there is the 1% of cases where you actually need shell=True; I can't judge this from your minimalistic example.
stdout=None means the subprocess prints to whatever place your script prints.
stdout=PIPE means the subprocess' stdout is redirected to a pipe that you should read, e.g. using process.communicate() to read everything at once, or using the process.stdout object to read via a file/iterator interface.
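The difference is easy to see side by side; a sketch using subprocess.run with Python one-liners as the children:

```python
import subprocess

# stdout=None (the default): the child inherits our stdout, so its
# output goes to the console and nothing is captured.
inherited = subprocess.run(["python3", "-c", "print('to console')"])
print(inherited.stdout)  # None

# stdout=PIPE: the output is captured instead of printed.
captured = subprocess.run(
    ["python3", "-c", "print('captured')"],
    stdout=subprocess.PIPE,
    universal_newlines=True,
)
print(captured.stdout.strip())  # captured
```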

Repeatedly write to STDIN and read STDOUT of a Subprocess without closing it

I am trying to employ a Subprocess in Python for keeping an external script open in a Server-like fashion. The external script first loads a model. Once this is done, it accepts requests via STDIN and returns processed strings to STDOUT.
So far, I've tried
tokenizer = subprocess.Popen([tokenizer_path, '-l', lang_prefix], stdin=subprocess.PIPE, stdout=subprocess.PIPE)
However, I cannot use
tokenizer.stdin.write(input_string + '\n')
out = tokenizer.stdout.readline()
to repeatedly process input_strings by means of the subprocess -- out will just be empty, no matter whether I use stdout.read() or stdout.readline(). However, it works when I close stdin with tokenizer.stdin.close() before reading STDOUT, but this closes the subprocess, which is not what I want, as I would have to reload the whole external script before sending another request.
Is there any way to use a subprocess in a server-like fashion in python without closing and re-opening it?
Thanks to this answer, I found out that a slave handle must be used in order to properly communicate with the subprocess:
import os
import pty
import subprocess

master, slave = pty.openpty()
tokenizer = subprocess.Popen(script, shell=True, stdin=subprocess.PIPE, stdout=slave)
stdin_handle = tokenizer.stdin
stdout_handle = os.fdopen(master)
Now I can communicate with the subprocess without closing it via
stdin_handle.write(input)
stdout_handle.readline()  # gets the processed input
Your external script probably buffers its output, so you can only read it in the parent when the buffer in the child is flushed (which the child must do itself). One way to make it flush its buffers is probably closing the input, because then it terminates in a proper fashion and flushes its buffers in the process.
If you have control over the external program (i.e. if you can patch it), insert a flush after the output is produced.
Otherwise, programs can sometimes be made to not buffer their output by attaching them to a pseudo-TTY (many programs, including anything using the C stdlib's stdio, assume that when their output is going to a TTY, no buffering is wanted). But this is a bit tricky.
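To illustrate the flushing point: when the child is unbuffered (here via `python3 -u`, standing in for the asker's tokenizer), repeated write/readline round trips over plain pipes work fine, with no pty needed:

```python
import subprocess

# Long-running child that echoes each request back uppercased; -u runs
# it unbuffered, so every reply is pushed out immediately.
proc = subprocess.Popen(
    ["python3", "-u", "-c",
     "import sys\nfor line in sys.stdin: print(line.strip().upper())"],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    universal_newlines=True,
    bufsize=1,                  # line-buffer the parent's side too
)
replies = []
for word in ["one", "two"]:
    proc.stdin.write(word + "\n")
    proc.stdin.flush()          # push the request through the pipe
    replies.append(proc.stdout.readline().strip())
print(replies)  # ['ONE', 'TWO']
proc.stdin.close()
proc.wait()
```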

Send command and exit using python pty pseudo terminal process

Using the Python pty module, I want to send some commands to the terminal emulator, using a function as stdin (as the pty module wants), and then force quitting. I thought about something like
import pty

cmnds = ['exit\n', 'ls -al\n']
# Commands to send. I try exiting as the last command, but it doesn't work.

def r(fd):
    if cmnds:
        cmnds.pop()
        # It seems it is not executing the sent commands ('ls -al\n')
    else:
        # Can I quit here? Can I return EOF?
        pass

pty.spawn('/bin/sh', r)
Thank you
Firstly, the pty module does not allow you to communicate with the terminal emulator Python is running in. Instead, it allows Python to pretend to be a terminal emulator.
Looking at the source code of pty.spawn(), it looks like it is designed to let a spawned process take over Python's stdin and stdout while it runs, which is not what you want.
If you just want to spawn a shell, send commands to it, and read the output, you probably want Python's subprocess module (in particular, if there's just one command you want to run, the subprocess.Popen class' .communicate() method will be helpful).
If you really, really need the sub-process to be running in a pty instead of a pipe, you can use os.openpty() to allocate a master and a slave file descriptor. Use the slave file descriptor as the subprocess' stdin and stdout, then write your commands to the master file descriptor and read the responses back from it.
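A sketch of that last arrangement (Unix-only, with a Python one-liner standing in for the real shell): the child gets the slave end as its stdout, so it believes it is writing to a terminal, while the parent writes commands to its stdin and reads the responses back from the master end:

```python
import os
import subprocess

master, slave = os.openpty()        # allocate a pseudo-terminal pair
proc = subprocess.Popen(
    ["python3", "-c", "print(input().upper())"],
    stdin=subprocess.PIPE,
    stdout=slave,                   # the child writes to the slave end
)
os.close(slave)                     # parent keeps only the master end
proc.stdin.write(b"hello\n")        # send a command to the child
proc.stdin.flush()
reply = os.read(master, 1024).decode()  # read the response back
print(reply.strip())
proc.stdin.close()
proc.wait()
os.close(master)
```

One detail to expect: a pty converts the child's newlines to \r\n, so responses read from the master end usually need stripping.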
