Send keystrokes to an exe file from Python

I need to send keystrokes to an exe using Python. I can run the exe from Python with subprocess as
import subprocess
subprocess.Popen('myfile.exe', stdout=subprocess.PIPE)
but how can I keep the connection open and keep sending keys? I don't want to read back from the exe, just to send some keystrokes. Any suggestions?

Use stdin=subprocess.PIPE and Popen.communicate():
Interact with process: Send data to stdin. Read data from stdout and
stderr, until end-of-file is reached. Wait for process to terminate.
The optional input argument should be data to be sent to the child
process, or None, if no data should be sent to the child. If streams
were opened in text mode, input must be a string. Otherwise, it must
be bytes.
communicate() returns a tuple (stdout_data, stderr_data). The data
will be strings if streams were opened in text mode; otherwise, bytes.
Note that if you want to send data to the process’s stdin, you need to
create the Popen object with stdin=PIPE. Similarly, to get anything
other than None in the result tuple, you need to give stdout=PIPE
and/or stderr=PIPE too.
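
A minimal sketch, assuming myfile.exe reads its commands from stdin:

import subprocess

p = subprocess.Popen('myfile.exe',
                     stdin=subprocess.PIPE,
                     stdout=subprocess.PIPE)
# communicate() sends everything in one shot, closes stdin,
# and waits for the process to exit
out, err = p.communicate(input=b'some keystrokes\n')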

Related

Interact with an interactive shell script on python

I have an interactive shell application on Windows.
I would like to write a Python script that will send commands to that shell application and read back responses.
However, I want to do it interactively, i.e. I want the shell application to keep running as long as the Python script does.
I have tried
self.m_process = subprocess.Popen(path_to_shell_app, shell=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE, universal_newlines=True)
and then using stdin and stdout to send and receive data.
It seems that the shell application is being opened, but I can't communicate with it.
What am I doing wrong?
There is a module that was built just for that: pexpect. To use, import pexpect, and then use process = pexpect.spawn(myprogram) to create new subprocesses, and use process.expect(mystring) or process.expect_exact(mystring) to search for prompts, or to get responses. process.send(myinput) and process.sendline(myinput) are used for sending information to the subprocess.
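A minimal sketch; note that pexpect.spawn is POSIX-only, so on the asker's Windows box you would reach for pexpect.popen_spawn.PopenSpawn or the wexpect package instead. 'myapp' and the '> ' prompt below are placeholders for the real shell application:

import pexpect

# start the interactive program; encoding='utf-8' gives us str, not bytes
child = pexpect.spawn('myapp', encoding='utf-8')
child.expect_exact('> ')    # block until its prompt appears
child.sendline('status')    # send a command followed by a newline
child.expect_exact('> ')    # wait for the next prompt
print(child.before)         # everything the app printed before the prompt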
Alternatively, you can use communicate():
stdout, stderr = self.m_process.communicate(input=your_input_here)
From the subprocess module documentation
Popen.communicate(input=None)
Interact with process: Send data to stdin. Read data from stdout and stderr, until end-of-file is reached.
Wait for process to terminate. The optional input argument should be a
string to be sent to the child process, or None, if no data should be
sent to the child.
communicate() returns a tuple (stdoutdata, stderrdata).
Note that if you want to send data to the process’s stdin, you need to
create the Popen object with stdin=PIPE. Similarly, to get anything
other than None in the result tuple, you need to give stdout=PIPE
and/or stderr=PIPE too.
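
For the Popen call in the question (text mode via universal_newlines=True), input must be a string. Note also that communicate() performs the whole exchange in one shot and waits for the process to exit, so for an ongoing back-and-forth you would need one of the approaches in the following questions. A hypothetical one-shot session ('list' and 'exit' being made-up commands for the shell application):

stdout, stderr = self.m_process.communicate(input='list\nexit\n')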

subprocess.communicate - read lines that are not newline terminated

I'm writing a Python program that uses the subprocess module to communicate with the admin interface of an appliance over ssh. Sometimes the appliance prompts for input with a line that's not newline terminated. How do I get subprocess.communicate() to return those lines to me? Is there a way to read unbuffered and character-by-character? The amount of I/O generated is pretty small, so I'm not concerned about high overhead here.
Opening the process with bufsize=0 will turn off buffering on Python's side of the pipes, according to the subprocess docs. Note that communicate() waits for the process to terminate before it returns any of the command's output, so to see a prompt as it appears you'll have to read from proc.stdout yourself, e.g. character by character. (A purely in-memory object like a StringIO won't work as Popen's stdout or stderr, by the way: whatever you pass there must have a real file descriptor, i.e. a working fileno() method.)
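A sketch of that byte-by-byte loop; the ssh command line and the 'Password: ' prompt are stand-ins for the real appliance session:

import subprocess

proc = subprocess.Popen(['ssh', 'admin@appliance'],
                        stdin=subprocess.PIPE,
                        stdout=subprocess.PIPE,
                        bufsize=0)                 # unbuffered on our side

buf = b''
while True:
    ch = proc.stdout.read(1)            # one byte at a time; b'' means EOF
    if not ch:
        break
    buf += ch
    if buf.endswith(b'Password: '):     # a prompt with no trailing newline
        proc.stdin.write(b'secret\n')
        buf = b''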

Live reading / writing to a subprocess stdin/stdout

I want to make a Python wrapper for another command-line program.
I want to read Python's stdin as quickly as possible, filter and translate it, and then write it promptly to the child program's stdin.
At the same time, I want to be reading as quickly as possible from the child program's stdout and, after a bit of massaging, writing it promptly to Python's stdout.
The Python subprocess module is full of warnings to use communicate() to avoid deadlocks. However, communicate() doesn't give me access to the child program's stdout until the child has terminated.
I think you'll be fine (carefully) ignoring the warnings and using Popen.stdin etc. yourself. Just be sure to process the streams line by line and iterate through them on a fair schedule so as not to fill up any buffers. A relatively simple (and inefficient) way of doing this in Python is to use a separate thread for each of the three streams. That's how Popen.communicate does it internally; check out its source code to see how.
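A rough sketch of that thread-per-stream pattern, with one thread per stream we touch (stdin and stdout); 'some_program' is a placeholder for the wrapped program and upper-casing stands in for the real filtering:

import subprocess
import sys
import threading

proc = subprocess.Popen(['some_program'],
                        stdin=subprocess.PIPE,
                        stdout=subprocess.PIPE,
                        universal_newlines=True,
                        bufsize=1)              # line-buffered text pipes

def pump_in():
    # read our stdin line by line, filter, and forward to the child
    for line in sys.stdin:
        proc.stdin.write(line.upper())          # stand-in for real filtering
        proc.stdin.flush()
    proc.stdin.close()                          # EOF tells the child we're done

def pump_out():
    # read the child's stdout line by line, massage, and forward to ours
    for line in proc.stdout:
        sys.stdout.write(line)
        sys.stdout.flush()

threading.Thread(target=pump_in, daemon=True).start()
out_thread = threading.Thread(target=pump_out)
out_thread.start()
out_thread.join()
proc.wait()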
Disclaimer: This solution likely requires that you have access to the source code of the process you are trying to call, but may be worth trying anyway. It depends on the called process periodically flushing its stdout buffer, which is not standard.
Say you have a process proc created by subprocess.Popen. proc has attributes stdin and stdout. These attributes are simply file-like objects. So, in order to send information through stdin you would call proc.stdin.write(). To retrieve information from proc.stdout you would call proc.stdout.readline() to read an individual line.
A couple of caveats:
When writing to proc.stdin via write() you will need to end the input with a newline character. Without a newline character, your subprocess will hang until a newline is passed.
In order to read information from proc.stdout you will need to make sure that the command called by subprocess appropriately flushes its stdout buffer after each print statement and that each line ends with a newline. If the stdout buffer does not flush at appropriate times, your call to proc.stdout.readline() will hang.
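Putting both caveats together, a hypothetical round trip looks like this; 'upper.py' stands in for a line-based child program that flushes after every line it prints:

import subprocess

proc = subprocess.Popen(['python', 'upper.py'],
                        stdin=subprocess.PIPE,
                        stdout=subprocess.PIPE,
                        universal_newlines=True)

proc.stdin.write('hello\n')    # must end in a newline (first caveat)
proc.stdin.flush()             # push it through our side of the pipe
print(proc.stdout.readline())  # blocks unless the child flushes (second caveat)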

What is the difference if I don't use stdout=subprocess.PIPE in subprocess.Popen()?

I recently noticed that subprocess.Popen() in Python has an argument:
stdout=None (default)
I have also seen people using stdout=subprocess.PIPE.
What is the difference? Which one should I use?
Another question: why does the wait() function sometimes fail to wait until the process is really done? I used:
a = sp.Popen(....,shell=True)
a.wait()
a2 = sp.Popen(...,shell=True)
a2.wait()
Sometimes the a2 command is executed before command a is done.
stdout=None means the stdout handle of the process is inherited directly from the parent; in simpler words, the output gets printed to your console (the same applies to stderr).
Then you have the option stderr=STDOUT, which redirects stderr to stdout, so the output of both is forwarded to the same file handle.
If you set stdout=PIPE, Python will redirect the data from the process to a new file handle, which can be accessed through p.stdout (p being a Popen object). You would use this to capture the output of the process; likewise, stdin=PIPE lets you send data (constantly) to the process's stdin.
But mostly you want to use p.communicate, which allows you to send data to the process once (if you need to) and returns the complete stderr and stdout once the process has completed!
One more interesting fact: you can pass any file object to stdin/stderr/stdout, e.g. a file opened with open() (the object has to provide a fileno() method).
As for your wait() problem: that should not happen! As a workaround you could use p.poll() to check whether the process has exited. What is the return value of the wait() call?
Furthermore, you should avoid shell=True, especially if you pass user input as the first argument; this could be used by a malicious user to exploit your program! It also launches a shell process, which means additional overhead. Of course there is the 1% of cases where you actually need shell=True; I can't judge that from your minimalistic example.
stdout=None means that the subprocess prints to whatever place your script prints.
stdout=PIPE means that the subprocess' stdout is redirected to a pipe that you should read, e.g. using process.communicate() to read everything at once, or via the process.stdout object, which supports file/iterator interfaces.
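
To see the difference side by side (a sketch, assuming a POSIX echo on the PATH):

import subprocess

# stdout=None (the default): the child inherits our stdout,
# so 'hello' appears directly on the console
subprocess.Popen(['echo', 'hello']).wait()

# stdout=PIPE: the output is captured and handed back to us
p = subprocess.Popen(['echo', 'hello'], stdout=subprocess.PIPE)
out, _ = p.communicate()
print(out)    # b'hello\n'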

Repeatedly write to STDIN and read STDOUT of a Subprocess without closing it

I am trying to employ a Subprocess in Python for keeping an external script open in a Server-like fashion. The external script first loads a model. Once this is done, it accepts requests via STDIN and returns processed strings to STDOUT.
So far, I've tried
tokenizer = subprocess.Popen([tokenizer_path, '-l', lang_prefix], stdin=subprocess.PIPE, stdout=subprocess.PIPE)
However, I cannot use
tokenizer.stdin.write(input_string + '\n')
out = tokenizer.stdout.readline()
to repeatedly process input_strings by means of the subprocess: out will just be empty, no matter whether I use stdout.read() or stdout.readline(). It does work when I close the stdin with tokenizer.stdin.close() before reading STDOUT, but that closes the subprocess, which is not what I want, as I would have to reload the whole external script before sending another request.
Is there any way to use a subprocess in a server-like fashion in python without closing and re-opening it?
Thanks to this Answer, I found out that a slave handle must be used in order to properly communicate with the subprocess:
import os, pty, subprocess

master, slave = pty.openpty()                # pseudo-terminal pair
tokenizer = subprocess.Popen(script, shell=True,
                             stdin=subprocess.PIPE, stdout=slave)
stdin_handle = tokenizer.stdin               # write requests here
stdout_handle = os.fdopen(master)            # read replies from the master end
Now, I can communicate to the subprocess without closing it via
stdin_handle.write(input)
stdin_handle.flush()        # make sure the request actually reaches the child
stdout_handle.readline()    # gets the processed input
Your external script probably buffers its output, so you can only read it in the parent once the buffer in the child is flushed (which the child must do itself). One way to make it flush its buffers is to close its input, because then it terminates properly and flushes its buffers on exit.
If you have control over the external program (i.e. if you can patch it), insert a flush after the output is produced.
Otherwise, programs can sometimes be made to not buffer their output by attaching them to a pseudo-TTY (many programs, including anything using C's stdio library, assume that no buffering is wanted when their output goes to a TTY). But this is a bit tricky.
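For instance, if the external script happens to be Python itself, the patch from the first suggestion is one extra line per write; here is a hypothetical line-based child:

import sys

for line in sys.stdin:
    sys.stdout.write(line.upper())    # stand-in for the real processing
    sys.stdout.flush()                # without this the reply sits in the buffer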
