How to communicate with a command-line program using Python?

import subprocess
import sys
proc = subprocess.Popen(["program.exe"], stdin=subprocess.PIPE)  # the cmd program opens
proc.communicate(input="filename.txt")  # the filename is entered here (this runs)
# then the program asks for a number:
proc.communicate(input="1")  # the cmd stops here and nothing is passed
proc.communicate(input="2")  # same, nothing is passed
How do I pass input to and communicate with the command-line program from Python?
Thanks. (I'm on Windows.)

The docs on communicate() explain this:
Interact with process: Send data to stdin. Read data from stdout and
stderr, until end-of-file is reached. Wait for process to terminate.
communicate() sends its input, closes the child's stdin, and then blocks until the program finishes executing, so it can only be called once per process. In your example, the program waits for more input after you send "1", but Python waits for it to exit before it gets to the next line, meaning the whole thing deadlocks.
If you want to read and write a lot interchangeably, make pipes to stdin/stdout and write/read to/from them.
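A minimal sketch of that approach ("program.exe" and the three inputs are placeholders for whatever your program actually expects):
import subprocess

# "program.exe" and the answers below stand in for the real program.
proc = subprocess.Popen(["program.exe"], stdin=subprocess.PIPE, text=True)

# Write one line at a time and flush so the child actually receives it;
# its prompts still appear on the console because stdout was left alone.
for answer in ["filename.txt", "1", "2"]:
    proc.stdin.write(answer + "\n")
    proc.stdin.flush()

# Nothing left to send: close stdin and wait for the exit code.
proc.stdin.close()
print(proc.wait())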

Related

How do I properly loop through subprocess.stdout

I'm creating a program where I need to use a PowerShell session, and I found out how I could have a persistent session using the code below. However, I want to loop through the new lines of PowerShell's output after a command has been run. The for loop below is the only way I've found to do so, but it expects an EOF and doesn't get one, so it just lingers and the program never exits. How can I get the number of new lines in stdout so I can properly loop through them?
from subprocess import Popen, PIPE

process = Popen(["powershell"], stdin=PIPE, stdout=PIPE)

def ps(command):
    command = bytes("{}\n".format(command), encoding='utf-8')
    process.stdin.write(command)
    process.stdin.flush()
    process.stdout.readline()
    return process.stdout.readline().decode("utf-8")

ps("echo hello world")
for line in process.stdout:
    print(line.strip().decode("utf-8"))
process.stdin.close()
process.wait()
You need the PowerShell command to know when to exit. Typically, the solution is not just to flush, but to close the stdin for the child process; when it's done with its work and finds EOF on its input, it should exit on its own. Just change:
process.stdin.flush()
to:
process.stdin.close()
which implies a flush and also ensures the child process knows input is done, as in the sketch below. If that doesn't work on its own, you might explicitly add a quit or exit command (whatever PowerShell uses to terminate the session manually) after the command you're actually running.
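For the simple one-command case, the fixed version looks like this (a minimal sketch; PowerShell exits on its own once it hits EOF):
from subprocess import Popen, PIPE

process = Popen(["powershell"], stdin=PIPE, stdout=PIPE)
process.stdin.write(b"echo hello world\n")
process.stdin.close()   # flushes, and signals EOF so powershell exits
for line in process.stdout:
    print(line.strip().decode("utf-8"))
process.wait()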
If you must run multiple commands in the single subprocess, and each command's output must be fully consumed before the next one is sent, there are terrible heuristic solutions available, e.g. sending three commands at once, where the second simply echoes a sentinel string and the third explicitly flushes stdout (so block buffering doesn't leave you deadlocked waiting for a sentinel that's stuck in the subprocess's internal buffers); your read loop can then terminate once it sees the sentinel. Without a sentinel, it's worse: you basically can't tell when the command is done, and you just have to use the select/selectors module to poll the process's stdout with a timeout, reading lines whenever data is available, and assuming the process is done if no new output arrives within the expected timeout window.
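A minimal sketch of the sentinel approach (the sentinel string and the explicit [Console]::Out.Flush() are assumptions; pick any marker your real output never contains):
from subprocess import Popen, PIPE

SENTINEL = "__DONE__"  # assumed marker; anything real output never contains

process = Popen(["powershell"], stdin=PIPE, stdout=PIPE)

def ps(command):
    """Run one command and collect output until the sentinel line appears."""
    # Command, then echo the sentinel, then an explicit flush of stdout.
    payload = "{}\necho {}\n[Console]::Out.Flush()\n".format(command, SENTINEL)
    process.stdin.write(payload.encode("utf-8"))
    process.stdin.flush()
    lines = []
    for raw in iter(process.stdout.readline, b""):
        line = raw.decode("utf-8").rstrip("\r\n")
        if line == SENTINEL:
            break
        lines.append(line)
    return lines

print(ps("echo hello world"))
print(ps("echo again"))
process.stdin.close()
process.wait()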

subprocess Popen stdin only passes input and runs after the script has finished

Description:
I was trying to make a shell that can be driven interactively from a chat application, so I need cmd.exe as a subprocess and need to pass strings into the process.
I have this:
from subprocess import Popen
from subprocess import PIPE as p
proc = Popen("cmd", stdout=p, stdin=p, shell=True)
Usually, to pass input to the process, we use proc.stdin.write(),
but it seems the string is only passed in and run after the Python script completes.
for example, I have
# same setup as above
import time
proc.stdin.write("ping 127.0.0.1".encode())
time.sleep(10)
The script waits 10 seconds, and only then is the ping command passed in and run.
That means it's impossible to get the result with stdout.read(), because there is nothing to read yet.
I have tried subprocess.Popen.communicate(), but it closes the pipe after one input.
Is there any way to solve the "only runs the command after the script finishes" problem, or to make communicate() not close the pipe?
Writes to pipes are buffered; you need to flush the buffer. (cmd also won't run the command until it sees a newline, so include one in the write.)
proc.stdin.write("ping 127.0.0.1\n".encode())
proc.stdin.flush()
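Putting it together, a minimal interactive sketch (reading a fixed dozen lines is arbitrary; cmd's banner and prompts arrive on stdout too, so real code would parse or skip them):
from subprocess import Popen, PIPE

proc = Popen("cmd", stdin=PIPE, stdout=PIPE, shell=True)

# cmd runs a command only once it sees a newline; flush pushes it through.
proc.stdin.write("ping 127.0.0.1\n".encode())
proc.stdin.flush()

# Read some output; cmd's banner and prompt lines are interleaved here,
# so a real program would filter them out (12 lines is arbitrary).
for _ in range(12):
    print(proc.stdout.readline().decode(errors="replace").rstrip())

proc.stdin.write("exit\n".encode())
proc.stdin.flush()
proc.wait()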

How to pass subprocess control to regular stdin after using a pipe?

What I'd like to do is to, in Python, programmatically send a few initial commands via stdin to a process, and then hand input over to the user to let them control the program afterward. The Python program should simply wait until the subprocess exits due to user input. In essence, what I want to do is something along the lines of:
import subprocess
import sys
p = subprocess.Popen(['cat'], stdin=subprocess.PIPE)
# Send initial commands.
p.stdin.write(b"three\ninitial\ncommands\n")
p.stdin.flush()
# Give over control to the user.
# …Although stdin can't simply be reassigned
# in post like this, it seems.
p.stdin = sys.stdin
# Wait for the subprocess to finish.
p.wait()
How can I pass stdin back to the user? (Not using raw_input, since I need the user's input to take effect on every keypress, not just after pressing Enter.)
Unfortunately, there is no standard way to splice your own stdin to some other process's stdin for the duration of that process, other than to read from your own stdin and write to that process, once you have chosen to write to that process in the first place.
That is, you can do this:
proc = subprocess.Popen(...) # no stdin=
and the process will inherit your stdin; or you can do this:
proc = subprocess.Popen(..., stdin=subprocess.PIPE, ...)
and then you supply the stdin to that process. But once you have chosen to supply any of its stdin, you supply all of its stdin, even if that means you have to read your own stdin.
Linux offers a splice system call (documented at man7.org and linux.die.net; see also Wikipedia and "linux pipe data from file descriptor into a fifo"), but your best bet is probably a background thread that copies the data.
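A minimal sketch of the thread approach, using cat as the stand-in child from the question (note your terminal still line-buffers input unless you put it into raw mode, so this relays per line rather than per keypress):
import subprocess
import sys
import threading

p = subprocess.Popen(["cat"], stdin=subprocess.PIPE)

# Scripted part first, as in the question.
p.stdin.write(b"three\ninitial\ncommands\n")
p.stdin.flush()

def forward_stdin():
    """Copy our own stdin to the child until EOF, then close its stdin."""
    for chunk in iter(lambda: sys.stdin.buffer.read(1), b""):
        p.stdin.write(chunk)
        p.stdin.flush()
    p.stdin.close()

threading.Thread(target=forward_stdin, daemon=True).start()
p.wait()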
Searching for this same thing: at least in my case, the pexpect library takes care of it:
https://pexpect.readthedocs.io/en/stable/
p = pexpect.spawn("ssh myhost")
p.sendline("some_line")
p.interact()
As its name suggests, you can automate a lot of the interaction before handing control over to the user.
Note, in your case you may want an output filter:
Using expect() and interact() simultaneously in pexpect

How can I handle user input for subprocesses run in parallel in Python?

I have a Python helper function that runs grunt commands in parallel, using Popen to handle the subprocesses. The purpose is communication over the CLI. The problem starts when user input is required by all of those processes, e.g. a file path, a password, a yes/no decision:
Enter file path: Enter file path: Enter file path: Enter file path: Enter file path: Enter file path: Enter file path:
Everything up-to-date
Grunt task completed successfully.
The user provides input once; one process completes successfully and all the others never finish executing.
Code:
from subprocess import check_output, Popen
from time import sleep
import tempfile

def run_grunt_parallel(grunt_commands):
    return_code = 0
    commands = []
    for command in grunt_commands:
        with tempfile.NamedTemporaryFile(delete=False) as f:
            # get_grunt_application_name is a helper defined elsewhere
            app = get_grunt_application_name(' '.join(command))
            commands.append({'app': app, 'process': Popen(command, stdout=f)})
    while len(commands):
        sleep(5)
        next_round = []
        for command in commands:
            rc = command['process'].poll()
            if rc is None:
                next_round.append(command)
            else:
                if rc == 0:
                    pass  # success handling elided in the original
                else:
                    return_code = rc
        commands = next_round
    return return_code
Is there a way to make sure the user can provide all the necessary input for each process?
What you want is almost (if not entirely) impossible. But if you can recognize prompts in a prefix-free fashion (and, if they vary, know from them how many lines of input they expect), you should be able to manage it.
Run each process with two-way unbuffered pipes:
Popen(command, stdin=subprocess.PIPE,
      stdout=f, stderr=subprocess.PIPE, bufsize=0)
(Well-behaved programs prompt on standard error. Yours seem to do so, since you showed the prompts despite the stdout=f; if they don’t do so reliably, you get to read that from a pipe as well, search for prompts in it, and copy it to a file yourself.)
Unix
Set all pipes non-blocking. Then use select to monitor the stderr pipes for all processes. (You might try selectors instead.) Buffer what you read separately for each process until you recognize a prompt from one. Then display the prompt (identifying the source process) and accept input from the user—if the output between prompts fits in the pipe buffers, this won’t slow the background work down. Put that user input in a buffer associated with that process, and add its stdin pipe to the select.
When a stdin pipe shows ready, write to it, and remove it from the set if you finish doing so. When a read from a pipe returns EOF, join the corresponding process (or do so in a SIGCHLD handler if you worry that a process might close its end early).
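A stripped-down sketch of that flow (the grunt commands and the prompt text are assumptions, and it uses blocking writes rather than the full non-blocking stdin handling described above):
import selectors
import subprocess

PROMPT = b"Enter file path: "  # assumed prompt text, recognized verbatim

sel = selectors.DefaultSelector()
procs = []
for cmd in (["grunt", "taskA"], ["grunt", "taskB"]):  # hypothetical commands
    f = open("%s.log" % cmd[1], "wb")
    p = subprocess.Popen(cmd, stdin=subprocess.PIPE,
                         stdout=f, stderr=subprocess.PIPE, bufsize=0)
    sel.register(p.stderr, selectors.EVENT_READ, p)
    procs.append(p)

buffers = {p.pid: b"" for p in procs}
while procs:
    for key, _ in sel.select():
        p = key.data
        chunk = p.stderr.read(4096)   # bufsize=0: returns whatever is available
        if not chunk:                 # EOF: the process is finishing
            sel.unregister(p.stderr)
            p.wait()
            procs.remove(p)
            continue
        buffers[p.pid] += chunk
        if buffers[p.pid].endswith(PROMPT):
            buffers[p.pid] = b""
            # Blocking write for simplicity; the full scheme above selects
            # on stdin with non-blocking pipes instead.
            reply = input("[pid %d] %s" % (p.pid, PROMPT.decode()))
            p.stdin.write(reply.encode() + b"\n")
            p.stdin.flush()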
Windows
Unless you have a select emulation available that supports pipes, you’ll have to use threads—one for each process, or one for each pipe if a process might produce output after writing a prompt and before reading the response. Then use a Queue to post prompts as messages to the main thread, which can then use (for example) another per-process Queue to send the user input back to the thread (or its writing buddy).
This works on any threading-supporting platform and has the potential advantage of not relying on pipe buffers to avoid stalling talkative processes.
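A minimal sketch of the thread-per-process variant (the commands, the ": " prompt heuristic, and the expected prompt count are all assumptions):
import queue
import subprocess
import threading

prompts = queue.Queue()  # (process, prompt text) messages for the main thread

def watch(p, answers):
    """Per-process thread: read stderr, post prompts, relay the user's reply."""
    buf = b""
    while True:
        ch = p.stderr.read(1)
        if not ch:                          # EOF: the process finished
            break
        buf += ch
        if buf.endswith(b": "):             # crude assumed prompt detection
            prompts.put((p, buf.decode()))
            buf = b""
            reply = answers.get()           # main thread routes the input back
            p.stdin.write(reply.encode() + b"\n")
            p.stdin.flush()

answer_queues = {}
for cmd in (["grunt", "taskA"], ["grunt", "taskB"]):  # hypothetical commands
    p = subprocess.Popen(cmd, stdin=subprocess.PIPE,
                         stderr=subprocess.PIPE, bufsize=0)
    answer_queues[p.pid] = q = queue.Queue()
    threading.Thread(target=watch, args=(p, q), daemon=True).start()

# Main thread: display each prompt as it arrives and send the reply back.
for _ in range(2):   # assuming one prompt per process here
    p, text = prompts.get()
    answer_queues[p.pid].put(input("[pid %d] %s" % (p.pid, text)))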

Manage/kill subprocess from python

This should be a basic problem, but I'm scratching my head on this one...
I'm trying to build the skeleton of a Python script, part of which will use a loop to pipe strings into a camera SDK's console .exe (which opens, waits for user input, captures/saves an image with a filename specified by the input, and waits for the next input or to be killed).
I've built some test code using another simple .exe program that opens, takes user input, writes it to a .txt file, and waits for the next input or to be killed:
from subprocess import Popen, PIPE

p = Popen("testinput1.exe", stdin=PIPE, stdout=PIPE, stderr=PIPE, shell=False)
bytes = str.encode("no")
for n in range(1, 5):
    p.stdin.write(bytes)
p.stdin.close()
p.terminate()
However, this code on its own doesn't successfully open the process, write to it, and close; mysteriously, it doesn't do anything at all.
If I remove the line
p.terminate()
I can successfully pipe my "nonononono" into the text file, but the subprocess isn't closed, and the .txt file grows by 1KB/sec until I close testinput1.exe.
Do I need to put some kind of wait before p.terminate()? Or am I going about the whole process incorrectly?
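Per the buffering discussion in the earlier answers, the writes are likely sitting in Python's stdin buffer, and p.terminate() kills the child before they are ever delivered. A hedged sketch of the probable fix (the trailing newline and the wait are assumptions about how testinput1.exe consumes its input):
from subprocess import Popen, PIPE, TimeoutExpired

p = Popen("testinput1.exe", stdin=PIPE, stdout=PIPE, stderr=PIPE, shell=False)

data = str.encode("no\n")   # assuming the program reads a line per input
for n in range(1, 5):
    p.stdin.write(data)
p.stdin.flush()    # push the buffered writes through the pipe
p.stdin.close()    # signal EOF so the program can exit on its own
try:
    p.wait(timeout=5)   # give it a moment to finish cleanly
except TimeoutExpired:
    p.terminate()       # fall back to killing it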
