I am automating a GUI application that also has a CLI. I can successfully run the following (theApplication is an internally developed app):
import subprocess
process = subprocess.Popen(r'theApplication.exe -foo=7')
It then prints its output, which I wish to process, so I use subprocess.PIPE:
import subprocess
process = subprocess.Popen(r'theApplication.exe -foo=7', stdout=subprocess.PIPE)
Now the program does not start and Windows reports that theApplication has stopped working. The same happens if the stdout is redirected to DEVNULL or a file. I tried setting bufsize to 0 and 1 and the same happens.
I tried all permutations of stdout and stderr being redirected or not. It only works if neither of them is redirected; if either one is redirected, it instantly breaks the program. The value of process.returncode is None.
How can piping the output break the program?
Have you already tried it with a list argument? Like:
import subprocess
process = subprocess.Popen(["theApplication.exe", "-foo=7"], stdout=subprocess.PIPE)
So the answer was to also redirect stdin=subprocess.PIPE, although theApplication does not read from it.
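For reference, a minimal sketch of that workaround (theApplication.exe is the internal app from the question, so the command line is illustrative only):

import subprocess

process = subprocess.Popen(
    r'theApplication.exe -foo=7',
    stdin=subprocess.PIPE,    # the extra redirection that stops the crash
    stdout=subprocess.PIPE,
)
out, _ = process.communicate()
print(out.decode(errors='replace'))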
Related
Description:
I was trying to make an interactive shell for a chat application, so I need cmd.exe as a subprocess and need to pass strings into the process.
I have this:
from subprocess import Popen
from subprocess import PIPE as p
proc = Popen("cmd",stdout=p,stdin=p,shell=True)
Usually, to pass input to the process, we use proc.stdin.write(), but it seems the string only reaches the process and runs after the Python script finishes.
for example, I have
#same thing above
proc.stdin.write("ping 127.0.0.1".encode())
time.sleep(10)
The script waits for 10 seconds, exits, and only then runs the ping command, which means it is impossible to get any result from stdout.read() because there is nothing to read yet.
I have tried to use subprocess.Popen.communicate() but it closes the pipe after one input.
Is there any way to solve the "commands only run after the script finishes" problem, or to make communicate() not close the pipe?
Writes to pipes are buffered; you need to flush the buffer.
proc.stdin.write("ping 127.0.0.1".encode())
proc.stdin.flush()
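A minimal sketch of the full round trip, assuming cmd.exe accepts '\n' line endings over a pipe (readline() still blocks if the program prints nothing, so the fixed number of reads below is only illustrative):

from subprocess import Popen, PIPE

proc = Popen("cmd", stdin=PIPE, stdout=PIPE, shell=True)

proc.stdin.write("ping 127.0.0.1\n".encode())
proc.stdin.flush()                      # push the buffered bytes into the pipe

# Read a handful of reply lines; readline() blocks until a full line arrives.
for _ in range(5):
    print(proc.stdout.readline().decode(errors="replace"), end="")

proc.stdin.write("exit\n".encode())
proc.stdin.flush()
proc.wait()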
My python script (python 3.4.3) calls a bash script via subprocess.
OutPST = subprocess.check_output(cmd,shell=True)
It works, but the problem is that I only get half of the data. The subprocess I call in turn calls another subprocess, and my guess is that when this "sub-subprocess" sends EOF, my program thinks that's it and check_output returns.
Does anyone have an idea how to get all the data?
You should use subprocess.run() unless you really need fine-grained control over talking to the process via its stdin (or need to do something else while the process is running instead of blocking until it finishes). It makes capturing output easy:
from subprocess import run, PIPE
result = run(cmd, stdout=PIPE, stderr=PIPE)
print(result.stdout)
print(result.stderr)
If you want to merge stdout and stderr (like how you'd see it in your terminal if you didn't do any redirection), you can use the special destination STDOUT for stderr:
from subprocess import STDOUT
result = run(cmd, stdout=PIPE, stderr=STDOUT)
print(result.stdout)
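If you prefer strings over bytes, run() also accepts universal_newlines=True (aliased to text=True from Python 3.7 on), which decodes the captured output for you:

result = run(cmd, stdout=PIPE, stderr=STDOUT, universal_newlines=True)
print(result.stdout)  # already a decoded str rather than bytes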
I have the following piece of code. myprg has to keep running after my script exits. But before exiting, I would like to wait for mystring to be emitted by myprg, then redirect the rest of the stderr messages to mylogfile, and then exit. Is this possible at all? I tried the following and the Python script does not exit. Where am I going wrong?
import subprocess
proc = subprocess.Popen(myprg, stderr=subprocess.PIPE)
for line in iter(proc.stderr.readline, ''):
    if check_string in line:
        break
fh = open(mylogfile, 'w')
proc.stderr = fh.fileno()
EDIT: I have temporarily worked around this problem by making myprg emit check_string to stdout instead. This is less than optimal, but works for the moment.
Shouldn't you call proc.communicate() for the script to exit?
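Another option, sketched below under a few assumptions (POSIX only, untested; myprg, check_string, and mylogfile are the names from the question, and check_string must be a bytes object here): read stderr until the marker appears, then hand the rest of the pipe to a separate cat process that keeps writing to the logfile after this script exits.

import subprocess

# bufsize=0 avoids Python read-ahead buffering, which could otherwise swallow
# lines that were meant for the logfile.
proc = subprocess.Popen(myprg, stderr=subprocess.PIPE, bufsize=0)

for line in iter(proc.stderr.readline, b''):
    if check_string in line:
        break

# Hand the remaining stderr stream to 'cat'; it keeps draining into the
# logfile after this script exits.
with open(mylogfile, 'wb') as fh:
    subprocess.Popen(['cat'], stdin=proc.stderr, stdout=fh)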
I am writing a script in which the external system command may sometimes require user input, and I am not able to handle that properly. I have tried the os.popen4 and subprocess modules but could not achieve the desired behavior.
The example below shows the problem using the "cp" command ("cp" is only used for demonstration; I am actually calling a different exe which may similarly prompt for a user response in some scenarios). In this example there are two files on disk, and when the user tries to copy file1 to file2, a confirmation prompt comes up.
proc = subprocess.Popen("cp -i a.txt b.txt", shell=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.STDOUT,)
stdout_val, stderr_val = proc.communicate()
print stdout_val
b.txt?
proc.communicate("y")
Now in this example, if I read stdout/stderr first and print it, and later try to write "y" or "n" based on the user's input, I get an error that the channel is closed.
Can someone please help me achieve this behavior in Python, so that I can print stdout first, then take user input, and write to stdin later on?
I found another solution (threading) in "Non-blocking read on a subprocess.PIPE in python", though I am not sure whether it would help. It does appear to print the question from the cp command; I have modified the code but am not sure how to do the writing in the threading version.
import sys
from subprocess import PIPE, Popen
from threading import Thread
try:
    from Queue import Queue, Empty
except ImportError:
    from queue import Queue, Empty

ON_POSIX = 'posix' in sys.builtin_module_names

def enqueue_output(out, queue):
    for line in iter(out.readline, b''):
        queue.put(line)
    out.close()

p = Popen(['cp', '-i', 'a.txt', 'b.txt'], stdin=PIPE, stdout=PIPE, bufsize=1, close_fds=ON_POSIX)
q = Queue()
t = Thread(target=enqueue_output, args=(p.stdout, q))
t.start()

try:
    line = q.get_nowait()
except Empty:
    print('no output yet')
else:
    pass
Popen.communicate will run the subprocess to completion, so you can't call it more than once. You could use the stdin and stdout attributes directly, although that's risky as you could deadlock if the process uses block buffering or the buffers fill up:
stdout_val = proc.stdout.readline()
print stdout_val
proc.stdin.write('y\n')
As there is a risk of deadlock and because this may not work if the process uses block buffering, you would do well to consider using the pexpect package instead.
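A minimal pexpect sketch for the cp example (POSIX only, Python 3 style; the exact prompt that cp -i prints is an assumption and may differ between coreutils versions and locales):

import pexpect

child = pexpect.spawn('cp -i a.txt b.txt', encoding='utf-8')
index = child.expect([r"overwrite .*\?", pexpect.EOF])
if index == 0:
    # Show cp's question to the user and forward their answer.
    answer = input(child.after + " ")
    child.sendline(answer)
    child.expect(pexpect.EOF)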
I don't have a technical answer to this question, more just a solution. It has something to do with the way the process waits for input, and once you communicate with the process, a None input is enough to close the process.
For your cp example, what you can do is check the return code immediately with proc.poll(). If the return value is None, you might assume it is trying to wait for input and can ask your user a question. You can then pass the response to the process via proc.communicate(response). It will then pass the value and proceed with the process.
Maybe someone else can chime in with a more technical reason why an initial communicate with a None value closes the process.
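For what it's worth, a sketch of that poll-then-communicate idea (Python 3 style, whereas the question itself is Python 2; the prompt wording shown to the user is just illustrative):

import subprocess

proc = subprocess.Popen("cp -i a.txt b.txt", shell=True,
                        stdin=subprocess.PIPE,
                        stdout=subprocess.PIPE,
                        stderr=subprocess.STDOUT)

if proc.poll() is None:  # still running, presumably waiting at the prompt
    response = input("cp is asking whether to overwrite b.txt [y/n]: ")
    out, _ = proc.communicate((response + "\n").encode())
    print(out.decode(errors="replace"))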
I am using python 2.5 on Windows. I wish to interact with a console process via Popen. I currently have this small snippet of code:
p = Popen( ["console_app.exe"], stdin=PIPE, stdout=PIPE )
# issue command 1...
p.stdin.write( 'command1\n' )
result1 = p.stdout.read() # <---- we never return here
# issue command 2...
p.stdin.write( 'command2\n' )
result2 = p.stdout.read()
I can write to stdin but cannot read from stdout. Have I missed a step? I don't want to use p.communicate("command")[0], as it terminates the process and I need to interact with the process dynamically over time.
Thanks in advance.
Your problem here is that you are trying to control an interactive application.
stdout.read() will continue reading until it has reached the end of the stream, file, or pipe. Unfortunately, in the case of an interactive program, the pipe is only closed when the program exits; which is never, if the command you sent it was anything other than "quit".
You will have to revert to reading the output of the subprocess line by line using stdout.readline(), and you'd better have a way to tell when the program is ready to accept a command, and when the command you issued to the program is finished and you can supply a new one. In the case of a program like cmd.exe, even readline() won't suffice, as the line that indicates a new command can be sent is not terminated by a newline, so you will have to analyze the output byte by byte. Here's a sample script that runs cmd.exe, looks for the prompt, then issues a dir and then an exit:
from subprocess import *
import re
class InteractiveCommand:
    def __init__(self, process, prompt):
        self.process = process
        self.prompt = prompt
        self.output = ""
        self.wait_for_prompt()

    def wait_for_prompt(self):
        while not self.prompt.search(self.output):
            c = self.process.stdout.read(1)
            if c == "":
                break
            self.output += c

        # Now we're at a prompt; clear the output buffer and return its contents
        tmp = self.output
        self.output = ""
        return tmp

    def command(self, command):
        self.process.stdin.write(command + "\n")
        return self.wait_for_prompt()
p = Popen( ["cmd.exe"], stdin=PIPE, stdout=PIPE )
prompt = re.compile(r"^C:\\.*>", re.M)
cmd = InteractiveCommand(p, prompt)
listing = cmd.command("dir")
cmd.command("exit")
print listing
If the timing isn't important, and interactivity for a user isn't required, it can be a lot simpler just to batch up the calls:
from subprocess import *
p = Popen( ["cmd.exe"], stdin=PIPE, stdout=PIPE )
p.stdin.write("dir\n")
p.stdin.write("exit\n")
print p.stdout.read()
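Roughly equivalently, communicate() can hand over all the input at once, close stdin, and collect everything that comes back, which also avoids any risk of the pipe buffers filling up (a sketch in Python 2 style, to match the answer above):

from subprocess import Popen, PIPE

p = Popen(["cmd.exe"], stdin=PIPE, stdout=PIPE)
out, _ = p.communicate("dir\nexit\n")  # send both commands, then read all output
print out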
Have you tried forcing Windows line endings?
For example:
p.stdin.write( 'command1 \r\n' )
p.stdout.readline()
UPDATE:
I've just checked the solution on Windows cmd.exe and it works with readline(). But it has one problem: Popen's stdout.readline blocks, so if the app ever returns something without a line ending, your app will be stuck forever.
But there is a workaround for that; check out: http://code.activestate.com/recipes/440554/
I think you might want to try using readline() instead?
Edit: sorry, misunderstood.
Maybe this question can help you?
Is it possible that the console app is buffering its output in some way so that it is only being sent to stdout when the pipe is closed? If you have access to the code for the console app, maybe sticking a flush after a batch of output data might help?
Alternatively, is it actually writing to stderr instead of stdout for some reason?
Just looked at your code again and thought of something else: I see you're sending in "command\n". Could the console app simply be waiting for a carriage return character instead of a newline? Maybe the console app is waiting for you to submit the command before it produces any output.
Had the exact same problem here. I dug into the DrPython source code and stole its wx.Execute() solution, which works fine, especially if your script is already using wx. I never found a proper solution on the Windows platform though...