I am running jirashell from a Python script using the subprocess library. I am currently having issues getting the output to print in real time. When I run jirashell it outputs information and then prompts the user (y/n). The subprocess won't print the information preceding the prompt until I enter 'y' or 'n'.
The code I am using is:

_consumer_key = "justin-git"

_cmd = "jirashell -s {0} -od -k {1} -ck {2} -pt".format(JIRA_SERVER,
                                                        _rsa_private_key_path,
                                                        _consumer_key)

p = subprocess.Popen(_cmd.split(" "), stdout=subprocess.PIPE,
                     stderr=subprocess.PIPE, bufsize=0)

out, err = p.communicate()  # Blocks here

print out
print err
The output is like so:
n    # I have to enter an "n" before the program will print any output.
Output:
Request tokens received.
Request token: asqvavefafegagadggsgewgqqegqgqge
Request token secret: asdbresbdfbrebsaerbbsbdabweabfbb
Please visit this URL to authorize the OAuth request:
http://localhost:8000/plugins/servlet/oauth/authorize?oauth_token=zzzzzzzzzzzzzzzzzzzzzzzzzzzzz
Have you authorized this program to connect on your behalf to http://localhost:8000? (y/n)
Error:
Abandoning OAuth dance. Your partner faceplants. The audience boos. You feel shame.
Does anyone know how I can have it print the output prior to the prompt, then wait for an input of y/n? Note I also need to be able to store the output produced by the command, so "os.system()" won't work...
EDIT:
It looks like there is a part of the code inside jirashell that waits for an input, and this is causing the block. Until something is passed to this input, nothing is output... Still looking into how I can get around this. I'm in the process of trying to move the portion of code I need into my application. This solution doesn't seem elegant, but I can't see any other way right now.
approved = input(
    'Have you authorized this program to connect on your behalf to {}? (y/n)'.format(server))
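If the goal is just to get past that input() call non-interactively, one workaround is to pre-answer the prompt through stdin. This is only a sketch: it uses an inline stand-in for jirashell (since the real tool needs a server), and it only works if the tool reads the answer from stdin rather than the terminal. Note it still buffers all output until the end; the answers below address real-time printing.

```python
import subprocess
import sys

# Inline stand-in for a tool that prints, then blocks on input();
# the real jirashell invocation would go here instead.
tool = ("print('Request tokens received.'); "
        "x = input('(y/n) '); "
        "print('answer:', x)")

p = subprocess.Popen([sys.executable, "-c", tool],
                     stdin=subprocess.PIPE, stdout=subprocess.PIPE)
# Answer the prompt up front by piping the reply into stdin.
out, _ = p.communicate(input=b"y\n")
print(out.decode())
```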
Method which prints and caches the standard output:
You can use a thread which reads the standard output of your subprocess, while the main thread is blocked until the subprocess is done. The following example will run the program other.py, which looks like
#!/usr/bin/env python3
print("Hello")
x = input("Type 'yes': ")
Example:
import threading
import subprocess
import sys

class LivePrinter(threading.Thread):
    """
    Thread which reads byte-by-byte from the input stream and writes it to the
    standard output.
    """
    def __init__(self, stream):
        self.stream = stream
        self.log = bytearray()
        super().__init__()

    def run(self):
        while True:
            # read one byte from the stream
            buf = self.stream.read(1)
            # break if end of file reached
            if len(buf) == 0:
                break
            # save output to internal log
            self.log.extend(buf)
            # write and flush to main standard output
            sys.stdout.buffer.write(buf)
            sys.stdout.flush()

# create subprocess
p = subprocess.Popen('./other.py', stdout=subprocess.PIPE)

# create reader and start the thread
r = LivePrinter(p.stdout)
r.start()

# wait until the subprocess is done, then until the reader has drained the pipe
p.wait()
r.join()

# print summary
print(" -- The process is done now -- ")
print("The standard output was:")
print(r.log.decode("utf-8"))
The class LivePrinter reads every byte from the subprocess and writes it to the standard output. (I have to admit, this is not the most efficient approach, but a larger buffer size would block the LivePrinter until the buffer is full, even while the subprocess is awaiting the answer to a prompt.) Since the bytes are written to sys.stdout.buffer, there shouldn't be a problem with multi-byte UTF-8 characters.
The LivePrinter class also stores the complete output of the subprocess in the variable log for later use.
As this answer summarizes, it is safe to start a thread after forking with subprocess.
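Since ./other.py may not exist locally, the same byte-wise pattern can be exercised with an inline stand-in child; this sketch uses a plain function plus thread instead of the Thread subclass, and answers the stand-in's prompt itself so it runs unattended:

```python
import subprocess
import sys
import threading

# Inline stand-in for other.py: prints a line, waits on stdin, prints again.
child = ("import sys\n"
         "sys.stdout.write('Hello\\n')\n"
         "sys.stdout.flush()\n"
         "line = sys.stdin.readline()\n"
         "sys.stdout.write('Bye\\n')\n")

log = bytearray()

def live_print(stream):
    # Byte-wise read, so a prompt without a trailing newline still appears immediately.
    while True:
        buf = stream.read(1)
        if not buf:          # EOF: the child exited and the pipe drained
            break
        log.extend(buf)
        sys.stdout.buffer.write(buf)
        sys.stdout.flush()

p = subprocess.Popen([sys.executable, "-c", child],
                     stdin=subprocess.PIPE, stdout=subprocess.PIPE)
t = threading.Thread(target=live_print, args=(p.stdout,))
t.start()

p.stdin.write(b"y\n")   # answer the prompt so the child can finish
p.stdin.close()
p.wait()
t.join()                # make sure the reader drained everything

print(log.decode())
```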
Original answer, which has problems when the prompt does not end with a newline:
The output is delayed because communicate() blocks the execution of your script until the sub-process is done (https://docs.python.org/3/library/subprocess.html#subprocess.Popen.communicate).
You can read and print the standard output of the subprocess while it is executing, using stdout.readline. There are some buffering issues which require this rather complicated iter(process.stdout.readline, b'') construct. The following example uses gpg2 --gen-key because this command starts an interactive tool.
import subprocess

process = subprocess.Popen(["gpg2", "--gen-key"], stdout=subprocess.PIPE)
for stdout_line in iter(process.stdout.readline, b''):
    print(stdout_line.rstrip())
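Since gpg2 may not be installed, the same loop can be tried against an inline stand-in child; the `child` one-liner here is purely illustrative:

```python
import subprocess
import sys

# Stand-in for an interactive tool that writes a few lines and exits.
child = "print('line 1'); print('line 2')"

process = subprocess.Popen([sys.executable, "-c", child],
                           stdout=subprocess.PIPE)
lines = []
# readline returns b'' only at EOF, so this loop ends when the child exits.
for stdout_line in iter(process.stdout.readline, b''):
    lines.append(stdout_line.rstrip())
    print(stdout_line.rstrip())
process.wait()
```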
Alternative answer which uses shell and does not cache the output:
As Sam pointed out, there is a problem with the above solution when the prompt does not end with a newline (which prompts usually don't). An alternative solution is to use the shell argument to interact with the sub-process.
import subprocess
subprocess.call("gpg2 --gen-key", shell=True)
Related
There's a console program I want to run from a python script. Let me call it the child.
Once in a while, to continue processing, the child expects to read 0 bytes of data from stdin.
For simplicity, let's assume the child is the following python script:
child.py
import os
import sys
import time

stdin = sys.stdin.fileno()

def speak(message):
    print(message, flush=True, end="")

while True:
    speak("Please say nothing!")
    data = os.read(stdin, 1024)
    if data == b"":
        speak("Thank you for nothing.")
        time.sleep(5)
    else:
        speak("I won't continue unless you keep silent.")
To work successfully (i.e. to see "Thank you for nothing." printed), you must typically hit Ctrl+D when running it in a UNIX terminal, while in cmd or PowerShell under Windows hitting Ctrl+Z followed by Enter will do the trick.
Here's an attempt to run the child from inside a python script, which I shall call the parent:
parent.py
import os
import subprocess

def speak_to_child(child_stdin_r, child_stdin_w, message):
    child = subprocess.Popen(
        ["python", "child.py"],
        stdin=child_stdin_r,
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE
    )
    child_stdout_r = child.stdout.fileno()
    while True:
        data = os.read(child_stdout_r, 1024)
        print(f"child said: {data}")
        if data == b"Please say nothing!":
            os.write(child_stdin_w, message)

child_stdin_r, child_stdin_w = os.pipe()
speak_to_child(child_stdin_r, child_stdin_w, b"Not sure how to say nothing.")
This is of course an unsuccessful attempt as the child will clearly answer with "I won't continue unless you keep silent." after reading "Not sure how to say nothing." from its stdin.
Naively changing the message b"Not sure how to say nothing." in the parent to the empty message b"" doesn't get rid of the problem, since writing 0 bytes to a pipe won't cause a read of 0 bytes on the receiving end.
Now on UNIX we could easily solve the problem by replacing the pipe with a pseudoterminal and the empty message b"" with an EOT character b"\x04" like so:
import pty
child_stdin_w, child_stdin_r = pty.openpty()
speak_to_child(child_stdin_r, child_stdin_w, b"\x04")
This works ... but evidently not on Windows. So now my questions:
On UNIX, is using a pseudoterminal the best way to force the read of 0 bytes or is there a better way?
On Windows, given that pseudoterminals aren't available, how can I solve the problem?
A platform agnostic solution would of course be ideal.
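One portable trick worth mentioning: closing the last write end of an ordinary pipe makes every subsequent read on the other end return 0 bytes. It is a one-shot answer (you can never send data afterwards), so it only fits if the child needs nothing but EOFs from that point on. A sketch, using a minimal stand-in child that does a single read and reports what it saw:

```python
import os
import subprocess
import sys

# A minimal stand-in child: one read from stdin, then report what it saw.
child_code = ("import os, sys\n"
              "data = os.read(sys.stdin.fileno(), 1024)\n"
              "print('empty' if data == b'' else 'not empty')\n")

r, w = os.pipe()
child = subprocess.Popen([sys.executable, "-c", child_code],
                         stdin=r, stdout=subprocess.PIPE)
os.close(r)  # the child holds its own copy of the read end
os.close(w)  # with every write end closed, the child's os.read() returns b"" (EOF)

out, _ = child.communicate()
print(out.decode().strip())
```

os.pipe and passing a file descriptor as stdin both work on Windows as well, though the "close the write end" approach still cannot interleave real data with the 0-byte reads the way a UNIX pseudoterminal can.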
I want to get user input from a subprocess in an new terminal.
import subprocess

additionalBuildArguments = "defaultarg1"

proc = subprocess.Popen(["python", "user_input.py", additionalBuildArguments],
                        creationflags=subprocess.CREATE_NEW_CONSOLE,
                        stdout=subprocess.PIPE, stderr=subprocess.PIPE)
try:
    outs, errs = proc.communicate(timeout=15)
except subprocess.TimeoutExpired:
    proc.kill()
    outs, errs = proc.communicate()

additionalBuildArguments = outs or additionalBuildArguments
additionalBuildArguments = outs or additionalBuildArguments
user_input.py:
import sys
arg = sys.argv[1]
user_input = input(f"Additional build arguments [{arg}] (Push <ENTER> to use these settings):\n")
print(user_input)
As long as I don't set the stdout=subprocess.PIPE and/or stderr=subprocess.PIPE options, I can enter input. But with these options I can't write any input to the console.
Indeed I need these options to redirect stdout, to have access to the printed user_input in the parent process.
Does anyone know what's the problem here?
Please note that I do not understand why you want to do this, and feel instinctively that you should not. However, it's perfectly possible: just capture only stdout:
import sys
from subprocess import run
print("Type away: ", end="")
sys.stdout.flush()
r = run(["python", "-c", "print(input())"], capture_output=True, encoding="utf8")
print(f"You entered {r.stdout}")
EDIT Apparently you are using Windows. Per the docs, your flag is set when shell=True. With shell=True this works for me, but I have no idea whether it will for you:
import sys
from subprocess import run
print("Type away: ", end="")
sys.stdout.flush()
r = run("python -c 'print(input())'", capture_output=True, shell=True, encoding="utf8")
print(f"You entered {r.stdout}")
This can be chained to run in yet a third process, which would be needed to print whilst also capturing stdout, from a subprocess. But at this point we are in the realm of very horrible hacks.
A better, but still hacky, solution, is to re-phrase the problem a bit. You want to spawn a terminal, which apparently you can do, and the user can interact with it correctly, and then you want to get output from that terminal in the spawning code. STDOUT is not the proper channel for this communication. Personally I would structure my code like this:
in spawning code:
generate parametrised script to run in spawned terminal and save it as a temp file
spawn subterminal running the generated script
wait for completion
read temp out file and get data
delete both temp script and temp out file
in generated code:
do as much as possible (you have a full python, after all)
dump output as json to a temporary file
This is still hacky, but it only involves spawning one terminal. Note that I still don't understand why you want to do this, but this should at least work.
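The steps above can be sketched as follows. Everything here is illustrative: the generated script's prompt text, the temp-file names, and the JSON shape are all assumptions, and stdin is detached (DEVNULL) only so the sketch runs unattended; in real use you would drop that so the spawned console can take real keyboard input.

```python
import json
import os
import subprocess
import sys
import tempfile

# Temp file the spawned script will write its result to.
out_fd, out_path = tempfile.mkstemp(suffix=".json")
os.close(out_fd)

# Generate a parametrised script to run in the spawned terminal.
script = f"""\
import json
try:
    user_input = input("Additional build arguments: ")
except EOFError:                     # no interactive stdin available
    user_input = "defaultarg1"
with open({out_path!r}, "w") as f:
    json.dump({{"args": user_input}}, f)
"""
script_fd, script_path = tempfile.mkstemp(suffix=".py")
with os.fdopen(script_fd, "w") as f:
    f.write(script)

# Spawn the script; on Windows CREATE_NEW_CONSOLE opens a real terminal,
# elsewhere the attribute does not exist and we fall back to 0.
flags = getattr(subprocess, "CREATE_NEW_CONSOLE", 0)
subprocess.run([sys.executable, script_path], creationflags=flags,
               stdin=subprocess.DEVNULL)  # detached only for this sketch

# Read the result back and delete both temp files.
with open(out_path) as f:
    result = json.load(f)
os.remove(script_path)
os.remove(out_path)
print(result["args"])
```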
I'm working on a script to automate tests of a certain software, and as part of it I need to check if it runs commands correctly.
I'm currently launching an executable using subprocess and passing the initial parameters.
My code is: subprocess.run("program.exe get -n WiiVNC", shell=True, check=True)
As far as I understand, this runs the executable and is supposed to raise an exception if the exit code is 1.
Now, the program launches, but at some point waits for user input like so:
My question is: how do I go about submitting the user input "y" using subprocess, once either the text "Continue with download of "WiiVNC"? (y/n) >" shows up, or the program waits for user input.
You should use the pexpect module for all complicated subprocessing. In particular, the module is designed to handle the complicated case of either passing through the input to the current process for the user to answer and/or allowing your script to answer the input for the user and continue the subprocess.
Added some code for an example:
### File Temp ###
# #!/bin/env python
# x = input('Type something:')
# print(x)

import pexpect

x = pexpect.spawn('python temp')  # Start subprocess.
x.interact()                      # Embed subprocess in current process.

# or

x = pexpect.spawn('python temp')  # Start subprocess.
find_this_output = x.expect(['Type something:'])
if find_this_output == 0:         # `is 0` relied on int interning; compare with ==
    x.send('I type this in for subprocess because I found the 0th string.')
Try this:
import subprocess
process = subprocess.Popen("program.exe get -n WiiVNC", stdin=subprocess.PIPE, shell=True)
process.stdin.write(b"y\n")
process.stdin.flush()
stdout, stderr = process.communicate()
I have a Python script that calls another Python script using subprocess.Popen. I know the called code always returns 10, which means it failed.
My problem is, the caller only reads 10 approximately 75% of the time. The other 25% it reads 0 and mistakes the called program's failure code for a success. Same command, same environment, apparently random occurrences.
Environment: Python 2.7.10, Linux Redhat 6.4. The code presented here is a (very) simplified version but I can still reproduce the problem using it.
This is the called script, constant_return.py:
#!/usr/bin/env python2.7
# -*- coding: utf-8 -*-
"""
Simplified called code
"""
import sys

if __name__ == "__main__":
    sys.exit(10)
This is the caller code:
#!/usr/bin/env python2.7
# -*- coding: utf-8 -*-
"""
Simplified version of the calling code
"""
try:
    import sys
    import subprocess
    import threading
except Exception, eImp:
    print "Error while loading Python library : %s" % eImp
    sys.exit(100)

class BizarreProcessing(object):
    """
    Simplified caller class
    """
    def __init__(self):
        """
        Classic initialization
        """
        object.__init__(self)

    def logPipe(self, isStdOut_, process_):
        """
        Simplified log handler
        """
        try:
            if isStdOut_:
                output = process_.stdout
                logfile = open("./log_out.txt", "wb")
            else:
                output = process_.stderr
                logfile = open("./log_err.txt", "wb")
            #Read pipe content as long as the process is running
            while (process_.poll() == None):
                text = output.readline()
                if (text != '' and text.strip() != ''):
                    logfile.write(text)
            #When the process is finished, there might still be lines remaining in the pipe
            output.readlines()
            for oneline in output.readlines():
                if (oneline != None and oneline.strip() != ''):
                    logfile.write(text)
        finally:
            logfile.close()

    def startProcessing(self):
        """
        Launch process
        """
        # Simplified command line definition
        command = "/absolute/path/to/file/constant_return.py"
        # Execute command in a new process
        process = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        #Launch a thread to gather called program stdout and stderr
        #This to avoid a deadlock with pipe filled and such
        stdoutTread = threading.Thread(target=self.logPipe, args=(True, process))
        stdoutTread.start()
        stderrThread = threading.Thread(target=self.logPipe, args=(False, process))
        stderrThread.start()
        #Wait for the end of the process and get process result
        stdoutTread.join()
        stderrThread.join()
        result = process.wait()
        print("returned code: " + str(result))
        #Send it back to the caller
        return (result)

#
# Main
#
if __name__ == "__main__":
    # Execute caller code
    processingInstance = BizarreProcessing()
    aResult = processingInstance.startProcessing()
    #Return the code
    sys.exit(aResult)
Here is what I type in bash to execute the caller script:
for res in {1..100}
do
    /path/to/caller/script.py
    echo $? >> /tmp/returncodelist.txt
done
It seems to be somehow connected to the way I read the called program's outputs, because when I create the subprocess with process = subprocess.Popen(command, shell=True, stdout=sys.stdout, stderr=sys.stderr) and remove all the thread stuff, it reads the correct return code (but doesn't log as I want anymore...)
Any idea what I did wrong?
Thanks a lot for your help
logPipe is also checking whether the process is alive to determine whether there's more data to read. This is not correct - you should be checking whether the pipe has reached EOF, by looking for a zero-length read, or by using output.readlines(). The I/O pipes may outlive the process.
This simplifies logPipe significantly; change logPipe as below:

def logPipe(self, isStdOut_, process_):
    """
    Simplified log handler
    """
    try:
        if isStdOut_:
            output = process_.stdout
            logfile = open("./log_out.txt", "wb")
        else:
            output = process_.stderr
            logfile = open("./log_err.txt", "wb")
        # Read the pipe until EOF instead of polling the process
        with output:
            for text in output:
                if text.strip():  # ... checks if it's not an empty string
                    logfile.write(text)
    finally:
        logfile.close()
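The EOF-driven loop can be sanity-checked against a throwaway child; the child's three print lines here are arbitrary test data:

```python
import subprocess
import sys

proc = subprocess.Popen(
    [sys.executable, "-c", "print('a'); print(''); print('b')"],
    stdout=subprocess.PIPE,
)
collected = []
# Iterate until EOF instead of polling the process; the pipe may outlive it.
with proc.stdout as output:
    for text in output:
        if text.strip():              # skip blank lines
            collected.append(text.strip())
proc.wait()
print(collected)
```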
Second, don't join your logging threads until after process.wait(), for the same reason - the I/O pipes may outlive the process.
What I think is happening under the covers is that there's a SIGPIPE being emitted and mishandled somewhere - possibly being misconstrued as the process termination condition. This is because the pipe is being closed on one end or the other without being flushed. SIGPIPE can sometimes be a nuisance in larger applications; it may be that the Python library swallows it or does something childish with it.
edit As #Blackjack points out, SIGPIPE is automatically blocked by Python. So, that rules out SIGPIPE malfeasance. A second theory though: The documentation behind Popen.poll() states:
Check if child process has terminated. Set and return returncode
attribute.
If you strace this (e.g., strace -f -o strace.log ./caller.py), this appears to be done via wait4(WNOHANG). You've got 2 threads waiting with WNOHANG and one waiting normally, but only one call will return correctly with the process exit code. If there is no lock in the implementation of subprocess.poll(), then there is quite likely a race to assign process.returncode, or a potential failure to do so correctly. Limiting your Popen.wait/poll calls to a single thread should be a good way to avoid this. See man waitpid.
edit As an aside, if you can hold all your stdout/stderr data in memory, subprocess.communicate() is much easier to use and does not require logPipe or background threads at all.
https://docs.python.org/2/library/subprocess.html#subprocess.Popen.communicate
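A communicate()-based version of the caller (shown in Python 3 syntax, with an inline one-liner standing in for constant_return.py) looks like this; communicate() drains both pipes and then reaps the child in one thread, so the return code is read reliably:

```python
import subprocess
import sys

# Inline stand-in for constant_return.py: writes to both streams, exits with 10.
process = subprocess.Popen(
    [sys.executable, "-c",
     "import sys; sys.stdout.write('out'); sys.stderr.write('err'); sys.exit(10)"],
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
)
# Drains stdout and stderr in memory, then waits for the child to exit.
out, err = process.communicate()
print(process.returncode)
```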
I am writing a script in which an external system command may sometimes require user input. I am not able to handle that properly.
The example below demonstrates this problem using the "cp" command ("cp" is only used to illustrate; I am actually calling a different exe which may similarly prompt for a user response in some scenarios). In this example there are two files on disk, and when the user tries to copy file1 to file2, a confirmation message comes up.
proc = subprocess.Popen("cp -i a.txt b.txt", shell=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.STDOUT,)
stdout_val, stderr_val = proc.communicate()
print stdout_val
b.txt?
proc.communicate("y")
Now in this example, if I read only stdout/stderr and print it, and later try to write "y" or "n" based on the user's input, I get an error that the channel is closed.
Can someone please help me achieve this behavior in Python, such that I can print stdout first, then take user input and write to stdin afterwards?
I found another solution (threading) in Non-blocking read on a subprocess.PIPE in python; not sure whether it would help. It appears to print the question from the cp command. I have modified the code, but am not sure how to write to stdin in the threading code.
import sys
from subprocess import PIPE, Popen
from threading import Thread
try:
    from Queue import Queue, Empty
except ImportError:
    from queue import Queue, Empty

ON_POSIX = 'posix' in sys.builtin_module_names

def enqueue_output(out, queue):
    for line in iter(out.readline, b''):
        queue.put(line)
    out.close()

p = Popen(['cp', '-i', 'a.txt', 'b.txt'], stdin=PIPE, stdout=PIPE, bufsize=1, close_fds=ON_POSIX)
q = Queue()
t = Thread(target=enqueue_output, args=(p.stdout, q))
t.start()

try:
    line = q.get_nowait()
except Empty:
    print('no output yet')
else:
    pass
Popen.communicate will run the subprocess to completion, so you can't call it more than once. You could use the stdin and stdout attributes directly, although that's risky as you could deadlock if the process uses block buffering or the buffers fill up:
stdout_val = proc.stdout.readline()
print stdout_val
proc.stdin.write('y\n')
As there is a risk of deadlock and because this may not work if the process uses block buffering, you would do well to consider using the pexpect package instead.
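Sticking with the raw pipes, the prompt-then-answer dance looks like the sketch below. The child here is an inline stand-in for cp -i (its exact prompt text is made up), and the parent reads byte-wise until the "? " marker so it doesn't block on a prompt that has no trailing newline:

```python
import subprocess
import sys

# Stand-in for an interactive program like `cp -i` (hypothetical prompt text).
child_code = ("import sys\n"
              "sys.stdout.write('overwrite b.txt? ')\n"
              "sys.stdout.flush()\n"
              "answer = sys.stdin.readline().strip()\n"
              "sys.stdout.write('got: ' + answer + '\\n')\n")

proc = subprocess.Popen([sys.executable, "-c", child_code],
                        stdin=subprocess.PIPE, stdout=subprocess.PIPE)

# Read the prompt byte-wise until '? ' so we don't block on a missing newline.
prompt = b""
while not prompt.endswith(b"? "):
    b1 = proc.stdout.read(1)
    if not b1:          # EOF guard: child exited without the expected prompt
        break
    prompt += b1
print(prompt.decode())

proc.stdin.write(b"y\n")      # answer the prompt
proc.stdin.flush()
rest, _ = proc.communicate()  # read the remainder and reap the child
print(rest.decode())
```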
I don't have a technical answer to this question, just a workaround. It has something to do with the way the process waits for the input: once you communicate with the process, a None input is enough to close the process.
For your cp example, what you can do is check the return code immediately with proc.poll(). If the return value is None, you might assume it is trying to wait for input and can ask your user a question. You can then pass the response to the process via proc.communicate(response). It will then pass the value and proceed with the process.
Maybe someone else can chime in with a more technical reason why an initial communicate with a None value closes the process.