Get output from a program with the user input that program takes - python

I'm trying to capture the output of a subprocess as a string and, whenever the subprocess asks for user input, include that input in the string as well, but I can't get stdout to work.
I got the string output from stdout using a while loop, but I don't know how to break out of the loop once the output has been read.
I tried using subprocess.check_output, but then I can't see the prompts for user input.
import subprocess
import sys

child = subprocess.Popen(["java", "findTheAverage"], stdout=subprocess.PIPE, stdin=subprocess.PIPE)
string = u""

while True:
    line = str(child.stdout.read(1))
    if line != '':
        string += line[2]
        print(string)
    else:
        break

print(string)

for line in sys.stdin:
    print(line)
    child.stdin.write(bytes(line, 'utf-8'))
EDIT:
With help and code from Alfe's post I now have a string being built from the subprocess's output and the user's input to that program, but it's jumbled.
The string appears to get the first letter of the output, then the user input, then the rest of the output.
Example of string muddling:
U2
3ser! please enter a double:U
4ser! please enter another double: U
5ser! please enter one final double: Your numbers were:
a = 2.0
b = 3.0
c = 4.0
average = 3.0
Is meant to be:
User! please enter a double:2
User! please enter another double: 3
User! please enter one final double: 4
Your numbers were:
a = 2.0
b = 3.0
c = 4.0
average = 3.0
Using the code:
import subprocess
import sys
import signal
import select

def signal_handler(signum, frame):
    raise Exception("Timed out!")

child = subprocess.Popen(["java", "findTheAverage"], universal_newlines=True, stdout=subprocess.PIPE, stdin=subprocess.PIPE)
string = u""
stringbuf = ""

while True:
    print(child.poll())
    if child.poll() != None and not stringbuf:
        break
    signal.signal(signal.SIGALRM, signal_handler)
    signal.alarm(1)
    try:
        r, w, e = select.select([child.stdout, sys.stdin], [], [])
        if child.stdout in r:
            stringbuf = child.stdout.read(1)
            string += stringbuf
            print(stringbuf)
    except:
        print(string)
        print(stringbuf)
    if sys.stdin in r:
        typed = sys.stdin.read(1)
        child.stdin.write(typed)
        string += typed
FINAL EDIT:
Alright, I played around with it and got it working with this code:
import subprocess
import sys
import select
import fcntl
import os

# the string that we will return, filled with tasty program output and user input #
string = ""

# the subprocess running the program #
child = subprocess.Popen(["java", "findTheAverage"], bufsize=0, universal_newlines=True, stdout=subprocess.PIPE, stdin=subprocess.PIPE)

# stuff to stop IO blocks in child.stdout and sys.stdin ## (I stole it from http://stackoverflow.com/a/8980466/2674170)
fcntl.fcntl(child.stdout.fileno(), fcntl.F_SETFL, os.O_NONBLOCK)
fcntl.fcntl(sys.stdin.fileno(), fcntl.F_SETFL, os.O_NONBLOCK)

# this is here in the unlikely event that the program has #
# finished by the time the main loop is first running,    #
# because if that happened the loop would end without     #
# having added the program's output to the string!        #
progout = ""
typedbuf = "#"

### here we have the main loop; this friendly fellah is
### going to read from the program and the user, and tell
### each other what needs to be known
while True:
    ## stop when the program finishes and there is no more output
    if child.poll() != None and not progout:
        break

    # read from the user
    typed = ""
    while typedbuf:
        try:
            typedbuf = sys.stdin.read(1)
        except:
            break
        typed += typedbuf
    stringbuf = "#"
    string += typed
    child.stdin.write(typed)

    # read from the program
    progout = ""
    progoutbuf = "#"
    while progoutbuf:
        try:
            progoutbuf = child.stdout.read(1)
        except:
            typedbuf = "#"
            break
        progout += progoutbuf
    if progout:
        print(progout)
        string += progout

# the final output string #
print(string)

You need select to read from more than one source at the same time (in your case stdin and the output of the child process).
import select

string = ''
while True:
    r, w, e = select.select([child.stdout, sys.stdin], [], [])
    if child.stdout in r:
        string += child.stdout.read()
    if sys.stdin in r:
        typed = sys.stdin.read()
        child.stdin.write(typed)
        string += typed
You will still need to find a proper breaking condition to leave that loop. But you probably get the idea already.
I want to give a warning at this point: processes writing into pipes typically buffer their output until the latest possible moment; you might not expect this because, when testing the same program from the command line (in a terminal), typically only lines get buffered. This is a performance trade-off: when writing to a terminal, a user expects to see the output as soon as possible, whereas a process reading from a pipe is typically happy to receive larger chunks so it can sleep longer before they arrive.
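If the child's own buffering turns out to be the problem, one common workaround on Linux is to wrap the command so its stdio runs unbuffered. A minimal sketch, assuming coreutils' stdbuf is available; programs that do not use C stdio (the JVM, for example) may ignore it, in which case a pseudo-terminal (the pty module) is the usual fallback:

import subprocess

# "stdbuf -o0" asks the wrapped program to run with unbuffered stdout;
# the rest is the same java invocation as above.
child = subprocess.Popen(
    ["stdbuf", "-o0", "java", "findTheAverage"],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    universal_newlines=True,
)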

Related

How to print number of characters based on terminal width that also resize?

Not sure if it's possible, but I was hoping to do something where I can print a line of hyphens across the width of the terminal. If the window's width is resized, the number of hyphens displayed would adjust accordingly.
This is a more elaborate version that allows printing whatever you want, always according to the dimensions of the terminal. You can also resize the terminal while nothing is being printed and the content will be resized accordingly.
I commented the code a little bit... but if you need, I can be more explicit.
#!/usr/bin/env python2
import threading
import Queue
import time
import sys
import subprocess
from backports.shutil_get_terminal_size import get_terminal_size

printq = Queue.Queue()
interrupt = False
lines = []

def main():
    ptt = threading.Thread(target=printer)  # Turn the printer on
    ptt.daemon = True
    ptt.start()
    # Stupid example of stuff to print
    for i in xrange(1, 100):
        printq.put(' '.join([str(x) for x in range(1, i)]))  # The actual way to send stuff to the printer
        time.sleep(.5)

def split_line(line, cols):
    if len(line) > cols:
        new_line = ''
        ww = line.split()
        i = 0
        while len(new_line) <= (cols - len(ww[i]) - 1):
            new_line += ww[i] + ' '
            i += 1
        print len(new_line)
        if new_line == '':
            return (line, '')
        return (new_line, ' '.join(ww[i:]))
    else:
        return (line, '')

def printer():
    while True:
        cols, rows = get_terminal_size()  # Get the terminal dimensions
        msg = '#' + '-' * (cols - 2) + '#\n'  # Create the top border line
        try:
            new_line = str(printq.get_nowait())
            if new_line != '!##EXIT##!':  # A nice way to turn the printer
                                          # thread off gracefully
                lines.append(new_line)
                printq.task_done()
            else:
                printq.task_done()
                sys.exit()
        except Queue.Empty:
            pass

        # Build the new message to show and split too long lines
        for line in lines:
            res = line  # The following is to split lines which are
                        # longer than cols.
            while len(res) != 0:
                toprint, res = split_line(res, cols)
                msg += '\n' + toprint

        # Clear the shell and print the new output
        subprocess.check_call('clear')  # Keep the shell clean
        sys.stdout.write(msg)
        sys.stdout.flush()
        time.sleep(.5)

if __name__ == '__main__':
    main()
Check this out (it worked on Windows and Python 3):
import os
os.system('mode con: cols=100 lines=40')
input("Press any key to continue...")
os.system('mode con: cols=1000 lines=400')
input("Press any key to continue...")
This is doing exactly what you asked for... with a very small issue: when you make the shell smaller, the cursor goes down one line and the content above stays there... I can try to solve this issue... but the result will be more complicated.
I assumed you are using a unix system.
The code uses threads to be able to keep the line on the screen while doing other things, in this case just sleeping... Moreover, only by using a thread is it actually possible to react quickly to a change in the terminal's dimensions.
#!/usr/bin/env python2
import threading
import time
import sys
from backports.shutil_get_terminal_size import get_terminal_size

def main1():
    ptt = threading.Thread(target=printer2)
    ptt.daemon = True
    ptt.start()
    time.sleep(10)

def printer2():
    while True:
        cols, rows = get_terminal_size()
        line = '-' * (cols - 2)
        sys.stdout.write("\r" + '#' + line + '#')
        sys.stdout.flush()
        time.sleep(.5)
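For what it's worth, on Python 3.3+ the same terminal query is available in the standard library as shutil.get_terminal_size, so the backports package is not needed. A minimal sketch of the redraw loop under that assumption:

import shutil
import sys
import time

# Redraw a full-width ruler twice a second; resizing the terminal changes
# the number of hyphens on the next refresh.
while True:
    cols, rows = shutil.get_terminal_size()
    sys.stdout.write("\r" + "#" + "-" * (cols - 2) + "#")
    sys.stdout.flush()
    time.sleep(0.5)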

Why is this script slowing down per item with increased amount of input?

Consider the following program:
#!/usr/bin/env pypy

import json
import cStringIO
import sys

def main():
    BUFSIZE = 10240
    f = sys.stdin
    decoder = json.JSONDecoder()
    io = cStringIO.StringIO()
    do_continue = True
    while True:
        read = f.read(BUFSIZE)
        if len(read) < BUFSIZE:
            do_continue = False
        io.write(read)
        try:
            data, offset = decoder.raw_decode(io.getvalue())
            print(data)
            rest = io.getvalue()[offset:]
            if rest.startswith('\n'):
                rest = rest[1:]
            decoder = json.JSONDecoder()
            io = cStringIO.StringIO()
            io.write(rest)
        except ValueError, e:
            #print(e)
            #print(repr(io.getvalue()))
            continue
        if not do_continue:
            break

if __name__ == '__main__':
    main()
And here's a test case:
$ yes '{}' | pv | pypy parser-test.py >/dev/null
As you can see, the script slows down as more input is fed to it. This also happens with CPython. I tried to profile the script using mprof and cProfile, but I found no hint as to why that is. Does anybody have a clue?
Apparently the string operations slowed it down. Instead of:
data, offset = decoder.raw_decode(io.getvalue())
print(data)
rest = io.getvalue()[offset:]
if rest.startswith('\n'):
    rest = rest[1:]
It is better to do:
data, offset = decoder.raw_decode(io.read())
print(data)
rest = io.getvalue()[offset:]
io.truncate()
io.write(rest)
if rest.startswith('\n'):
    io.seek(1)
You may want to close your StringIO at the end of the iteration (after writing).
io.close()
The memory buffer for a StringIO is freed once it is closed, but stays allocated otherwise. This would explain why each additional input slows your script down.
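For comparison, here is a minimal Python 3 sketch of the same streaming idea that keeps a plain string buffer and trims it once per decoded document instead of rebuilding a StringIO each time; the name stream_decode is illustrative, not from the thread:

import json
import sys

def stream_decode(stream, bufsize=10240):
    # Yield one JSON document at a time from a stream of concatenated,
    # newline-separated documents (as produced by `yes '{}'`).
    decoder = json.JSONDecoder()
    buf = ""
    while True:
        chunk = stream.read(bufsize)
        buf += chunk
        while buf:
            try:
                data, offset = decoder.raw_decode(buf)
            except ValueError:
                break  # incomplete document, read more input
            yield data
            buf = buf[offset:].lstrip("\n")
        if not chunk:
            return

if __name__ == "__main__":
    for obj in stream_decode(sys.stdin):
        print(obj)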

Python tkinter, do something before subprocess starts

I'm really having a problem with my Python tkinter program. Basically, all I want to do is press a button that starts a subprocess and indicate that the subprocess is running by changing a label's value. The subprocess takes some time, and the problem is that the label always waits for the subprocess to finish before it changes, which I don't understand, because I use a variable to first change the label and only then go on with the subprocess. Here is the code:
def program_final():
    start = False
    while True:
        if start == False:
            v.set("scanning...")
            label.pack()
            start = True
        else:
            # p = subprocess.Popen('sudo nfc-poll', shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)  # opens a subprocess which starts nfc-polling in the background
            p = subprocess.Popen('ping 8.8.8.8', shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
            counter = 0
            output = ""
            lines = []
            while True:
                line = p.stdout.readline()   # this while loop iterates through the output lines of the
                lines.insert(counter, line)  # subprocess and saves the whole result into a list;
                counter = counter + 1        # the result will be needed to set output
                if counter == 9:
                    break
            if lines[6][7:10] == 'UID':    # check if a UID line is present
                output = output + "Tag found!\n" + lines[6][7:]  # if yes, the UID of the tag is appended to the output string
            elif lines[6][7:10] != 'UID':  # if the UID line is not present, meaning no tag was found,
                output = output + "No tag found!\n"  # the output is set to "no tag found"
            text.delete(1.0, END)        # old tag info is deleted from the text field
            text.insert(INSERT, output)  # tag info or the 'no tag found' message is added to the text field
            break
Thanks in advance.
Tkinter updates the graphics only when there is nothing else to do. You can force the GUI to refresh by calling the update method on the widgets you really want to update.
tk.update()
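A minimal, self-contained sketch of that idea in Python 3 follows; the names v and label mirror the question, and ping stands in for nfc-poll. The point is to set the label, force a redraw with update, and only then start the blocking work:

import subprocess
import tkinter as tk

root = tk.Tk()
v = tk.StringVar()
label = tk.Label(root, textvariable=v)
label.pack()

def scan():
    v.set("scanning...")
    label.update()  # force Tk to redraw the label before we block
    p = subprocess.Popen('ping -c 4 8.8.8.8', shell=True,
                         stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    out, _ = p.communicate()  # blocks; a thread or root.after would keep the GUI responsive
    v.set("done")

tk.Button(root, text="Scan", command=scan).pack()
root.mainloop()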

Pythonic way of passing values between processes

I need a simple way to pass the stdout of a subprocess as a list to another function using multiprocessing:
The first function that invokes subprocess:
def beginRecvTest():
    command = ["receivetest", "-f=/dev/pcan33"]
    incoming = Popen(command, stdout=PIPE)
    processing = iter(incoming.stdout.readline, "")
    lines = list(processing)
    return lines
The function that should receive lines:
def readByLine(lines):
    i = 0
    while (i < len(lines)):
        system("clear")
        if (lines[i][0].isdigit()):
            line = lines[i].split()
            dictAdd(line)
        else:
            next
        print; print "-" * 80
        for _i in mydict.keys():
            printMsg(mydict, _i)
        print "Keys: ",; print mydict.keys()
        print; print "-" * 80
        sleep(0.3)
        i += 1
and the main from my program:
if __name__ == "__main__":
    dataStream = beginRecvTest()
    p = Process(target=dataStream)
    reader = Process(target=readByLine, args=(dataStream,))
    p.start()
    reader.start()
I've read up on using queues, but I don't think that's exactly what I need.
The subprocess keeps returning data indefinitely, so some people have suggested using tempfile, but I am totally confused about how to do this.
At the moment the script only returns the first line read, and all attempts at looping the beginRecvTest() function have ended in errors.
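No answer is shown here, but a multiprocessing.Queue is one straightforward way to stream the lines between processes instead of collecting them into a list first. A minimal sketch; the receivetest command is taken from the question, everything else is illustrative:

from multiprocessing import Process, Queue
from subprocess import Popen, PIPE

def producer(q):
    # Read the subprocess line by line and hand each line to the consumer.
    incoming = Popen(["receivetest", "-f=/dev/pcan33"], stdout=PIPE)
    for line in iter(incoming.stdout.readline, b""):
        q.put(line)
    q.put(None)  # sentinel: no more data

def consumer(q):
    while True:
        line = q.get()
        if line is None:
            break
        print(line)

if __name__ == "__main__":
    q = Queue()
    Process(target=producer, args=(q,)).start()
    consumer(q)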

Broken Pipe - Trying to display progress of dd on LCD display

I am trying to use Python to create a tool for imaging CF cards with a Raspberry Pi.
I had most of it working until I implemented compressed images with dd.
When I try to pipe the output of gzip to dd I lose the ability to poke the dd process and get a progress report.
I have tried using multiple subprocesses but keep getting broken-pipe or no-such-file errors.
Below is my code:
#!/usr/bin/env python
from Adafruit_CharLCD import Adafruit_CharLCD
import os
import sys
import time
import signal
from subprocess import Popen, PIPE

lcd = Adafruit_CharLCD()
lcd.begin(16, 2)
imgpth = '/home/pi/image/image_V11.img.gz'
line0 = ""
line1 = ""
q = 0
r = 0
s = 0

def lcdPrint(column, row, message, clear=False):
    if (clear == True):
        lcd.clear()
    lcd.setCursor(column, row)
    lcd.message(message)

lcd.clear()
lcdPrint(0, 0, 'Preparing Copy', True)
lcdPrint(0, 1, '')

gz = Popen(['gunzip -c /home/pi/image/image_V11.img.gz'], stdout=PIPE)
dd = Popen(['dd of=/dev/sda'], stdin=gz.stdout, stderr=PIPE)
filebyte = os.path.getsize(imgpth)
flsz = filebyte / 1024000

while dd.poll() is None:
    time.sleep(1)
    dd.send_signal(signal.SIGUSR1)
    while 1:
        l = dd.stderr.readline()
        if '(' in l:
            param, value = l.split('b', 1)
            line1 = param.rstrip()
            r = float(line1)
            s = r / 1024000
            break
    lcdPrint(0, 0, 'Copying....', True)
    q = round(s / flsz * 100, 2)
    per = str(q)
    lcdPrint(0, 1, per + '% Complete',)

lcdPrint(0, 0, 'Copy Complete', True)
time.sleep(1)
exit()
How can I fix this?
I stumbled across this question because I am doing exactly the same. My complete solution is here:
http://github.com/jrmhaig/Bakery
I've tried to pick out some differences between my code and yours that might point you to the solution.
When starting dd I redirected both stderr and stdout to the pipe:
dd = subprocess.Popen(['dd', 'of=/dev/sda', 'bs=1M'], bufsize=1, stdin=unzip.stdout, stdout=PIPE, stderr=STDOUT)
I don't think this should really make a difference. Everything you need should go to stderr but for some reason it appeared to get mixed up for me.
I use a separate thread to pick up the output from dd:
def read_pipe(out, queue):
    for line in iter(out.readline, b''):
        queue.put(str(line))
    out.close()

dd_queue = queue.Queue()

dd_thread = threading.Thread(target=read_pipe, args=(dd.stdout, dd_queue))
dd_thread.daemon = True
dd_thread.start()
Then when you call:
dd.send_signal(signal.SIGUSR1)
the output gets caught on dd_queue.
I also found that the uncompressed size of a gzipped file is stored in its last 4 bytes:
fl = open(str(imgpath), 'rb')
fl.seek(-4, 2)
r = fl.read()
fl.close()
size = struct.unpack('<I', r)[0]
os.path.getsize(imgpth) will only give you the compressed size so the percentage calculation will be wrong.
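Putting those pieces together, here is a rough sketch of how progress could be polled once the reader thread above is feeding dd_queue. It assumes GNU dd on Linux and Python 3, and the parsing of the status line is illustrative, since dd's exact wording varies between versions:

import queue
import signal
import time

def poll_progress(dd, dd_queue, total_bytes):
    # Ask dd to print its status, give the reader thread a moment to catch
    # it, then scan the queued lines for the "... bytes ... copied ..." one.
    dd.send_signal(signal.SIGUSR1)
    time.sleep(0.5)
    copied = None
    while True:
        try:
            line = dd_queue.get_nowait()
        except queue.Empty:
            break
        if 'bytes' in line and 'copied' in line:
            copied = int(line.split()[0])  # leading token is the byte count
    if copied is None:
        return None
    return round(copied / total_bytes * 100, 2)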
