I'm trying to capture the stdout of a Popen object while it's running, display this data in a GUI, and log it. However, whenever I try to read from the stdout attribute, my program freezes. Minimal working code below. 'here' prints, then the process's string representation, but then it hangs when it tries to read the first byte of stdout. Why is this the case?
Main script:
import subprocess
import os
import sys
from threading import Thread

def print_to_terminal(process):
    print(process)
    print(process.stdout.read(1), flush=True)
    sys.stdout.flush()

runner = subprocess.Popen(['python', 'print_and_wait.py'], env=os.environ, stdout=subprocess.PIPE)
print('here')
t = Thread(target=print_to_terminal, args=[runner]).run()
print('there')
runner.wait()
Script Popen is calling:
from time import sleep

for _ in range(10):
    print('hello')
    sleep(1)
After comments: this did work once I added a flush to the print call in print_and_wait.py. See below.
from time import sleep

for _ in range(10):
    print('hello', flush=True)
    sleep(1)
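The root cause of the hang, as the comments pointed out, is the child's block-buffered stdout: until the buffer fills or is flushed, there is nothing in the pipe for read(1) to return. One more thing worth flagging in the main script: Thread.run() executes the target in the calling thread; you need Thread.start() to get an actual background thread. A minimal sketch combining both fixes, using python -u to force unbuffered child output instead of editing the child:

import os
import subprocess
import sys
from threading import Thread

def print_to_terminal(process):
    print(process.stdout.read(1), flush=True)

# -u makes the child's stdout unbuffered, so bytes arrive as they are printed
runner = subprocess.Popen([sys.executable, '-u', 'print_and_wait.py'],
                          env=os.environ, stdout=subprocess.PIPE)
t = Thread(target=print_to_terminal, args=[runner])
t.start()  # start() runs the target on a new thread; run() would block here
runner.wait()
t.join()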
Related
Right now, I'm using subprocess to run a long-running job in the background. For multiple reasons (PyInstaller + AWS CLI) I can't use subprocess anymore.
Is there an easy way to achieve the same thing as below? Running a long-running Python function in a multiprocessing pool (or something else) and doing real-time processing of stdout/stderr?
import subprocess

process = subprocess.Popen(
    ["python", "long-job.py"],
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
    shell=True,
)
while True:
    out = process.stdout.read(2000).decode()
    if not out:
        err = process.stderr.read().decode()
    else:
        err = ""
    if (out == "" or err == "") and process.poll() is not None:
        break
    live_stdout_process(out)
Thanks
Getting this cross-platform is messy: first of all, the Windows implementation of non-blocking pipes is neither user-friendly nor portable.
One option is to just have your application read its command line arguments and conditionally execute a file; that way you get to keep using subprocess, since you will be launching yourself with different arguments.
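A minimal sketch of that "launch yourself" option (the --run-job flag is an illustrative name, not from the original question):

import subprocess
import sys

if '--run-job' in sys.argv:
    # child mode: do the long-running work here
    print("hello from the job")
    sys.exit(0)

# parent mode: relaunch this same script in child mode
# (a frozen PyInstaller app would launch sys.executable directly)
proc = subprocess.Popen([sys.executable, __file__, '--run-job'],
                        stdout=subprocess.PIPE)
for line in iter(proc.stdout.readline, b''):
    print(line.decode(), end='')  # real-time processing goes here
proc.wait()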
But to keep it to multiprocessing:
The output must be logged to queues instead of pipes.
You need the child to execute a Python file; this can be done using runpy to execute the file as __main__.
This runpy call should run under a multiprocessing child, and that child must first redirect its stdout and stderr in the initializer.
When an error happens, your main application must catch it; but if it is too busy reading the output, it won't be able to wait for the error, so a child thread has to start the multiprocessing pool and wait for the error.
The main process has to create the queues, launch the child thread, and read the output.
Putting it all together:
import multiprocessing
from multiprocessing import Queue
import sys
import concurrent.futures
import threading
import traceback
import runpy
import time

class StdoutQueueWrapper:
    def __init__(self, queue: Queue):
        self._queue = queue
    def write(self, text):
        self._queue.put(text)
    def flush(self):
        pass

def function_to_run():
    # runpy.run_path("long-job.py", run_name="__main__")  # run long-job.py
    print("hello")    # print something
    raise ValueError  # error out

def initializer(stdout_queue: Queue, stderr_queue: Queue):
    sys.stdout = StdoutQueueWrapper(stdout_queue)
    sys.stderr = StdoutQueueWrapper(stderr_queue)

def thread_function(child_stdout_queue, child_stderr_queue):
    with concurrent.futures.ProcessPoolExecutor(
            1, initializer=initializer,
            initargs=(child_stdout_queue, child_stderr_queue)) as pool:
        result = pool.submit(function_to_run)
        try:
            result.result()
        except Exception:
            child_stderr_queue.put(traceback.format_exc())

if __name__ == "__main__":
    child_stdout_queue = multiprocessing.Queue()
    child_stderr_queue = multiprocessing.Queue()
    child_thread = threading.Thread(
        target=thread_function,
        args=(child_stdout_queue, child_stderr_queue),
        daemon=True)
    child_thread.start()
    while True:
        while not child_stdout_queue.empty():
            var = child_stdout_queue.get()
            print(var, end='')
        while not child_stderr_queue.empty():
            var = child_stderr_queue.get()
            print(var, end='')
        if not child_thread.is_alive():
            break
        time.sleep(0.01)  # check output every 0.01 seconds
Note that a direct consequence of running under multiprocessing is that if the child runs into a segmentation fault or some other unrecoverable error, the parent will also die; hence running yourself under subprocess might be a better option if segfaults are expected.
I have two Python files:
a.py:
import subprocess, time, os, signal
myprocess = subprocess.Popen("b.py", shell=True)
time.sleep(2)
os.kill(myprocess.pid, signal.SIGTERM)
b.py:
import atexit

def cleanup():
    print "Cleaning up things before the program exits..."

atexit.register(cleanup)

print "Hello world!"
while True:
    pass
a.py spawns b.py and kills the process after 2 seconds. The problem is that I want the cleanup function in b.py to be called before it gets killed, but I can't get it to work.
I also tried SIGKILL and SIGINT in the os.kill function but neither worked for me.
Current output (a.py):
Hello world!
(2 seconds later, program ends)
Expected output (a.py):
Hello world!
(2 seconds later)
Cleaning up things before the program exits...
(program ends)
Use a different signal for the Windows platform: signal.CTRL_C_EVENT.
Put some more sleep into a.py, otherwise the child process does not get a chance to clean up before the parent process exits:
import subprocess, time, os, signal
myprocess = subprocess.Popen("b.py", shell=True)
time.sleep(2)
os.kill(myprocess.pid, signal.CTRL_C_EVENT)
time.sleep(2)
I also want to discourage you from using the shell if you don't actually need shell features:
import subprocess, time, os, signal, sys
myprocess = subprocess.Popen([sys.executable, "b.py"])
Linux/macOS users: signal.CTRL_C_EVENT doesn't exist, you want signal.SIGINT.
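A related note: on POSIX, atexit handlers do not run when a process dies from an unhandled signal, and SIGKILL cannot be handled at all. A minimal sketch of b.py that converts SIGTERM into a clean interpreter exit so the handler fires:

import atexit
import signal
import sys

def cleanup():
    print "Cleaning up things before the program exits..."

atexit.register(cleanup)
# sys.exit() raises SystemExit in the main thread, which lets atexit run
signal.signal(signal.SIGTERM, lambda signum, frame: sys.exit(0))

print "Hello world!"
while True:
    pass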
Why does communicate kill my process? I want an interactive process, but communicate does something that prevents me from using raw_input in my process any more.
from sys import stdin
from threading import Thread
from time import sleep

if __name__ == '__main__':
    print("Still Running\n")
    x = raw_input()
    i = 0
    while ('n' not in x):
        print("Still Running " + str(i) + " \r\n")
        x = raw_input()
        i += 1
    print("quit")
print(aSubProc.theProcess.communicate('y'))
print(aSubProc.theProcess.communicate('y'))
The exception:
self.stdin.write(input)
ValueError: I/O operation on closed file
The communicate and wait methods of Popen objects close the pipes after the process returns. If you want to stay in communication with the process, try something like this:
import subprocess
proc = subprocess.Popen("some_process", stdout=subprocess.PIPE, stdin=subprocess.PIPE)
proc.stdin.write("input")
proc.stdout.readline()
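One caveat with that snippet: writes to proc.stdin are buffered too, and the child's readline needs a newline to return. A slightly expanded sketch (some_process is a placeholder, as above):

import subprocess

proc = subprocess.Popen(["some_process"],
                        stdout=subprocess.PIPE, stdin=subprocess.PIPE)
proc.stdin.write("input\n")  # include the newline the child's readline waits for
proc.stdin.flush()           # push the buffered bytes into the pipe
print proc.stdout.readline(),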
Why does communicate kill my process?
From the docs for Popen.communicate(input=None, timeout=None):
Interact with process: Send data to stdin. Read data from stdout and
stderr, until end-of-file is reached. Wait for process to terminate.
emphasis mine
You may call .communicate() only once. It means that you should provide all input at once:
#!/usr/bin/env python
import os
import sys
from subprocess import Popen, PIPE
p = Popen([sys.executable, 'child.py'], stdin=PIPE, stdout=PIPE)
print p.communicate(os.linesep.join('yyn'))[0]
Output
Still Running
Still Running 0
Still Running 1
quit
Notice the doubled newlines: one from the '\r\n' and another from the print statement itself in your script for the child process.
Output shows that the child process received three input lines successfully ('y', 'y', and 'n').
Here's similar code using subprocess.check_output()'s input parameter, available since Python 3.4:
#!/usr/bin/env python3.4
import os
import sys
from subprocess import check_output
output = check_output(['python2', 'child.py'], universal_newlines=True,
input='\n'.join('yyn'))
print(output, end='')
It produces the same output.
If you want to provide different input depending on responses from the child process, then use the pexpect module or its analogs to avoid the issues mentioned in Why not just use a pipe (popen())?
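For illustration, a minimal pexpect sketch driving the child script above (the expected strings are taken from its printed output):

import pexpect

child = pexpect.spawn('python2 child.py')
child.expect('Still Running')   # wait for the first prompt
child.sendline('y')             # keep the loop going
child.expect('Still Running 0')
child.sendline('n')             # 'n' makes the loop exit
child.expect('quit')
child.wait()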
I have a problem with subprocess code. The subprocess.Popen() works fine, but when I try to read its output through stdout.read() there is no value to read.
import os
import signal
import subprocess
import threading
import sys
import commands

print commands.getoutput("hcitool dev")
print 'down'
commands.getoutput('hciconfig hci1 down')
print 'up'
commands.getoutput('hciconfig hci1 up')
commands.getoutput('killall hcitool')
stop = False
ping = subprocess.call('hcitool lescan', shell = False,
                       stdout=subprocess.PIPE, executable='/bin/bash')
for i in ping.stdout:
    print i

def kill():
    global stop
    stop = True
    os.kill(ping.pid, signal.SIGTERM)

threading.Timer(5, kill).start()
#while not stop:
#    print 'now in while not loop'
#    sys.stdout.write(ping.stdout.read(1))
print 'trying to print stdout'
out, err = ping.communicate()
print "out", out
#result = out.decode()
print "Result : ", result
This code works fine and produces output when I change hcitool lescan to ping www.google.com, but when I try it with hcitool lescan it either hangs forever or produces no output. Help is appreciated!
None of the above answers worked for me; it hung up forever in the hcitool scan. So finally I wrote a shell script and called it from my Python code. This is working fine for me, and I am reading the output from the file "result.txt".
hcitool lescan>result.txt &
sleep 5
pkill --signal SIGINT hcitool
There are multiple errors in your code: e.g., subprocess.call() returns an integer (the exit status of the program) and an integer has no .stdout attribute; also, the combination of shell=False and a non-None executable is very rarely useful (and it is probably used incorrectly in this case).
The simplest way to fix the code is to use check_output():
from subprocess import check_output as qx
output = qx(["hcitool", "lescan"]) # get all output at once
print output,
As an alternative, you could print program's output line by line as soon as its stdout is flushed:
from subprocess import Popen, PIPE

proc = Popen(["hcitool", "lescan"], stdout=PIPE, bufsize=1) # start process
for line in iter(proc.stdout.readline, b''): # read output line-by-line
    print line,
# reached EOF, nothing more to read
proc.communicate() # close `proc.stdout`, wait for child process to terminate
print "Exit status", proc.returncode
To kill a subprocess, you could use its .kill() method e.g.:
from threading import Timer

def kill(process):
    try:
        process.kill()
        process.wait() # to avoid zombies
    except OSError:    # ignore errors
        pass

Timer(5, kill, [proc]).start() # kill in 5 seconds
Thank you very much, but the problem is that hcitool lescan never stops, and hence it hangs on the very next line of your code.
I found a similar solution; here it is. It works fine and I don't have to kill the subprocess. It takes some extra time to produce output, but the following code works precisely:
from os import kill
import signal
import subprocess
import threading
import tempfile
import sys
import time
from tempfile import TemporaryFile
import commands

t = TemporaryFile()
global pipe_output

print commands.getoutput("hcitool dev")
print 'down'
commands.getoutput('hciconfig hci0 down')
print 'up'
commands.getoutput('hciconfig hci0 up')
print commands.getoutput("hcitool dev")
commands.getoutput('killall hcitool')

p = subprocess.Popen('hcitool lescan', bufsize=0, shell=True,
                     stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
time.sleep(10)
#os.kill(p.pid, signal.SIGTERM)
for i in range(0, 30, 1):
    print 'for'
    inchar = p.stdout.readline()
    i += 1
    if inchar:
        print 'loop num:', i
        print str(inchar)
        t.write(str(inchar))
print 'out of loop'
t.seek(0)
print t.read()
Any help on how to reduce the waiting time, other than just changing time.sleep(), is appreciated.
Thank you all.
Use the Popen class instead of the call function. hcitool lescan will run forever; subprocess.call waits for the call to finish before returning, while Popen does not wait.
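A minimal sketch of that Popen-based approach (hcitool lescan never exits on its own, so the reader stops it explicitly after a bounded number of lines):

import subprocess

p = subprocess.Popen(['hcitool', 'lescan'], stdout=subprocess.PIPE)
for _ in range(30):
    line = p.stdout.readline()  # blocks until lescan prints a line
    if not line:
        break
    print line,
p.kill()  # lescan never terminates by itself
p.wait()  # reap the child to avoid a zombie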
I have a simple python program:
test.py:
import time

for i in range(100000):
    print i
    time.sleep(0.5)
I want to use another program that executes the above one in order to read the last line of output while the above program is counting.
import subprocess
from time import sleep

process = subprocess.Popen(["python", "test.py"], stdout=subprocess.PIPE)
sleep(20) # sleeps an arbitrary time
print process.stdout.readlines()[-1]
The problem is that process.stdout.readlines() waits until test.py finishes execution.
Is there any way to read the last line that has been written to the output while the program is executing?
You could use collections.deque to save only the last specified number of lines:
#!/usr/bin/env python
import collections
import subprocess
import time
import threading

def read_output(process, append):
    for line in iter(process.stdout.readline, ""):
        append(line)

def main():
    process = subprocess.Popen(["program"], stdout=subprocess.PIPE)
    # save last `number_of_lines` lines of the process output
    number_of_lines = 1
    q = collections.deque(maxlen=number_of_lines)
    t = threading.Thread(target=read_output, args=(process, q.append))
    t.daemon = True
    t.start()
    #
    time.sleep(20)
    # print saved lines
    print ''.join(q),
    # process is still running
    # uncomment if you don't want to wait for the process to complete
    ##process.terminate() # if it doesn't terminate; use process.kill()
    process.wait()

if __name__ == "__main__":
    main()
See other tail-like solutions that print only the last portion of the output.
See here if your child program uses block buffering (instead of line buffering) for its stdout while running non-interactively.
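One common workaround for the block-buffering case on Linux, assuming the child buffers through C stdio, is GNU coreutils' stdbuf wrapper (it won't help for every runtime):

import subprocess

# ask the C stdio layer to line-buffer the child's stdout
process = subprocess.Popen(["stdbuf", "-oL", "program"],
                           stdout=subprocess.PIPE)
for line in iter(process.stdout.readline, b""):
    print line,  # lines now arrive as they are printed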
Fairly trivial with sh.py:
import sh

def process_line(line):
    print line

process = sh.python("test.py", _out=process_line)
process.wait()