In your Documents folder, create a folder named temp:
/My Documents/temp
Save these few lines as a Python script named worker.py:
import time
from datetime import datetime

for i in range(10):
    print '%s...working on iteration %s' % (datetime.now(), i)
    time.sleep(0.2)

print '\nCompleted!\n'
Save the code below as caller.py:
import subprocess
cmd = ['python', 'worker.py']
stdout = subprocess.check_output(cmd)
print stdout
(Note that both Python scripts are saved in the same folder.)
Now, using the OS X Terminal or the Windows CMD window, change the current directory to the folder you created:
cd /My Documents/temp
Now run:
python caller.py
The process takes about 2 seconds to complete. When it finishes, it prints out the entire progress log all at once:
2018-01-20 07:52:14.399679...working on iteration 0
...
2018-01-20 07:52:16.216237...working on iteration 9
Completed!
Instead of getting the log printed all at once after the process has already completed, I would like to have a real-time progress update: I would like to get every printed line from the process at the moment it occurs.
So, when I run the python caller.py command, it should give me a line-by-line update happening in real time. How can I achieve this?
To get a real-time feed from the subprocess, you can use this code in caller.py:
import subprocess

# Start the worker process with unbuffered stdout
p = subprocess.Popen(['python', '-u', 'worker.py'], stdout=subprocess.PIPE)

# Loop until the worker terminates
while True:
    # Get a new line value
    l = p.stdout.readline()
    # Stop looping if the child process has terminated
    if p.poll() is not None:
        break
    # Print the line
    print l
Note the -u in the subprocess.Popen call; you need unbuffered stdout.
https://docs.python.org/3/using/cmdline.html#cmdoption-u
With readline() you read a single line at a time from the subprocess output. Be aware that when the subprocess prints '\nCompleted!\n', you will read it over three loop iterations.
https://docs.python.org/3/tutorial/inputoutput.html#methods-of-file-objects
In the example, the loop runs until the subprocess terminates.
https://docs.python.org/3/library/subprocess.html#subprocess.Popen.poll
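For reference, an equivalent loop can use two-argument iter(), which keeps calling readline() until it returns an empty string at EOF; this avoids the separate poll() check. A minimal sketch in the same Python 2 style as the code above:

import subprocess

p = subprocess.Popen(['python', '-u', 'worker.py'], stdout=subprocess.PIPE)
# iter() with a sentinel calls readline() until it returns '' (EOF)
for raw in iter(p.stdout.readline, ''):
    print raw.rstrip()
p.wait()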
Related
I'm trying to spawn a subprocess that should still be running after the main process closed. This part works fine, but if I redirect the output of this process to a file, I can't start the script a second time because the process still blocks the log file.
This short example demonstrates the problem:
In this case the second process is "notepad" and is started by "other.cmd". While the main process/script is "test_it.py" which is started by "start_it.cmd".
start_it.cmd
@python test_it.py > test.log
test_it.py
import subprocess
from subprocess import DEVNULL, STDOUT
subprocess.Popen(["other.cmd"], stdin=DEVNULL, stdout=DEVNULL, stderr=STDOUT)
other.cmd
start notepad
When start_it.cmd is executed the second time, it will fail with this error message "The process cannot access the file because it is being used by another process".
How can I start the subprocess so that it doesn't block the log file?
A solution using a pipe.
multiplexer.py
import sys

with open('log.txt', 'a') as outputFile:
    while True:
        data = sys.stdin.read(1024)
        # read() returns an empty string at EOF, never None
        if not data:
            break
        outputFile.write(data)
start_it.cmd
@python test_it.py | python multiplexer.py
Everything else stays the same.
I found a solution that is close to what I originally intended:
subprocess.Popen("explorer other.cmd", shell=True)
By letting Explorer start the .cmd file, this successfully detaches the called .cmd from the original process, and thus doesn't keep the log file open.
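On Python 3.7+ under Windows, a commonly suggested alternative is to detach the child with process-creation flags so that it does not hold inherited handles such as the log file. Whether this releases the handle depends on how handles are inherited in your setup, so treat this as a hedged sketch rather than a guaranteed fix:

import subprocess

# Sketch, assuming Python 3.7+ on Windows: start other.cmd fully detached
# so it does not keep the parent's redirected log file open
flags = subprocess.DETACHED_PROCESS | subprocess.CREATE_NEW_PROCESS_GROUP
subprocess.Popen(["other.cmd"],
                 stdin=subprocess.DEVNULL,
                 stdout=subprocess.DEVNULL,
                 stderr=subprocess.DEVNULL,
                 creationflags=flags,
                 close_fds=True)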
I need to write a Python script that reads the output of another process in real time, line by line. Obviously I've already tried to use "subprocess" to get the process output via stdout, but I can't get that output in real time: every time, the Python script returns the output in several chunks some time after the process has launched. For example, I use this Python script:
import subprocess as sub
import sys

proc = sub.Popen(["python", "bin/read_test.py"],
                 stdout=sub.PIPE,
                 bufsize=1)
while True:
    if proc.poll() is None:
        line = proc.stdout.read()
        line = line.decode("utf-8")
        print(line, flush=True)
        sys.stdout.flush()
    else:
        proc.stdout.close()
        break
The read_test.py script that is being read:
from time import sleep

for i in range(5):
    print(i)
    sleep(1)
I've tried a lot of methods with readline() and for loops, but the issue stays the same. Moreover, I don't want to use communicate(), because it's a blocking method.
Thanks for your help.
The problem is that you're trying to read stdout in full. Since Python sees that the process is still running, it waits until the process ends so that its output is complete.
To do what you want, you need to read the output line by line using line = proc.stdout.readline(), and change your loop to read one line at a time and stop when the process ends:
proc = sub.Popen(["python", "bin/read_test.py"],
                 stdout=sub.PIPE,
                 bufsize=1)
while True:
    line = proc.stdout.readline()
    if line:
        line = line.decode("utf-8")
        print(line)
        sys.stdout.flush()
    if proc.poll() is not None:  # the process has ended
        break
proc.wait()
Also, that's not enough: you have to flush the output on the read_test.py side to make sure that the emitter actually sends the lines (when output is redirected, it may be buffered). Example program:
import time, sys

for i in range(5):
    print(i)
    sys.stdout.flush()  # without flushing, you get all the lines at the end of the process
    time.sleep(1)
I've connected both programs and got a sequenced output (0, 1, 2, 3, 4), one line each second.
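Alternatively, if you cannot modify the child script, launching it with python -u forces unbuffered output on its side, so the explicit flush in read_test.py becomes unnecessary; a sketch reusing the names from the snippet above:

# Same Popen call, but with -u so the child's stdout is unbuffered
proc = sub.Popen(["python", "-u", "bin/read_test.py"],
                 stdout=sub.PIPE,
                 bufsize=1)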
I am running multiple subprocesses in parallel, but I need to lock each process until the subprocess gives an output (via the print function). The subprocesses are running a Python script that has been packaged into an executable.
The code looks like this:
import multiprocessing as mp
import subprocess
import os

def main(args):
    l, inpath = args
    l.acquire()
    print "Running KNN.exe for files in %s" % os.path.normpath(inpath).split('\\')[-1]
    # Run KNN executable as a subprocess
    subprocess.call(os.path.join(os.getcwd(), "KNN.exe"))
    # This is where I want to wait for any output from the subprocess before releasing the lock
    l.release()
    # Here I would like to wait until the subprocess is done, then print that it is done
    l.acquire()
    print "Done %s" % os.path.normpath(inpath).split('\\')[-1]
    l.release()

if __name__ == "__main__":
    # Set working directory path containing input text file
    os.chdir("C:\Users\Patrick\Google Drive\KNN")
    # Get folder names in directory containing GCM input
    manager = mp.Manager()
    l = manager.Lock()
    gcm_dir = "F:\FIDS_GCM_Data_CMIP5\UTRB\UTRB KNN-CAD\Input"
    paths = [(l, os.path.join(gcm_dir, folder)) for folder in os.listdir(gcm_dir)]
    # Set up multiprocessing pool
    p = mp.Pool(mp.cpu_count())
    # Map function through input paths
    p.map(main, paths)
So the goal is to lock the process so that the subprocess can run until it produces output, after which the lock can be released and the subprocess can continue until it is complete; then I'd like to print that it is complete.
My question is: how can I wait for the single (and only) output from the subprocess before releasing the lock on the process (out of multiple)?
Additionally, how can I wait for the process to terminate and then print that it is complete?
Your code makes use of the call method, which already waits for the subprocess to finish (which means all output has already been generated). I'm inferring from your question you'd like to be able to differentiate between when output is first written and when the subprocess is finished. Below is your code with my recommended modifications inline:
def main(args):
    l, inpath = args
    l.acquire()
    print "Running KNN.exe for files in %s" % os.path.normpath(inpath).split('\\')[-1]
    # Run KNN executable as a subprocess
    # Use the Popen constructor instead of call
    proc = subprocess.Popen(os.path.join(os.getcwd(), "KNN.exe"), stdout=subprocess.PIPE)
    # Wait until the subprocess has written at least 1 byte to stdout
    # (modify if you need different logic)
    proc.stdout.read(1)
    l.release()
    # Wait until the subprocess is done, then print that it is done
    #proc.wait()
    (proc_output, proc_error) = proc.communicate()
    l.acquire()
    print "Done %s" % os.path.normpath(inpath).split('\\')[-1]
    l.release()
Note that the above doesn't assume you want to do anything with the subprocess's output other than check that it has been generated. If you want to do anything with that output that is less trivial than the above (consume one byte, then drop it on the floor), proc.stdout (which is a file object) represents everything the subprocess generates while running.
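If you do want to consume that output rather than drop it, here is a small sketch in the same style that would replace the communicate() call in the function above and log each remaining line as it arrives:

# Sketch: log the rest of the subprocess output line by line
# (the first byte of the first line was already consumed by read(1) above)
for line in iter(proc.stdout.readline, ''):
    print line.rstrip()
proc.wait()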
I have a python script that opens a .exe program using the subprocess module. This .exe program is an infinitely iterative script, in that it will continue to print the results of each iteration until the user closes the window. Every so often, it prints the results of the iteration into a file, replacing the previous data in the file.
My aims here are to:
Run the .exe program, and test for the existence of the file it outputs.
Once the file has been shown to exist, I need to run a test on the file to see if the iteration has converged to within a given tolerance. Once the iteration has converged, I need to kill the .exe subprocess.
This is my current code. It is designed to kill the subprocess once the iterate file has been created:
import os
from subprocess import Popen, PIPE

fileexists = False
iteratecomms = Popen('iterate.exe', stdout=PIPE, stdin=PIPE, stderr=PIPE)
# Begin the iteration. Need to select options 1 and then 1 again at the program menu
out, err = iteratecomms.communicate("1\n1\n".encode())
while fileexists == False:
    fileexists = os.path.exists(filelocation)
else:
    Popen.kill(iteratecomms)
I know that this is incorrect; the issue is that as soon as I start the out, err = iteratecomms.communicate("1\n1\n".encode()) line, the program begins iterating, and does not move on to the next set of python code. Essentially, I need to start the .exe program, and at the same time test to see if the file has been created. I can't do this, however, because the program runs indefinitely.
How could I get around this? I have assumed that moving on to step 2 (testing the file and killing the subprocess under certain conditions) would not take too much work on top of this; if this is not true, how would I go about completing all of my aims?
Thank you very much for the help!
Edit: Clarified that the external file is overwritten.
I would use the multiprocessing module.
import multiprocessing
import os
from subprocess import Popen, PIPE

pool = multiprocessing.Pool()

def start_iteration():
    return Popen('iterate.exe', stdout=PIPE, stdin=PIPE, stderr=PIPE)

pool.apply_async(start_iteration)

while fileexists == False:
    fileexists = os.path.exists(filelocation)
Popen.kill(???)
The only problem now is that you'll have to somehow find the PID of the process without waiting for Popen to return (because Popen should never return.)
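An alternative that avoids multiprocessing entirely is to skip communicate(), which blocks until the child exits, and write the menu choices directly to stdin before polling for the file. A sketch, assuming iterate.exe reads its menu input from stdin and that filelocation is the path from the question (be aware the child could still stall if it fills its stdout pipe):

import os
import time
from subprocess import Popen, PIPE

iteratecomms = Popen('iterate.exe', stdout=PIPE, stdin=PIPE, stderr=PIPE)
# Send the menu choices without blocking on communicate()
iteratecomms.stdin.write("1\n1\n".encode())
iteratecomms.stdin.flush()
# Poll until the output file appears, then kill the child
while not os.path.exists(filelocation):
    time.sleep(0.1)
iteratecomms.kill()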
Assuming that you're continuously trying to read this file, I would suggest running a tail on the file in question. This can be done from a separate terminal in any *nix-family OS, but otherwise I would check out this article for a Python implementation:
http://code.activestate.com/recipes/157035-tail-f-in-python/
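For reference, the core of that recipe boils down to a small follow() generator along these lines (a sketch, not the recipe verbatim):

import time

def follow(path):
    # Minimal tail -f: yield lines as they are appended to the file
    with open(path) as f:
        f.seek(0, 2)  # jump to the end of the file
        while True:
            line = f.readline()
            if not line:
                time.sleep(0.1)  # wait for new data
                continue
            yield line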
After that, if you want to kill the running program, you should just be able to call terminate() on the process:
import subprocess

sub = subprocess.Popen(cmd)  # cmd is whatever command you are running
# Do something
sub.terminate()
I was looking to implement a Python script that calls another script and captures its stdout. The called script will contain some input and output messages, e.g.:
print ("Line 1 of Text")
variable = raw_input("Input 1 :")
print "Line 2 of Text Input: ", vairable
The section of the code I'm running is:
import subprocess
cmd='testfile.py'
p = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE)
so, se = p.communicate()
print(so)
The problem that is occurring is that the stdout is not printing until after the script has been executed. This leaves a blank prompt waiting for the user input. Is there a way to get stdout to print while the called script is still running?
Thanks,
There are two problems here.
Firstly, Python is buffering output to stdout and you need to prevent this. You could insert a call to sys.stdout.flush() in testfile.py, as Ilia Frenkel has suggested, or you could use python -u to execute testfile.py with unbuffered I/O. (See the other Stack Overflow question that Ilia linked to.)
Secondly, you need a way of asynchronously reading data from the sub-process and then, when it is ready for input, printing the data you've read so that the prompt for the user appears. For this, it is very helpful to have an asynchronous version of the subprocess module.
I downloaded the asynchronous subprocess and re-wrote your script to use it, along with using python -u to get unbuffered I/O:
import async_subprocess as subprocess
cmd = ['python', '-u', 'testfile.py']
p = subprocess.Popen(cmd, stdout=subprocess.PIPE)
so = p.asyncread()
print so,
(so, se) = p.communicate()
print so
When I run this script using python -u I get the following results:
$ python -u script.py
Line 1 of Text
Input 1:
and the script pauses, waiting for input. This is the desired result.
If I then type something (e.g. "Hullo") I get the following:
$ python -u script.py
Line 1 of Text
Input 1:Hullo
Line 2 of Text Input: Hullo
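If you'd rather avoid the third-party async_subprocess module, a background reader thread from the standard library gives a similar effect. A rough sketch in the same Python 2 style as the scripts above, again relying on python -u for unbuffered output:

import subprocess
import sys
import threading

def pump(pipe):
    # Copy the child's output to our stdout one byte at a time so that
    # prompts ending without a newline appear immediately
    while True:
        ch = pipe.read(1)
        if not ch:
            break
        sys.stdout.write(ch)
        sys.stdout.flush()

p = subprocess.Popen(['python', '-u', 'testfile.py'], stdout=subprocess.PIPE)
t = threading.Thread(target=pump, args=(p.stdout,))
t.start()
p.wait()  # stdin is inherited, so raw_input still reads from the terminal
t.join()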
You don't really need to capture its stdout: just have the child program print out its stuff and quit, instead of feeding the output into your parent program and printing it there. If you need variable output, just use a function instead.
But anyways, that's not what you asked.
I actually got this from another Stack Overflow question:
import subprocess, sys

cmd = 'testfile.py'
p = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, stdin=subprocess.PIPE)
while True:
    out = p.stdout.read(20)
    if out == '' and p.poll() != None:
        break
    if out != '':
        sys.stdout.write(out)
        sys.stdout.flush()
First, it opens up your process; then it continually reads the output from p and prints it onto the screen using sys.stdout.write. The part that makes this all work is sys.stdout.flush(), which continually "flushes out" the output of the program.
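For reference, under Python 3 the pipe yields bytes, so the same loop would compare against b'' and write to the binary stdout buffer; a sketch of the equivalent:

import subprocess, sys

cmd = 'testfile.py'
p = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, stdin=subprocess.PIPE)
while True:
    out = p.stdout.read(20)
    if out == b'' and p.poll() is not None:
        break
    sys.stdout.buffer.write(out)
    sys.stdout.flush()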