Simultaneously wait() for multiple subprocess.Popen commands, then exit - python

I'm trying to run an unknown number of commands and capture each one's stdout in its own file. However, I run into difficulty when attempting to p.wait() on each instance. My code looks something like this:
print "Started..."
for i, cmd in enumerate(commands):
i = "output_%d.log" % i
p = Popen(cmd, shell=True, universal_newlines=True, stdout=open(i, 'w'))
p.wait()
print "Done!"
I'm looking for a way to execute everything in commands simultaneously and exit the current script only when each and every single process has been completed. It would also help to be informed when each command returns an exit code.
I've looked at some answers, including this one by J.F. Sebastian and tried to adapt it to my situation by changing args=(p.stdout, q) to args=(p.returncode, q) but it ended up exiting immediately and running in the background (possibly due to shell=True?), as well as not responding to any keys pressed inside the bash shell... I don't know where to go with this.
Jeremy Brown's answer also helped, sort of, but select.epoll() was throwing an AttributeError exception.
Is there any other seamless way or trick to make it work? It doesn't need to be cross-platform; a solution for GNU/Linux and macOS would be much appreciated. Thanks in advance!
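For reference, a minimal sketch of the pattern being asked about (assuming the same commands list as above, and not taken from any of the answers below): create every Popen first, and only then wait() on each one and report its exit code.
from subprocess import Popen

procs = []
for i, cmd in enumerate(commands):
    log = open("output_%d.log" % i, 'w')
    # Start the process but do not wait yet.
    procs.append(Popen(cmd, shell=True, universal_newlines=True, stdout=log))

# Now wait for all of them and report their exit codes.
for p in procs:
    p.wait()
    print("exit code:", p.returncode)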

A big thanks to Adam Matan for the biggest hint towards the solution. This is what I came up with, and it works flawlessly:
It creates a Thread object for each command
It starts all of them simultaneously
Finally, it waits for every exit code without the threads blocking one another
Here is the code:
import threading
import subprocess
...
def run(cmd):
    name = cmd.split()[0]
    out = open("%s_log.txt" % name, 'w')
    err = open('/dev/null', 'w')
    p = subprocess.Popen(cmd.split(), stdout=out, stderr=err)
    p.wait()
    print name + " completed, return code: " + str(p.returncode)
...
proc = [threading.Thread(target=run, args=(cmd)) for cmd in commands]
[p.start() for p in proc]
[p.join() for p in proc]
print "Done!"

I would rather have added this as a comment, because I was working off of Jack of all Spades' answer, but I had trouble getting that exact command to work: it was unpacking the strings in my list of commands.
Here's my edit for Python 3:
import subprocess
import threading

commands = ['sleep 2', 'sleep 4', 'sleep 8']

def run(cmd):
    print("Command %s" % cmd)
    name = cmd.split(' ')[0]
    print("name %s" % name)
    out = open('/tmp/%s_log.txt' % name, 'w')
    err = open('/dev/null', 'w')
    p = subprocess.Popen(cmd.split(' '), stdout=out, stderr=err)
    p.wait()
    print(name + " completed, return code: " + str(p.returncode))

proc = [threading.Thread(target=run, kwargs={'cmd': cmd}) for cmd in commands]
[p.start() for p in proc]
[p.join() for p in proc]
print("Done!")

Related

My python script stops... no error, just stops

I am running a script that iterates through a text file. On each line of the text file there is an IP address. The script grabs the banner, then writes the IP + banner to another file.
The problem is, it just stops after around 500 lines, more or less, with no error.
Another weird thing: if I run it with python3 it does what I said above. If I run it with python it iterates through those 500 lines, then starts over at the beginning. I noticed this when I saw repetitions in my output file. Anyway, here is the code; maybe you can tell me what I'm doing wrong:
import os
import subprocess
import concurrent.futures
#import time, random
import threading
import multiprocessing

with open("ipuri666.txt") as f:

    def multiprocessing_func():
        try:
            line2 = line.rstrip('\r\n')
            a = subprocess.Popen(["curl", "-I", line2, "--connect-timeout", "1", "--max-time", "1"], stdout=subprocess.PIPE)
            b = subprocess.Popen(["grep", "Server"], stdin=a.stdout, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
            #a.stdout.close()
            out, err = b.communicate()
            g = open("IP_BANNER2","a")
            print( "out: {0}".format(out))
            g.write(line2 + " " + "out: {0}\n".format(out))
            print("err: {0}".format(err))
        except IOError:
            print("Connection timed out")

    if __name__ == '__main__':
        #starttime = time.time()
        processes = []
        for line in f:
            p = multiprocessing.Process(target=multiprocessing_func, args=())
            processes.append(p)
            p.start()

        for process in processes:
            process.join()
If your use case allows, I would recommend just rewriting this as a shell script; there is no need to use Python. (This would likely solve your issue indirectly.)
#!/usr/bin/env bash

readarray -t ips < ipuri666.txt

for ip in "${ips[@]}"; do
    output=$(curl -I "$ip" --connect-timeout 1 --max-time 1 | grep "Server")
    echo "$ip $output" >> fisier.txt
done
The script is slightly simpler than what you are trying to do, for instance I do not capture the error. This should be pretty close to what you are trying to accomplish. I will update again if needed.

Use Jython Subprocess to send commands to bash shell

I need to send a number of subsequent commands to one bash shell in a Jython engine.
Executing commands one by one with os.system(s) or subprocess.call(s, ...) does not work, as a new shell is created every time.
I hope someone has an idea; the following 3 tests are not a sufficient solution.
Sample commands:
cd /home/xxx/dir1/dir2
pwd
cd ..
pwd
In this first test, the commands are executed, but the output is only retrieved at the end.
def testRun1():
    # Actual Output
    # run 0
    # run 1
    # run 2
    # /home/usr/dir1/dir2
    # /home/usr/dir1
    # /home/usr
    print 'All output is shown at the end...'
    proc = subprocess.Popen('/bin/bash',
                            shell=True,
                            stdin=subprocess.PIPE,
                            stdout=subprocess.PIPE,
                            )
    for i in range(3):
        print 'run ' + str(i)
        proc.stdin.write('pwd\n')
        proc.stdin.write('cd ..\n')
    output = proc.communicate()[0]
    print output
Whereas the 'desired output' is
# run 0
# /home/usr/dir1/dir2
# run 1
# /home/usr/dir1
# run 2
# /home/usr
This second tryout delivers what we want, but the output is only shown when the Jython script is interrupted.
def testRun2():
    # Weird : it is what we want, but all output is blocked until CTRL-C is pressed
    # run 0
    # /home/usr/dir1/dir2
    # run 1
    # /home/usr/dir1
    # run 2
    # /home/usr
    proc = subprocess.Popen('/bin/bash',
                            shell=True,
                            stdin=subprocess.PIPE,
                            stdout=subprocess.PIPE,
                            )
    for i in range(3):
        print 'run ' + str(i)
        proc.stdin.write('pwd\n')
        proc.stdin.write('cd ..\n')
    print 'start to print output'
    for line in proc.stdout:
        print(line.decode("utf-8"))
    print_remaining(proc.stdout)
    print 'printed output'
This last tryout crashes in the second run because a stream was closed.
def testRun3():
    # This fails with error
    # ValueError: I/O operation on closed file
    proc = subprocess.Popen('/bin/bash',
                            shell=True,
                            stdin=subprocess.PIPE,
                            stdout=subprocess.PIPE,
                            )
    for i in range(3):
        print 'run ' + str(i)
        proc.stdin.write('pwd\n')
        proc.stdin.write('cd ..\n')
        output = proc.communicate()[0]
        print output
The troubles you're having are only partially to do with subprocess. Pipes are fundamentally the wrong IPC mechanism for this job. To put an interactive command interpreter under scripted control, what you want is a pseudoterminal, and even then it's not as simple as reading and writing.
The Python standard library doesn't have any built-in modules that do pseudoterminal handling for you, unless they've added something very recently that I'm not aware of. However, the third-party package pexpect can do it, and it's geared for exactly the thing you are trying to do.
Using the basic pexpect API:
import pexpect

def testRun4():
    proc = pexpect.spawn("/bin/bash")
    for i in range(3):
        proc.expect("^[^$#]*[$#] *")
        print("run", i)
        proc.sendline("pwd")
        proc.expect("^[^$#]*[$#] *")
        print(proc.before)
        proc.sendline("cd ..")
With pexpect.replwrap, it's a little more involved to set up but then the loop is tidier:
def testRun5():
    proc = pexpect.replwrap.REPLWrapper(
        "/bin/bash",
        orig_prompt="^[^$#]*[$#] *",
        prompt_change="PS1='{}'; PS2='{}'")
    for i in range(3):
        print("run", i)
        print(proc.run_command("pwd"))
        proc.run_command("cd ..")

How to control background process in linux

I need to write a script in Linux which can start a background process using one command and stop the process using another.
The specific application is to take userspace and kernel logs for android.
The following command should start taking logs:
$ mylogscript start
The following command should stop the logging:
$ mylogscript stop
Also, the commands should not block the terminal. For example, once I send the start command, the script should run in the background and I should be able to do other work on the terminal.
Any pointers on how to implement this in Perl or Python would be helpful.
EDIT:
Solved: https://stackoverflow.com/a/14596380/443889
I got the solution to my problem. The solution essentially consists of starting a subprocess in Python and sending a signal to kill the process when done.
Here is the code for reference:
#!/usr/bin/python
import subprocess
import sys
import os
import signal

U_LOG_FILE_PATH = "u.log"
K_LOG_FILE_PATH = "k.log"

U_COMMAND = "adb logcat > " + U_LOG_FILE_PATH
K_COMMAND = "adb shell cat /proc/kmsg > " + K_LOG_FILE_PATH

LOG_PID_PATH = "log-pid"

def start_log():
    if(os.path.isfile(LOG_PID_PATH) == True):
        print "log process already started, found file: ", LOG_PID_PATH
        return

    file = open(LOG_PID_PATH, "w")

    print "starting log process: ", U_COMMAND
    proc = subprocess.Popen(U_COMMAND,
                            stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                            shell=True, preexec_fn=os.setsid)
    print "log process1 id = ", proc.pid
    file.write(str(proc.pid) + "\n")

    print "starting log process: ", K_COMMAND
    proc = subprocess.Popen(K_COMMAND,
                            stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                            shell=True, preexec_fn=os.setsid)
    print "log process2 id = ", proc.pid
    file.write(str(proc.pid) + "\n")

    file.close()

def stop_log():
    if(os.path.isfile(LOG_PID_PATH) != True):
        print "log process not started, can not find file: ", LOG_PID_PATH
        return

    print "terminating log processes"
    file = open(LOG_PID_PATH, "r")
    log_pid1 = int(file.readline())
    log_pid2 = int(file.readline())
    file.close()

    print "log-pid1 = ", log_pid1
    print "log-pid2 = ", log_pid2

    os.killpg(log_pid1, signal.SIGTERM)
    print "logprocess1 killed"
    os.killpg(log_pid2, signal.SIGTERM)
    print "logprocess2 killed"

    subprocess.call("rm " + LOG_PID_PATH, shell=True)

def print_usage(str):
    print "usage: ", str, "[start|stop]"

# Main script
if(len(sys.argv) != 2):
    print_usage(sys.argv[0])
    sys.exit(1)

if(sys.argv[1] == "start"):
    start_log()
elif(sys.argv[1] == "stop"):
    stop_log()
else:
    print_usage(sys.argv[0])
    sys.exit(1)

sys.exit(0)
There are a couple of different approaches you can take on this:
1. Signals - you use a signal handler, typically SIGHUP to tell the process to restart ("start") and SIGTERM to stop it ("stop"); a minimal sketch follows this answer.
2. Use a named pipe or other IPC mechanism. The background process has a separate thread that simply reads from the pipe and, when something comes in, acts on it. This method relies on having a separate executable file that opens the pipe and sends messages ("start", "stop", "set loglevel 1" or whatever you fancy).
I'm sorry, I haven't implemented either of these in Python [and I haven't really written anything in Perl], but I doubt it's very hard - there's bound to be a ready-made set of Python code to deal with named pipes.
Edit: Another method that just struck me is that you simply daemonize the program at start, and then let the "stop" version find your daemonized process [e.g. by reading the "pidfile" that you stashed somewhere suitable] and send it a SIGTERM to terminate.
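A minimal sketch of approach 1 (not from the answer above; the handler, file name, and loop body are illustrative only): the background process installs a SIGTERM handler and keeps logging until the signal arrives.
import signal
import sys
import time

running = True

def handle_sigterm(signum, frame):
    # Flip the flag so the main loop can shut down cleanly.
    global running
    running = False

signal.signal(signal.SIGTERM, handle_sigterm)
# The "stop" side would send the signal with: os.kill(pid, signal.SIGTERM)

with open("mylog.txt", "a") as log:
    while running:
        log.write("still logging at %s\n" % time.ctime())
        log.flush()
        time.sleep(1)

sys.exit(0)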
I don't know if this is the optimum way to do it in Perl, but for example:
system("sleep 60 &")
This starts a background process that will sleep for 60 seconds without blocking the terminal. The ampersand in shell means to do something in the background.
A simple mechanism for telling the process when to stop is to have it periodically check for the existence of a certain file. If the file exists, it exits.
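A sketch of that stop-file idea in Python (the marker path is made up for illustration): the worker polls for the file and exits once it appears.
import os
import time

STOP_FILE = "/tmp/mylogscript.stop"  # illustrative path

while not os.path.exists(STOP_FILE):
    # ... do the actual logging work here ...
    time.sleep(1)

os.remove(STOP_FILE)  # clean up so the next run starts fresh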

Constantly print Subprocess output while process is running

To launch programs from my Python-scripts, I'm using the following method:
def execute(command):
    process = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    output = process.communicate()[0]
    exitCode = process.returncode

    if (exitCode == 0):
        return output
    else:
        raise ProcessException(command, exitCode, output)
So when I launch a process like Process.execute("mvn clean install"), my program waits until the process is finished, and only then do I get the complete output. This is annoying if I'm running a process that takes a while to finish.
Can I let my program write the process output line by line, by polling the process output before it finishes in a loop or something?
I found this article which might be related.
You can use iter to process lines as soon as the command outputs them: lines = iter(fd.readline, ""). Here's a full example showing a typical use case (thanks to @jfs for helping out):
from __future__ import print_function # Only Python 2.x
import subprocess

def execute(cmd):
    popen = subprocess.Popen(cmd, stdout=subprocess.PIPE, universal_newlines=True)
    for stdout_line in iter(popen.stdout.readline, ""):
        yield stdout_line
    popen.stdout.close()
    return_code = popen.wait()
    if return_code:
        raise subprocess.CalledProcessError(return_code, cmd)

# Example
for path in execute(["locate", "a"]):
    print(path, end="")
To print subprocess' output line-by-line as soon as its stdout buffer is flushed in Python 3:
from subprocess import Popen, PIPE, CalledProcessError

with Popen(cmd, stdout=PIPE, bufsize=1, universal_newlines=True) as p:
    for line in p.stdout:
        print(line, end='') # process line here

if p.returncode != 0:
    raise CalledProcessError(p.returncode, p.args)
Notice: you do not need p.poll() -- the loop ends when EOF is reached. And you do not need iter(p.stdout.readline, '') -- the read-ahead bug is fixed in Python 3.
See also, Python: read streaming input from subprocess.communicate().
OK, I managed to solve it without threads (any suggestions why using threads would be better are appreciated) by using a snippet from this question: Intercepting stdout of a subprocess while it is running.
def execute(command):
    process = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)

    # Poll process for new output until finished
    while True:
        nextline = process.stdout.readline()
        if nextline == '' and process.poll() is not None:
            break
        sys.stdout.write(nextline)
        sys.stdout.flush()

    output = process.communicate()[0]
    exitCode = process.returncode

    if (exitCode == 0):
        return output
    else:
        raise ProcessException(command, exitCode, output)
There is actually a really simple way to do this when you just want to print the output:
import subprocess
import sys
def execute(command):
    subprocess.check_call(command, shell=True, stdout=sys.stdout, stderr=subprocess.STDOUT)
Here we're simply pointing the subprocess at our own stdout, and using the existing succeed-or-raise API.
@tokland: I tried your code and corrected it for Python 3.4 and Windows. dir.cmd is a simple dir command, saved as a cmd file.
import subprocess
c = "dir.cmd"

def execute(command):
    popen = subprocess.Popen(command, stdout=subprocess.PIPE, bufsize=1)
    lines_iterator = iter(popen.stdout.readline, b"")
    while popen.poll() is None:
        for line in lines_iterator:
            nline = line.rstrip()
            print(nline.decode("latin"), end="\r\n", flush=True)  # yield line

execute(c)
In Python >= 3.5 using subprocess.run works for me:
import subprocess
cmd = 'echo foo; sleep 1; echo foo; sleep 2; echo foo'
subprocess.run(cmd, shell=True)
(getting the output during execution also works without shell=True)
https://docs.python.org/3/library/subprocess.html#subprocess.run
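For instance, a sketch of the same idea without shell=True (ping is just a stand-in for any command that prints output over a few seconds):
import subprocess

# The command is passed as a list, so no shell is involved;
# its output still appears on the terminal as the process produces it.
subprocess.run(['ping', '-c', '3', 'localhost'])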
For anyone trying the answers to this question to get the stdout from a Python script, note that Python buffers its stdout, so it may take a while for the output to appear.
This can be rectified by adding the following after each stdout write in the target script:
sys.stdout.flush()
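Two related options (a sketch; child.py is a made-up script name): pass flush=True to print in the child, or launch the child interpreter with -u so its stdout is unbuffered.
# Inside the child script (child.py, hypothetical), force each line out immediately:
print("progress update", flush=True)

# Or launch the child interpreter unbuffered from the parent:
import subprocess, sys
proc = subprocess.Popen([sys.executable, '-u', 'child.py'], stdout=subprocess.PIPE)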
To answer the original question, the best way IMO is just redirecting the subprocess stdout directly to your program's stdout (optionally, the same can be done for stderr, as in the example below):
p = Popen(cmd, stdout=sys.stdout, stderr=sys.stderr)
p.communicate()
In case someone wants to read from both stdout and stderr at the same time using threads, this is what I came up with:
import threading
import subprocess
import Queue
from time import sleep  # needed for the polling sleep in the loop below

class AsyncLineReader(threading.Thread):
    def __init__(self, fd, outputQueue):
        threading.Thread.__init__(self)

        assert isinstance(outputQueue, Queue.Queue)
        assert callable(fd.readline)

        self.fd = fd
        self.outputQueue = outputQueue

    def run(self):
        map(self.outputQueue.put, iter(self.fd.readline, ''))

    def eof(self):
        return not self.is_alive() and self.outputQueue.empty()

    @classmethod
    def getForFd(cls, fd, start=True):
        queue = Queue.Queue()
        reader = cls(fd, queue)

        if start:
            reader.start()

        return reader, queue


process = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
(stdoutReader, stdoutQueue) = AsyncLineReader.getForFd(process.stdout)
(stderrReader, stderrQueue) = AsyncLineReader.getForFd(process.stderr)

# Keep checking queues until there is no more output.
while not stdoutReader.eof() or not stderrReader.eof():
    # Process all available lines from the stdout Queue.
    while not stdoutQueue.empty():
        line = stdoutQueue.get()
        print 'Received stdout: ' + repr(line)
        # Do stuff with stdout line.

    # Process all available lines from the stderr Queue.
    while not stderrQueue.empty():
        line = stderrQueue.get()
        print 'Received stderr: ' + repr(line)
        # Do stuff with stderr line.

    # Sleep for a short time to avoid excessive CPU use while waiting for data.
    sleep(0.05)

print "Waiting for async readers to finish..."
stdoutReader.join()
stderrReader.join()

# Close subprocess' file descriptors.
process.stdout.close()
process.stderr.close()

print "Waiting for process to exit..."
returnCode = process.wait()

if returnCode != 0:
    raise subprocess.CalledProcessError(returnCode, command)
I just wanted to share this, as I ended up on this question trying to do something similar, but none of the answers solved my problem. Hopefully it helps someone!
Note that in my use case, an external process kills the process that we Popen().
This PoC constantly reads the output from a process, and the most recent line can be accessed when needed. Only the last result is kept; all other output is discarded, which prevents the PIPE from growing out of memory:
import subprocess
import time
import threading
import Queue

class FlushPipe(object):
    def __init__(self):
        self.command = ['python', './print_date.py']
        self.process = None
        self.process_output = Queue.LifoQueue(0)
        self.capture_output = threading.Thread(target=self.output_reader)

    def output_reader(self):
        for line in iter(self.process.stdout.readline, b''):
            self.process_output.put_nowait(line)

    def start_process(self):
        self.process = subprocess.Popen(self.command,
                                        stdout=subprocess.PIPE)
        self.capture_output.start()

    def get_output_for_processing(self):
        line = self.process_output.get()
        print ">>>" + line

if __name__ == "__main__":
    flush_pipe = FlushPipe()
    flush_pipe.start_process()

    now = time.time()
    while time.time() - now < 10:
        flush_pipe.get_output_for_processing()
        time.sleep(2.5)

    flush_pipe.capture_output.join(timeout=0.001)
    flush_pipe.process.kill()
print_date.py
#!/usr/bin/env python
import time

if __name__ == "__main__":
    while True:
        print str(time.time())
        time.sleep(0.01)
Output: you can clearly see that there is only output at ~2.5 s intervals, with nothing in between.
>>>1520535158.51
>>>1520535161.01
>>>1520535163.51
>>>1520535166.01
This works at least in Python 3.4:
import subprocess

process = subprocess.Popen(cmd_list, stdout=subprocess.PIPE)
for line in process.stdout:
    print(line.decode().strip())
Building on @jfs's excellent answer, here is a complete working example for you to play with. Requires Python 3.7 or newer.
sub.py
import time

for i in range(10):
    print(i, flush=True)
    time.sleep(1)
main.py
from subprocess import PIPE, Popen
import sys

with Popen([sys.executable, 'sub.py'], bufsize=1, stdout=PIPE, text=True) as sub:
    for line in sub.stdout:
        print(line, end='')
Notice the flush argument used in the child script.
None of the answers here addressed all of my needs.
No threads for stdout (no Queues, etc, either)
Non-blocking as I need to check for other things going on
Use PIPE as I needed to do multiple things, e.g. stream output, write to a log file and return a string copy of the output.
A little background: I am using a ThreadPoolExecutor to manage a pool of threads, each launching a subprocess and running them concurrently. (This is Python 2.7, but it should work in newer 3.x as well.) I don't want to use threads just for output gathering, as I want as many available as possible for other things (a pool of 20 processes would be using 40 threads just to run: 1 for the process thread and 1 for stdout... and more if you want stderr, I guess).
I'm stripping back a lot of exception and such here so this is based on code that works in production. Hopefully I didn't ruin it in the copy and paste. Also, feedback very much welcome!
import os    # needed for os.O_NONBLOCK below
import sys   # needed for sys.stdout below
import time
import fcntl
import subprocess

proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)

# Make stdout non-blocking when using read/readline
proc_stdout = proc.stdout
fl = fcntl.fcntl(proc_stdout, fcntl.F_GETFL)
fcntl.fcntl(proc_stdout, fcntl.F_SETFL, fl | os.O_NONBLOCK)

def handle_stdout(proc_stream, my_buffer, echo_streams=True, log_file=None):
    """A little inline function to handle the stdout business."""
    # fcntl makes readline non-blocking so it raises an IOError when empty
    try:
        for s in iter(proc_stream.readline, ''):   # replace '' with b'' for Python 3
            my_buffer.append(s)

            if echo_streams:
                sys.stdout.write(s)

            if log_file:
                log_file.write(s)
    except IOError:
        pass

# The main loop while subprocess is running
stdout_parts = []
while proc.poll() is None:
    handle_stdout(proc_stdout, stdout_parts)
    # ...Check for other things here...
    # For example, check a multiprocessing.Value('b') to proc.kill()
    time.sleep(0.01)

# Not sure if this is needed, but run it again just to be sure we got it all?
handle_stdout(proc_stdout, stdout_parts)

stdout_str = "".join(stdout_parts)  # Just to demo
I'm sure there is overhead being added here but it is not a concern in my case. Functionally it does what I need. The only thing I haven't solved is why this works perfectly for log messages but I see some print messages show up later and all at once.
import time
import sys
import subprocess
import threading
import queue

cmd = 'esptool.py --chip esp8266 write_flash -z 0x1000 /home/pi/zero2/fw/base/boot_40m.bin'
cmd2 = 'esptool.py --chip esp32 -b 115200 write_flash -z 0x1000 /home/pi/zero2/fw/test.bin'
cmd3 = 'esptool.py --chip esp32 -b 115200 erase_flash'

class ExecutorFlushSTDOUT(object):
    def __init__(self, timeout=15):
        self.process = None
        self.process_output = queue.Queue(0)
        self.capture_output = threading.Thread(target=self.output_reader)
        self.timeout = timeout
        self.result = False
        self.validator = None

    def output_reader(self):
        start = time.time()
        while self.process.poll() is None and (time.time() - start) < self.timeout:
            try:
                if not self.process_output.full():
                    line = self.process.stdout.readline()
                    if line:
                        line = line.decode().rstrip("\n")
                        start = time.time()
                        self.process_output.put(line)
                        if self.validator:
                            if self.validator in line:
                                print("Valid")
                                self.result = True
            except:
                pass
        self.process.kill()
        return

    def start_process(self, cmd_list, callback=None, validator=None, timeout=None):
        if timeout:
            self.timeout = timeout
        self.validator = validator
        self.process = subprocess.Popen(cmd_list, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True)
        self.capture_output.start()
        line = None
        self.result = False
        while self.process.poll() is None:
            try:
                if not self.process_output.empty():
                    line = self.process_output.get()
                    if line:
                        if callback:
                            callback(line)
                        #print(line)
                        line = None
            except:
                pass
        error = self.process.returncode
        if error:
            print("Error Found", str(error))
            raise RuntimeError(error)
        return self.result

execute = ExecutorFlushSTDOUT()

def liveOUTPUT(line):
    print("liveOUTPUT", line)
    try:
        if "Writing" in line:
            line = ''.join([n for n in line.split(' ')[3] if n.isdigit()])
            print("percent={}".format(line))
    except Exception as e:
        pass

result = execute.start_process(cmd2, callback=liveOUTPUT, validator="Hash of data verified.")
print("Finish", result)
Use the -u (unbuffered) Python option with subprocess.Popen() if you want to print the child's stdout while the process is running. (shell=True is necessary, despite the risks...)
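A sketch of what that looks like (child.py is a made-up script name; -u keeps the child interpreter's output unbuffered):
import subprocess

# shell=True variant as described above; the child runs unbuffered because of -u.
proc = subprocess.Popen('python -u child.py', shell=True,
                        stdout=subprocess.PIPE, universal_newlines=True)
for line in proc.stdout:
    print(line, end='')
proc.wait()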
Simple is better than complex.
The os library has a built-in os.system() function. You can execute your command and see the output:
import os
os.system("python --version")
# Output
"""
Python 3.8.6
0
"""
After the version string, the return value 0 is also printed.
In Python 3.6 I used this:
import subprocess

cmd = "command"
output = subprocess.call(cmd, shell=True)
print(output)

How can I run an external command asynchronously from Python?

I need to run a shell command asynchronously from a Python script. By this I mean that I want my Python script to continue running while the external command goes off and does whatever it needs to do.
I read this post:
Calling an external command in Python
I then went off and did some testing, and it looks like os.system() will do the job provided that I use & at the end of the command so that I don't have to wait for it to return. What I am wondering is if this is the proper way to accomplish such a thing? I tried commands.call() but it will not work for me because it blocks on the external command.
Please let me know if using os.system() for this is advisable or if I should try some other route.
subprocess.Popen does exactly what you want.
from subprocess import Popen
p = Popen(['watch', 'ls']) # something long running
# ... do other stuff while subprocess is running
p.terminate()
(Edit to complete the answer from comments)
The Popen instance can do various other things like you can poll() it to see if it is still running, and you can communicate() with it to send it data on stdin, and wait for it to terminate.
If you want to run many processes in parallel and then handle them when they yield results, you can use polling like in the following:
from subprocess import Popen, PIPE
import time

running_procs = [
    Popen(['/usr/bin/my_cmd', '-i %s' % path], stdout=PIPE, stderr=PIPE)
    for path in '/tmp/file0 /tmp/file1 /tmp/file2'.split()]

while running_procs:
    for proc in running_procs:
        retcode = proc.poll()
        if retcode is not None: # Process finished.
            running_procs.remove(proc)
            break
    else: # No process is done, wait a bit and check again.
        time.sleep(.1)
        continue

    # Here, `proc` has finished with return code `retcode`
    if retcode != 0:
        """Error handling."""
    handle_results(proc.stdout)
The control flow there is a little bit convoluted because I'm trying to make it small -- you can refactor to your taste. :-)
This has the advantage of servicing the early-finishing requests first. If you call communicate on the first running process and that turns out to run the longest, the other running processes will have been sitting there idle when you could have been handling their results.
This is covered by Python 3 Subprocess Examples under "Wait for command to terminate asynchronously". Run this code using IPython or python -m asyncio:
import asyncio

proc = await asyncio.create_subprocess_exec(
    'ls', '-lha',
    stdout=asyncio.subprocess.PIPE,
    stderr=asyncio.subprocess.PIPE)

# do something else while ls is working

# if proc takes very long to complete, the CPUs are free to use cycles for
# other processes
stdout, stderr = await proc.communicate()
The process will start running as soon as the await asyncio.create_subprocess_exec(...) has completed. If it hasn't finished by the time you call await proc.communicate(), it will wait there in order to give you your output status. If it has finished, proc.communicate() will return immediately.
The gist here is similar to Terrel's answer, but I think Terrel's answer overcomplicates things.
See asyncio.create_subprocess_exec for more information.
What I am wondering is if this [os.system()] is the proper way to accomplish such a thing?
No. os.system() is not the proper way. That's why everyone says to use subprocess.
For more information, read http://docs.python.org/library/os.html#os.system
The subprocess module provides more powerful facilities for spawning new processes and retrieving their results; using that module is preferable to using this function. Use the subprocess module. Check especially the Replacing Older Functions with the subprocess Module section.
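As a sketch of that replacement (the command name is illustrative, not a real program), the pattern from that docs section turns an os.system call with a trailing & into a non-blocking Popen:
import subprocess

# Old style, handing the backgrounding off to the shell:
#   os.system("my_long_command &")

# subprocess equivalent: Popen returns immediately while the child keeps running.
proc = subprocess.Popen(["my_long_command"])
# ... do other work ...
proc.wait()  # collect the exit status when you are ready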
The accepted answer is very old.
I found a better modern answer here:
https://kevinmccarthy.org/2016/07/25/streaming-subprocess-stdin-and-stdout-with-asyncio-in-python/
and made some changes:
Make it work on Windows
Make it work with multiple commands
import sys
import asyncio

if sys.platform == "win32":
    asyncio.set_event_loop_policy(asyncio.WindowsProactorEventLoopPolicy())


async def _read_stream(stream, cb):
    while True:
        line = await stream.readline()
        if line:
            cb(line)
        else:
            break


async def _stream_subprocess(cmd, stdout_cb, stderr_cb):
    try:
        process = await asyncio.create_subprocess_exec(
            *cmd, stdout=asyncio.subprocess.PIPE, stderr=asyncio.subprocess.PIPE
        )

        await asyncio.wait(
            [
                _read_stream(process.stdout, stdout_cb),
                _read_stream(process.stderr, stderr_cb),
            ]
        )
        rc = await process.wait()
        return process.pid, rc
    except OSError as e:
        # the program will hang if we let any exception propagate
        return e


def execute(*aws):
    """run the given coroutines in an asyncio loop
    returns a list containing the values returned from each coroutine.
    """
    loop = asyncio.get_event_loop()
    rc = loop.run_until_complete(asyncio.gather(*aws))
    loop.close()
    return rc


def printer(label):
    def pr(*args, **kw):
        print(label, *args, **kw)
    return pr


def name_it(start=0, template="s{}"):
    """a simple generator for task names"""
    while True:
        yield template.format(start)
        start += 1


def runners(cmds):
    """
    cmds is a list of commands to execute as subprocesses
    each item is a list appropriate for use by subprocess.call
    """
    next_name = name_it().__next__
    for cmd in cmds:
        name = next_name()
        out = printer(f"{name}.stdout")
        err = printer(f"{name}.stderr")
        yield _stream_subprocess(cmd, out, err)


if __name__ == "__main__":
    cmds = (
        [
            "sh",
            "-c",
            """echo "$SHELL"-stdout && sleep 1 && echo stderr 1>&2 && sleep 1 && echo done""",
        ],
        [
            "bash",
            "-c",
            "echo 'hello, Dave.' && sleep 1 && echo dave_err 1>&2 && sleep 1 && echo done",
        ],
        [sys.executable, "-c", 'print("hello from python");import sys;sys.exit(2)'],
    )

    print(execute(*runners(cmds)))
It is unlikely that the example commands will work perfectly on your system, and it doesn't handle weird errors, but this code does demonstrate one way to run multiple subprocesses using asyncio and stream the output.
I've had good success with the asyncproc module, which deals nicely with the output from the processes. For example:
import os
from asyncproc import Process

myProc = Process("myprogram.app")

while True:
    # check to see if process has ended
    poll = myProc.wait(os.WNOHANG)
    if poll is not None:
        break
    # print any new output
    out = myProc.read()
    if out != "":
        print out
Using pexpect with non-blocking readlines is another way to do this. Pexpect solves the deadlock problems, allows you to easily run the processes in the background, and gives easy ways to have callbacks when your process spits out predefined strings, and generally makes interacting with the process much easier.
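A sketch of that idea (the command and the pattern are illustrative): spawn the process under a pseudoterminal and react when it prints a known string, while the script stays free to do other work between expect() calls.
import pexpect

# Start a long-running command under a pseudoterminal.
child = pexpect.spawn('ping -c 3 localhost')

# Returns as soon as the first matching line shows up; the script can then
# go off and do other work, or register further expect() calls.
child.expect('bytes from')
print('first reply received')

# Later: wait for the process to finish and collect the remaining output.
child.expect(pexpect.EOF)
print(child.before.decode())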
Considering "I don't have to wait for it to return", one of the easiest solutions will be this:
subprocess.Popen(
    [path_to_executable, arg1, arg2, ... argN],
    creationflags=subprocess.CREATE_NEW_CONSOLE,
).pid
But... From what I read, this is not "the proper way to accomplish such a thing" because of the security risks created by the subprocess.CREATE_NEW_CONSOLE flag.
The key things that happen here are the use of subprocess.CREATE_NEW_CONSOLE to create a new console and .pid (which returns the process ID so that you can check on the program later if you want to), so that you do not have to wait for the program to finish its job.
I have the same problem trying to connect to a 3270 terminal using the s3270 scripting software in Python. Now I'm solving the problem with a subclass of Process that I found here:
http://code.activestate.com/recipes/440554/
And here is the sample taken from file:
def recv_some(p, t=.1, e=1, tr=5, stderr=0):
    if tr < 1:
        tr = 1
    x = time.time()+t
    y = []
    r = ''
    pr = p.recv
    if stderr:
        pr = p.recv_err
    while time.time() < x or r:
        r = pr()
        if r is None:
            if e:
                raise Exception(message)
            else:
                break
        elif r:
            y.append(r)
        else:
            time.sleep(max((x-time.time())/tr, 0))
    return ''.join(y)

def send_all(p, data):
    while len(data):
        sent = p.send(data)
        if sent is None:
            raise Exception(message)
        data = buffer(data, sent)

if __name__ == '__main__':
    if sys.platform == 'win32':
        shell, commands, tail = ('cmd', ('dir /w', 'echo HELLO WORLD'), '\r\n')
    else:
        shell, commands, tail = ('sh', ('ls', 'echo HELLO WORLD'), '\n')

    a = Popen(shell, stdin=PIPE, stdout=PIPE)
    print recv_some(a),
    for cmd in commands:
        send_all(a, cmd + tail)
        print recv_some(a),
    send_all(a, 'exit' + tail)
    print recv_some(a, e=0)
    a.wait()
There are several answers here, but none of them satisfied the requirements below:
I don't want to wait for command to finish or pollute my terminal with subprocess outputs.
I want to run bash script with redirects.
I want to support piping within my bash script (for example find ... | tar ...).
The only combination that satisfies the above requirements is:
subprocess.Popen(['./my_script.sh "arg1" > "redirect/path/to"'],
                 stdout=subprocess.PIPE,
                 stderr=subprocess.PIPE,
                 shell=True)
