My question is hopefully particular enough to not relate to any of the other ones that I've read. I'm wanting to use subprocess and multiprocessing to spawn a bunch of jobs serially and return the return code to me. The problem is that I don't want to wait() so I can spawn the jobs all at once, but I do want to know when it finishes so I can get the return code.

I'm having this weird problem where if I poll() the process it won't run. It just hangs out in the activity monitor without running (I'm on a Mac). I thought I could use a watcher thread, but I'm hanging on the q_out.get(), which is leading me to believe that maybe I'm filling up the buffer and deadlocking. I'm not sure how to get around this.

This is basically what my code looks like. If anyone has any better ideas on how to do this I would be happy to completely change my approach.
def watchJob(p1,out_q):
    while p1.poll() == None:
        pass
    print "Job is done"
    out_q.put(p1.returncode)

def runJob(out_q):
    LOGFILE = open('job_to_run.log','w')
    p1 = Popen(['../../bin/jobexe','job_to_run'], stdout = LOGFILE)
    t = threading.Thread(target=watchJob, args=(p1,out_q))
    t.start()

out_q= Queue()
outlst=[]
for i in range(len(nprocs)):
    proc = Process(target=runJob, args=(out_q,))
    proc.start()
    outlst.append(out_q.get()) # This hangs indefinitely
    proc.join()
You need neither multiprocessing nor threading here. You can run multiple child processes in parallel and collect their statuses all in a single thread:
#!/usr/bin/env python3
from subprocess import Popen

def run(cmd, log_filename):
    with open(log_filename, 'wb', 0) as logfile:
        return Popen(cmd, stdout=logfile)

# start several subprocesses
processes = {run(['echo', c], 'subprocess.%s.log' % c) for c in 'abc'}

# now they all run in parallel
# report as soon as a child process exits
while processes:
    for p in processes:
        if p.poll() is not None:
            processes.remove(p)
            print('{} done, status {}'.format(p.args, p.returncode))
            break
p.args stores cmd in Python 3.3+; on earlier Python versions, keep track of cmd yourself.
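On older versions, one way to do that is a dict keyed by the Popen object. A minimal sketch, reusing the run() helper above (the names here are only illustrative):

commands = {}   # Popen object -> cmd, for Pythons without p.args
processes = set()
for c in 'abc':
    cmd = ['echo', c]
    p = run(cmd, 'subprocess.%s.log' % c)
    commands[p] = cmd
    processes.add(p)

while processes:
    for p in processes:
        if p.poll() is not None:
            processes.remove(p)
            print('{} done, status {}'.format(commands[p], p.returncode))
            break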
See also:
Python threading multiple bash subprocesses?
Python subprocess in parallel
Python: execute cat subprocess in parallel
Using Python's Multiprocessing module to execute simultaneous and separate SEAWAT/MODFLOW model runs
To limit the number of parallel jobs, a ThreadPool could be used (as shown in the first link):
#!/usr/bin/env python3
from multiprocessing.dummy import Pool # use threads
from subprocess import Popen

def run_until_done(args):
    cmd, log_filename = args
    try:
        with open(log_filename, 'wb', 0) as logfile:
            p = Popen(cmd, stdout=logfile)
        return cmd, p.wait(), None
    except Exception as e:
        return cmd, None, str(e)

commands = ((('echo', str(d)), 'subprocess.%03d.log' % d) for d in range(500))
pool = Pool(128) # 128 concurrent commands at a time
for cmd, status, error in pool.imap_unordered(run_until_done, commands):
    if error is None:
        fmt = '{cmd} done, status {status}'
    else:
        fmt = 'failed to run {cmd}, reason: {error}'
    print(fmt.format_map(vars())) # or fmt.format(**vars()) on older versions
The thread pool in the example has 128 threads (no more, no less), so it can't execute more than 128 jobs concurrently. As soon as one of the threads finishes a job, it picks up another one, and so on. The total number of jobs running concurrently is limited by the number of threads; a new job doesn't wait for all 128 previous jobs to finish, it starts as soon as any of the old jobs is done.
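Roughly the same thing can be sketched with the stdlib concurrent.futures API instead (this is not the original code above; note that Executor.map yields results in submission order rather than as they complete):

#!/usr/bin/env python3
from concurrent.futures import ThreadPoolExecutor
from subprocess import Popen

def run_until_done(args):
    cmd, log_filename = args
    try:
        with open(log_filename, 'wb', 0) as logfile:
            p = Popen(cmd, stdout=logfile)
        return cmd, p.wait(), None
    except Exception as e:
        return cmd, None, str(e)

commands = ((('echo', str(d)), 'subprocess.%03d.log' % d) for d in range(500))
with ThreadPoolExecutor(max_workers=128) as pool:   # at most 128 jobs run at a time
    for cmd, status, error in pool.map(run_until_done, commands):
        if error is None:
            print('{} done, status {}'.format(cmd, status))
        else:
            print('failed to run {}, reason: {}'.format(cmd, error))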
If you're going to run watchJob in a thread, there's no reason to busy-loop with p1.poll; just call p1.wait() to block until the process finishes. Using the busy loop requires the GIL to constantly be released/re-acquired, which slows down the main thread, and also pegs the CPU, which hurts performance even more.
Also, if you're not using the stdout of the child process, you shouldn't send it to PIPE, because that could cause a deadlock if the process writes enough data to the stdout buffer to fill it up (which may actually be what's happening in your case). There's also no need to use multiprocessing here; just call Popen in the main thread, and then have the watchJob thread wait on the process to finish.
import threading
from subprocess import Popen
from Queue import Queue

def watchJob(p1, out_q):
    p1.wait()
    out_q.put(p1.returncode)

out_q = Queue()
outlst=[]

p1 = Popen(['../../bin/jobexe','job_to_run'])
t = threading.Thread(target=watchJob, args=(p1,out_q))
t.start()

outlst.append(out_q.get())
t.join()
Edit:
Here's how to run multiple jobs concurrently this way:
out_q = Queue()
outlst = []
threads = []
num_jobs = 3

for _ in range(num_jobs):
    p = Popen(['../../bin/jobexe','job_to_run'])
    t = threading.Thread(target=watchJob, args=(p, out_q))
    t.start()
    threads.append(t)
    # Don't consume from the queue yet.

# All jobs are running, so now we can start
# consuming results from the queue.
for _ in range(num_jobs):
    outlst.append(out_q.get())

for t in threads:
    t.join()
Related
Right now, I'm using subprocess to run a long-running job in the background. For multiple reasons (PyInstaller + AWS CLI) I can't use subprocess anymore.
Is there an easy way to achieve the same thing as below? That is, running a long-running Python function in a multiprocessing pool (or something else) and doing real-time processing of its stdout/stderr?
import subprocess

process = subprocess.Popen(
    ["python", "long-job.py"],
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
    shell=True,
)
while True:
    out = process.stdout.read(2000).decode()
    if not out:
        err = process.stderr.read().decode()
    else:
        err = ""
    if (out == "" or err == "") and process.poll() is not None:
        break
    live_stdout_process(out)
Thanks
Getting this cross-platform is messy: first of all, the Windows implementation of non-blocking pipes is not user friendly or portable.
One option is to just have your application read its command line arguments and conditionally execute a file; you then get to keep using subprocess, since you will be launching yourself with a different argument.
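A minimal sketch of that re-exec idea (the --run-job flag, long_job() and handle_output() are made-up names, not from the question):

import subprocess
import sys

def long_job():
    print("working...")            # placeholder for the real long-running work

def handle_output(text):
    print(text, end='')            # placeholder for real-time processing

def main():
    if '--run-job' in sys.argv:
        long_job()                 # we are the re-launched copy: just do the work
        return

    if getattr(sys, 'frozen', False):           # PyInstaller bundle: the exe *is* the app
        cmd = [sys.executable, '--run-job']
    else:                                       # plain Python: re-run this script
        cmd = [sys.executable, sys.argv[0], '--run-job']

    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    for line in proc.stdout:                    # stream output as it arrives
        handle_output(line.decode())
    proc.wait()

if __name__ == '__main__':
    main()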
But to keep it to multiprocessing:
The output must be logged to queues instead of pipes.
You need the child to execute a python file; this can be done using runpy to execute the file as __main__.
This runpy call should run under a multiprocessing child, and that child must first redirect its stdout and stderr in the initializer.
When an error happens, your main application must catch it, but if it is too busy reading the output it won't be able to wait for the error, so a child thread has to start the multiprocessing worker and wait for the error.
The main process has to create the queues, launch the child thread, and read the output.
Putting it all together:
import multiprocessing
from multiprocessing import Queue
import sys
import concurrent.futures
import threading
import traceback
import runpy
import time

class StdoutQueueWrapper:
    def __init__(self, queue: Queue):
        self._queue = queue

    def write(self, text):
        self._queue.put(text)

    def flush(self):
        pass

def function_to_run():
    # runpy.run_path("long-job.py", run_name="__main__")  # run long-job.py
    print("hello")   # print something
    raise ValueError # error out

def initializer(stdout_queue: Queue, stderr_queue: Queue):
    sys.stdout = StdoutQueueWrapper(stdout_queue)
    sys.stderr = StdoutQueueWrapper(stderr_queue)

def thread_function(child_stdout_queue, child_stderr_queue):
    with concurrent.futures.ProcessPoolExecutor(
            1,
            initializer=initializer,
            initargs=(child_stdout_queue, child_stderr_queue)) as pool:
        result = pool.submit(function_to_run)
        try:
            result.result()
        except Exception as e:
            child_stderr_queue.put(traceback.format_exc())

if __name__ == "__main__":
    child_stdout_queue = multiprocessing.Queue()
    child_stderr_queue = multiprocessing.Queue()

    child_thread = threading.Thread(target=thread_function,
                                    args=(child_stdout_queue, child_stderr_queue),
                                    daemon=True)
    child_thread.start()

    while True:
        while not child_stdout_queue.empty():
            var = child_stdout_queue.get()
            print(var, end='')

        while not child_stderr_queue.empty():
            var = child_stderr_queue.get()
            print(var, end='')

        if not child_thread.is_alive():
            break

        time.sleep(0.01)  # check output every 0.01 seconds
Note that a direct consequence of running the work as a multiprocessing child is that if the child runs into a segmentation fault or some other unrecoverable error, the parent will also die; hence running yourself under subprocess might be a better option if segfaults are expected.
I am trying to use multiprocessing from inside another process that was spawned with Popen. I want to be able to communicate between this process and a new child process, but this "middle" process has a polling read on the pipe with its parent, which seems to block execution of its child process.
Here is my file structure:
entry.py
import subprocess, threading, time, sys

def start():
    # Create process 2
    worker = subprocess.Popen([sys.executable, "-u", "mproc.py"],
                              # When creating the subprocess with an open pipe to stdin and
                              # subsequently polling that pipe, it blocks further communication
                              # between subprocesses
                              stdin=subprocess.PIPE,
                              close_fds=False,)
    t = threading.Thread(args=(worker))
    t.start()
    time.sleep(4)

if __name__ == '__main__':
    start()
mproc.py
import multiprocessing as mp
import time, sys, threading

def exit_on_stdin_close():
    try:
        while sys.stdin.read():
            pass
    except:
        pass

def put_hello(q):
    # We never reach this line if exit_poll.start() is uncommented
    q.put("hello")
    time.sleep(2.4)

def start():
    exit_poll = threading.Thread(target=exit_on_stdin_close, name="exit-poll")
    exit_poll.daemon = True
    # This daemon thread polling stdin blocks execution of subprocesses
    # But ONLY if running in another process with stdin connected
    # to its parent by PIPE
    exit_poll.start()

    ctx = mp.get_context('spawn')
    q = ctx.Queue()
    p = ctx.Process(target=put_hello, args=(q,))
    # Create process 3
    p.start()
    p.join()
    print(f"result: {q.get()}")

if __name__ == '__main__':
    start()
My desired behavior is that when running entry.py, mproc.py should run on a subprocess and be able to communicate with its own subprocess to get the Queue output, and this does happen if I don't start the exit-poll daemon thread:
$ python -u entry.py
result: hello
but if exit-poll is running, then process 3 blocks as soon as it's started. The put_hello method isn't even entered until the exit-poll thread ends.
Is there a way to create a process 3 from process 2 and communicate between the two, even while the pipe between processes 1 and 2 is being used?
Edit: I can only consistently reproduce this problem on Windows. On Linux (Ubuntu 20.04 WSL) the Queues are able to communicate even with exit-poll running, but only if I'm using the spawn multiprocessing context. If I change it to fork then I get the same behavior that I see on Windows.
To start, I'm aware this looks like a duplicate. I've been reading:
Python subprocess readlines() hangs
Python Subprocess readline hangs() after reading all input
subprocess readline hangs waiting for EOF
But these options either straight don't work or I can't use them.
The Problem
# Obviously, swap HOSTNAME1 and HOSTNAME2 with something real
cmd = "ssh -N -f -L 1111:<HOSTNAME1>:80 <HOSTNAME2>"
p = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, env=os.environ)

while True:
    out = p.stdout.readline()
    # Hangs here ^^^^^^^ forever
    out = out.decode('utf-8')
    if out:
        print(out)
    if p.poll() is not None:
        break
My dilemma is that the function calling the subprocess.Popen() is a library function for running bash commands, so it needs to be very generic and has the following restrictions:
Must display output as it comes in; not block and then spam the screen all at once
Can't use multiprocessing in case the parent caller is multiprocessing the library function (Python doesn't allow child processes to have child processes)
Can't use signal.SIGALRM for the same reason as multiprocessing; the parent caller may be trying to set their own timeout
Can't use third party non-built-in modules
Threading straight up doesn't work. When the readline() call is in a thread, thread.join(timeout=1) lets the program continue, but Ctrl+C doesn't work on it at all, and calling sys.exit() doesn't exit the program, since the thread is still open. And as you know, you can't kill a thread in Python by design.
No manner of bufsize or other subprocess args seems to make a difference; neither does putting readline() in an iterator.
I would have a workable solution if I could kill a thread, but that's super taboo, even though this is definitely a legitimate use case.
I'm open to any ideas.
One option is to use a thread to publish to a queue. Then you can block on the queue with a timeout. You can make the reader thread a daemon so it won't prevent system exit. Here's a sketch:
import subprocess
from threading import Thread
from queue import Queue

def reader(stream, queue):
    while True:
        line = stream.readline()
        queue.put(line)
        if not line:
            break

p = subprocess.Popen(cmd, stdout=subprocess.PIPE, ...)
queue = Queue()
thread = Thread(target=reader, args=(p.stdout, queue))
thread.daemon = True
thread.start()

while True:
    out = queue.get(timeout=1) # timeout is optional
    if not out: # Reached end of stream
        break
    ... # Do whatever with output

# Output stream was closed but process may still be running
p.wait()
Note that you should adapt this answer to your particular use case. For example, you may want to add a way to signal to the reader thread to stop running before reaching the end of stream.
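For example, a stop flag for the reader might look roughly like this (a sketch reusing cmd from the question; note that setting the flag alone does not interrupt a blocked readline(), killing the process does, because that closes the pipe):

import subprocess
import threading
from queue import Queue

stop_requested = threading.Event()   # set this to ask the reader to stop early

def reader(stream, queue):
    while not stop_requested.is_set():
        line = stream.readline()
        queue.put(line)
        if not line:                 # EOF: the child closed its end of the pipe
            break

p = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
queue = Queue()
thread = threading.Thread(target=reader, args=(p.stdout, queue), daemon=True)
thread.start()

# ... consume from the queue as above ...

# To bail out early:
stop_requested.set()   # reader exits after the next readline() returns
p.kill()               # closes the pipe, which unblocks a pending readline()
p.wait()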
Another option would be to poll the input stream, like in this question: timeout on subprocess readline in python
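On POSIX, that polling approach could be sketched roughly like this (select does not work on pipes on Windows; cmd is again the command from the question):

import os
import select
import subprocess

p = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
fd = p.stdout.fileno()

while True:
    # Wait up to 1 second for the pipe to become readable.
    ready, _, _ = select.select([fd], [], [], 1.0)
    if ready:
        chunk = os.read(fd, 4096)   # os.read avoids readline() blocking on a partial line
        if not chunk:               # EOF: the child closed its stdout
            break
        print(chunk.decode('utf-8'), end='')
    elif p.poll() is not None:
        break                       # no output and the process has already exited

p.wait()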
I finally got a working solution; the key piece of information I was missing was thread.daemon = True, which @augurar pointed out in their answer.
Setting thread.daemon = True allows the thread to be terminated when the main process terminates, which unblocks my use of a sub-thread to monitor readline().
Here is a sample implementation of my solution. I used a Queue() object to pass strings to the main process, and I implemented a 3-second timer for cases like the original problem I was trying to solve, where the subprocess has finished and terminated but the readline() is hung for some reason.
This also helps avoid a race condition between which thing finishes first.
This works for both Python 2 and 3.
import sys
import threading
import subprocess
from datetime import datetime
try:
    import queue
except:
    import Queue as queue # Python 2 compatibility

def _monitor_readline(process, q):
    while True:
        bail = True
        if process.poll() is None:
            bail = False
        out = ""
        if sys.version_info[0] >= 3:
            out = process.stdout.readline().decode('utf-8')
        else:
            out = process.stdout.readline()
        q.put(out)
        if q.empty() and bail:
            break

def bash(cmd):
    # Kick off the command
    process = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, shell=True)

    # Create the queue instance
    q = queue.Queue()

    # Kick off the monitoring thread
    thread = threading.Thread(target=_monitor_readline, args=(process, q))
    thread.daemon = True
    thread.start()

    start = datetime.now()
    while True:
        bail = True
        if process.poll() is None:
            bail = False
            # Re-set the thread timer
            start = datetime.now()

        out = ""
        while not q.empty():
            out += q.get()
        if out:
            print(out)

        # In the case where the thread is still alive and reading, and
        # the process has exited and finished, give it up to 3 seconds
        # to finish reading
        if bail and thread.is_alive() and (datetime.now() - start).total_seconds() < 3:
            bail = False
        if bail:
            break

# To demonstrate output in realtime, sleep is called in between these echos
bash("echo lol;sleep 2;echo bbq")
I use Tornado as a web server. Users can submit a task through the front end page, and after auditing they can start the submitted task. In this situation, I want to start an asynchronous subprocess to handle the task, so I wrote the following code in a request handler:
def task_handler():
    # handle task here
    pass

def start_a_process_for_task():
    p = multiprocessing.Process(target=task_handler, args=())
    p.start()
    return 0
I don't care about the subprocess; I just start a process for the task, return to the front end page, and tell the user the task has started. The task itself runs in the background and records its status or results to the database, so the user can view them on the web page later. So here I don't want to use p.join(), which is blocking; but without p.join(), after the task finishes the subprocess becomes a defunct process, and since Tornado runs as a daemon and never exits, the defunct process never disappears.
Does anyone know how to fix this problem? Thanks.
The proper way to avoid defunct children is for the parent to gracefully clean up and close all resources of the exited child. This is normally done by join(), but if you want to avoid that, another approach could be to set up a global handler for the SIGCHLD signal on the parent.
SIGCHLD will be emitted whenever a child exits, and in the handler function you should either call Process.join() if you still have access to the process object, or even use os.wait() to "wait" for any child process to terminate and properly reap it. The wait time here should be 0 as you know for sure a child process has just exited. You will also be able to get the process' exit code / termination signal so it can also be a useful method to handle / log child process crashes.
Here's a quick example of doing this:
from __future__ import print_function

import os
import signal
import time
from multiprocessing import Process

def child_exited(sig, frame):
    pid, exitcode = os.wait()
    print("Child process {pid} exited with code {exitcode}".format(
        pid=pid, exitcode=exitcode
    ))

def worker():
    time.sleep(5)
    print("Process {pid} has completed its work".format(pid=os.getpid()))

def parent():
    children = []
    # Comment out the following line to see zombie children
    signal.signal(signal.SIGCHLD, child_exited)

    for i in range(5):
        c = Process(target=worker)
        c.start()
        print("Parent forked out worker process {pid}".format(pid=c.pid))
        children.append(c)
        time.sleep(1)

    print("Forked out {c} workers, hit Ctrl+C to end...".format(c=len(children)))
    while True:
        time.sleep(5)

if __name__ == '__main__':
    parent()
One caveat is that I am not sure whether this approach works on non-Unix operating systems. It should work on Linux, Mac and other Unixes.
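If SIGCHLD is not available (e.g. on Windows), a rough alternative is to reap finished workers periodically: multiprocessing.active_children() has the documented side effect of joining any children that have already exited. A sketch reusing the handler names from the question:

import multiprocessing

def start_a_process_for_task():
    # Reap any workers that have already finished: active_children() joins
    # exited children as a side effect, so zombies don't accumulate.
    # (You could also call it from a periodic callback instead.)
    multiprocessing.active_children()

    p = multiprocessing.Process(target=task_handler, args=())
    p.start()
    return 0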
You need to join your subprocesses if you do not want to create zombies, but you can do it in threads.
This is a dummy example. After 10 seconds, all your subprocesses are gone instead of being zombies. It launches a thread for every subprocess. The threads themselves do not need to be joined or waited on; each thread executes the subprocess, joins it, and then exits as soon as the subprocess completes.
import multiprocessing
import threading
from time import sleep

def task_processor():
    sleep(10)

class TaskProxy(threading.Thread):
    def __init__(self):
        super(TaskProxy, self).__init__()

    def run(self):
        p = multiprocessing.Process(target=task_processor, args=())
        p.start()
        p.join()

def task_handler():
    t = TaskProxy()
    t.daemon = True
    t.start()
    return

for _ in xrange(0,20):
    task_handler()

sleep(60)
I'm developing a process scheduler in Python. The idea is to create several threads from the main function and start an external process in each of these threads. The external process should continue to run until either it's finished or the main thread decides to stop it (by sending a kill signal) because the process' CPU time limit is exceeded.
The problem is that sometimes the Popen call blocks and fails to return. This code reproduces the problem with ~50% probability on my system (Ubuntu 14.04.3 LTS):
import os, time, threading, sys
from subprocess import Popen

class Process:
    def __init__(self, args):
        self.args = args

    def run(self):
        print("Run subprocess: " + " ".join(self.args))
        retcode = -1
        try:
            self.process = Popen(self.args)
            print("started a process")
            while self.process.poll() is None:
                # in the real code, check for the end condition here and send kill signal if required
                time.sleep(1.0)
            retcode = self.process.returncode
        except:
            print("unexpected error:", sys.exc_info()[0])
        print("process done, returned {}".format(retcode))
        return retcode

def main():
    processes = [Process(["/bin/cat"]) for _ in range(4)]
    # start all processes
    for p in processes:
        t = threading.Thread(target=Process.run, args=(p,))
        t.daemon = True
        t.start()
    print("all threads started")
    # wait for Ctrl+C
    while True:
        time.sleep(1.0)

main()
The output indicates that only 3 Popen() calls have returned:
Run subprocess: /bin/cat
Run subprocess: /bin/cat
Run subprocess: /bin/cat
Run subprocess: /bin/cat
started a process
started a process
started a process
all threads started
However, running ps shows that all four processes have in fact been started!
The problem does not show up when using Python 3.4, but I want to keep Python 2.7 compatibility.
Edit: the problem also goes away if I add some delay before starting each subsequent thread.
Edit 2: I did a bit of investigation, and the blocking is caused by line 1308 in the subprocess.py module, which tries to do some reading from a pipe in the parent process:
data = _eintr_retry_call(os.read, errpipe_read, 1048576)
There are a handful of bugs in Python 2.7's subprocess module that can result in deadlock when calling the Popen constructor from multiple threads. They were fixed in later versions of Python, 3.2+ IIRC.
You may find that using the subprocess32 backport of Python 3.2/3.3's subprocess module resolves your issue.
*I was unable to locate the link to the actual bug report, but encountered it recently when dealing with a similar issue.
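If you go that route, the backport is typically used as a drop-in replacement, something like this minimal sketch (assuming you have run pip install subprocess32; the package is POSIX-only):

try:
    # Backport of the Python 3.2 subprocess module with the threading fixes
    import subprocess32 as subprocess
except ImportError:
    import subprocess

p = subprocess.Popen(["/bin/echo", "hello"])
p.wait()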