concurrent.futures.ThreadPoolExecutor max_workers can't be 0

If I spin up a ThreadPoolExecutor(max_workers=0) it works with Python 3.4 and Python 2.7, but it raises an error with Python 3.5 and Python 3.6. I'm trying to create a ThreadPoolExecutor while ensuring that no task ever gets added to the thread pool. Currently, I created a subclass of ThreadPoolExecutor and raised an exception in the overridden submit method. Is there a better way to do this?

Simply put, with Python 3.5 and 3.6 the max_workers argument is not allowed to be 0, for sanity reasons. So my solution was to make a more "mocky" version of the ThreadPoolExecutor that records whether anything is submitted to the pool, so that tests can make assertions about it afterwards. I'll share the code here in case someone wants to reuse it for their own purposes.
import threading
from concurrent import futures


class RecordingThreadPool(futures.Executor):
    """A thread pool that records if used."""

    def __init__(self, max_workers):
        self._tp_executor = futures.ThreadPoolExecutor(max_workers=max_workers)
        self._lock = threading.Lock()
        self._was_used = False

    def submit(self, fn, *args, **kwargs):
        with self._lock:
            self._was_used = True
        return self._tp_executor.submit(fn, *args, **kwargs)

    def was_used(self):
        with self._lock:
            return self._was_used
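For instance, a test might use it roughly like this (code_under_test is a placeholder for whatever function accepts the executor):

def test_nothing_is_submitted():
    executor = RecordingThreadPool(max_workers=1)
    code_under_test(executor)  # hypothetical function that receives the executor
    assert not executor.was_used()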


Is it possible to get OpenTelemetry tracing in Python to record spans that happen within a ThreadPoolExecutor?

I have OpenTelemetry running with a Python FastAPI application. Traces are being sent to Jaeger and I can view them.
I have a bunch of IO-intensive work being done, so I'm doing it in parallel with a ThreadPoolExecutor. Spans created in the functions executed by the ThreadPoolExecutor are not making their way into Jaeger.
Can anyone point me in the right direction to get this working? At the moment I'm resorting to disabling the concurrency to record traces for performance debugging, but that won't work in production.
I've had a bit more time to dig into this and created this class to overcome the issue:
from concurrent.futures import ThreadPoolExecutor
from typing import Callable

from opentelemetry import context as otel_context
from opentelemetry.trace import Tracer


class TracedThreadPoolExecutor(ThreadPoolExecutor):
    """Implementation of :class:`ThreadPoolExecutor` that will pass the OpenTelemetry context into sub tasks."""

    def __init__(self, tracer: Tracer, *args, **kwargs):
        self.tracer = tracer
        super().__init__(*args, **kwargs)

    def with_otel_context(self, context: otel_context.Context, fn: Callable):
        otel_context.attach(context)
        return fn()

    def submit(self, fn, *args, **kwargs):
        """Submit a new task to the thread pool."""
        # capture the current otel context in the submitting thread
        context = otel_context.get_current()
        if context:
            return super().submit(
                lambda: self.with_otel_context(
                    context, lambda: fn(*args, **kwargs)
                ),
            )
        else:
            return super().submit(lambda: fn(*args, **kwargs))
This class can be used as follows:
from opentelemetry import trace
import multiprocessing

tracer = trace.get_tracer(__name__)
executor = TracedThreadPoolExecutor(tracer, max_workers=multiprocessing.cpu_count())

# from here you can use it as you would a regular ThreadPoolExecutor
future = executor.submit(some_work)
executor.shutdown(wait=True)
result = future.result()
# do something with the result
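A minimal sketch of a submitted function, assuming the tracer above is configured with a real provider (the function name and workload are illustrative):

def some_work():
    # submit() re-attached the caller's context in this worker thread, so this
    # span becomes a child of whatever span was active when submit() was called
    with tracer.start_as_current_span("some_work"):
        return sum(i * i for i in range(10_000))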

patch object spawned in sub-process

I am using the multiprocessing package for creating sub-processes. I need to handle exceptions from a sub-process: catch, report, terminate and re-spawn the sub-process.
I am struggling to create a test for it.
I would like to patch the object that represents my sub-process and raise an exception to see if the handling is correct.
But it looks like the object is patched only in the main process, and the spawned process runs the unchanged version. Any ideas how to accomplish the requested functionality?
Example:
import multiprocessing
import time
from unittest import mock


class SubprocessClass(multiprocessing.Process):
    def __init__(self) -> None:
        super().__init__()

    def simple_method(self):
        return 42

    def run(self):
        try:
            self.simple_method()
        except Exception:
            # ok, exception handled
            pass
        else:
            # I wanted an exception! <- code goes here
            assert False


@mock.patch.object(SubprocessClass, "simple_method")
def test_patch_subprocess(mock_simple_method):
    mock_simple_method.side_effect = Exception("exception from mock")
    subprocess = SubprocessClass()
    subprocess.run()
    subprocess.start()
    time.sleep(0.1)
    subprocess.join()
You can monkey-patch the object before it is started (it is a bit iffy, but you will get an actual process running that code):
def _this_always_raises(*args, **kwargs):
    raise RuntimeError("I am overridden")


def test_patch_subprocess():
    subprocess = SubprocessClass()
    subprocess.simple_method = _this_always_raises
    subprocess.start()
    time.sleep(0.1)
    subprocess.join()
    assert subprocess.exitcode == 0
You could also mock multiprocessing to behave like threading, but that is a bit unpredictable.
If you want to do it generically for all objects, you can replace the class with one derived from the original with only one method overridden, and then mock the place where the original class is used (see the sketch after the snippet below):
class SubprocessClassThatRaisesInSimpleMethod(SubprocessClass):
    def simple_method(self):
        raise RuntimeError("I am overridden")

# then use unittest.mock to make the process spawner use this class instead of SubprocessClass
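For instance, a test could substitute the raising subclass wherever the production code looks up SubprocessClass; a minimal sketch, assuming a hypothetical spawn_worker() factory in a module myapp.workers that instantiates and starts SubprocessClass:

from unittest import mock

def test_worker_handles_exception():
    # swap the class at the location where the spawner imports it (path is illustrative)
    with mock.patch("myapp.workers.SubprocessClass",
                    SubprocessClassThatRaisesInSimpleMethod):
        proc = spawn_worker()      # hypothetical factory that creates and starts the process
        proc.join()
        assert proc.exitcode == 0  # run() caught the exception, so the process exits cleanly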

Django: A common/global variable that is constructed on server start and destroyed on server exit

I'd like to use Django as a relay to another layer. When the user makes a request, a thrift client calls a function that goes to another software to retrieve the result.
I understand the potential thread-safety issues, which is why there will be an object pool of clients based on Python's multiprocessing, and every user request will pop one, use it, and put it back.
The object pool has to be initialized at the beginning of the program and destroyed (hopefully cleanly) when the program exits (e.g. when the server is interrupted with SIGINT). The motivation for this question is to know how to do this correctly, without hacks.
The only way I have found so far is the following prototype with global variables, where the following is the views file with function-based views:
clients_pool = ClientsPool("127.0.0.1", 9090, size=64)  # pool size


def get_data(request):
    global clients_pool
    c = clients_pool.pop()  # borrow a client from the pool
    res = c.get_data(request.body)  # call the remote function with thrift
    clients_pool.release(c)  # restore client to the pool
    return HttpResponse(res)


def get_something(request):
    global clients_pool
    c = clients_pool.pop()  # borrow a client from the pool
    res = c.get_something(request.body)  # call the remote function with thrift
    clients_pool.release(c)  # restore client to the pool
    return HttpResponse(res)
Is this the correct way to do this? Or does Django offer a better mechanism to do this?
I would make sure multiple goals are satisfied here. First, make sure only one instance of your pool manager is ever created. You could place it in your app's __init__.py. I don't know if this is thread-safe; I doubt it.
Second, you want to delegate the management of the pool to the application layer. Right now you are managing the pool in the view layer. You always want to make sure pool clients are released, I suppose. That would lead to something like this:
# [app_name]/__init__.py

class Singleton(type):
    _instances = {}

    def __call__(cls, *args, **kwargs):
        if cls not in cls._instances:
            cls._instances[cls] = super(Singleton, cls).__call__(*args, **kwargs)
        return cls._instances[cls]


class PoolManager(metaclass=Singleton):
    def __init__(self, *args, **kwargs):
        self.pool = ClientsPool(*args, **kwargs)


pool_manager = PoolManager(host='127.0.0.1', port=9090, size=64)
# views.py / consider moving the context manager to a separate file
from [app_name] import pool_manager


class PoolContextManager:
    def __init__(self):
        self.pool = pool_manager.pool

    def __enter__(self):
        self.client = self.pool.pop()
        return self.client

    def __exit__(self, type, value, traceback):
        self.pool.release(self.client)


def get_data(request):
    with PoolContextManager() as pm:
        res = pm.get_data(request.body)
    return HttpResponse(res)


def get_something(request):
    with PoolContextManager() as pm:
        res = pm.get_something(request.body)
    return HttpResponse(res)
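The question also asks about tearing the pool down cleanly when the server stops; one option (a sketch, assuming ClientsPool exposes some close() method, which is not shown in the question) is to register a cleanup hook next to the singleton:

# [app_name]/__init__.py (continued)
import atexit

def _close_pool():
    pool_manager.pool.close()  # hypothetical cleanup method on ClientsPool

atexit.register(_close_pool)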

Pause and resume thread in python

I need to pause and resume a thread which continuously executes some task. Execution begins when start() is called; it should not be interrupted, and after resuming it must continue from the point where pause() was called.
How can I do this?
Please remember that using threads in Python will not give you parallel processing, except in the case of I/O-blocking operations. For more information on this, take a look at this and this.
You cannot pause a Thread arbitrarily in Python (please keep that in mind before reading further). Nor am I sure there is a way to do that at the OS level (e.g. by using pure C). What you can do is allow the thread to be paused at specific points you decide on beforehand. I will give you an example:
import threading


class MyThread(threading.Thread):
    def __init__(self, *args, **kwargs):
        super(MyThread, self).__init__(*args, **kwargs)
        self._event = threading.Event()
        self._event.set()  # start in the "resumed" state so the first wait() does not block

    def run(self):
        while True:
            self.foo()  # please, implement this.
            self._event.wait()
            self.bar()  # please, implement this.
            self._event.wait()
            self.baz()  # please, implement this.
            self._event.wait()

    def pause(self):
        self._event.clear()

    def resume(self):
        self._event.set()
This approach will work, but:
Threading is usually a bad idea, based on the links I gave you.
You have to code the run method by yourself with this approach. This is because you need control over the exact points at which you'd like to check for a pause, and this implies accessing the Thread object (perhaps you'd like to create an additional method instead of calling self._event.wait()).
The previous point makes it clear that you cannot pause arbitrarily, but only where you specified you could pause. Avoid having long operations between pause points. A minimal usage sketch follows.
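For completeness, a minimal usage sketch of the subclass above (foo, bar and baz are yours to implement):

worker = MyThread()
worker.start()    # runs foo/bar/baz in a loop, checking for a pause at each checkpoint
worker.pause()    # the thread stops at the next self._event.wait() checkpoint
# ... some time later ...
worker.resume()   # the thread continues from the checkpoint where it stopped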
Edit: I did not test this one, but perhaps it will work without so much subclassing if you need more than one thread like this:
class MyPausableThread(threading.Thread):
    def __init__(self, group=None, target=None, name=None, args=(), kwargs={}):
        self._event = threading.Event()
        self._event.set()  # start unpaused
        if target:
            args = (self,) + args
        super(MyPausableThread, self).__init__(group, target, name, args, kwargs)

    def pause(self):
        self._event.clear()

    def resume(self):
        self._event.set()

    def _wait_if_paused(self):
        self._event.wait()
This should allow you to create a custom thread without more subclassing, by calling MyPausableThread(target=myfunc).start(). Your callable's first parameter will receive the thread object, on which you can call thread._wait_if_paused() whenever you need a pause check.
Or even better, if you want to isolate the target from accessing the thread object:
class MyPausableThread(threading.Thread):
    def __init__(self, group=None, target=None, name=None, args=(), kwargs={}):
        self._event = threading.Event()
        self._event.set()  # start unpaused
        if target:
            args = ((lambda: self._event.wait()),) + args
        super(MyPausableThread, self).__init__(group, target, name, args, kwargs)

    def pause(self):
        self._event.clear()

    def resume(self):
        self._event.set()
And your target callable will receive in the first parameter a function that can be called like this: pause_checker() (provided the first param in the target callable is named pause_checker).
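A minimal usage sketch for this last variant (the worker function and its chunk-of-work helper are illustrative):

def worker(pause_checker, n_iterations):
    for i in range(n_iterations):
        pause_checker()            # blocks here while the thread is paused
        do_one_chunk_of_work(i)    # hypothetical unit of work

t = MyPausableThread(target=worker, args=(1000,))
t.start()
t.pause()   # the worker stops at its next pause_checker() call
t.resume()  # the worker continues from where it stopped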
You can do this by attaching a trace function that causes all other threads to wait for a signal:
import sys
import threading
import contextlib

# needed to enable tracing
if not sys.gettrace():
    sys.settrace(lambda *args: None)


def _thread_frames(thread):
    for thread_id, frame in sys._current_frames().items():
        if thread_id == thread.ident:
            break
    else:
        raise ValueError("No thread found")
    # walk up to the root
    while frame:
        yield frame
        frame = frame.f_back


@contextlib.contextmanager
def thread_paused(thread):
    """ Context manager that pauses a thread for its duration """
    # signal for the thread to wait on
    e = threading.Event()

    for frame in _thread_frames(thread):
        # attach a new temporary trace handler that pauses the thread
        def new(frame, event, arg, old=frame.f_trace):
            e.wait()
            # call the old one, to keep debuggers working
            if old is not None:
                return old(frame, event, arg)
        frame.f_trace = new

    try:
        yield
    finally:
        # wake the other thread
        e.set()
Which you can use as:
import time


def run_after_delay(func, delay):
    """ Simple helper spawning a thread that runs a function in the future """
    def wrapped():
        time.sleep(delay)
        func()
    threading.Thread(target=wrapped).start()


main_thread = threading.current_thread()

def interrupt():
    with thread_paused(main_thread):
        print("interrupting")
        time.sleep(2)
        print("done")

run_after_delay(interrupt, 1)
start = time.time()
def actual_time(): return time.time() - start

print("{:.1f} == {:.1f}".format(0.0, actual_time()))
time.sleep(0.5)
print("{:.1f} == {:.1f}".format(0.5, actual_time()))
time.sleep(2)
print("{:.1f} != {:.1f}".format(2.5, actual_time()))
Giving
0.0 == 0.0
0.5 == 0.5
interrupting
done
2.5 != 3.0
Note how the interrupt causes the sleep on the main thread to wait longer
You can do this using the Process class from the psutil library.
Example:
>>> import psutil
>>> pid = 7012
>>> p = psutil.Process(pid)
>>> p.suspend()
>>> p.resume()
See this answer: https://stackoverflow.com/a/14053933
Edit: This method will suspend the whole process, not just one thread. (I am not deleting this answer, so others can know this method won't work.)
while int(any) < 2000:
    sleep(20)
    print("waiting any...")

How to best perform Multiprocessing within requests with the python Tornado server?

I am using the I/O non-blocking python server Tornado. I have a class of GET requests which may take a significant amount of time to complete (think in the range of 5-10 seconds). The problem is that Tornado blocks on these requests so that subsequent fast requests are held up until the slow request completes.
I looked at: https://github.com/facebook/tornado/wiki/Threading-and-concurrency and came to the conclusion that I wanted some combination of #3 (other processes) and #4 (other threads). #4 on its own had issues and I was unable to get reliable control back to the ioloop when there was another thread doing the "heavy_lifting". (I assume that this was due to the GIL and the fact that the heavy_lifting task has high CPU load and keeps pulling control away from the main ioloop, but that's a guess).
So I have been prototyping how to solve this by doing "heavy lifting" tasks within these slow GET requests in a separate process and then place a callback back into the Tornado ioloop when the process is done to finish the request. This frees up the ioloop to handle other requests.
I have created a simple example demonstrating a possible solution, but am curious to get feedback from the community on it.
My question is two-fold: How can this current approach be simplified? What pitfalls potentially exist with it?
The Approach
Utilize Tornado's builtin asynchronous decorator which allows a request to stay open and for the ioloop to continue.
Spawn a separate process for "heavy lifting" tasks using Python's multiprocessing module. I first attempted to use the threading module but was unable to get any reliable relinquishing of control back to the ioloop. It also appears that multiprocessing would take advantage of multiple cores.
Start a 'watcher' thread in the main ioloop process using the threading module whose job it is to watch a multiprocessing.Queue for the results of the "heavy lifting" task when it completes. This was needed because I needed a way to know that the heavy_lifting task had completed while still being able to notify the ioloop that this request was now finished.
Be sure that the 'watcher' thread relinquishes control to the main ioloop often with time.sleep(0) calls so that other requests continue to get readily processed.
When there is a result in the queue then add a callback from the "watcher" thread using tornado.ioloop.IOLoop.instance().add_callback() which is documented to be the only safe way to call ioloop instances from other threads.
Be sure to then call finish() in the callback to complete the request and hand over a reply.
Below is some sample code showing this approach. multi_tornado.py is the server implementing the above outline and call_multi.py is a sample script that calls the server in two different ways to test the server. Both tests call the server with 3 slow GET requests followed by 20 fast GET requests. The results are shown for both running with and without the threading turned on.
In the case of running it with "no threading" the 3 slow requests block (each taking a little over a second to complete). A few of the 20 fast requests squeeze through in between some of the slow requests within the ioloop (not totally sure how that occurs - but could be an artifact that I am running both the server and client test script on the same machine). The point here being that all of the fast requests are held up to varying degrees.
In the case of running it with threading enabled, the 20 fast requests all complete immediately, and the three slow requests complete at about the same time afterwards, as they have each been running in parallel. This is the desired behavior. The three slow requests take 2.5 seconds to complete in parallel, whereas in the non-threaded case the three slow requests take about 3.5 seconds in total. So there is about a 35% speed-up overall (I assume due to multicore sharing). But more importantly, the fast requests were immediately handled in lieu of the slow ones.
I do not have a lot of experience with multithreaded programming, so while this seemingly works here, I am curious to learn:
Is there a simpler way to accomplish this? What monsters may lurk within this approach?
(Note: A future tradeoff may be to just run more instances of Tornado with a reverse proxy like nginx doing load balancing. No matter what I will be running multiple instances with a load balancer - but I am concerned about just throwing hardware at this problem since it seems that the hardware is so directly coupled to the problem in terms of the blocking.)
Sample Code
multi_tornado.py (sample server):
import time
import threading
import multiprocessing
import math

from tornado.web import RequestHandler, Application, asynchronous
from tornado.ioloop import IOLoop


# run in some other process - put result in q
def heavy_lifting(q):
    t0 = time.time()
    for k in range(2000):
        math.factorial(k)
    t = time.time()
    q.put(t - t0)  # report time to compute in queue


class FastHandler(RequestHandler):
    def get(self):
        res = 'fast result ' + self.get_argument('id')
        print res
        self.write(res)
        self.flush()


class MultiThreadedHandler(RequestHandler):
    # Note: This handler can be called with threaded = True or False
    def initialize(self, threaded=True):
        self._threaded = threaded
        self._q = multiprocessing.Queue()

    def start_process(self, worker, callback):
        # method to start process and watcher thread
        self._callback = callback

        if self._threaded:
            # launch process
            multiprocessing.Process(target=worker, args=(self._q,)).start()
            # start watching for process to finish
            threading.Thread(target=self._watcher).start()
        else:
            # threaded = False just call directly and block
            worker(self._q)
            self._watcher()

    def _watcher(self):
        # watches the queue for process result
        while self._q.empty():
            time.sleep(0)  # relinquish control if not ready

        # put callback back into the ioloop so we can finish request
        response = self._q.get(False)
        IOLoop.instance().add_callback(lambda: self._callback(response))


class SlowHandler(MultiThreadedHandler):
    @asynchronous
    def get(self):
        # start a thread to watch for
        self.start_process(heavy_lifting, self._on_response)

    def _on_response(self, delta):
        _id = self.get_argument('id')
        res = 'slow result {} <--- {:0.3f} s'.format(_id, delta)
        print res
        self.write(res)
        self.flush()
        self.finish()  # be sure to finish request


application = Application([
    (r"/fast", FastHandler),
    (r"/slow", SlowHandler, dict(threaded=False)),
    (r"/slow_threaded", SlowHandler, dict(threaded=True)),
])


if __name__ == "__main__":
    application.listen(8888)
    IOLoop.instance().start()
call_multi.py (client tester):
import sys

from tornado.ioloop import IOLoop
from tornado import httpclient


def run(slow):
    def show_response(res):
        print res.body

    # make 3 "slow" requests on server
    requests = []
    for k in xrange(3):
        uri = 'http://localhost:8888/{}?id={}'
        requests.append(uri.format(slow, str(k + 1)))

    # followed by 20 "fast" requests
    for k in xrange(20):
        uri = 'http://localhost:8888/fast?id={}'
        requests.append(uri.format(k + 1))

    # show results as they return
    http_client = httpclient.AsyncHTTPClient()

    print 'Scheduling Get Requests:'
    print '------------------------'
    for req in requests:
        print req
        http_client.fetch(req, show_response)

    # execute requests on server
    print '\nStart sending requests....'
    IOLoop.instance().start()


if __name__ == '__main__':
    scenario = sys.argv[1]

    if scenario == 'slow' or scenario == 'slow_threaded':
        run(scenario)
Test Results
By running python call_multi.py slow (the blocking behavior):
Scheduling Get Requests:
------------------------
http://localhost:8888/slow?id=1
http://localhost:8888/slow?id=2
http://localhost:8888/slow?id=3
http://localhost:8888/fast?id=1
http://localhost:8888/fast?id=2
http://localhost:8888/fast?id=3
http://localhost:8888/fast?id=4
http://localhost:8888/fast?id=5
http://localhost:8888/fast?id=6
http://localhost:8888/fast?id=7
http://localhost:8888/fast?id=8
http://localhost:8888/fast?id=9
http://localhost:8888/fast?id=10
http://localhost:8888/fast?id=11
http://localhost:8888/fast?id=12
http://localhost:8888/fast?id=13
http://localhost:8888/fast?id=14
http://localhost:8888/fast?id=15
http://localhost:8888/fast?id=16
http://localhost:8888/fast?id=17
http://localhost:8888/fast?id=18
http://localhost:8888/fast?id=19
http://localhost:8888/fast?id=20
Start sending requests....
slow result 1 <--- 1.338 s
fast result 1
fast result 2
fast result 3
fast result 4
fast result 5
fast result 6
fast result 7
slow result 2 <--- 1.169 s
slow result 3 <--- 1.130 s
fast result 8
fast result 9
fast result 10
fast result 11
fast result 13
fast result 12
fast result 14
fast result 15
fast result 16
fast result 18
fast result 17
fast result 19
fast result 20
By running python call_multi.py slow_threaded (the desired behavior):
Scheduling Get Requests:
------------------------
http://localhost:8888/slow_threaded?id=1
http://localhost:8888/slow_threaded?id=2
http://localhost:8888/slow_threaded?id=3
http://localhost:8888/fast?id=1
http://localhost:8888/fast?id=2
http://localhost:8888/fast?id=3
http://localhost:8888/fast?id=4
http://localhost:8888/fast?id=5
http://localhost:8888/fast?id=6
http://localhost:8888/fast?id=7
http://localhost:8888/fast?id=8
http://localhost:8888/fast?id=9
http://localhost:8888/fast?id=10
http://localhost:8888/fast?id=11
http://localhost:8888/fast?id=12
http://localhost:8888/fast?id=13
http://localhost:8888/fast?id=14
http://localhost:8888/fast?id=15
http://localhost:8888/fast?id=16
http://localhost:8888/fast?id=17
http://localhost:8888/fast?id=18
http://localhost:8888/fast?id=19
http://localhost:8888/fast?id=20
Start sending requests....
fast result 1
fast result 2
fast result 3
fast result 4
fast result 5
fast result 6
fast result 7
fast result 8
fast result 9
fast result 10
fast result 11
fast result 12
fast result 13
fast result 14
fast result 15
fast result 19
fast result 20
fast result 17
fast result 16
fast result 18
slow result 2 <--- 2.485 s
slow result 3 <--- 2.491 s
slow result 1 <--- 2.517 s
If you're willing to use concurrent.futures.ProcessPoolExecutor instead of multiprocessing, this is actually very simple. Tornado's ioloop already supports concurrent.futures.Future, so they'll play nicely together out of the box. concurrent.futures is included in Python 3.2+, and has been backported to Python 2.x.
Here's an example:
import time
from concurrent.futures import ProcessPoolExecutor
from tornado.ioloop import IOLoop
from tornado import gen


def f(a, b, c, blah=None):
    print "got %s %s %s and %s" % (a, b, c, blah)
    time.sleep(5)
    return "hey there"


@gen.coroutine
def test_it():
    pool = ProcessPoolExecutor(max_workers=1)
    fut = pool.submit(f, 1, 2, 3, blah="ok")  # This returns a concurrent.futures.Future
    print("running it asynchronously")
    ret = yield fut
    print("it returned %s" % ret)
    pool.shutdown()


IOLoop.instance().run_sync(test_it)
Output:
running it asynchronously
got 1 2 3 and ok
it returned hey there
ProcessPoolExecutor has a more limited API than multiprocessing.Pool, but if you don't need the more advanced features of multiprocessing.Pool, it's worth using because the integration is so much simpler.
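Applied to the original handlers, the same idea could look roughly like this; a sketch under the assumption of a module-level pool and a picklable heavy_lifting_value() worker that returns its result directly instead of using a queue (both names are illustrative):

from concurrent.futures import ProcessPoolExecutor
from tornado import gen
from tornado.web import RequestHandler

pool = ProcessPoolExecutor(max_workers=4)

class SlowFutureHandler(RequestHandler):
    @gen.coroutine
    def get(self):
        # yielding the concurrent.futures.Future suspends this handler without
        # blocking the ioloop; other requests keep being served meanwhile
        delta = yield pool.submit(heavy_lifting_value)
        self.write('slow result <--- {:0.3f} s'.format(delta))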
multiprocessing.Pool can be integrated into the tornado I/O loop, but it's a bit messy. A much cleaner integration can be done using concurrent.futures (see my other answer for details), but if you're stuck on Python 2.x and can't install the concurrent.futures backport, here is how you can do it strictly using multiprocessing:
The multiprocessing.Pool.apply_async and multiprocessing.Pool.map_async methods both have an optional callback parameter, which means that both can potentially be plugged into a tornado.gen.Task. So in most cases, running code asynchronously in a sub-process is as simple as this:
import multiprocessing
import contextlib

from tornado import gen
from tornado.gen import Return
from tornado.ioloop import IOLoop
from functools import partial


def worker():
    print "async work here"


@gen.coroutine
def async_run(func, *args, **kwargs):
    result = yield gen.Task(pool.apply_async, func, args, kwargs)
    raise Return(result)


if __name__ == "__main__":
    pool = multiprocessing.Pool(multiprocessing.cpu_count())
    func = partial(async_run, worker)
    IOLoop().run_sync(func)
As I mentioned, this works well in most cases. But if worker() throws an exception, callback is never called, which means the gen.Task never finishes, and you hang forever. Now, if you know that your work will never throw an exception (because you wrapped the whole thing in a try/except, for example), you can happily use this approach. However, if you want to let exceptions escape from your worker, the only solution I found was to subclass some multiprocessing components, and make them call callback even if the worker sub-process raised an exception:
from multiprocessing.pool import ApplyResult, Pool, RUN
import multiprocessing


class TornadoApplyResult(ApplyResult):
    def _set(self, i, obj):
        self._success, self._value = obj
        if self._callback:
            self._callback(self._value)
        self._cond.acquire()
        try:
            self._ready = True
            self._cond.notify()
        finally:
            self._cond.release()
        del self._cache[self._job]


class TornadoPool(Pool):
    def apply_async(self, func, args=(), kwds={}, callback=None):
        ''' Asynchronous equivalent of `apply()` builtin

        This version will call `callback` even if an exception is
        raised by `func`.
        '''
        assert self._state == RUN
        result = TornadoApplyResult(self._cache, callback)
        self._taskqueue.put(([(result._job, None, func, args, kwds)], None))
        return result
    ...

if __name__ == "__main__":
    pool = TornadoPool(multiprocessing.cpu_count())
    ...
With these changes, the exception object will be returned by the gen.Task, rather than the gen.Task hanging indefinitely. I also updated my async_run method to re-raise the exception when it is returned, and made some other changes to provide better tracebacks for exceptions thrown in the worker sub-processes. Here's the full code:
import sys
import time
import traceback
import multiprocessing
from multiprocessing.pool import Pool, ApplyResult, RUN
from functools import wraps

import tornado.web
from tornado.ioloop import IOLoop
from tornado.gen import Return
from tornado import gen


class WrapException(Exception):
    def __init__(self):
        exc_type, exc_value, exc_tb = sys.exc_info()
        self.exception = exc_value
        self.formatted = ''.join(traceback.format_exception(exc_type, exc_value, exc_tb))

    def __str__(self):
        return '\n%s\nOriginal traceback:\n%s' % (Exception.__str__(self), self.formatted)


class TornadoApplyResult(ApplyResult):
    def _set(self, i, obj):
        self._success, self._value = obj
        if self._callback:
            self._callback(self._value)
        self._cond.acquire()
        try:
            self._ready = True
            self._cond.notify()
        finally:
            self._cond.release()
        del self._cache[self._job]


class TornadoPool(Pool):
    def apply_async(self, func, args=(), kwds={}, callback=None):
        ''' Asynchronous equivalent of `apply()` builtin

        This version will call `callback` even if an exception is
        raised by `func`.
        '''
        assert self._state == RUN
        result = TornadoApplyResult(self._cache, callback)
        self._taskqueue.put(([(result._job, None, func, args, kwds)], None))
        return result


@gen.coroutine
def async_run(func, *args, **kwargs):
    """ Runs the given function in a subprocess.

    This wraps the given function in a gen.Task and runs it
    in a multiprocessing.Pool. It is meant to be used as a
    Tornado co-routine. Note that if func returns an Exception
    (or an Exception sub-class), this function will raise the
    Exception, rather than return it.
    """
    result = yield gen.Task(pool.apply_async, func, args, kwargs)
    if isinstance(result, Exception):
        raise result
    raise Return(result)


def handle_exceptions(func):
    """ Raise a WrapException so we get a more meaningful traceback"""
    @wraps(func)
    def inner(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except Exception:
            raise WrapException()
    return inner


# Test worker functions
@handle_exceptions
def test2(x):
    raise Exception("eeee")


@handle_exceptions
def test(x):
    print x
    time.sleep(2)
    return "done"


class TestHandler(tornado.web.RequestHandler):
    @gen.coroutine
    def get(self):
        try:
            result = yield async_run(test, "inside get")
            self.write("%s\n" % result)
            result = yield async_run(test2, "hi2")
        except Exception as e:
            print("caught exception in get")
            self.write("Caught an exception: %s" % e)
        finally:
            self.finish()


app = tornado.web.Application([
    (r"/test", TestHandler),
])

if __name__ == "__main__":
    pool = TornadoPool(4)
    app.listen(8888)
    IOLoop.instance().start()
Here's how it behaves for the client:
dan@dan:~$ curl localhost:8888/test
done
Caught an exception:
Original traceback:
Traceback (most recent call last):
  File "./mutli.py", line 123, in inner
    return func(*args, **kwargs)
  File "./mutli.py", line 131, in test2
    raise Exception("eeee")
Exception: eeee
And if I send two simultaneous curl requests, we can see they're handled asynchronously on the server-side:
dan@dan:~$ ./mutli.py
inside get
inside get
caught exception inside get
caught exception inside get
Edit:
Note that this code becomes simpler with Python 3, because it introduces an error_callback keyword argument to all asynchronous multiprocessing.Pool methods. This makes it much easier to integrate with Tornado:
class TornadoPool(Pool):
    def apply_async(self, func, args=(), kwds={}, callback=None):
        ''' Asynchronous equivalent of `apply()` builtin

        This version will call `callback` even if an exception is
        raised by `func`.
        '''
        super().apply_async(func, args, kwds, callback=callback,
                            error_callback=callback)


@gen.coroutine
def async_run(func, *args, **kwargs):
    """ Runs the given function in a subprocess.

    This wraps the given function in a gen.Task and runs it
    in a multiprocessing.Pool. It is meant to be used as a
    Tornado co-routine. Note that if func returns an Exception
    (or an Exception sub-class), this function will raise the
    Exception, rather than return it.
    """
    result = yield gen.Task(pool.apply_async, func, args, kwargs)
    raise Return(result)
All we need to do in our overridden apply_async is call the parent with the error_callback keyword argument, in addition to the callback kwarg. No need to override ApplyResult.
We can get even fancier by using a MetaClass in our TornadoPool, to allow its *_async methods to be called directly as if they were coroutines:
import time
from functools import wraps
from multiprocessing.pool import Pool

import tornado.web
from tornado import gen
from tornado.gen import Return, Arguments
from tornado import stack_context
from tornado.ioloop import IOLoop
from tornado.concurrent import Future


def _argument_adapter(callback):
    def wrapper(*args, **kwargs):
        if kwargs or len(args) > 1:
            callback(Arguments(args, kwargs))
        elif args:
            callback(args[0])
        else:
            callback(None)
    return wrapper


def PoolTask(func, *args, **kwargs):
    """ Task function for use with multiprocessing.Pool methods.

    This is very similar to tornado.gen.Task, except it sets the
    error_callback kwarg in addition to the callback kwarg. This
    way exceptions raised in pool worker methods get raised in the
    parent when the Task is yielded from.
    """
    future = Future()

    def handle_exception(typ, value, tb):
        if future.done():
            return False
        future.set_exc_info((typ, value, tb))
        return True

    def set_result(result):
        if future.done():
            return
        if isinstance(result, Exception):
            future.set_exception(result)
        else:
            future.set_result(result)

    with stack_context.ExceptionStackContext(handle_exception):
        cb = _argument_adapter(set_result)
        func(*args, callback=cb, error_callback=cb)
    return future


def coro_runner(func):
    """ Wraps the given func in a PoolTask and returns it. """
    @wraps(func)
    def wrapper(*args, **kwargs):
        return PoolTask(func, *args, **kwargs)
    return wrapper


class MetaPool(type):
    """ Wrap all *_async methods in Pool with coro_runner. """
    def __new__(cls, clsname, bases, dct):
        pdct = bases[0].__dict__
        for attr in pdct:
            if attr.endswith("async") and not attr.startswith('_'):
                setattr(bases[0], attr, coro_runner(pdct[attr]))
        return super().__new__(cls, clsname, bases, dct)


class TornadoPool(Pool, metaclass=MetaPool):
    pass


# Test worker functions
def test2(x):
    print("hi2")
    raise Exception("eeee")


def test(x):
    print(x)
    time.sleep(2)
    return "done"


class TestHandler(tornado.web.RequestHandler):
    @gen.coroutine
    def get(self):
        try:
            result = yield pool.apply_async(test, ("inside get",))
            self.write("%s\n" % result)
            result = yield pool.apply_async(test2, ("hi2",))
            self.write("%s\n" % result)
        except Exception as e:
            print("caught exception in get")
            self.write("Caught an exception: %s" % e)
            raise
        finally:
            self.finish()


app = tornado.web.Application([
    (r"/test", TestHandler),
])

if __name__ == "__main__":
    pool = TornadoPool()
    app.listen(8888)
    IOLoop.instance().start()
If your GET requests are taking that long, then Tornado is the wrong framework.
I suggest you use nginx to route the fast GETs to Tornado and the slower ones to a different server.
PeterBe has an interesting article where he runs multiple Tornado servers and sets one of them to be 'the slow one' for handling the long-running requests; see: worrying-about-io-blocking. I would try this method.
