How to properly use asyncio run_coroutine_threadsafe function? - python

I am trying to understand the asyncio module and have spent about an hour with the run_coroutine_threadsafe function. I even came up with a working example, and it works as expected, but with several limitations.
First of all, I do not understand how I should properly run the asyncio loop in the main (or any other) thread. In the example I run it with run_until_complete and give it a coroutine to keep it busy until another thread hands it a coroutine. What other options do I have?
In what real-life situations do I have to mix asyncio and threading (in Python)? As far as I understand, asyncio is supposed to take the place of threading in Python (due to the GIL, for non-IO operations). If I am wrong, do not be angry and share your suggestions.
Python version is 3.7/3.8
import asyncio
import threading
import time


async def coro_func():
    return await asyncio.sleep(3, 42)


def another_thread(_loop):
    # coroutine created in this thread, which we would like to run on the other thread's loop
    coro = coro_func()
    # _loop is the loop that was created in the main thread
    future = asyncio.run_coroutine_threadsafe(coro, _loop)
    print(f"{threading.current_thread().name}: {future.result()}")
    time.sleep(15)
    print(f"{threading.current_thread().name} is Finished")


if __name__ == '__main__':
    loop = asyncio.get_event_loop()
    main_th_cor = asyncio.sleep(10)
    # main_th_cor is used to keep the loop busy until another_thread sends a coroutine to it
    print("START MAIN")
    x = threading.Thread(target=another_thread, args=(loop, ), name="Some_Thread")
    x.start()
    time.sleep(1)
    loop.run_until_complete(main_th_cor)
    print("FINISH MAIN")

First of all, I do not understand how I should properly run the asyncio loop in the main (or any other) thread. In the example I run it with run_until_complete and give it a coroutine to keep it busy until another thread hands it a coroutine. What other options do I have?
This is a good use case for loop.run_forever(). The loop will run and serve the coroutines you submit using run_coroutine_threadsafe. (You can even submit such coroutines from multiple threads in parallel; you never need to instantiate more than one event loop.)
You can stop the loop from a different thread by calling loop.call_soon_threadsafe(loop.stop).
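As a rough illustration of that pattern (the worker function and thread name below are just for this sketch), the main thread can serve a run_forever() loop while another thread submits a coroutine and finally stops the loop:
import asyncio
import threading

async def coro_func():
    return await asyncio.sleep(1, 42)

def worker(loop):
    # Submit a coroutine to the loop owned by the main thread and wait for its result.
    future = asyncio.run_coroutine_threadsafe(coro_func(), loop)
    print("result:", future.result())
    # Ask the loop (thread-safely) to stop itself once we are done.
    loop.call_soon_threadsafe(loop.stop)

if __name__ == "__main__":
    loop = asyncio.new_event_loop()
    threading.Thread(target=worker, args=(loop,), name="Some_Thread").start()
    loop.run_forever()   # serves submitted coroutines until loop.stop() runs
    loop.close()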
In what real-life situations do I have to mix asyncio and threading (in Python)?
Ideally there should be none. But in the real world, they do crop up; for example:
When you are introducing asyncio into an existing large program that uses threads and blocking calls and cannot be converted to asyncio all at once. run_coroutine_threadsafe allows regular blocking code to make use of asyncio.
When you are dealing with older "async" APIs which use threads under the hood and call the user-supplied APIs from other threads. There are many examples, such as Python's own multiprocessing.
When you need to call blocking functions that have no async equivalent from asyncio - e.g. CPU-bound functions, legacy database drivers, things like that. This is not a use case for run_coroutine_threadsafe; here you'd use run_in_executor, but it is another example of mixing threads and asyncio (a minimal sketch follows below).
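For that last case, a minimal sketch might look like the following (legacy_query is a made-up stand-in for a blocking call):
import asyncio
import time

def legacy_query(arg):
    # Stand-in for a blocking call: CPU-bound work, an old DB driver, etc.
    time.sleep(2)
    return f"rows for {arg}"

async def main():
    loop = asyncio.get_running_loop()
    # Run the blocking function on the default thread pool so the
    # event loop stays responsive while it executes.
    result = await loop.run_in_executor(None, legacy_query, "users")
    print(result)

asyncio.run(main())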

Related

AsyncIO and concurrent.futures.ThreadPoolExecutor

I'm building a web scraping API, and most of my scraping is done with AsyncIO coroutines, like this:
async def parse_event(self):
    ...  # do scraping

# call the func
asyncio.run(b.parse_event())
This works perfectly fine, but since I'm scraping multiple websites at the same time, I was initially using concurrent.futures.ThreadPoolExecutor to scrape with multiple threads.
But now that I've implemented the coroutine logic, I can no longer use the asyncio.run method in my thread directly.
Before (without coroutine):
with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor:
    w1_future = executor.submit(self.w1.parse_event)
    w2_future = executor.submit(self.w2.parse_event)
    w3_future = executor.submit(self.w3.parse_event)
After, I would have expected something like below
with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor:
    w1_future = executor.submit(asyncio.run(self.w1.parse_event()))
    w2_future = executor.submit(asyncio.run(self.w2.parse_event()))
    w3_future = executor.submit(asyncio.run(self.w3.parse_event()))
Unfortunately it is not working.
Both asyncio and threading are means to use a single core for concurrent operations. However, they work via different mechanisms: asyncio uses the cooperative concurrency of async/await, whereas threading uses the preemptive concurrency of the GIL.
Mixing the two does not speed up execution since both still use the same single core; instead, overhead of both mechanisms will slow down the program and the interaction of the mechanisms complicates writing correct code.
To achieve concurrency between multiple tasks, submit them all to a single asyncio event loop. The equivalent to executor.submit is asyncio.create_task; multiple tasks can be submitted at once using asyncio.gather. Note that both are called inside the loop as opposed to outside the executor.
async def parse_all():
    return await asyncio.gather(
        # all the parsing tasks that should run concurrently
        self.w1.parse_event(),
        self.w2.parse_event(),
        self.w3.parse_event(),
    )

asyncio.run(parse_all())
If you absolutely do want to use separate, threaded event loops for each parse, you must use executor.submit(func, *args) instead of executor.submit(func(args)).
with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor:
    w1_future = executor.submit(asyncio.run, self.w1.parse_event())
    w2_future = executor.submit(asyncio.run, self.w2.parse_event())
    w3_future = executor.submit(asyncio.run, self.w3.parse_event())
Note that mixing asyncio and threading adds complexity and constraints. You might want to use debug mode to detect some thread and context safety issues. However, thread and context safety constraints/guarantees are often not documented; manually test or inspect the operations for safety if needed.
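For example, asyncio's debug mode can be enabled as in the generic sketch below (not specific to the code above); among other things it flags non-thread-safe loop methods called from the wrong thread and logs callbacks that block the loop for too long:
import asyncio

async def main():
    await asyncio.sleep(0.1)

# Enable asyncio debug mode; it raises when non-thread-safe loop methods are
# called from a thread other than the loop's, and logs slow callbacks.
asyncio.run(main(), debug=True)
# Alternatives: set the PYTHONASYNCIODEBUG=1 environment variable,
# or call loop.set_debug(True) on an explicitly managed loop.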

Will run_in_executor ever block?

Suppose I have a web server like this:
from fastapi import FastAPI
import uvicorn
import asyncio

app = FastAPI()

def blocking_function():
    import time
    time.sleep(5)
    return 42

@app.get("/")
async def root():
    loop = asyncio.get_running_loop()
    result = await loop.run_in_executor(None, blocking_function)
    return result

@app.get("/ok")
async def ok():
    return {"ok": 1}

if __name__ == "__main__":
    uvicorn.run(app, host="0.0.0.0", workers=1)
As I understand it, the code will spawn another thread in the default ThreadPoolExecutor and then execute the blocking function in that thread pool. On the other side, thinking about how the GIL works, the CPython interpreter will only execute a thread for 100 ticks and then switch to another thread to be fair and give other threads a chance to progress. In this case, what if the Python interpreter decides to switch to the thread where blocking_function is executing? Will it block the whole interpreter to wait for whatever remains of the time.sleep(5)?
The reason I am asking is that I have observed that sometimes my application blocks on blocking_function; however, I am not entirely sure what is in play here, as my blocking_function is quite special -- it talks to a COM API object through the win32com library. I am trying to rule out that this is some GIL pitfall I am falling into.
Both time.sleep (as explained in this question) and the win32com library (according to this mailing list post) release the GIL when they are called, so they will not prevent other threads from making progress while they are blocking.
To answer the "high-level" question - "can run_in_executor ever (directly or indirectly) block the event-loop?" - the answer would only be "yes" if you used a ThreadPoolExecutor, and the code you executed in run_in_executor did blocking work that didn't release the GIL. While that wouldn't completely block the event loop, it would mean both your event loop thread and the executor thread could not run in parallel, since both would need to acquire the GIL to make progress.
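The difference is easy to observe with a small experiment (a sketch with made-up helper names; the busy loop stands in for blocking work that never releases the GIL):
import asyncio
import time

def releases_gil():
    # time.sleep releases the GIL, so the event loop thread keeps running.
    time.sleep(2)
    return "slept"

def holds_gil():
    # A pure-Python busy loop holds the GIL between interpreter switches,
    # so the event loop thread has to compete with it for the interpreter.
    total = 0
    for i in range(20_000_000):
        total += i
    return total

async def heartbeat():
    # Prints roughly every 0.2 s; noticeable stalls while holds_gil runs
    # show the executor thread contending for the GIL.
    for _ in range(20):
        print("tick", time.monotonic())
        await asyncio.sleep(0.2)

async def main():
    loop = asyncio.get_running_loop()
    hb = asyncio.create_task(heartbeat())
    await loop.run_in_executor(None, releases_gil)  # heartbeat stays smooth
    await loop.run_in_executor(None, holds_gil)     # heartbeat may stutter
    await hb

asyncio.run(main())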

Python asyncio: Enter into a temporary async context?

I want to write a library that mixes synchronous and asynchronous work, like:
def do_things():
    # 1) do sync things
    # 2) launch a bunch of slow async tasks and block until they are all
    #    complete or an exception is thrown
    # 3) more sync work
    # ...
I started implementing this using asyncio as an excuse to learn the library, but as I learn more it seems like this may be the wrong approach. My problem is that there doesn't seem to be a clean way to do step 2, because it depends on the context of the caller. For example:
I can't use asyncio.run(), because the caller could already have a running event loop and you can only have one loop per thread.
Marking do_things as async is too heavy because it shouldn't require the caller to be async. Plus, if do_things was async, calling synchronous code (1 & 3) from an async function seems to be bad practice.
asyncio.get_event_loop() also seems wrong, because it may create a new loop, which if left running would prevent the caller from creating their own loop after calling do_things (though arguably they shouldn't do that). And based on the documentation of loop.close, it looks like starting/stopping multiple loops in a single thread won't work.
Basically it seems like if I want to use asyncio at all, I am forced to use it for the entire lifetime of the program, and therefore all libraries like this one have to be written as either 100% synchronous or 100% asynchronous. But the behavior I want is: Use the current event loop if one is running, otherwise create a temporary one just for the scope of 2, and don't break client code in doing so. Does something like this exist, or is asyncio the wrong choice?
I can't use asyncio.run(), because the caller could already have a running event loop and you can only have one loop per thread.
If the caller has a running event loop, you shouldn't run blocking code in the first place because it will block the caller's loop!
With that in mind, your best option is to indeed make do_things async and call sync code using run_in_executor which is designed precisely for that use case:
async def do_things():
    loop = asyncio.get_event_loop()
    await loop.run_in_executor(None, sync_stuff)
    await async_func()
    await loop.run_in_executor(None, more_sync_stuff)
This version of do_things is usable from async code as await do_things() and from sync code as asyncio.run(do_things()).
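For example, using the do_things defined above (the caller names here are hypothetical):
import asyncio

async def async_caller():
    await do_things()           # caller already running inside an event loop

def sync_caller():
    asyncio.run(do_things())    # caller with no event loop of its own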
Having said that... if you know that the sync code will run very briefly, or you are for some reason willing to block the caller's event loop, you can work around the limitation by starting an event loop in a separate thread:
import asyncio
import threading

def run_async(aw):
    result = None
    async def run_and_store_result():
        nonlocal result
        result = await aw
    t = threading.Thread(target=asyncio.run, args=(run_and_store_result(),))
    t.start()
    t.join()
    return result
do_things can then look like this:
def do_things():
    sync_stuff()
    run_async(async_func())
    more_sync_stuff()
It will be callable from both sync and async code, but the cost will be that:
it will create a brand new event loop each and every time. (Though you can cache the event loop and never exit it; a sketch of that approach follows after this list.)
when called from async code, it will block the caller's event loop, thus effectively breaking its asyncio usage, even if most time is actually spent inside its own async code.
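A minimal sketch of that caching idea (the names are made up here): keep one background thread running a single event loop forever, and have run_async submit work to it with run_coroutine_threadsafe instead of spinning up a new loop per call:
import asyncio
import threading

_loop = asyncio.new_event_loop()
_thread = threading.Thread(target=_loop.run_forever, daemon=True)
_thread.start()

def run_async(aw):
    # Blocks the calling thread until the awaitable finishes on the cached
    # background loop, then returns its result (or raises its exception).
    return asyncio.run_coroutine_threadsafe(aw, _loop).result()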

When you need to use AsyncIO and ThreadPoolExecutor, do you need to call loop.close() manually?

In the Python docs, it states:
Application developers should typically use the high-level asyncio
functions, such as asyncio.run(), and should rarely need to reference
the loop object or call its methods.
This section is intended mostly for authors of lower-level code, libraries, and frameworks, who need finer control over the event loop behavior.
When using both async and a ThreadPoolExecutor, as shown in the example code (from the docs):
import asyncio
import concurrent.futures

def blocking_io():
    # File operations (such as logging) can block the
    # event loop: run them in a thread pool.
    with open('/dev/urandom', 'rb') as f:
        return f.read(100)

async def main():
    loop = asyncio.get_running_loop()
    # 2. Run in a custom thread pool:
    with concurrent.futures.ThreadPoolExecutor() as pool:
        result = await loop.run_in_executor(
            pool, blocking_io)
        print('custom thread pool', result)

asyncio.run(main())
Do I need to call loop.close(), or would the asyncio.run() close the loop for me?
Is using asyncio and a ThreadPoolExecutor together one of those situations where you need finer control over the event loop? Can asyncio and a ThreadPoolExecutor be used together without referencing the loop?
For question #1, the Coroutines and Tasks documentation linked from the Event Loop documentation you reference indicates that asyncio.run closes the loop:
asyncio.run(coro, *, debug=False)
Execute the coroutine coro and return the result.
This function runs the passed coroutine, taking care of managing the
asyncio event loop and finalizing asynchronous generators.
This function cannot be called when another asyncio event loop is
running in the same thread.
If debug is True, the event loop will be run in debug mode.
This function always creates a new event loop and closes it at the
end. It should be used as a main entry point for asyncio programs, and
should ideally only be called once.
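You can confirm that behavior yourself with a tiny sketch (not from the docs):
import asyncio

async def main():
    # Grab the loop that asyncio.run created for us.
    return asyncio.get_running_loop()

loop = asyncio.run(main())
print(loop.is_closed())  # True: asyncio.run closed its loop on exit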
For #2, that use of get_running_loop with a ThreadPoolExecutor is a way to run blocking code without blocking the OS thread. In Developing with asyncio, they indicate:
The loop.run_in_executor() method can be used with a concurrent.futures.ThreadPoolExecutor to execute blocking code in a different OS thread without blocking the OS thread that the event loop runs in.
It is an exception to the general warning they give in the Event Loop documentation about calling the lower-level methods. So for question #2, the answer is yes: it is one of those situations where you need a small amount of finer control in order to execute this recipe for handling blocking code in async scenarios.

asyncio with multiple processors [duplicate]

As almost everyone is aware when they first look at threading in Python, there is the GIL that makes life miserable for people who actually want to do processing in parallel - or at least give it a chance.
I am currently looking at implementing something like the Reactor pattern. Effectively I want to listen for incoming socket connections on one thread-like, and when someone tries to connect, accept that connection and pass it along to another thread-like for processing.
I'm not (yet) sure what kind of load I might be facing. I know there is currently a 2 MB cap set up on incoming messages. Theoretically we could get thousands per second (though I don't know if practically we've seen anything like that). The amount of time spent processing a message isn't terribly important, though obviously quicker would be better.
I was looking into the Reactor pattern, and developed a small example using the multiprocessing library that (at least in testing) seems to work just fine. However, now/soon we'll have the asyncio library available, which would handle the event loop for me.
Is there anything that could bite me by combining asyncio and multiprocessing?
You should be able to safely combine asyncio and multiprocessing without too much trouble, though you shouldn't be using multiprocessing directly. The cardinal sin of asyncio (and any other event-loop based asynchronous framework) is blocking the event loop. If you try to use multiprocessing directly, any time you block to wait for a child process, you're going to block the event loop. Obviously, this is bad.
The simplest way to avoid this is to use BaseEventLoop.run_in_executor to execute a function in a concurrent.futures.ProcessPoolExecutor. ProcessPoolExecutor is a process pool implemented using multiprocessing.Process, but asyncio has built-in support for executing a function in it without blocking the event loop. Here's a simple example:
import time
import asyncio
from concurrent.futures import ProcessPoolExecutor

def blocking_func(x):
    time.sleep(x)  # Pretend this is expensive calculations
    return x * 5

@asyncio.coroutine
def main():
    #pool = multiprocessing.Pool()
    #out = pool.apply(blocking_func, args=(10,))  # This blocks the event loop.
    executor = ProcessPoolExecutor()
    out = yield from loop.run_in_executor(executor, blocking_func, 10)  # This does not
    print(out)

if __name__ == "__main__":
    loop = asyncio.get_event_loop()
    loop.run_until_complete(main())
For the majority of cases, this function alone is good enough. If you find yourself needing other constructs from multiprocessing, like Queue, Event, Manager, etc., there is a third-party library called aioprocessing (full disclosure: I wrote it) that provides asyncio-compatible versions of all the multiprocessing data structures. Here's an example demoing that:
import time
import asyncio
import aioprocessing
import multiprocessing

def func(queue, event, lock, items):
    with lock:
        event.set()
        for item in items:
            time.sleep(3)
            queue.put(item+5)
    queue.close()

@asyncio.coroutine
def example(queue, event, lock):
    l = [1,2,3,4,5]
    p = aioprocessing.AioProcess(target=func, args=(queue, event, lock, l))
    p.start()
    while True:
        result = yield from queue.coro_get()
        if result is None:
            break
        print("Got result {}".format(result))
    yield from p.coro_join()

@asyncio.coroutine
def example2(queue, event, lock):
    yield from event.coro_wait()
    with (yield from lock):
        yield from queue.coro_put(78)
        yield from queue.coro_put(None)  # Shut down the worker

if __name__ == "__main__":
    loop = asyncio.get_event_loop()
    queue = aioprocessing.AioQueue()
    lock = aioprocessing.AioLock()
    event = aioprocessing.AioEvent()
    tasks = [
        # asyncio.async() in the original answer; renamed to ensure_future in newer Python
        asyncio.ensure_future(example(queue, event, lock)),
        asyncio.ensure_future(example2(queue, event, lock)),
    ]
    loop.run_until_complete(asyncio.wait(tasks))
    loop.close()
Yes, there are quite a few bits that may (or may not) bite you.
When you run something like asyncio it expects to run on one thread or process. This does not (by itself) work with parallel processing. You somehow have to distribute the work while leaving the IO operations (specifically those on sockets) in a single thread/process.
While your idea to hand off individual connections to a different handler process is nice, it is hard to implement. The first obstacle is that you need a way to pull the connection out of asyncio without closing it. The next obstacle is that you cannot simply send a file descriptor to a different process unless you use platform-specific (probably Linux) code from a C-extension.
Note that the multiprocessing module is known to create a number of threads for communication. Most of the time when you use communication structures (such as Queues), a thread is spawned. Unfortunately those threads are not completely invisible. For instance they can fail to tear down cleanly (when you intend to terminate your program), but depending on their number the resource usage may be noticeable on its own.
If you really intend to handle individual connections in individual processes, I suggest examining different approaches. For instance, you can put a socket into listening mode and then accept connections from multiple worker processes in parallel (a rough sketch follows below). Once a worker has finished processing a request, it can go accept the next connection, so you still use fewer resources than forking a process for each connection. Spamassassin and Apache (mpm prefork) can use this worker model, for instance. It might end up easier and more robust depending on your use case. Specifically, you can make your workers die after serving a configured number of requests and be respawned by a master process, thereby eliminating much of the negative effect of memory leaks.
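A rough sketch of that pre-fork accept model (the port, worker count, and handler below are made up, and it assumes a fork-based multiprocessing start method on Unix so the listening socket is inherited by the workers):
import multiprocessing
import socket

def worker(listener):
    while True:
        conn, addr = listener.accept()          # workers compete to accept
        with conn:
            data = conn.recv(2 * 1024 * 1024)   # up to the 2 MB message cap
            conn.sendall(b"processed " + data[:32])

if __name__ == "__main__":
    multiprocessing.set_start_method("fork")    # children inherit the socket (Unix only)
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(("0.0.0.0", 8000))
    listener.listen(128)
    workers = [multiprocessing.Process(target=worker, args=(listener,))
               for _ in range(4)]
    for p in workers:
        p.start()
    for p in workers:
        p.join()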
Based on #dano's answer above, I wrote this function to replace places where I used to use a multiprocessing pool + map.
import asyncio
from concurrent.futures import ProcessPoolExecutor
from typing import Callable

def asyncio_friendly_multiproc_map(fn: Callable, l: list):
    """
    This is designed to replace the use of this pattern:
        with multiprocessing.Pool(5) as p:
            results = p.map(analyze_day, list_of_days)
    by letting the caller drop in a replacement:
        asyncio_friendly_multiproc_map(analyze_day, list_of_days)
    """
    tasks = []
    with ProcessPoolExecutor(5) as executor:
        for e in l:
            tasks.append(asyncio.get_event_loop().run_in_executor(executor, fn, e))
        res = asyncio.get_event_loop().run_until_complete(asyncio.gather(*tasks))
    return res
See PEP 3156, in particular the section on Thread interaction:
http://www.python.org/dev/peps/pep-3156/#thread-interaction
This clearly documents the new asyncio methods you might use, including run_in_executor(). Note that the Executor is defined in concurrent.futures; I suggest you also have a look there.
