FastAPI runs API calls in serial instead of parallel fashion

I have the following code:
import time
from fastapi import FastAPI, Request

app = FastAPI()

@app.get("/ping")
async def ping(request: Request):
    print("Hello")
    time.sleep(5)
    print("bye")
    return {"ping": "pong!"}
If I run my code on localhost - e.g., http://localhost:8501/ping - in different tabs of the same browser window, I get:
Hello
bye
Hello
bye
instead of:
Hello
Hello
bye
bye
I have read about using httpx, but still, I cannot achieve true parallelization. What's the problem?

As per FastAPI's documentation:
When you declare a path operation function with normal def instead
of async def, it is run in an external threadpool that is then
awaited, instead of being called directly (as it would block the
server).
also, as described here:
If you are using a third party library that communicates with
something (a database, an API, the file system, etc.) and doesn't have
support for using await, (this is currently the case for most
database libraries), then declare your path operation functions as
normally, with just def.
If your application (somehow) doesn't have to communicate with
anything else and wait for it to respond, use async def.
If you just don't know, use normal def.
Note: You can mix def and async def in your path operation functions as much as you need and define each one using the best
option for you. FastAPI will do the right thing with them.
Anyway, in any of the cases above, FastAPI will still work
asynchronously and be extremely fast.
But by following the steps above, it will be able to do some
performance optimizations.
Thus, def endpoints (in the context of asynchronous programming, a function defined with just def is called a synchronous function) run in a separate thread from an external threadpool that is then awaited (and hence, FastAPI will still work asynchronously); in other words, the server processes such requests concurrently. async def endpoints, on the other hand, run in the event loop—on the main (single) thread—and hence the server processes such requests sequentially, as long as there is no await call to (normally) non-blocking I/O-bound operations inside those endpoints/routes, such as waiting for (1) data from the client to be sent through the network, (2) contents of a file on disk to be read, (3) a database operation to finish, and so on (have a look here). In those cases, the server will process the requests concurrently/asynchronously. Note that the same concept applies not only to FastAPI endpoints, but to Background Tasks as well—see Starlette's BackgroundTask class implementation—hence, after reading this answer to the end, you should be able to decide whether you should define a FastAPI endpoint or background task function with def or async def.
The keyword await (which works only within an async def function) passes function control back to the event loop. In other words, it suspends the execution of the surrounding coroutine (a coroutine object is the result of calling an async def function) and tells the event loop to let something else run, until that awaited task completes. Note that just because you may define a custom function with async def and then await it inside your endpoint, it doesn't mean that your code will work asynchronously, if that custom function contains, for example, calls to time.sleep(), CPU-bound tasks, non-async I/O libraries, or any other blocking call that is incompatible with asynchronous Python code.
In FastAPI, for example, when using the async methods of UploadFile, such as await file.read() and await file.write(), FastAPI/Starlette, behind the scenes, actually runs such methods of File objects in an external threadpool (using the async run_in_threadpool() function) and awaits them; otherwise, such methods/operations would block the event loop. You can find out more by having a look at the implementation of the UploadFile class.
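To illustrate that last point, here is a minimal sketch (not part of the original answer; the function names are made up for illustration) contrasting an async def function that still blocks with one that truly suspends:
import asyncio
import time

async def looks_async_but_blocks():
    # async def alone doesn't help: time.sleep() still blocks
    # the event loop for the full 3 seconds.
    time.sleep(3)

async def truly_async():
    # asyncio.sleep() suspends the coroutine instead,
    # letting other tasks in the event loop run meanwhile.
    await asyncio.sleep(3)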
Asynchronous code with async and await is many times summarised as using coroutines. Coroutines are collaborative (or cooperatively multitasked), meaning that "at any given time, a program with coroutines is running only one of its coroutines, and this running coroutine suspends its execution only when it explicitly requests to be suspended" (see here and here for more info on coroutines). As described in this article:
Specifically, whenever execution of a currently-running coroutine
reaches an await expression, the coroutine may be suspended, and
another previously-suspended coroutine may resume execution if what it
was suspended on has since returned a value. Suspension can also
happen when an async for block requests the next value from an
asynchronous iterator or when an async with block is entered or
exited, as these operations use await under the hood.
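As a minimal, self-contained sketch of that cooperative suspension (an illustration added here, not from the quoted article):
import asyncio

async def worker(name):
    print(f"{name} started")
    await asyncio.sleep(1)  # suspension point: control returns to the event loop
    print(f"{name} finished")

async def main():
    # Both "started" lines print before either "finished" line,
    # because each worker suspends at its await expression.
    await asyncio.gather(worker("A"), worker("B"))

asyncio.run(main())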
If, however, a blocking I/O-bound or CPU-bound operation was directly executed/called inside an async def function/endpoint, it would block the main thread (i.e., the event loop). Hence, a blocking operation such as time.sleep() in an async def endpoint would block the entire server (as in the example provided in your question). Thus, if your endpoint is not going to make any async calls, you could declare it with just def instead, which would be run in an external threadpool that would then be awaited, as explained earlier (more solutions are given in the following sections). Example:
@app.get("/ping")
def ping(request: Request):
    #print(request.client)
    print("Hello")
    time.sleep(5)
    print("bye")
    return "pong"
Otherwise, if the functions that you have to execute inside the endpoint are async functions that you have to await, you should define your endpoint with async def. To demonstrate this, the example below uses the asyncio.sleep() function (from the asyncio library), which provides a non-blocking sleep operation. The await asyncio.sleep() call will suspend the execution of the surrounding coroutine (until the sleep operation completes), thus allowing other tasks in the event loop to run. Similar examples are given here and here as well.
import asyncio

@app.get("/ping")
async def ping(request: Request):
    #print(request.client)
    print("Hello")
    await asyncio.sleep(5)
    print("bye")
    return "pong"
Both the path operation functions above will print out the specified messages to the screen in the same order as mentioned in your question—if two requests arrived at around the same time—that is:
Hello
Hello
bye
bye
Important Note
When you call your endpoint for the second (third, and so on) time, please remember to do that from a tab that is isolated from the browser's main session; otherwise, succeeding requests (i.e., those coming after the first one) will be blocked by the browser (on the client side), as the browser will be waiting for a response from the server to the previous request before sending the next one. You can confirm that by using print(request.client) inside the endpoint, where you would see the hostname and port number being the same for all incoming requests—if the requests were initiated from tabs opened in the same browser window/session—and hence, those requests would be processed sequentially, because of the browser sending them sequentially in the first place. To solve this, you could either:
Reload the same tab (as is running), or
Open a new tab in an Incognito Window, or
Use a different browser/client to send the request, or
Use the httpx library to make asynchronous HTTP requests, along with the awaitable asyncio.gather(), which allows executing multiple asynchronous operations concurrently and then returns a list of results in the same order the awaitables (tasks) were passed to that function (have a look at this answer for more details).
Example:
import httpx
import asyncio

URLS = ['http://127.0.0.1:8000/ping'] * 2

async def send(url, client):
    return await client.get(url, timeout=10)

async def main():
    async with httpx.AsyncClient() as client:
        tasks = [send(url, client) for url in URLS]
        responses = await asyncio.gather(*tasks)
        print(*[r.json() for r in responses], sep='\n')

asyncio.run(main())
In case you had to call different endpoints that may take different time to process a request, and you would like to print the response out on client side as soon as it is returned from the server—instead of waiting for asyncio.gather() to gather the results of all tasks and print them out in the same order the tasks were passed to the send() function—you could replace the send() function of the example above with the one shown below:
async def send(url, client):
    res = await client.get(url, timeout=10)
    print(res.json())
    return res
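Alternatively, a sketch using asyncio.as_completed()—which yields the tasks in completion order rather than submission order—achieves a similar effect with the original send() function:
async def main():
    async with httpx.AsyncClient() as client:
        tasks = [send(url, client) for url in URLS]
        for coro in asyncio.as_completed(tasks):
            res = await coro  # resolves in completion order
            print(res.json())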
Async/await and Blocking I/O-bound or CPU-bound Operations
If you are required to use async def (as you might need to await for coroutines inside your endpoint), but also have some synchronous I/O-bound or CPU-bound operation (long-running computation task) that will block the event loop (essentially, the entire server) and won't let other requests go through, for example:
from fastapi import UploadFile, File

@app.post("/ping")
async def ping(file: UploadFile = File(...)):
    print("Hello")
    try:
        contents = await file.read()
        res = cpu_bound_task(contents)  # this will block the event loop
    finally:
        await file.close()
    print("bye")
    return "pong"
then:
You should check whether you could change your endpoint's definition to normal def instead of async def. For example, if the only method in your endpoint that has to be awaited is the one reading the file contents (as you mentioned in the comments section below), you could instead declare the type of the endpoint's parameter as bytes (i.e., file: bytes = File()) and thus, FastAPI would read the file for you and you would receive the contents as bytes. Hence, there would be no need to use await file.read(). Please note that the above approach should work for small files, as the entire file contents would be stored in memory (see the documentation on File Parameters); and hence, if your system does not have enough RAM available to accommodate the accumulated data (if, for example, you have 8GB of RAM, you can't load a 50GB file), your application may end up crashing. Alternatively, you could call the .read() method of the SpooledTemporaryFile directly (which can be accessed through the .file attribute of the UploadFile object), so that again you don't have to await the .read() method—and as you can now declare your endpoint with normal def, each request will run in a separate thread (examples of both approaches are given below). For more details on how to upload a File, as well as how Starlette/FastAPI uses SpooledTemporaryFile behind the scenes, please have a look at this answer and this answer.
@app.post("/ping")
def ping(file: UploadFile = File(...)):
    print("Hello")
    try:
        contents = file.file.read()
        res = cpu_bound_task(contents)
    finally:
        file.file.close()
    print("bye")
    return "pong"
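For completeness, a rough sketch of the bytes-parameter approach described above (suitable for small files only, as the whole upload is loaded into memory; cpu_bound_task remains a placeholder):
from fastapi import FastAPI, File

app = FastAPI()

@app.post("/ping")
def ping(contents: bytes = File()):
    # FastAPI reads the entire upload and passes it in as bytes,
    # so there is nothing to await and the endpoint can be a plain def,
    # which runs in the external threadpool.
    res = cpu_bound_task(contents)
    return "pong"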
Use FastAPI's (Starlette's) run_in_threadpool() function from the concurrency module—as @tiangolo suggested here—which "will run the function in a separate thread to ensure that the main thread (where coroutines are run) does not get blocked" (see here). As described by @tiangolo here, "run_in_threadpool is an awaitable function, the first parameter is a normal function, the next parameters are passed to that function directly. It supports both sequence arguments and keyword arguments".
from fastapi.concurrency import run_in_threadpool
res = await run_in_threadpool(cpu_bound_task, contents)
Alternatively, use asyncio's loop.run_in_executor()—after obtaining the running event loop using asyncio.get_running_loop()—to run the task, which you can await until it completes and returns the result(s), before moving on to the next line of code. Passing None as the executor argument, the default executor will be used, that is, a ThreadPoolExecutor:
import asyncio
loop = asyncio.get_running_loop()
res = await loop.run_in_executor(None, cpu_bound_task, contents)
or, if you would like to pass keyword arguments instead, you could use a lambda expression, or, preferably, functools.partial(), which is specifically recommended in the documentation for loop.run_in_executor():
import asyncio
from functools import partial
loop = asyncio.get_running_loop()
res = await loop.run_in_executor(None, partial(cpu_bound_task, some_arg=contents))
You could also run your task in a custom ThreadPoolExecutor. For instance:
import asyncio
import concurrent.futures

loop = asyncio.get_running_loop()
with concurrent.futures.ThreadPoolExecutor() as pool:
    res = await loop.run_in_executor(pool, cpu_bound_task, contents)
In Python 3.9+, you could also use asyncio.to_thread() to asynchronously run a synchronous function in a separate thread—which, essentially, uses await loop.run_in_executor(None, func_call) under the hood, as can be seen in the implementation of asyncio.to_thread(). The to_thread() function takes a blocking function to execute, as well as any arguments (*args and/or **kwargs) to that function, and then returns a coroutine that can be awaited. Example:
import asyncio
res = await asyncio.to_thread(cpu_bound_task, contents)
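As a side note, unlike loop.run_in_executor(), asyncio.to_thread() accepts keyword arguments directly, so no functools.partial() is needed:
res = await asyncio.to_thread(cpu_bound_task, some_arg=contents)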
ThreadPoolExecutor will successfully prevent the event loop from being blocked, but won't give you the performance improvement you would expect from running code in parallel; especially when one needs to perform CPU-bound operations, such as the ones described here (e.g., audio or image processing, machine learning, and so on). It is thus preferable to run CPU-bound tasks in a separate process—using ProcessPoolExecutor, as shown below—which, again, you can integrate with asyncio, in order to await it to finish its work and return the result(s). As described here, on Windows, it is important to protect the main loop of code to avoid recursive spawning of subprocesses, etc. Basically, your code must be under if __name__ == '__main__': (a fuller sketch is given after the snippet below).
import concurrent.futures

loop = asyncio.get_running_loop()
with concurrent.futures.ProcessPoolExecutor() as pool:
    res = await loop.run_in_executor(pool, cpu_bound_task, contents)
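Putting it together, a minimal self-contained sketch (the endpoint path, payload, and dummy cpu_bound_task below are illustrative, not from the original question):
import asyncio
import concurrent.futures
import uvicorn
from fastapi import FastAPI

app = FastAPI()

def cpu_bound_task(data: bytes) -> int:
    # placeholder for some heavy computation
    return sum(data)

@app.get("/compute")
async def compute():
    loop = asyncio.get_running_loop()
    with concurrent.futures.ProcessPoolExecutor() as pool:
        # the event loop stays free while the work runs in another process
        res = await loop.run_in_executor(pool, cpu_bound_task, b"payload")
    return {"result": res}

if __name__ == '__main__':
    # guarding the entry point avoids recursive subprocess spawning on Windows
    uvicorn.run(app)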
Use more workers. For example, uvicorn main:app --workers 4 (if you are using Gunicorn as a process manager with Uvicorn workers, please have a look at this answer). Note: Each worker "has its own things, variables and memory". This means that global variables/objects, etc., won't be shared across the processes/workers. In this case, you should consider using a database storage, or Key-Value stores (Caches), as described here and here. Additionally, note that "if you are consuming a large amount of memory in your code, each process will consume an equivalent amount of memory".
If you need to perform heavy background computation and you don't necessarily need it to be run by the same process (for example, you don't need to share memory, variables, etc), you might benefit from using other bigger tools like Celery, as described in FastAPI's documentation.

Q : " ... What's the problem? "
A : The FastAPI documentation is explicit in saying the framework uses in-process tasks (as inherited from Starlette).
That, by itself, means that all such tasks compete to receive (from time to time) the Python interpreter's GIL—being effectively a MUTEX-terrorising Global Interpreter Lock—which in effect re-[SERIAL]-ises any and all amounts of Python interpreter in-process threads to work as one-and-only-one-WORKS-while-all-others-stay-waiting...
On a fine-grained scale, you see the result: if spawning another handler for the second arriving HTTP request (manually initiated from a second Firefox tab) actually takes longer than the sleep has taken, the result of the GIL-lock interleaved ~ 100 [ms] time-quanta round-robin (all-wait-while-one-works, ~ 100 [ms] before each next round of the GIL-lock release-acquire roulette takes place) inside the Python interpreter does not show more details. You may use more details (depending on O/S type or version) from here to see more in-thread LoD, like this, inside the async-decorated code being performed:
import time
import threading
from fastapi import FastAPI, Request

TEMPLATE = "INF[{0:_>20d}]: t_id( {1: >20d} ):: {2:}"

print( TEMPLATE.format( time.perf_counter_ns(),
                        threading.get_ident(),
                        "Python Interpreter __main__ was started ..."
                        )
       )
...

@app.get("/ping")
async def ping( request: Request ):
    """ __doc__
    [DOC-ME]
    ping( Request ): a mock-up AS-IS function to yield
                     a CLI/GUI self-evidence of the order-of-execution
    RETURNS:         a JSON-alike decorated dict
    [TEST-ME] ...
    """
    print( TEMPLATE.format( time.perf_counter_ns(),
                            threading.get_ident(),
                            "Hello..."
                            )
           )
    #------------------------------------------------- actual blocking work
    time.sleep( 5 )
    #------------------------------------------------- actual blocking work
    print( TEMPLATE.format( time.perf_counter_ns(),
                            threading.get_ident(),
                            "...bye"
                            )
           )
    return { "ping": "pong!" }
Last but not least, do not hesitate to read more about all the other sharks that thread-based code may suffer from ... or even cause ... behind the curtains ...
Ad Memorandum
A mixture of GIL-lock, thread-based pools, asynchronous decorators, blocking and event-handling—a sure mix of uncertainties & HWY2HELL ;o)

Related

How to perform a synchronous task in an asynchronous FastAPI REST endpoint without blocking the event loop? [duplicate]


Learning asyncio: "coroutine was never awaited" warning error

I am trying to learn to use asyncio in Python to optimize scripts.
My example raises a "coroutine was never awaited" warning; can you help me understand it and find out how to solve it?
import time
import datetime
import random
import asyncio
import aiohttp
import requests

def requete_bloquante(num):
    print(f'Get {num}')
    uid = requests.get("https://httpbin.org/uuid").json()['uuid']
    print(f"Res {num}: {uid}")

def faire_toutes_les_requetes():
    for x in range(10):
        requete_bloquante(x)

print("Bloquant : ")
start = datetime.datetime.now()
faire_toutes_les_requetes()
exec_time = (datetime.datetime.now() - start).seconds
print(f"Pour faire 10 requêtes, ça prend {exec_time}s\n")

async def requete_sans_bloquer(num, session):
    print(f'Get {num}')
    async with session.get("https://httpbin.org/uuid") as response:
        uid = (await response.json()['uuid'])
    print(f"Res {num}: {uid}")

async def faire_toutes_les_requetes_sans_bloquer():
    loop = asyncio.get_event_loop()
    with aiohttp.ClientSession() as session:
        futures = [requete_sans_bloquer(x, session) for x in range(10)]
        loop.run_until_complete(asyncio.gather(*futures))
    loop.close()
    print("Fin de la boucle !")

print("Non bloquant : ")
start = datetime.datetime.now()
faire_toutes_les_requetes_sans_bloquer()
exec_time = (datetime.datetime.now() - start).seconds
print(f"Pour faire 10 requêtes, ça prend {exec_time}s\n")
The first classic part of the code runs correctly, but the second half only produces:
synchronicite.py:43: RuntimeWarning: coroutine 'faire_toutes_les_requetes_sans_bloquer' was never awaited
You made faire_toutes_les_requetes_sans_bloquer an awaitable function, a coroutine, by using async def.
When you call an awaitable function, you create a new coroutine object. The code inside the function won't run until you then await on the function or run it as a task:
>>> async def foo():
...     print("Running the foo coroutine")
...
>>> foo()
<coroutine object foo at 0x10b186348>
>>> import asyncio
>>> asyncio.run(foo())
Running the foo coroutine
You want to keep that function synchronous, because you don't start the loop until inside that function:
def faire_toutes_les_requetes_sans_bloquer():
    loop = asyncio.get_event_loop()
    # ...
    loop.close()
    print("Fin de la boucle !")
However, you are also trying to use an aiohttp.ClientSession() object, and that's an asynchronous context manager; you are expected to use it with async with, not just with, and so it has to be run inside an awaitable task. If you use with instead of async with, a TypeError("Use async with instead") exception will be raised.
That all means you need to move the loop.run_until_complete() call out of your faire_toutes_les_requetes_sans_bloquer() function, so you can keep that as the main task to be run; you can then call and await on asyncio.gather() directly:
async def faire_toutes_les_requetes_sans_bloquer():
    async with aiohttp.ClientSession() as session:
        futures = [requete_sans_bloquer(x, session) for x in range(10)]
        await asyncio.gather(*futures)
    print("Fin de la boucle !")
print("Non bloquant : ")
start = datetime.datetime.now()
asyncio.run(faire_toutes_les_requetes_sans_bloquer())
exec_time = (datetime.datetime.now() - start).seconds
print(f"Pour faire 10 requêtes, ça prend {exec_time}s\n")
I used the new asyncio.run() function (Python 3.7 and up) to run the single main task. This creates a dedicated loop for that top-level coroutine and runs it until complete.
Next, you need to move the closing ) parenthesis in the await response.json() expression:
uid = (await response.json())['uuid']
You want to access the 'uuid' key on the result of the await, not the coroutine that response.json() produces.
With those changes your code works, but the asyncio version finishes in sub-second time; you may want to print microseconds:
exec_time = (datetime.datetime.now() - start).total_seconds()
print(f"Pour faire 10 requêtes, ça prend {exec_time:.3f}s\n")
On my machine, the synchronous requests code runs in about 4-5 seconds, and the asyncio code completes in under 0.5 seconds.
Do not use a loop.run_until_complete call inside an async function. The purpose of that method is to run an async function inside a sync context. Anyway, here's how you should change the code:
async def faire_toutes_les_requetes_sans_bloquer():
    async with aiohttp.ClientSession() as session:
        futures = [requete_sans_bloquer(x, session) for x in range(10)]
        await asyncio.gather(*futures)
    print("Fin de la boucle !")

loop = asyncio.get_event_loop()
loop.run_until_complete(faire_toutes_les_requetes_sans_bloquer())
Note that a bare faire_toutes_les_requetes_sans_bloquer() call creates a coroutine object that has to be either awaited via an explicit await (for which you have to be inside an async context) or passed to some event loop. When left alone, Python complains about that. In your original code you do neither.
Not sure if this was the issue for you, but for me the response from the coroutine was another coroutine, so my code started warning me (note: not actually crashing) that I had created coroutines that weren't being awaited. After I actually awaited them (although I didn't really use the response), the warning went away.
The main line of code I added was:
content_from_url_as_str: list[str] = await asyncio.gather(*content_from_url, return_exceptions=True)
inspired by seeing:
response: str = await content_from_url[0]
Full code:
"""
-- Notes from [1]
Threading and asyncio both run on a single processor and therefore only run one at a time [1]. It's cooperative concurrency.
Note: threads.py has a very good block with good definitions for io-bound, cpu-bound if you need to recall it.
Note: coroutine is an important definition to understand before proceeding. Definition provided at the end of this tutorial.
General idea for asyncio is that there is a general event loop that controls how and when each task gets run.
The event loop is aware of each task and knows what states they are in.
For simplicity of exposition assume there are only two states:
a) Ready state
b) Waiting state
a) indicates that a task has work to do and can be run - while b) indicates that a task is waiting for a response from an
external thing (e.g. io, printer, disk, network, coq, etc). This simplified event loop has two lists of tasks
(ready_to_run_lst, waiting_lst) and runs things from the ready to run list. Once a task runs it is in complete control
until it cooperatively hands back control to the event loop.
The way it works is that the task that was run does what it needs to do (usually an io operation, or an interleaved op
or something like that) but crucially it gives control back to the event loop when the running task (with control) thinks it's best.
(Note that this means the task might not have fully completed getting what it "fully needs".
This is probably useful when the user wants to implement the interleaving himself.)
Once the task cooperatively gives back control to the event loop it is placed by the event loop in either the
ready to run list or waiting list (depending how fast the io ran, etc). Then the event loop goes through the waiting
list to see if anything waiting has "returned".
Once all the tasks have been sorted into the right list the event loop is able to choose what to run next (e.g. by
choosing the one that has been waiting to run the longest). This repeats until the event loop code you wrote is done.
The crucial point (and distinction with threads) that we want to emphasize is that in asyncio, an operation is never
interrupted in the middle and every switching/interleaving is done deliberately by the programmer.
In a way you don't have to worry about making your code thread safe.
For more details see [2], [3].
Asyncio syntax:
i) await = this is where the code you wrote calls an expensive function (e.g. an io) and thus hands back control to the
event loop. Then the event loop will likely put it in the waiting loop and runs some other task. Likely eventually
the event loop comes back to this function and runs the remaining code given that we have the value from the io now.
await = the key word that does (mainly) two things 1) gives control back to the event loop to see if there is something
else to run if we called it on a real expensive io operation (e.g. calling network, printer, etc) 2) gives control to
the new coroutine (code that might give up control cooperatively) that it is awaiting. If this is your own code with async
then it means it will go into this new async function (coroutine) you defined.
No real async benefits are being experienced until you call (await) a real io e.g. asyncio.sleep is the typical debug example.
todo: clarify, I think await doesn't actually give control back to the event loop but instead runs the "coroutine" this
await is pointing too. This means that if it's a real IO then it will actually give it back to the event loop
to do something else. In this case it is actually doing something "in parallel" in the async way.
Otherwise, it is your own python coroutine and thus gives it the control but "no true async parallelism" happens.
iii) async = approximately a flag that tells python the defined function might use await. This is not strictly true but
it gives you a simple model while you're getting started. todo - clarify async.
async = defines a coroutine. This doesn't define a real io, it only defines a function that can give up and give the
execution power to other coroutines or the (asyncio) event loop.
todo - context manager with async
ii) awaiting = when you call something (e.g. a function) that usually requires waiting for the io response/return/value.
todo: though it seems it's also the python keyword to give control to a coroutine you wrote in python or give
control to the event loop assuming your awaiting an actual io call.
iv) async with = this creates a context manager from an object you would normally await - i.e. an object you would
wait to get the return value from an io. So usually we swap out (switch) from this object.
todo - e.g.
Note: - any function that calls await needs to be marked with async or you’ll get a syntax error otherwise.
- a task never gives up control without intentionally doing so e.g. never in the middle of an op.
Cons: - note how this also requires more thinking carefully (but feels less dangerous than threading due to no pre-emptive
switching) due to the concurrency. Another disadvantage is again the idiosyncrasies of using this in python + learning
new syntax and details for it to actually work.
- understanding the semantics of new syntax + learning where to really put the syntax to avoid semantic errors.
- we needed a special asyncio compatible lib for requests, since the normal requests is not designed to inform
the event loop that it's blocked (or done blocking)
- if one of the tasks doesn't cooperate properly then the whole code can be a mess and slow it down.
- not all libraries support the async IO paradigm in python (e.g. asyncio, trio, etc).
Pro: + despite learning where to put await and async might be annoying it forces you to think carefully about your code
which in itself can be an advantage (e.g. better, faster, fewer bugs due to thinking carefully)
+ often faster...? (skeptical)
1. https://realpython.com/python-concurrency/
2. https://realpython.com/async-io-python/
3. https://stackoverflow.com/a/51116910/6843734
todo - read [2] later (or [3] but thats not a tutorial and its more details so perhaps not a priority).
asynchronous = 1) dictionary def: not happening at the same time
e.g. happening independently 2) computing def: happening independently of the main program flow
coroutine = computer program components that generalize subroutines for non-preemptive multitasking, by allowing execution to be suspended and resumed.
So basically it's a routine/"function" that can give up control in "a controlled way" (i.e. not randomly like with threads).
Usually they are associated with a single process -- so it's concurrent but not parallel.
Interesting note: Coroutines are well-suited for implementing familiar program components such as cooperative tasks, exceptions, event loops, iterators, infinite lists and pipes.
Likely we have an event loop in this document as an example. I guess yield and operators too are good examples!
Interesting contrast with subroutines: Subroutines are special cases of coroutines.[3] When subroutines are invoked, execution begins at the start,
and once a subroutine exits, it is finished; an instance of a subroutine only returns once, and does not hold state between invocations.
By contrast, coroutines can exit by calling other coroutines, which may later return to the point where they were invoked in the original coroutine;
from the coroutine's point of view, it is not exiting but calling another coroutine.
Coroutines are very similar to threads. However, coroutines are cooperatively multitasked, whereas threads are typically preemptively multitasked.
event loop = event loop is a programming construct or design pattern that waits for and dispatches events or messages in a program.
Appendix:
For I/O-bound problems, there’s a general rule of thumb in the Python community:
“Use asyncio when you can, threading when you must.”
asyncio can provide the best speed up for this type of program, but sometimes you will require critical libraries that
have not been ported to take advantage of asyncio.
Remember that any task that doesn’t give up control to the event loop will block all of the other tasks
-- Notes from [2]
see asyncio_example2.py file.
The sync fil should have taken longer e.g. in one run the async file took:
Downloaded 160 sites in 0.4063692092895508 seconds
While the sync option took:
Downloaded 160 in 3.351937770843506 seconds
"""
import asyncio
from asyncio import Task
from asyncio.events import AbstractEventLoop
import aiohttp
from aiohttp import ClientResponse
from aiohttp.client import ClientSession
from typing import Coroutine
import time
async def download_site(session: ClientSession, url: str) -> str:
    async with session.get(url) as response:
        print(f"Read {response.content_length} from {url}")
        return response.text()

async def download_all_sites(sites: list[str]) -> list[str]:
    # async with = this creates a context manager from an object you would normally await - i.e. an object you would wait to get the return value from an io. So usually we swap out (switch) from this object.
    async with aiohttp.ClientSession() as session:  # we will usually await session.FUNCS
        # create all the download code as coroutines/tasks to be later managed/run by the event loop
        tasks: list[Task] = []
        for url in sites:
            # creates a task from a coroutine todo: basically it seems it creates a callable coroutine? (i.e. function that is able to give up control cooperatively or runs an external io and also thus gives back control cooperatively to the event loop). read more? https://stackoverflow.com/questions/36342899/asyncio-ensure-future-vs-baseeventloop-create-task-vs-simple-coroutine
            task: Task = asyncio.ensure_future(download_site(session, url))
            tasks.append(task)
        # runs tasks/coroutines in the event loop and aggregates the results. todo: does this halt until all coroutines have returned? I think so due to the paradigm of how async code works.
        content_from_url: list[ClientResponse.text] = await asyncio.gather(*tasks, return_exceptions=True)
        assert isinstance(content_from_url[0], Coroutine)  # note all responses are coroutines
        print(f'result after aggregating/doing all coroutine tasks/jobs = {content_from_url=}')
        # this is needed since the response is in a coroutine object for some reason
        content_from_url_as_str: list[str] = await asyncio.gather(*content_from_url, return_exceptions=True)
        print(f'result after getting response from coroutines that hold the text = {content_from_url_as_str=}')
        return content_from_url_as_str
if __name__ == "__main__":
    # - args
    num_sites: int = 80
    sites: list[str] = ["https://www.jython.org", "http://olympus.realpython.org/dice"] * num_sites
    start_time: float = time.time()
    # - run the same 160 tasks but without async paradigm, should be slower!
    # note: you can't actually do this here because you have the async definitions to your functions.
    # to test the synchronous version see the synchronous.py file. Then compare the two run times.
    # await download_all_sites(sites)
    # download_all_sites(sites)
    # - Execute the coroutine coro and return the result.
    asyncio.run(download_all_sites(sites))
    # - run event loop manager and run all tasks with cooperative concurrency
    # asyncio.get_event_loop().run_until_complete(download_all_sites(sites))
    # makes explicit the creation of the event loop that manages the coroutines & external ios
    # event_loop: AbstractEventLoop = asyncio.get_event_loop()
    # asyncio.run(download_all_sites(sites))
    # making the creation of the coroutine (that hasn't been run yet) with its args explicit
    # event_loop: AbstractEventLoop = asyncio.get_event_loop()
    # download_all_sites_coroutine: Coroutine = download_all_sites(sites)
    # asyncio.run(download_all_sites_coroutine)
    # - print stats about the content download and duration
    duration = time.time() - start_time
    print(f"Downloaded {len(sites)} sites in {duration} seconds")
    print('Success.\a')

multiple nonblocking tasks using asyncio and aiohttp

I am trying to perform several non-blocking tasks with asyncio and aiohttp, and I don't think the way I am doing it is efficient. I think it would be best to use await instead of yield. Can anyone help?
def __init__(self):
    self.event_loop = asyncio.get_event_loop()

def run(self):
    tasks = [
        asyncio.ensure_future(self.subscribe()),
        asyncio.ensure_future(self.getServer()),]
    self.event_loop.run_until_complete(asyncio.gather(*tasks))
    try:
        self.event_loop.run_forever()

@asyncio.coroutine
def getServer(self):
    server = yield from self.event_loop.create_server(handler, ip, port)
    return server

@asyncio.coroutine
def subscribe(self):
    while True:
        yield from asyncio.sleep(10)
        self.sendNotification(self.sub.receive())

def sendNotification(msg):
    # send message as a client
I have to listen to a server, subscribe to listen to broadcasts, and, depending on the broadcast message, POST to a different server.
According to PEP 492:
await, similarly to yield from, suspends execution of the read_data
coroutine until the db.fetch awaitable completes and returns the result
data.
It uses the yield from implementation with an extra step of validating
its argument. await only accepts an awaitable, which can be one of: ...
So I don't see an efficiency problem in your code, as they use the same implementation.
However, I do wonder why you return the server but never use it.
The main design mistake I see in your code is that you use both:
self.event_loop.run_until_complete(asyncio.gather(*tasks))
try:
    self.event_loop.run_forever()
From what I can see you just need the run_forever()
Some extra tips:
In my implementations using asyncio, I usually make sure that the loop is closed in case of error, as not doing so can cause a massive leak depending on your app type.
try:
    loop.run_until_complete(asyncio.gather(*tasks))
finally:  # close the loop no matter what or you leak FDs
    loop.close()
I also use uvloop instead of the built-in event loop; according to benchmarks, it's much more efficient.
import uvloop
...
loop = uvloop.new_event_loop()
asyncio.set_event_loop(loop)
Await will not be more efficient than yield from. It may be more pythonic, but
async def foo():
    await some_future
and
@asyncio.coroutine
def foo():
    yield from some_future
are approximately the same. Certainly in terms of efficiency, they are very close. Await is implemented using logic very similar to yield from. (There's an additional method call to await involved, but that is typically lost in the noise)
In terms of efficiency, removing the explicit sleep and polling in your subscribe method seems like the primary target in this design. Rather than sleeping for a fixed period of time, it would be better to get a future that indicates when the receive call will succeed, and only run subscribe's task when receive has data; see the sketch below.
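For instance, a hedged sketch of that idea, assuming the subscription source can expose (or be wrapped in) an asyncio.Queue (self.sub.queue is hypothetical here):
async def subscribe(self):
    while True:
        # suspends until a message actually arrives, instead of
        # waking up every 10 seconds to poll
        msg = await self.sub.queue.get()
        self.sendNotification(msg)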

Make my own function as asyncio function in python

I would like to use the asyncio module in Python to run request tasks in parallel, because my current request tasks run in sequence, which means the code is blocking.
I have read the documentation of the asyncio module in Python, and I have written some simple code as follows; however, it doesn't work as I thought it would.
import asyncio

class Demo(object):
    def demo(self):
        loop = asyncio.get_event_loop()
        tasks = [task1.version(), task2.version()]
        result = loop.run_until_complete(asyncio.wait(tasks))
        loop.close()
        print(result)

class Task():
    @asyncio.coroutine
    def version(self):
        print('before')
        result = yield from differenttask.GetVersion()
        # result = yield from asyncio.sleep(1)
        print('after')
I found that all the examples they give use asyncio functions to do the non-blocking work; how do I make my own function work as asyncio?
What I want to achieve is that, for each task, it will execute the request and, without waiting for the response, switch to the next task. When I tried this, I got RuntimeError: Task got bad yield: 'hostname', where hostname is one item in my expected result.
So, as @AndrewSvetlov said, differenttask.GetVersion() is a regular synchronous function. I have tried the second method suggested in a similar post—the one "Keep your synchronous implementation of searching...blabla":
@asyncio.coroutine
def version(self):
    return (yield from asyncio.get_event_loop().run_in_executor(None, self._proxy.GetVersion()))
And it still doesn't work. Now the error is:
Task exception was never retrieved
future: <Task finished coro=<Task.version() done, defined at /root/syi.py:34> exception=TypeError("'dict' object is not callable",)>
I'm not sure if I understand it right; please advise.
Change to:
@asyncio.coroutine
def version(self):
    return (yield from asyncio.get_event_loop()
                        .run_in_executor(None, self._proxy.GetVersion))
Please pay attention: self._proxy.GetVersion is not called here; rather, a reference to the function is passed into the loop executor.
Now all IO performed by GetVersion() is still synchronous but executed in a thread pool.
It may have benefits for you or may not.
If the whole program uses a thread-pool-based solution only, you perhaps need concurrent.futures.ThreadPoolExecutor, not asyncio (see the sketch below).
If the most part of the application is built on top of asynchronous libraries but only a relatively small part uses thread pools—that's fine.
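To illustrate the first case, a pure thread-pool sketch without asyncio (GetVersion and the proxy objects are hypothetical stand-ins for your blocking API):
import concurrent.futures

def get_version(proxy):
    # blocking call, executed in a worker thread
    return proxy.GetVersion()

def get_all_versions(proxies):
    with concurrent.futures.ThreadPoolExecutor() as pool:
        futures = [pool.submit(get_version, p) for p in proxies]
        return [f.result() for f in futures]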

Simple async example with tornado python

I want to find a simple async server example.
I have got some function with a lot of waiting, database transactions, etc.:
def blocking_task(n):
    for i in xrange(n):
        print i
        sleep(1)
    return i
I need to run this function in a separate process without blocking. Is it possible?
Tornado is designed to run all your operations in a single thread, but utilize asynchronous I/O to avoid blocking as much as possible. If the DB you're using has asynchronous Python bindings (ideally ones geared for Tornado specifically, like Motor for MongoDB or momoko for Postgres), then you'll be able to run your DB queries without blocking the server; no separate processes or threads needed.
To address the exact example you gave, where time.sleep(1) is called, you could use this approach to do it asynchronously via tornado coroutines:
#!/usr/bin/python

import tornado.web
from tornado.ioloop import IOLoop
from tornado import gen
import time

@gen.coroutine
def async_sleep(seconds):
    yield gen.Task(IOLoop.instance().add_timeout, time.time() + seconds)

class TestHandler(tornado.web.RequestHandler):
    @gen.coroutine
    def get(self):
        for i in xrange(100):
            print i
            yield async_sleep(1)
        self.write(str(i))
        self.finish()

application = tornado.web.Application([
    (r"/test", TestHandler),
])
application.listen(9999)
IOLoop.instance().start()
The interesting part is async_sleep. That method is creating an asynchronous Task, which is calling the ioloop.add_timeout method. add_timeout will run a specified callback after a given number of seconds, without blocking the ioloop while waiting for the timeout to expire. It expects two arguments:
add_timeout(deadline, callback) # deadline is the number of seconds to wait, callback is the method to call after deadline.
As you can see in the example above, we're only actually providing one parameter to add_timeout explicitly in the code, which means we end up with this:
add_timeout(time.time() + seconds, ???)
We're not providing the expected callback parameter. In fact, when gen.Task executes add_timeout, it appends a callback keyword argument to the end of the explicitly provided parameters. So this:
yield gen.Task(loop.add_timeout, time.time() + seconds)
Results in this being executed inside gen.Task():
loop.add_timeout(time.time() + seconds, callback=gen.Callback(some_unique_key))
When gen.Callback is executed after the timeout, it signals that the gen.Task is complete, and the program execution will continue on to the next line. This flow is kind of difficult to fully understand, at least at first (it certainly was for me when I first read about it). It'll probably be helpful to read over the Tornado gen module documentation a few times.
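As a side note, Tornado 4.1+ ships tornado.gen.sleep(), which makes the async_sleep helper above a one-liner; a minimal equivalent sketch:
from tornado import gen

@gen.coroutine
def async_sleep(seconds):
    # gen.sleep returns a Future that resolves after the timeout,
    # suspending the coroutine without blocking the IOLoop
    yield gen.sleep(seconds)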
import tornado.web
from tornado.ioloop import IOLoop
from tornado import gen
from tornado.concurrent import run_on_executor
from concurrent.futures import ThreadPoolExecutor  # `pip install futures` for python2

MAX_WORKERS = 16

class TestHandler(tornado.web.RequestHandler):
    executor = ThreadPoolExecutor(max_workers=MAX_WORKERS)

    """
    In below function goes your time consuming task
    """
    @run_on_executor
    def background_task(self):
        sm = 0
        for i in range(10 ** 8):
            sm = sm + 1
        return sm

    @tornado.gen.coroutine
    def get(self):
        """ Request that asynchronously calls background task. """
        res = yield self.background_task()
        self.write(str(res))

class TestHandler2(tornado.web.RequestHandler):
    @gen.coroutine
    def get(self):
        self.write('Response from server')
        self.finish()

application = tornado.web.Application([
    (r"/A", TestHandler),
    (r"/B", TestHandler2),
])
application.listen(5000)
IOLoop.instance().start()
When you run the above code, you can run a computationally expensive operation at http://127.0.0.1:5000/A, which does not block execution; you can see this by visiting http://127.0.0.1:5000/B immediately after you visit http://127.0.0.1:5000/A.
Here I update the information about Tornado 5.0. Tornado 5.0 adds a new method, IOLoop.run_in_executor. In the "Calling blocking functions" section of the Coroutine patterns chapter:
The simplest way to call a blocking function from a coroutine is to use IOLoop.run_in_executor, which returns Futures that are compatible with coroutines:
@gen.coroutine
def call_blocking():
    yield IOLoop.current().run_in_executor(None, blocking_func, args)
Also, in the documentation of run_on_executor, it says:
This decorator should not be confused with the similarly-named IOLoop.run_in_executor. In general, using run_in_executor when calling a blocking method is recommended instead of using this decorator when defining a method. If compatibility with older versions of Tornado is required, consider defining an executor and using executor.submit() at the call site.
In version 5.0, IOLoop.run_in_executor is the recommended way of calling blocking functions.
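In the native-coroutine form (Python 3.5+), the same pattern would look roughly like this (blocking_func and args are placeholders, as above):
from tornado.ioloop import IOLoop

async def call_blocking():
    # runs the blocking function in the IOLoop's default executor
    res = await IOLoop.current().run_in_executor(None, blocking_func, args)
    return res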
Python 3.5 introduced the async and await keywords (functions using these keywords are also called “native coroutines”). For compatibility with older versions of Python, you can use “decorated” or “yield-based” coroutines using the tornado.gen.coroutine decorator.
Native coroutines are the recommended form whenever possible. Only use decorated coroutines when compatibility with older versions of Python is required. Examples in the Tornado documentation will generally use the native form.
Translation between the two forms is generally straightforward:
# Decorated:                    # Native:

# Normal function declaration
# with decorator                # "async def" keywords
@gen.coroutine
def a():                        async def a():
    # "yield" all async funcs   # "await" all async funcs
    b = yield c()               b = await c()
    # "return" and "yield"
    # cannot be mixed in
    # Python 2, so raise a
    # special exception.        # Return normally
    raise gen.Return(b)         return b
Other differences between the two forms of coroutine are outlined below.
Native coroutines:
are generally faster.
can use async for and async with statements which make some patterns much simpler.
do not run at all unless you await or yield them. Decorated coroutines can start running “in the background” as soon as they are called. Note that for both kinds of coroutines it is important to use await or yield so that any exceptions have somewhere to go.
Decorated coroutines:
have additional integration with the concurrent.futures package, allowing the result of executor.submit to be yielded directly. For native coroutines, use IOLoop.run_in_executor instead.
support some shorthand for waiting on multiple objects by yielding a list or dict. Use tornado.gen.multi to do this in native coroutines.
can support integration with other packages including Twisted via a registry of conversion functions. To access this functionality in native coroutines, use tornado.gen.convert_yielded.
always return a Future object. Native coroutines return an awaitable object that is not a Future. In Tornado the two are mostly interchangeable.
Worth seeing:
Simplest async/await example
