I am writing a server that must handle asynchronous tasks. I'd rather stick to asyncio for the asynchronous code, so I chose to use the Quart[-OpenAPI] framework with Uvicorn.
Now, I need to run a task (master.resume() in the code below) when the server is starting up, without waiting for it to finish, that is, firing and forgetting it.
I'm not sure if it's even possible with asyncio, as I cannot await this task, but if I don't I get a "coroutine X was never awaited" error. Using loop.run_until_complete() as suggested in this answer would block the server until the task completes.
Here's a skeleton of the code that I have:
import asyncio
from quart_openapi import Pint, Resource

app = Pint(__name__, title="Test")

class Master:
    ...
    async def resume(self):
        await coro()

    async def handle_get(self):
        ...

@app.route("/test")
class TestResource(Resource):
    async def get(self):
        print("Received get")
        asyncio.create_task(master.handle_get())
        return {"message": "Request received"}

master = Master()

# How do I fire & forget master.resume()?
master.resume()  # <== This throws "RuntimeWarning: coroutine 'Master.resume' was never awaited"
asyncio.get_event_loop().run_until_complete(master.resume())  # <== This prevents the server from starting
Should this not be achievable with asyncio/Quart, what would be the proper way to do it?
It is possible, see these docs; in summary:
@app.before_serving
async def startup():
    asyncio.ensure_future(master.resume())
I'd hold on to the task though, so that you can cancel it at shutdown,
@app.before_serving
async def startup():
    app.background_task = asyncio.ensure_future(master.resume())

@app.after_serving
async def shutdown():
    app.background_task.cancel()  # Or something similar
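If a clean shutdown is also desired, a rough variant of the shutdown hook above (my sketch, not from the original answer) awaits the cancelled task and suppresses the CancelledError it raises:

import contextlib

@app.after_serving
async def shutdown():
    app.background_task.cancel()
    # Awaiting the cancelled task lets it run any cleanup code;
    # the CancelledError it raises is expected, so suppress it.
    with contextlib.suppress(asyncio.CancelledError):
        await app.background_task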
I am creating a server which needs to make an external request while responding. To handle concurrent requests I'm using Python's asyncio library. I have followed some examples from the standard library. It seems, however, that some of my tasks are destroyed, printing Task was destroyed but it is pending! to my terminal. After some debugging and research I found a Stack Overflow answer which seemed to explain why.
I have created a minimal example demonstrating this effect below. My question is: in what way should one counteract this effect? Storing a hard reference to the task, for example by storing asyncio.current_task() in a global variable, mitigates the issue. It also seems to work fine if I wrap the future remote_read.read() as await asyncio.wait_for(remote_read.read(), 5). However, I do feel like these solutions are ugly.
# run and visit http://localhost:8080/ in your browser
import asyncio
import gc

async def client_connected_cb(reader, writer):
    remote_read, remote_write = await asyncio.open_connection("google.com", 443, ssl=True)
    await remote_read.read()

async def cleanup():
    while True:
        gc.collect()
        await asyncio.sleep(1)

async def main():
    server = await asyncio.start_server(client_connected_cb, "localhost", 8080)
    await asyncio.gather(server.serve_forever(), cleanup())

asyncio.run(main())
I am running Python 3.10 on macOS 10.15.7.
It looks like, for the time being, the only way is indeed to keep a reference manually.
Maybe a decorator is more convenient than having to add the code manually in each async function. I opted for a class design, so that a class attribute can hold the hard references while the tasks run. (A local variable in the wrapper function would be part of the task-reference cycle, and garbage collection would trigger all the same):
# run and visit http://localhost:8080/ in your browser
import asyncio
import gc
from functools import wraps
import weakref

class Shielded:
    registry = set()

    def __init__(self, func):
        self.func = func

    async def __call__(self, *args, **kw):
        self.registry.add(task := asyncio.current_task())
        try:
            result = await self.func(*args, **kw)
        finally:
            self.registry.remove(task)
        return result

def _Shielded(func):
    # Used along with the print sequence to assert the task was actually destroyed without commenting
    @wraps(func)
    async def wrapper(*args, **kwargs):
        ref = weakref.finalize(asyncio.current_task(), lambda: print("task destroyed"))
        return await func(*args, **kwargs)
    return wrapper

@Shielded
async def client_connected_cb(reader, writer):
    print("at task start")
    # registry.append(asyncio.current_task())
    # I've connected this to a socket in an interactive session, so I can explicitly .close() it for debugging:
    remote_read, remote_write = await asyncio.open_connection("localhost", 8060, ssl=False)
    print("commencing remote read")
    await remote_read.read()
    print("task complete")

async def cleanup():
    while True:
        gc.collect()
        await asyncio.sleep(1)

async def main():
    server = await asyncio.start_server(client_connected_cb, "localhost", 8080)
    await asyncio.gather(server.serve_forever(), cleanup())

asyncio.run(main())
Moreover, I wanted to "really see it", so I created a "fake" _Shielded decorator that just logs something when the underlying task gets deleted: "task complete" is indeed never printed with it.
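For reference, the pattern the asyncio documentation suggests for tasks created with asyncio.create_task is the same idea: keep the task in a collection that holds a hard reference and discard it when the task finishes. A minimal sketch (the names are illustrative):

import asyncio

background_tasks = set()

def spawn(coro):
    """Create a task and hold a hard reference to it until it finishes."""
    task = asyncio.create_task(coro)
    background_tasks.add(task)                        # hard reference keeps it alive
    task.add_done_callback(background_tasks.discard)  # drop the reference when done
    return task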
I have a program that basically does two things: it opens a websocket and stays listening for messages, and it starts a video stream in a forever loop.
I was trying to use multiprocessing to manage both things, but one piece stops the other from running.
The app is:
if __name__ == '__main__':
    loop = asyncio.get_event_loop()
    loop.run_until_complete(start_client())

async def start_client():
    async with WSClient() as client:
        pass

class WSClient:
    async def __aenter__(self):
        async with websockets.connect(url, max_size=None) as websocket:
            self.websocket = websocket
            await self.on_open()  ## it goes
            p = Process(target=init(self))  # This is the streaming method
            p.start()
            async for message in websocket:
                await on_message(message, websocket)  ## listen for websocket messages
            return self
The init method is:
def init(ws):
    logging.info('Firmware Version: ' + getVersion())
    startStreaming(ws)
    return
Basically, startStreaming has an infinite loop in it.
In this configuration, the stream starts, but the websocket's on_message is never called because the Process call freezes the rest of the application.
How can I run both methods?
Thanks
In your code, target=init(self) calls init immediately and hands its return value to multiprocessing.Process as the target. What you want is for the process to call init itself (with an argument). Here's how you can do that:
p = Process(target=init, args=(self,))
I have to note that you're passing an asynchronous websocket object to your init function. This will likely break, as asyncio objects usually aren't meant to be used from two threads, let alone two processes. Unless you're somehow recreating the websocket object in the new process and making a new event loop there too, what you're actually looking for is how to create an asyncio task.
Assuming startStreaming is already an async function, you should change the init function to this:
async def init(ws):  # note the async
    logging.info('Firmware Version: ' + getVersion())
    await startStreaming(ws)  # note the await
    return
and change the line creating and starting the process to this:
asyncio.create_task(init(self))
This will run your startStreaming function in a new task while you also read incoming messages at (basically) the same time.
Also, I'm not sure what you're trying to do with the async context manager, as everything could just be in a normal async function. If you're interested in using one for learning purposes, I'd suggest checking out contextlib.asynccontextmanager and having your message-reading code inside the async with statement in start_client rather than inside __aenter__; a rough sketch of that structure follows.
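This sketch is mine, not the poster's code; it reuses the question's url, init, and on_message, and assumes init has been made async as shown above:

import asyncio
from contextlib import asynccontextmanager

import websockets

@asynccontextmanager
async def ws_client():
    async with websockets.connect(url, max_size=None) as websocket:
        yield websocket

async def start_client():
    async with ws_client() as websocket:
        streaming = asyncio.create_task(init(websocket))  # keep a reference to the streaming task
        async for message in websocket:
            await on_message(message, websocket)          # keep listening for messages
        streaming.cancel()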
I want to run a simple background task in FastAPI, which involves some computation before dumping it into the database. However, the computation would block the server from receiving any more requests.
from fastapi import BackgroundTasks, FastAPI

app = FastAPI()
db = Database()

async def task(data):
    otherdata = await db.fetch("some sql")
    newdata = somelongcomputation(data, otherdata)  # this blocks other requests
    await db.execute("some sql", newdata)

@app.post("/profile")
async def profile(data: Data, background_tasks: BackgroundTasks):
    background_tasks.add_task(task, data)
    return {}
What is the best way to solve this issue?
Your task is defined as async, which means FastAPI (or rather Starlette) will run it in the asyncio event loop.
And because somelongcomputation is synchronous (i.e. not waiting on some IO, but doing computation), it will block the event loop for as long as it is running.
I see a few ways of solving this:
Use more workers (e.g. uvicorn main:app --workers 4). This will allow up to 4 somelongcomputation calls to run in parallel.
Rewrite your task so it is not async (i.e. define it as def task(data): ... etc.). Then Starlette will run it in a separate thread.
Use fastapi.concurrency.run_in_threadpool, which will also run it in a separate thread. Like so:
from fastapi.concurrency import run_in_threadpool

async def task(data):
    otherdata = await db.fetch("some sql")
    newdata = await run_in_threadpool(lambda: somelongcomputation(data, otherdata))
    await db.execute("some sql", newdata)
Or use asyncio's run_in_executor directly (which run_in_threadpool uses under the hood):
import asyncio

async def task(data):
    otherdata = await db.fetch("some sql")
    loop = asyncio.get_running_loop()
    newdata = await loop.run_in_executor(None, lambda: somelongcomputation(data, otherdata))
    await db.execute("some sql", newdata)
You could even pass in a concurrent.futures.ProcessPoolExecutor as the first argument to run_in_executor to run it in a separate process (see the sketch after this list).
Spawn a separate thread / process yourself. E.g. using concurrent.futures.
Use something more heavy-handed like celery. (Also mentioned in the fastapi docs here).
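A rough sketch of the ProcessPoolExecutor variant mentioned above (my sketch, assuming the same db and somelongcomputation as in the question; arguments are pickled and sent to the worker process, so the lambda is dropped):

import asyncio
from concurrent.futures import ProcessPoolExecutor

process_pool = ProcessPoolExecutor()  # create once and reuse, e.g. at application startup

async def task(data):
    otherdata = await db.fetch("some sql")
    loop = asyncio.get_running_loop()
    # somelongcomputation runs in a separate process, so it cannot block the event loop
    newdata = await loop.run_in_executor(process_pool, somelongcomputation, data, otherdata)
    await db.execute("some sql", newdata)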
If your task is CPU-bound you could use multiprocessing; there is a way to do that with a background task in FastAPI:
https://stackoverflow.com/a/63171013
Although you should consider using something like Celery if there are a lot of CPU-heavy tasks.
Read this issue.
Also in the example below, my_model.function_b could be any blocking function or process.
TL;DR
from starlette.concurrency import run_in_threadpool

@app.get("/long_answer")
async def long_answer():
    rst = await run_in_threadpool(my_model.function_b, arg_1, arg_2)
    return rst
This is an example of a background task in FastAPI:
from fastapi import FastAPI
import asyncio

app = FastAPI()

x = [1]  # a global variable x

@app.get("/")
def hello():
    return {"message": "hello", "x": x}

async def periodic():
    while True:
        # code to run periodically starts here
        x[0] += 1
        print(f"x is now {x}")
        # code to run periodically ends here
        # sleep for 3 seconds after running above code
        await asyncio.sleep(3)

@app.on_event("startup")
async def schedule_periodic():
    loop = asyncio.get_event_loop()
    loop.create_task(periodic())

if __name__ == "__main__":
    import uvicorn
    uvicorn.run(app)
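As discussed earlier in this thread, it can be worth keeping a hard reference to the created task so it cannot be garbage-collected and can be cancelled at shutdown. A rough variant of the startup hook above (my sketch; the periodic_task name is just for illustration):

periodic_task = None  # module-level hard reference to the background task

@app.on_event("startup")
async def schedule_periodic():
    global periodic_task
    periodic_task = asyncio.create_task(periodic())

@app.on_event("shutdown")
async def cancel_periodic():
    if periodic_task is not None:
        periodic_task.cancel()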
How can I run this example through asyncio? The error I got on startup:
name col_snapshot is not defined
sample:
async def on_snapshot(col_snapshot, changes, read_time):
    logger.debug('Received updates')

col_query = db.collection(u'cities')
query_watch = col_query.on_snapshot(await on_snapshot(col_snapshot, changes, read_time))
Update: Async callbacks example
import os
import asyncio
from loguru import logger
from google.cloud.firestore import Client

os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = 'E:/Python/listener_firebase/creds.json'

async def callback(col_snapshot, changes, read_time):
    logger.debug('Received updates')
    for change in changes:
        if change.type.name == 'ADDED':
            logger.debug(f'New document: {change.document.id}')
            await asyncio.sleep(1)
    logger.debug('Finished handling the updates')

Client().collection('cities').on_snapshot(callback)

while True:
    pass
This doesn't work for me:
tch.py:568: RuntimeWarning: coroutine 'callback' was never awaited
self._snapshot_callback(keys, appliedChanges, read_time)
RuntimeWarning: Enable tracemalloc to get the object allocation traceback
As it's written, your code shouldn't use asyncio at all. If, however, await asyncio.sleep(1) is there to simulate some I/O-heavy code that takes advantage of asyncio, your options here aren't great. The first solution would be to use an asyncio-aware client for Firestore; I have no idea if one exists. Assuming one doesn't, and, as you say, you need to use an event loop that is already running, you need to define a synchronous function (since that's what needs to be passed to the client's on_snapshot) that schedules your asynchronous callback on the event loop.
import os
import asyncio
from loguru import logger
from google.cloud.firestore import Client

os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = 'E:/Python/listener_firebase/creds.json'

async def asynchronous_callback(changes):
    logger.debug('Received updates')
    for change in changes:
        if change.type.name == 'ADDED':
            logger.debug(f'New document: {change.document.id}')
            await asyncio.sleep(1)
    logger.debug('Finished handling the updates')

def synchronous_callback(col_snapshot, changes, read_time):
    asyncio.create_task(asynchronous_callback(changes))

Client().collection('cities').on_snapshot(synchronous_callback)

while True:
    pass
Really, though, your best option is not to use asyncio for your callback since the Firestore client is already running your code synchronously.
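One caveat worth adding (my note, not part of the original answer): asyncio.create_task only works when called from the thread that is running the event loop. If the Firestore client invokes the callback from a background thread while your loop runs elsewhere, asyncio.run_coroutine_threadsafe is the thread-safe way to hand the coroutine over. A rough sketch, where loop is assumed to be the already-running event loop mentioned in the question:

def synchronous_callback(col_snapshot, changes, read_time):
    # Schedule the coroutine on a loop that runs in another thread.
    asyncio.run_coroutine_threadsafe(asynchronous_callback(changes), loop)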
I'm setting up a listener for Hyperledger Sawtooth events with a pyzmq DEALER socket and the provided asyncio functionality. Currently futures are returned but only sometimes finished, even though messages are sent to the socket.
Weirdly, this works for the connection message (but only when sleeping before it, as shown below), yet not for event messages. I already implemented this with JavaScript and it works without problems. It seems that the issue does not lie with Sawtooth but rather in pyzmq's asyncio implementation or in my code.
class EventListener:
    def __init__(self):
        ...
        ctx = Context.instance()
        self._socket = ctx.socket(zmq.DEALER)
        self._socket.connect("tcp://127.0.0.1:4004")

    async def subscribe(self):
        ...
        await self._socket.send_multipart([connection_msg])

    async def receive(self):
        # Necessary wait, otherwise the future is never finished
        await asyncio.sleep(0.1)
        resp = await self._socket.recv_multipart()
        handle_response(resp)

    async def listen(self):
        while True:
            # here sleep is not helping
            # await asyncio.sleep(0.1)
            # following await is never finished
            resp = await self._socket.recv_multipart()
            handle_response(resp)

...
listener = listener.EventListener()
await asyncio.gather(
    listener.receive(), listener.subscribe())
await asyncio.create_task(listener.listen())
...
Debugging shows that the returned Future object never changes from a pending to a finished state. So, is my code incorrect, do I need to await messages differently, or is it possible that something is wrong with pyzmq's asyncio functionality? Also, why do I need to sleep in receive()? Isn't that why we have asyncio?
There are too many questions here, so this answer may not address all of them. Hopefully it will at least help others looking for a way to set up event listeners.
The Hyperledger Sawtooth Python SDK provides an option for clients to subscribe to events. The part of the SDK that does what you're trying to do can be found at https://github.com/hyperledger/sawtooth-sdk-python/blob/master/sawtooth_sdk/messaging/stream.py
Example code that uses the Hyperledger Sawtooth Python SDK for event subscription can be found here: https://github.com/danintel/sawtooth-cookiejar/blob/master/events/events_client.py
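As a general illustration (my sketch, not taken from the SDK), a pyzmq DEALER socket used with asyncio is typically created from zmq.asyncio.Context and awaited directly in a receive loop, with the subscription message sent before listening starts; connection_msg and handle_response stand in for the question's own helpers:

import asyncio
import zmq
import zmq.asyncio

async def listen(url, connection_msg):
    ctx = zmq.asyncio.Context.instance()
    socket = ctx.socket(zmq.DEALER)
    socket.connect(url)

    # Send the subscription request first, then await replies on the same socket.
    await socket.send_multipart([connection_msg])
    while True:
        resp = await socket.recv_multipart()
        handle_response(resp)  # the question's own handler

asyncio.run(listen("tcp://127.0.0.1:4004", connection_msg))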