How can I launch an application which does the following two things:
Expose a REST endpoint via FastAPI.
Run a separate thread infinitely (a RabbitMQ consumer using pika) waiting for requests.
Below is the code through which I am launching the FastAPI server, but when I try to run a thread before that line executes, it says the coroutine was never awaited.
How can both be run in parallel?
Maybe this is not the answer you are looking for, but there is a library called Celery that makes multithreading easy to manage in Python. Check it out:
https://docs.celeryproject.org/en/stable/getting-started/introduction.html
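For context, a minimal Celery sketch of what that would look like; the broker URL and task body here are placeholders, not part of the original question:

import time

from celery import Celery

# RabbitMQ as the broker; the URL is an assumption for illustration.
app = Celery('tasks', broker='amqp://guest@localhost//')

@app.task
def handle_message(payload):
    # Replace with the real per-message work your consumer should do.
    time.sleep(1)
    return f'processed {payload}'

The FastAPI endpoint would then enqueue work with handle_message.delay(payload) instead of talking to pika directly, and a Celery worker process consumes the queue.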
import asyncio
from concurrent.futures import ThreadPoolExecutor

from fastapi import FastAPI
import uvicorn

app = FastAPI()

def run(corofn, *args):
    # Run a coroutine function to completion on a fresh event loop
    # inside a worker thread. Note that uvicorn.run is a plain
    # blocking function, so for that case the corofn(*args) call
    # itself serves requests until shutdown.
    loop = asyncio.new_event_loop()
    try:
        coro = corofn(*args)
        asyncio.set_event_loop(loop)
        return loop.run_until_complete(coro)
    finally:
        loop.close()

async def sleep_forever():
    await asyncio.sleep(1000)

async def main():
    loop = asyncio.get_event_loop()
    executor = ThreadPoolExecutor(max_workers=2)
    futures = [loop.run_in_executor(executor, run, sleep_forever),
               loop.run_in_executor(executor, run, uvicorn.run, app)]
    print(await asyncio.gather(*futures))

if __name__ == '__main__':
    loop = asyncio.get_event_loop()
    loop.run_until_complete(main())
Note: this may hinder your FastAPI performance. A better approach would be to use a Celery task.
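If you do want the exact setup from the question (FastAPI plus a pika consumer) without Celery, a common pattern is to start the blocking consumer in a daemon thread from a FastAPI startup hook. A minimal sketch; the queue name and the localhost broker are assumptions:

import threading

import pika
from fastapi import FastAPI

app = FastAPI()

def consume_forever():
    # Blocking pika consumer; it lives in its own thread so it never
    # touches the asyncio event loop serving the REST endpoints.
    connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
    channel = connection.channel()
    channel.queue_declare(queue='requests')  # assumed queue name

    def on_message(ch, method, properties, body):
        print(f'received: {body!r}')

    channel.basic_consume(queue='requests', on_message_callback=on_message,
                          auto_ack=True)
    channel.start_consuming()

@app.on_event('startup')
def start_consumer():
    threading.Thread(target=consume_forever, daemon=True).start()

Because the consumer is an ordinary thread and never creates a coroutine, this also avoids the "coroutine was never awaited" warning entirely.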
I create a Redis connection pool the following way:
async def create_redis_connection_pool(app) -> aioredis.Redis:
    redis = aioredis.from_url(
        "redis://localhost", encoding="utf-8", decode_responses=True,
        max_connections=10,
    )
    app["redis"] = redis
    try:
        yield
    finally:
        loop = asyncio.get_event_loop()
        await loop.create_task(app["redis"].close())
Then I use the function when I create the aiohttp app:

def init() -> web.Application:
    app = web.Application()
    ...
    app.cleanup_ctx.append(create_redis_connection_pool)
    ...
    return app
When I start the server, make at least one request that uses the Redis pool, and then press Ctrl+C, I get the following warning message:
sys:1: RuntimeWarning: coroutine 'Connection.disconnect' was never awaited
How do I solve this issue and gracefully close the Redis connection pool? I am testing on macOS.
If you're using redis==4.2.0 (from redis import asyncio as aioredis) or later, pass close_connection_pool=True when you call .close():

await app["redis"].close(close_connection_pool=True)

Otherwise, for aioredis==2.0.1 (the latest version as of this answer) or earlier, call .connection_pool.disconnect() after .close():

await app["redis"].close()
await app["redis"].connection_pool.disconnect()
Reference: https://github.com/aio-libs/aioredis-py/pull/1256
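Applied to the cleanup context from the question, the shutdown path would look roughly like this (a sketch for the redis>=4.2.0 API):

from redis import asyncio as aioredis

async def create_redis_connection_pool(app):
    redis = aioredis.from_url(
        "redis://localhost", encoding="utf-8", decode_responses=True,
        max_connections=10,
    )
    app["redis"] = redis
    try:
        yield
    finally:
        # Closing the client together with its pool awaits every
        # Connection.disconnect, so no coroutine is left unawaited.
        await app["redis"].close(close_connection_pool=True)

Note that the explicit loop.create_task wrapper is unnecessary: aiohttp already runs cleanup contexts inside the event loop, so a plain await is enough.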
I am very new to asyncio and I find the EchoServer example very confusing. I am trying to achieve a simple situation where a server accepts multiple clients and handles their data in a coroutine, alongside a UI thread which handles ncurses input. I currently have the following, which I think conveys the idea in code, but it does not work.
import asyncio

async def do_ui():
    await asyncio.sleep(1)
    print('doing')
    await do_ui()

async def run_game():
    loop = asyncio.get_running_loop()
    server = await GameServer.create_server()
    loop.create_task(server.serve_forever())
    loop.create_task(do_ui())
    loop.run_forever()

def run():
    asyncio.run(run_game())
The problem starts in GameServer.create_server, to which I want to delegate creating the server for encapsulation reasons. However, this is an asynchronous action (for some reason) and so has to be awaited. See the server code below:
class GameServer:
    @staticmethod
    async def create_server():
        loop = asyncio.get_running_loop()
        return await loop.create_server(
            lambda: GameServerProtocol(),
            '127.0.0.1', 8888
        )
This forces me to make run_game async as well and await it in the run method, which is my setup.py entry point, so I can't do that. Using the asyncio.run method, however, starts a different event loop, and I am not able to access it anymore.
How do I solve this? And, to vent my frustration, how is this in any way easier than just using threads?
You cannot use loop.run_forever() whilst the event loop is already running. For example, the following code will not work:
import asyncio

async def main():
    loop = asyncio.get_running_loop()
    loop.run_forever()

if __name__ == '__main__':
    asyncio.run(main())
But this code will work, and has the "run forever" behaviour you appear to be looking for:
import asyncio

async def do_ui():
    while True:
        await asyncio.sleep(1)
        print('doing ui')

async def main():
    loop = asyncio.get_running_loop()
    loop.create_task(do_ui())
    # insert game server code here

if __name__ == '__main__':
    loop = asyncio.get_event_loop()
    loop.create_task(main())
    loop.run_forever()
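Alternatively, you can keep your asyncio.run(run_game()) entry point and simply await both long-running coroutines instead of calling run_forever(). A sketch, using GameServer from the question; asyncio.gather keeps the loop alive as long as either coroutine runs:

import asyncio

async def do_ui():
    while True:
        await asyncio.sleep(1)
        print('doing ui')

async def run_game():
    server = await GameServer.create_server()
    # Run the TCP server and the UI loop concurrently on one event loop.
    await asyncio.gather(server.serve_forever(), do_ui())

def run():
    asyncio.run(run_game())

This keeps run() synchronous for the setup.py entry point while still letting create_server be awaited inside a coroutine.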
I created several endpoints that call Celery tasks performing different operations against the DB.
Obviously it doesn't make sense to re-connect to the DB each time,
but on the other hand, when should the connection be closed?
Does it make sense to use an async connection to the DB?
I'm not sure how I can achieve that, or whether async makes sense with Celery at all; I would appreciate any guidance.
import os
import traceback

from celery import Celery
from celery.utils.log import get_task_logger

from config.config import *

app = Celery('proj',
             broker=config('CELERY_BROKER_URL'),
             backend=config('CELERY_RESULT_BACKEND'),
             include=['proj.tasks', 'proj.fetch_data'])

app.conf.update(
    result_expires=3600,
)

app.autodiscover_tasks()

if __name__ == '__main__':
    app.start()
I came across the worker_process_init and worker_process_shutdown signals.
Note: database_ps_ms_stg is based on the Databases library (async connections to Postgres).
tasks.py
from .celery import app
from celery.signals import worker_process_init, worker_process_shutdown, task_postrun
from config.db import database_ps_ms_stg
import asyncio

@worker_process_init.connect
async def init_worker(**kwargs):
    if not database_ps_ms_stg.is_connected:
        await database_ps_ms_stg.connect()
        print("connected to database_ps_ms_stg")

@worker_process_shutdown.connect
async def shutdown_worker(**kwargs):
    if database_ps_ms_stg.is_connected:
        await database_ps_ms_stg.disconnect()
        print("disconnecting from database_ps_ms_stg")
Getting:
[2021-07-18 16:23:16,951: WARNING/ForkPoolWorker-1] /usr/local/lib/python3.8/site-packages/celery/concurrency/prefork.py:77: RuntimeWarning: coroutine 'init_worker' was never awaited
signals.worker_process_init.send(sender=None)
Your coroutines are not being scheduled for execution in any event loop.
For example, this code
@worker_process_init.connect
async def init_worker(**kwargs):
    if not database_ps_ms_stg.is_connected:
        await database_ps_ms_stg.connect()
        print("connected to database_ps_ms_stg")
just creates a coroutine object when worker_process_init fires, but does nothing with this object afterwards.
One possible solution is to wrap your coroutines in a scheduler decorator that runs them to completion on an event loop (note the original snippet called loop.ensure_future, which does not exist; loop.create_task is what actually schedules the coroutine):

def async2sync(func):
    def wrapper(*args, **kwargs):
        loop = asyncio.get_event_loop()
        task = loop.create_task(func(*args, **kwargs))
        task.add_done_callback(lambda f: loop.stop())
        loop.run_forever()
        try:
            return task.result()
        except asyncio.CancelledError:
            pass
    return wrapper
...
@worker_process_init.connect
@async2sync
async def init_worker(**kwargs):
    if not database_ps_ms_stg.is_connected:
        await database_ps_ms_stg.connect()
        print("connected to database_ps_ms_stg")
Please check whether this answers some of your questions. In my opinion it's not worth it; better to just use blocking connectors, as long as your code is intended to be used with Celery.
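For the blocking-connector route, a minimal sketch with psycopg2; the DSN is a placeholder, and the pattern is one connection per worker process, managed by the same Celery signals:

import psycopg2
from celery.signals import worker_process_init, worker_process_shutdown

db_conn = None

@worker_process_init.connect
def init_worker(**kwargs):
    global db_conn
    # One blocking connection per prefork worker process (placeholder DSN).
    db_conn = psycopg2.connect("postgresql://user:password@localhost/stg")

@worker_process_shutdown.connect
def shutdown_worker(**kwargs):
    if db_conn is not None:
        db_conn.close()

Because the signal handlers are plain functions, Celery can call them directly and no event loop is needed.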
My main event loop uses asyncio, but it needs to call a library method that is a Tornado coroutine returning a tornado.concurrent.Future. Attempting to await the coroutine fails with a RuntimeError.
RuntimeError: Task got bad yield: <tornado.concurrent.Future object at 0x7f374abdbef0>
Documentation and searches have suggested upgrading the version of Tornado (currently 4.5) or using the method tornado.platform.asyncio.to_asyncio_future, which no longer produces a RuntimeError but instead just hangs on await. I'm curious to know if someone can explain what is happening. There are two main methods below: one where asyncio calls a Tornado coroutine, and another that is purely Tornado and works as expected.
import asyncio

from tornado import gen
from tornado.platform.asyncio import to_asyncio_future

async def coro_wrap():
    tornado_fut = coro()
    print(f'tornado_fut = {tornado_fut}, type({type(tornado_fut)})')
    async_fut = to_asyncio_future(tornado_fut)
    print(f'async_fut = {async_fut}')
    res = await async_fut
    print(f'done => {res}')

@gen.coroutine
def coro():
    print('coro start')
    yield gen.sleep(3)
    print('coro end')
    return 'my result'

def main():
    loop = asyncio.get_event_loop()
    task = loop.create_task(coro_wrap())
    loop.run_until_complete(task)
    print('end')

def main2():
    from tornado import ioloop
    loop = ioloop.IOLoop()
    res = loop.run_sync(coro)
    print(res)

if __name__ == '__main__':
    main()
Output from main
coro start
tornado_fut = <tornado.concurrent.Future object at 0x7f41493f1f28>, type(<class 'tornado.concurrent.Future'>)
async_fut = <Future pending>
Output from main2
coro start
coro end
my result
In new versions of Tornado, this just works.
In old versions of Tornado, you must both use to_asyncio_future and, at startup, call tornado.platform.asyncio.AsyncIOMainLoop.install().
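For Tornado 4.5 that means something like this sketch of main(); install() must run before any IOLoop is created, so Tornado coroutines are scheduled on the asyncio loop rather than on a separate, never-started Tornado loop (which is why the await appeared to hang):

import asyncio
from tornado.platform.asyncio import AsyncIOMainLoop

def main():
    # Bridge Tornado onto the asyncio event loop (Tornado 5+ does
    # this by default, so the call is unnecessary there).
    AsyncIOMainLoop.install()
    loop = asyncio.get_event_loop()
    loop.run_until_complete(coro_wrap())
    print('end')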
I have worked through most of the examples but am still learning async in Python. I am having trouble understanding why this code will not print "i am async!".
import asyncio
from threading import Thread

async def cor1():
    print("i am async!")

def myasync(loop):
    print("Async running")
    loop.run_forever()
    print("Async ended?")

def main():
    this_threads_event_loop = asyncio.get_event_loop()
    t_1 = Thread(target=myasync, args=(this_threads_event_loop,))
    t_1.start()
    print("beginning the async loop")
    t1 = this_threads_event_loop.create_task(cor1())
    print("Finished cor1")

main()
Your code attempts to submit tasks to the event loop from a different thread. To do that, you must use run_coroutine_threadsafe:
def main():
    loop = asyncio.get_event_loop()
    # start the event loop in a separate thread
    t_1 = Thread(target=myasync, args=(loop,))
    t_1.start()
    # submit the coroutine to the event loop running in the
    # other thread
    f1 = asyncio.run_coroutine_threadsafe(cor1(), loop)
    # wait for the coroutine to finish, by asking for its result
    f1.result()
    print("Finished cor1")
Please note that combining asyncio and threads should only be done in special circumstances, such as when introducing asyncio to a legacy application where the new functionality needs to be added gradually. If you are writing new code, you almost certainly want main to be a coroutine, run from the top level using asyncio.run(main()).
To run a legacy synchronous function from asyncio code, you can always use run_in_executor.
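For example, a minimal sketch of that shape; the blocking legacy_fetch function is hypothetical:

import asyncio
import time

def legacy_fetch():
    # Stand-in for a blocking legacy function.
    time.sleep(1)
    return "data"

async def main():
    loop = asyncio.get_running_loop()
    # Run the blocking call in the default thread pool without
    # blocking the event loop.
    result = await loop.run_in_executor(None, legacy_fetch)
    print(result)

asyncio.run(main())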