I create the Redis pool the following way:
async def create_redis_connection_pool(app) -> aioredis.Redis:
    redis = aioredis.from_url(
        "redis://localhost", encoding="utf-8", decode_responses=True, max_connections=10,
    )
    app["redis"] = redis
    try:
        yield
    finally:
        loop = asyncio.get_event_loop()
        await loop.create_task(app["redis"].close())
Then I use this function when creating the aiohttp app:
def init() -> web.Application:
    app = web.Application()
    ...
    app.cleanup_ctx.append(create_redis_connection_pool)
    ...
    return app
When I start the server, make at least one request that uses the Redis pool, and then press Ctrl+C, I get the following warning message:
sys:1: RuntimeWarning: coroutine 'Connection.disconnect' was never awaited
How do I solve this issue and gracefully close the Redis connection pool? I'm testing on macOS.
If you're using redis==4.2.0 (from redis import asyncio as aioredis) or later,
pass close_connection_pool=True when you call .close():
await app["redis"].close(close_connection_pool=True)
Otherwise, for aioredis==2.0.1 (latest version as of this answer) or earlier,
call .connection_pool.disconnect() after .close():
await app["redis"].close()
await app["redis"].connection_pool.disconnect()
Reference: https://github.com/aio-libs/aioredis-py/pull/1256
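For completeness, here is roughly what the question's cleanup context looks like with the second fix applied (a sketch, assuming aioredis==2.0.1 and aiohttp; not part of the original answer):

import aioredis
from aiohttp import web


async def create_redis_connection_pool(app: web.Application):
    # Create the client/pool on startup and store it on the app.
    redis = aioredis.from_url(
        "redis://localhost", encoding="utf-8", decode_responses=True, max_connections=10,
    )
    app["redis"] = redis
    try:
        yield
    finally:
        # On cleanup, close the client and then explicitly disconnect the underlying
        # pool so no 'Connection.disconnect' coroutine is left unawaited.
        await app["redis"].close()
        await app["redis"].connection_pool.disconnect()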
How can I launch an application which does the two things below:
Expose a REST endpoint via FastAPI.
Run a separate thread infinitely (a RabbitMQ consumer using pika) waiting for requests.
Below is the code through which I am launching the FastAPI server. But when I try to run a thread before the line below executes, it says the coroutine was never awaited.
How can both be run in parallel?
Maybe this is not the answer you are looking for.
There is a library called Celery that makes this kind of background work easy to manage in Python.
Check it out:
https://docs.celeryproject.org/en/stable/getting-started/introduction.html
import asyncio
from concurrent.futures import ThreadPoolExecutor

from fastapi import FastAPI
import uvicorn

app = FastAPI()


def run(corofn, *args):
    loop = asyncio.new_event_loop()
    try:
        coro = corofn(*args)
        asyncio.set_event_loop(loop)
        return loop.run_until_complete(coro)
    finally:
        loop.close()


async def sleep_forever():
    await asyncio.sleep(1000)


async def main():
    loop = asyncio.get_event_loop()
    executor = ThreadPoolExecutor(max_workers=2)
    futures = [loop.run_in_executor(executor, run, sleep_forever),
               loop.run_in_executor(executor, run, uvicorn.run, app)]
    print(await asyncio.gather(*futures))


if __name__ == '__main__':
    loop = asyncio.get_event_loop()
    loop.run_until_complete(main())
Note: this may hinder your FastAPI performance. A better approach would be to use a Celery task.
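For illustration, a minimal sketch of that Celery-based approach might look like the following (the broker URL, task body, and endpoint are assumptions, not part of the original question):

from celery import Celery
from fastapi import FastAPI

# Hypothetical Celery app; the broker URL is an assumption.
celery_app = Celery("worker", broker="amqp://guest@localhost//")


@celery_app.task
def handle_message(payload: dict) -> str:
    # Long-running or blocking work runs in the Celery worker process,
    # not inside the FastAPI event loop.
    return f"processed {payload}"


app = FastAPI()


@app.post("/process")
async def process(payload: dict):
    # The endpoint only enqueues the work and returns immediately.
    result = handle_message.delay(payload)
    return {"task_id": result.id}

The API server and the worker then run as separate processes, for example uvicorn main:app and celery -A main worker.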
I created several endpoints calling Celery tasks which perform different operations against the DB.
Obviously it doesn't make sense to reconnect to the DB each time,
but on the other hand, when should the connection be closed?
Does it make sense to use an async connection to the DB?
I'm not sure how I can achieve that, or whether it makes sense to use async with Celery; I would appreciate any guidance.
import os
import traceback

from celery import Celery
from celery.utils.log import get_task_logger

from config.config import *

app = Celery('proj',
             broker=config('CELERY_BROKER_URL'),
             backend=config('CELERY_RESULT_BACKEND'),
             include=['proj.tasks', 'proj.fetch_data'])

app.conf.update(
    result_expires=3600,
)

app.autodiscover_tasks()

if __name__ == '__main__':
    app.start()
I came across the worker_process_init and worker_process_shutdown signals.
Note: database_ps_ms_stg is based on the databases library (async access to Postgres).
tasks.py
from .celery import app
from celery.signals import worker_process_init, worker_process_shutdown, task_postrun
from config.db import database_ps_ms_stg
import asyncio


@worker_process_init.connect
async def init_worker(**kwargs):
    if not database_ps_ms_stg.is_connected:
        await database_ps_ms_stg.connect()
        print("connected to database_ps_ms_stg")


@worker_process_shutdown.connect
async def shutdown_worker(**kwargs):
    if database_ps_ms_stg.is_connected:
        await database_ps_ms_stg.disconnect()
        print("disconnecting from database_ps_ms_stg")
Getting:
[2021-07-18 16:23:16,951: WARNING/ForkPoolWorker-1] /usr/local/lib/python3.8/site-packages/celery/concurrency/prefork.py:77: RuntimeWarning: coroutine 'init_worker' was never awaited
signals.worker_process_init.send(sender=None)
Your coroutines are not being scheduled for execution in any event loop.
For example, this code
@worker_process_init.connect
async def init_worker(**kwargs):
    if not database_ps_ms_stg.is_connected:
        await database_ps_ms_stg.connect()
        print("connected to database_ps_ms_stg")
just creates a coroutine object when worker_process_init fires, but does nothing with this object afterwards.
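To see the effect in isolation, a tiny standalone illustration (not part of the original code):

import asyncio


async def init_worker(**kwargs):
    print("connecting...")


# Calling an async function does not run it; it only creates a coroutine object.
# This is effectively what the Celery signal dispatcher does with the handler above.
coro = init_worker()
print(coro)  # <coroutine object init_worker at 0x...>

# The body only runs once something awaits or schedules the coroutine:
asyncio.run(init_worker())

# If it is never awaited or scheduled, Python eventually warns:
# RuntimeWarning: coroutine 'init_worker' was never awaited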
One possible solution is to wrap your coroutines with a kind of scheduler decorator, which will start them:
def async2sync(func):
    def wrapper(*args, **kwargs):
        loop = asyncio.get_event_loop()
        task = loop.create_task(func(*args, **kwargs))
        task.add_done_callback(lambda f: loop.stop())
        loop.run_forever()
        try:
            return task.result()
        except asyncio.CancelledError:
            pass
    return wrapper
...
@worker_process_init.connect
@async2sync
async def init_worker(**kwargs):
    if not database_ps_ms_stg.is_connected:
        await database_ps_ms_stg.connect()
        print("connected to database_ps_ms_stg")
Please check whether this answers some of your questions. In my opinion it's not worth it; better to just use blocking connectors, as long as your code is intended to be used with Celery.
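For example, a blocking-connector version of the same handlers might look roughly like this (a sketch using psycopg2; the DSN and names are assumptions):

import psycopg2
from celery.signals import worker_process_init, worker_process_shutdown

db_conn = None  # one blocking connection per worker process


@worker_process_init.connect
def init_worker(**kwargs):
    global db_conn
    # Hypothetical DSN; replace with your real connection settings.
    db_conn = psycopg2.connect("postgresql://user:password@localhost/ps_ms_stg")
    print("connected to the database")


@worker_process_shutdown.connect
def shutdown_worker(**kwargs):
    if db_conn is not None:
        db_conn.close()
        print("disconnected from the database")

Because the handlers are plain synchronous functions, the signal dispatcher can call them directly and no coroutine is left unawaited.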
I have a class that contains a function that I would like to be able to invoke by calling a flask-restful endpoint. Is there a way to define an asynchronous function that would await/subscribe to this endpoint being called? I can make changes to the Flask app (but can't switch to SocketIO), or write some sort of async requests function, if required. I can only work with the base Anaconda 3.7 library, and I don't have any additional message brokers installed or available.
class DaemonProcess:
    def __init__(self):
        pass

    async def await_signal(self):
        signal = await ...  # somehow await the endpoint at http://ip123/signal being called
        self.process_signal(signal)  # do stuff with signal
For context, this isn't the main objective of the process. I simply want to be able to tell my process, remotely or via a UI, to shut down worker processes either gracefully or forcefully. The only other idea I came up with is polling a database table repeatedly to see whether a signal has been inserted, but time is of the essence and that would require polling at too-short intervals in my opinion, so an asynchronous approach would be preferred. The database would be SQLite3, and it doesn't appear to support update_hook callbacks.
Here's a sample pattern to send a signal and process it:
import asyncio
import aiotools


class DaemonProcess:

    async def process(reader, writer):
        data = await reader.read(100)
        writer.write(data)
        print(f"We got a message {data} - time to do something about it.")
        await writer.drain()
        writer.close()

    @aiotools.server
    async def worker(loop, pidx, args):
        server = await asyncio.start_server(DaemonProcess.process, '127.0.0.1', 8888,
                                            reuse_port=True, loop=loop)
        print(f'[{pidx}] started')
        yield  # wait until terminated
        server.close()
        await server.wait_closed()
        print(f'[{pidx}] terminated')

    def start(self):
        aiotools.start_server(DaemonProcess.worker, num_workers=4)


if __name__ == '__main__':
    # Run the above server using 4 worker processes.
    d = DaemonProcess()
    d.start()
If you save it in a file, for example process.py, you should be able to start it:
python3 process.py
Now, once you have this daemon in the background, you should be able to ping it (see a sample client below):
import asyncio


async def tcp_echo_client(message):
    reader, writer = await asyncio.open_connection('127.0.0.1', 8888)

    print(f'Send: {message!r}')
    writer.write(message.encode())
    await writer.drain()

    data = await reader.read(100)
    print(f'Received: {data.decode()!r}')

    print('Close the connection')
    writer.close()
    await writer.wait_closed()
And now, somewhere in your Flask view, you should be able to invoke:
asyncio.run(tcp_echo_client('I want my daemon to do something for me'))
Notice this all uses localhost (127.0.0.1) and port 8888, so those need to be available; if you have your own IPs and ports, you'll need to configure them accordingly.
Also notice the use of aiotools, a module providing a set of common asyncio patterns (daemons, etc.).
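Put together, a minimal Flask view wiring this up might look like the following sketch (the route, module name, and message are assumptions):

import asyncio

from flask import Flask

from client import tcp_echo_client  # hypothetical module holding the client shown above

app = Flask(__name__)


@app.route("/shutdown", methods=["POST"])
def shutdown():
    # Run the async client to completion inside this synchronous view.
    asyncio.run(tcp_echo_client("please shut down the workers"))
    return {"status": "signal sent"}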
My main event loop uses asyncio but needs to call a library method that is a Tornado coroutine returning a tornado.concurrent.Future. Attempting to await the coroutine fails with a RuntimeError.
RuntimeError: Task got bad yield: <tornado.concurrent.Future object at 0x7f374abdbef0>
Documentation and searches have suggested upgrading the version of Tornado (currently using 4.5) or using the method tornado.platform.asyncio.to_asyncio_future, which no longer produces a RuntimeError but instead just hangs on the await. I'm curious to know if someone can explain what is happening. There are two main methods: one with asyncio calling a Tornado coroutine, and another that is purely Tornado and works as expected.
import asyncio

from tornado import gen
from tornado.platform.asyncio import to_asyncio_future


async def coro_wrap():
    tornado_fut = coro()
    print(f'tornado_fut = {tornado_fut}, type({type(tornado_fut)})')
    async_fut = to_asyncio_future(tornado_fut)
    print(f'async_fut = {async_fut}')
    res = await async_fut
    print(f'done => {res}')


@gen.coroutine
def coro():
    print('coro start')
    yield gen.sleep(3)
    print('coro end')
    return 'my result'


def main():
    loop = asyncio.get_event_loop()
    task = loop.create_task(coro_wrap())
    loop.run_until_complete(task)
    print('end')


def main2():
    from tornado import ioloop
    loop = ioloop.IOLoop()
    res = loop.run_sync(coro)
    print(res)


if __name__ == '__main__':
    main()
if __name__ == '__main__':
main()
Output from main
coro start
tornado_fut = <tornado.concurrent.Future object at 0x7f41493f1f28>, type(<class 'tornado.concurrent.Future'>)
async_fut = <Future pending>
Output from main2
coro start
coro end
my result
In new versions of Tornado, this just works.
In old versions of Tornado you must both use to_asyncio_future and, at startup, call tornado.platform.asyncio.AsyncIOMainLoop().install().
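For the old-version case, the question's main() would need something like this at startup (a sketch; applies to Tornado 4.5, where the asyncio integration is not installed automatically):

import asyncio

from tornado.platform.asyncio import AsyncIOMainLoop


def main():
    # Make Tornado run on top of the asyncio event loop so that
    # to_asyncio_future(...) produces futures the asyncio loop can resolve.
    AsyncIOMainLoop().install()
    loop = asyncio.get_event_loop()
    loop.run_until_complete(coro_wrap())  # coro_wrap as defined in the question
    print('end')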
I am trying out some asyncio examples found on the web:
Proxybroker example
When I run this first example:
"""Find and show 10 working HTTP(S) proxies."""
import asyncio
from proxybroker import Broker
async def show(proxies):
while True:
proxy = await proxies.get()
if proxy is None: break
print('Found proxy: %s' % proxy)
proxies = asyncio.Queue()
broker = Broker(proxies)
tasks = asyncio.gather(
broker.find(types=['HTTP', 'HTTPS'], limit=10),
show(proxies))
loop = asyncio.get_event_loop()
loop.run_until_complete(tasks)
I get the error:
RuntimeError: This event loop is already running
But the loop completes as expected.
I'm new to concurrent code, so any explanation or pseudocode of what is occurring would be appreciated.
I installed this package and ran it; it passed with no error occurring. Are you using an IDE? Try running it from the CLI, or move it to another directory.