context-managed resource on module level - python

I am looking for a pattern where I have multiple functions that need access to a resource that is context-managed.
In particular, I am using FastAPI and want to re-use aiopg (async psycopg2) connections.
This is the basic layout:
@app.get("/hello")
async def hello():
    async with aiopg.connect(...) as conn:
        async with conn.cursor(...) as cursor:
            return await cursor.execute(...)
Now I want to avoid opening a connection per route. I could imagine an object defined outside the routes; in each route I would either access its conn property or await its creation (and store it back), and then just use with on the cursor() method.
class Pg:
    def __init__(self):
        self._conn = None

    async def conn(self):
        if not self._conn:
            self._conn = await aiopg.connect(...)
        return self._conn
myPg = Pg()
@app.get("/hello")
async def hello():
    conn = await myPg.conn()
    async with conn.cursor(...) as cursor:
        return await cursor.execute(...)
However, I would then lose the ability to automatically close the connection.
I think I am missing something really obvious here and hope that someone can guide me towards implementing this properly.
Thanks!

aiopg provides a Pool class that manages connections for you.
Create a pool instance once and share it; note that await has to run inside a coroutine, so in practice you create the pool at application startup rather than literally at import time:
pool = await aiopg.create_pool('<your connection string>')
Then you can use the Pool.acquire context-manager to get a connection:
async with pool.acquire() as conn:
...
If connections already exist in the pool, they will be re-used.
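A minimal sketch of how this could be wired into the FastAPI app from the question (the startup/shutdown hooks and the placeholder DSN are assumptions on my part, not part of the original answer):

import aiopg
from fastapi import FastAPI

app = FastAPI()
pool = None  # created once at startup and shared by all routes

@app.on_event("startup")
async def startup():
    global pool
    pool = await aiopg.create_pool('<your connection string>')

@app.on_event("shutdown")
async def shutdown():
    pool.close()
    await pool.wait_closed()

@app.get("/hello")
async def hello():
    async with pool.acquire() as conn:
        async with conn.cursor() as cursor:
            await cursor.execute("SELECT 1")
            return await cursor.fetchone()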

Related

Can FastAPI guarantee a sync handler will never block the main application thread?

I have the following FastAPI application:
from fastapi import FastAPI
import socket
app = FastAPI()
@app.get("/")
async def root():
    return {"message": "Hello World"}

@app.get("/healthcheck")
def health_check():
    result = some_network_operation()
    return result

def some_network_operation():
    HOST = "192.168.30.12"  # This host does not exist so the connection will time out
    PORT = 4567
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(10)
        s.connect((HOST, PORT))
        s.sendall(b"Are you ok?")
        data = s.recv(1024)
        print(data)
This is a simple application with two routes:
/ handler that is async
/healthcheck handler that is sync
With this particular example, if you call /healthcheck, it won't complete until after 10 seconds because the socket connection will timeout. However, if you make a call to / in the meantime, it will return the response right away because FastAPI's main thread is not blocked. This makes sense because according to the docs, FastAPI runs sync handlers on an external threadpool.
My question is whether it is at all possible for us to block the application (block FastAPI's main thread) by doing something inside the health_check method.
Perhaps by acquiring the global interpreter lock?
Some other kind of lock?
Yes, if you do sync work in an async handler it will block FastAPI, something like this:
@router.get("/healthcheck")
async def health_check():
    result = some_network_operation()
    return result
Here some_network_operation() blocks the event loop because it is a synchronous function.
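A common workaround, not part of the answer above, is to hand the blocking call to a worker thread so the event loop stays free; a minimal sketch using asyncio.to_thread (Python 3.9+):

import asyncio

@router.get("/healthcheck")
async def health_check():
    # run the blocking function in a thread instead of on the event loop
    result = await asyncio.to_thread(some_network_operation)
    return result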
I think I may have an answer to my question, which is that there are some weird edge cases where a sync endpoint handler can block FastAPI.
For instance, if we adjust the some_network_operation in my example to the following, it will block the entire application.
def some_network_operation():
    """ No, this is not a network operation, but it illustrates the point """
    block = pow(363, 100000000000000)
I reached this conclusion based on this question: pow function blocking all threads with ThreadPoolExecutor.
So, it looks like the GIL may be the culprit here.
That SO question suggests using the multiprocessing module (which gets around the GIL). However, I tried this, and it still resulted in the same behavior, so my root problem remains unsolved.
Either way, here is the entire example in the question edited to reproduce the problem:
from fastapi import FastAPI

app = FastAPI()

@app.get("/")
async def root():
    return {"message": "Hello World"}

@app.get("/healthcheck")
def health_check():
    result = some_network_operation()
    return result

def some_network_operation():
    block = pow(363, 100000000000000)
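For reference, the multiprocessing suggestion is usually wired in along these lines; this is only a sketch (the executor setup and return value are mine), and the question above reports that it did not change the observed behavior for them:

import asyncio
from concurrent.futures import ProcessPoolExecutor

from fastapi import FastAPI

app = FastAPI()
process_pool = ProcessPoolExecutor()

def some_network_operation():
    # smaller exponent than the original so the call actually returns
    return pow(363, 100_000)

@app.get("/healthcheck")
async def health_check():
    loop = asyncio.get_running_loop()
    # run the CPU-bound call in a separate process so this process's GIL
    # is not held while it computes
    result = await loop.run_in_executor(process_pool, some_network_operation)
    return {"bits": result.bit_length()}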

Contacting another WebSocket server from inside Django Channels

I have two websocket servers, call them Main and Worker, and this is the desired workflow:
Client sends message to Main
Main sends message to Worker
Worker responds to Main
Main responds to Client
Is this doable? I couldn't find any WS client functionality in Channels. I tried naively to do this (in consumers.py):
import websockets

class SampleConsumer(AsyncWebsocketConsumer):
    async def receive(self, text_data):
        async with websockets.connect(url) as worker_ws:
            await worker_ws.send(json.dumps({ 'to': 'Worker' }))
            result = json.loads(await worker_ws.recv())
            await self.send(text_data=json.dumps({ 'to': 'Client' }))
However, it seems that the with section blocks (Main doesn't seem to accept any further messages until the response is received from Worker). I suspect it is because websockets runs its own loop, but I don't know for sure. (EDIT: I compared id(asyncio.get_running_loop()) and it seems to be the same loop. I have no clue why it is blocking then.)
The response { "to": "Client" } does not need to be here, I would be okay even if it is in a different method, as long as it triggers when the response from Worker is received.
Is there a way to do this, or am I barking up the wrong tree?
If there is no way to do this, I was thinking of having a thread (or process? or a separate application?) that communicates with Worker, and uses channel_layer to talk to Main. Would this be viable? I would be grateful if I could get a confirmation (and even more so for a code sample).
EDIT: I think I see what is going on (though I am still investigating). I believe one connection from Client instantiates one consumer, and while different instances can all run at the same time, within one consumer instance a second method apparently cannot start until the previous one has finished. Is this correct? I am now looking into whether moving the request-and-wait-for-response code into a thread would work.
I was in the same position where I wanted to process the message in my Django app whenever I receive it from another WebSocket server.
I took the idea of using the WebSockets client library and keeping it running as a separate process using the manage.py command from this post on the Django forum.
You can define an async coroutine client(websocket_url) to listen to messages received from the WebSocket server.
import asyncio
import websockets

async def client(websocket_url):
    async for websocket in websockets.connect(websocket_url):
        print("Connected to Websocket server")
        try:
            async for message in websocket:
                # Process message received on the connection.
                print(message)
        except websockets.ConnectionClosed:
            print("Connection lost! Retrying..")
            continue  # continue will retry the websocket connection with exponential backoff
In the above code connect() acts as an infinite asynchronous iterator. More on that here.
You can run the above coroutine inside the handle() method of a custom management command class.
runwsclient.py
import asyncio

from django.core.management.base import BaseCommand

# import the client() coroutine defined above from wherever it lives in your project

class Command(BaseCommand):
    def handle(self, *args, **options):
        URL = "ws://example.com/messages"
        print(f"Connecting to websocket server {URL}")
        asyncio.run(client(URL))
Finally, run the manage.py command.
python manage.py runwsclient
You can also pass a handler to client(ws_url, msg_handler) to process each message, so that the processing logic stays outside of the client, as sketched below.
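A minimal sketch of that variant; the msg_handler name and its signature are assumptions:

import websockets

async def client(websocket_url, msg_handler):
    async for websocket in websockets.connect(websocket_url):
        try:
            async for message in websocket:
                # delegate processing to the caller-supplied coroutine
                await msg_handler(message)
        except websockets.ConnectionClosed:
            continue  # reconnect with exponential backoff

It would be run the same way, e.g. asyncio.run(client(URL, my_handler)), where my_handler is an async function taking the raw message.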
Update 31/05/2022:
I have created a Django package that integrates the above functionality with minimal setup: django-websocketclient
Yes, Django Channels does not provide a WebSocket client, as it is mainly used as a server.
From your code, it doesn't seem like you really need WebSocket communication between Main and Worker: you just open a socket, send a single message, receive the response and close the socket. This is the classic use case for regular HTTP, so if you do not really need to keep the connection alive, I suggest you use a regular HTTP endpoint instead and use aiohttp as a client.
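A minimal sketch of that HTTP variant; the Worker URL and payload here are made up for illustration:

import json

import aiohttp
from channels.generic.websocket import AsyncWebsocketConsumer

class SampleConsumer(AsyncWebsocketConsumer):
    async def receive(self, text_data):
        # one request/response round trip to Worker over plain HTTP
        async with aiohttp.ClientSession() as session:
            async with session.post("http://worker.example/task",
                                    json={"to": "Worker"}) as resp:
                result = await resp.json()
        await self.send(text_data=json.dumps({"to": "Client"}))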
However, if you do really need a client, then you should open the socket once on client connection and close it when the client disconnects. You can do something like this.
import asyncio
import json

import websockets

async def create_ws(on_create, on_message):
    uri = "wss://localhost:8765"
    async with websockets.connect(uri) as websocket:
        await on_create(websocket)
        while True:
            message = await websocket.recv()
            if message:
                await on_message(message)

class WebsocketClient:
    def __init__(self, channel):
        self.channel = channel
        self.ws = None
        # the client loop runs in the background; __init__ itself cannot be async
        self.task = asyncio.create_task(create_ws(self.on_create, self.on_message))

    async def on_create(self, ws):
        self.ws = ws

    async def on_message(self, message):
        await self.channel.send(text_data=json.dumps(message))

    async def send(self, message):
        await self.ws.send(message)

    async def close(self):
        await self.ws.close()
        self.task.cancel()
Then in your consumer, you can use the client as follows:
class SampleConsumer(AsyncWebsocketConsumer):
    async def connect(self):
        self.ws_client = WebsocketClient(self)
        await self.accept()

    async def receive(self, text_data):
        await self.ws_client.send(text_data)

    async def disconnect(self, code):
        await self.ws_client.close()
It seems I managed to do it using the latest idea I posted — launching a thread to handle the connection to Worker. Something like this:
class SampleConsumer(AsyncWebsocketConsumer):
    async def receive(self, text_data):
        threading.Thread(
            target=asyncio.run,
            args=(self.talk_to_worker(
                url,
                { 'to': 'Worker' },
            ),)
        ).start()

    async def talk_to_worker(self, url, message):
        async with websockets.connect(url) as worker_ws:
            await worker_ws.send(json.dumps(message))
            result = json.loads(await worker_ws.recv())
            await self.send(text_data=json.dumps({ 'to': 'Client' }))
It may actually be smarter to do it with HTTP requests in each direction (since both endpoints can be HTTP servers), but this seems to work.

How to async await flask endpoint from python function

I have a class containing a function that I would like to invoke by calling a flask-restful endpoint. Is there a way to define an asynchronous function that awaits/subscribes to this endpoint being called? I can make changes to the Flask app if required (but can't switch to SocketIO), or write some sort of async requests function. I can only work with the base Anaconda 3.7 library and I don't have any additional message brokers installed or available.
class DaemonProcess:
    def __init__(self):
        pass

    async def await_signal(self):
        # pseudocode: wait until the endpoint http://ip123/signal is called
        signal = await ...  # somehow subscribe to the endpoint being hit
        self.process_signal(signal)  # do stuff with the signal
For context, this isn't the main objective of the process. I simply want to be able to tell my process, remotely or via a UI, to shut down worker processes either gracefully or forcefully. The only other idea I came up with is polling a database table to see whether a signal has been inserted, but time is of the essence and that would require polling at very short intervals, so an asynchronous approach would be preferable. The database would be SQLite3, and it doesn't appear to support update_hook callbacks.
Here's a sample pattern to send a signal and process it:
import asyncio
import aiotools

async def process(reader, writer):
    data = await reader.read(100)
    writer.write(data)
    print(f"We got a message {data} - time to do something about it.")
    await writer.drain()
    writer.close()

@aiotools.server
async def worker(loop, pidx, args):
    server = await asyncio.start_server(process, '127.0.0.1', 8888,
                                        reuse_port=True)
    print(f'[{pidx}] started')
    yield  # wait until terminated
    server.close()
    await server.wait_closed()
    print(f'[{pidx}] terminated')

class DaemonProcess:
    def start(self):
        aiotools.start_server(worker, num_workers=4)

if __name__ == '__main__':
    # Run the above server using 4 worker processes.
    d = DaemonProcess()
    d.start()
If you save it in a file, for example process.py, you should be able to start it:
python3 process.py
Now once you have this daemon in the background, you should be able to ping it (see a sample client below):
import asyncio

async def tcp_echo_client(message):
    reader, writer = await asyncio.open_connection('127.0.0.1', 8888)
    print(f'Send: {message!r}')
    writer.write(message.encode())
    await writer.drain()
    data = await reader.read(100)
    print(f'Received: {data.decode()!r}')
    print('Close the connection')
    writer.close()
    await writer.wait_closed()
And now, somewhere in your Flask view, you should be able to invoke:
asyncio.run(tcp_echo_client('I want my daemon to do something for me'))
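For example, a hypothetical Flask route (the route name and message are made up) could look like this, assuming tcp_echo_client from above is importable in the view module:

import asyncio

from flask import Flask

app = Flask(__name__)

@app.route("/shutdown", methods=["POST"])
def shutdown():
    # tcp_echo_client is the coroutine defined above;
    # this blocks only for the duration of the round trip to the daemon
    asyncio.run(tcp_echo_client('I want my daemon to do something for me'))
    return {"status": "signal sent"}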
Notice this all uses localhost 127.0.0.1 and port 8888, so those need to be reachable; if you have your own IPs and ports, you'll need to configure them accordingly.
Also notice the use of aiotools, which is a module providing a set of common asyncio patterns (daemons, etc.).

Best way to open/close DB Connection with async/await

In the tutorials I found, the connection is always opened and closed for every request, for example:
import asyncio
import asyncpg

async def run():
    conn = await asyncpg.connect(user='user', password='password',
                                 database='database', host='127.0.0.1')
    values = await conn.fetch('''SELECT * FROM mytable''')
    await conn.close()

loop = asyncio.get_event_loop()
loop.run_until_complete(run())
While this works for a single function, what about a web application?
E.g. in Tornado, every URL is a class, which leads to a lot of classes/methods.
I am used to opening the connection in a blocking way, then using a wrapper to make asynchronous DB calls, and closing the connection only when shutting the server down gracefully. What is the best practice in that case with async/await?
Without having used asyncpg, I assume that, like most asyncio-compliant packages, it provides an async context manager allowing exactly what you are asking for.
Something like:
async with asyncpg.create_pool(**kwargs) as pool:
    async with pool.acquire() as connection:
        async with connection.transaction():
            result = await connection.fetchval(fetch stuff)
            await connection.execute(insert stuff with result)
(as taken from this question)
Check the docs for mentions of context managers or examples with async with statements, or failing that check for classes in the source code that implement the __aenter__ and __aexit__ methods.
Edit 1:
The example above is partly taken from the question I've linked to and partly contrived for completeness. But to address your comments about what the with statements are doing:
async with asyncpg.create_pool(**kwargs) as pool:
    # in this block pool is created and open
    async with pool.acquire() as connection:
        # in this block connection is acquired and open
        async with connection.transaction():
            # in this block each executed statement is in a transaction
            execute_stuff_with_connection(connection)
        # now we are back up one logical block so the transaction is closed
        do_stuff_without_transaction_but_with_connection(connection)
    # now we are up another block and the connection is closed and returned to the pool
    do_more_stuff_with_pool(pool)
# now we are up another level and the pool is closed/exited/cleaned up
done_doing_async_stuff()
I'm not sure how good an explanation this is; perhaps you should also read up on context managers.
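To address the original question more directly, a common long-lived pattern is to create the pool once at startup, acquire a connection per request, and close the pool only on shutdown. A sketch, assuming asyncpg and reusing the credentials from the question:

import asyncio
import asyncpg

async def main():
    # created once for the lifetime of the application
    pool = await asyncpg.create_pool(user='user', password='password',
                                     database='database', host='127.0.0.1')
    try:
        # each request/handler borrows a connection and returns it automatically
        async with pool.acquire() as connection:
            values = await connection.fetch('''SELECT * FROM mytable''')
            print(values)
    finally:
        # only on graceful shutdown
        await pool.close()

asyncio.run(main())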

Pytest: testing a websocket connection

I have this piece of code
class RTMClient:
    ...
    # not important code
    ...

    async def connect(self, queue: asyncio.Queue):
        """
        Connect to the websocket stream and iterate over the messages,
        dumping them in the Queue.
        """
        ws_url = ''  # URL acquisition omitted for readability
        try:
            self._ws = await websockets.connect(ws_url)
            while not self.is_closed:
                msg = await self._ws.recv()
                if msg is None:
                    break
                await queue.put(json.loads(msg))
        except asyncio.CancelledError:
            pass
        finally:
            self._closed.set()
            self._ws = None
I want to write an automated test for it.
What I intend to do:
Monkeypatch websockets.connect to return a mock connection
Make the mock connection return mock messages from a predefined list
Make the mock connection set is_closed to True
Assert that the websocket connection was closed
Assert that all predefined messages are in the Queue
My question: how do I mock the websockets.connection to achieve steps 1-3?
I am thinking of a pytest fixture like this
from websockets import WebSocketClientProtocol

@pytest.fixture
def patch_websockets_connect(monkeypatch):
    async def mock_ws_connect(*args, **kwargs):
        mock_connection = WebSocketClientProtocol()
        mock_connection.is_closed = False
        return mock_connection

    monkeypatch.setattr('target_module.websockets.connect', mock_ws_connect)
But I don't see how I would be able to return a predefined list of messages this way, and there must also be a better way of doing this.
This is not a full answer, but maybe it will help you.
I hit a similar problem when testing RabbitMQ message emission. It looks like the most common and robust approach is to create the Connection, create the Emitter and Consumer, connect the Consumer to a fake Channel, and then manually send messages to that channel in the test.
So you create all the objects and just mock their responses. Here is a very similar example (with plain sockets rather than websockets, but maybe still useful for you): mocking a socket connection in Python
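Applied to steps 1-3 of the question above, one possible shape for the fake connection; everything here is illustrative, and only recv()/close() mirror the real websockets API:

import json

import pytest

class FakeWebsocket:
    """Stands in for the object returned by websockets.connect()."""

    def __init__(self, messages):
        self._messages = list(messages)
        self.closed = False

    async def recv(self):
        if self._messages:
            return self._messages.pop(0)
        self.closed = True
        return None  # the client loop under test treats None as "stop"

    async def close(self):
        self.closed = True

@pytest.fixture
def patch_websockets_connect(monkeypatch):
    fake = FakeWebsocket([json.dumps({"n": i}) for i in range(3)])

    async def mock_ws_connect(*args, **kwargs):
        return fake

    monkeypatch.setattr('target_module.websockets.connect', mock_ws_connect)
    return fake

The test can then run RTMClient.connect() against this fixture and assert that the predefined messages end up in the queue.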
