Make websocket callback asynchronous with asyncio - python

I am trying to implement a basic websocket client using asyncio and websockets with Python 3.5.2.
Basically, I want connect_to_dealer to be a blocking call, but wait for the websocket message on a different thread.
After reading some docs (I have very little experience with Python), I concluded that asyncio.ensure_future() passing a coroutine (listen_for_message) was the way to go.
Now, I get to run listen_for_message on a different thread, but from within the coroutine I can't seem to use await or any other mechanism to make the calls synchronous. If I do it, the execution waits forever (it hangs) even for a simple sleep.
I'd like to know what I'm doing wrong.
async def listen_for_message(self, future, websocket):
    while True:
        try:
            await asyncio.sleep(1)  # It hangs here
            print('Listening for a message...')
            message = await websocket.recv()  # If I remove the sleep, hangs here
            print("< {}".format(message))
            future.set_result(message)
            future.done()
        except websockets.ConnectionClosed as cc:
            print('Connection closed')
        except Exception as e:
            print('Something happened')

def handle_connect_message(self, future):
    # We must first remove the websocket-specific payload because we're only interested in the connect protocol msg
    print(future.result())

async def connect_to_dealer(self):
    print('connect to dealer')
    websocket = await websockets.connect('wss://mywebsocket')
    hello_message = await websocket.recv()
    print("< {}".format(hello_message))
    # We need to parse the connection ID out of the message
    connection_id = hello_message['connectionId']
    print('Got connection id {}'.format(connection_id))
    sub_response = requests.put('https://subscribetotraffic{user_id}?connection={connection_id}'.format(user_id='username', connection_id=connection_id), headers=headers)
    if sub_response.status_code == 200:
        print('Now we\'re observing traffic')
    else:
        print('Oops request failed with {code}'.format(code=sub_response.status_code))
    # Now we need to handle messages but continue with the regular execution
    try:
        future = asyncio.get_event_loop().create_future()
        future.add_done_callback(self.handle_connect_message)
        asyncio.ensure_future(self.listen_for_message(future, websocket))
    except Exception as e:
        print(e)

Is there a specific reason you need to work with explicit futures?
With asyncio you can use a combination of coroutines and Tasks to achieve most purposes. Tasks are essentially wrapped coroutines that go about cranking themselves over in the background, independently of other async code, so you don't have to explicitly manage their flow or juggle them with other bits of code.
I am not entirely sure of your end goal, but perhaps the approach elaborated below gives you something to work with:
import asyncio

import requests
import websockets

async def listen_for_message(websocket):
    while True:
        await asyncio.sleep(0)
        try:
            print('Listening for a message...')
            message = await websocket.recv()
            print("< {}".format(message))
        except websockets.ConnectionClosed as cc:
            print('Connection closed')
        except Exception as e:
            print('Something happened')

async def connect_to_dealer():
    print('connect to dealer')
    websocket = await websockets.connect('wss://mywebsocket')
    hello_message = await websocket.recv()
    print("< {}".format(hello_message))
    # We need to parse the connection ID out of the message
    connection_id = hello_message['connectionId']
    print('Got connection id {}'.format(connection_id))
    # note: requests is blocking, so this PUT will hold up the event loop while in flight;
    # headers is assumed to be defined elsewhere, as in the question
    sub_response = requests.put('https://subscribetotraffic{user_id}?connection={connection_id}'.format(
        user_id='username', connection_id=connection_id), headers=headers)
    if sub_response.status_code == 200:
        print('Now we\'re observing traffic')
    else:
        print('Oops request failed with {code}'.format(code=sub_response.status_code))
    # return the websocket so my_app() can hand it to listen_for_message()
    return websocket

async def my_app():
    # this will block until connect_to_dealer() returns
    websocket = await connect_to_dealer()
    # start listen_for_message() in its own task wrapper, so it continues in the background
    asyncio.ensure_future(listen_for_message(websocket))
    # you can continue with other code here that can now coexist with listen_for_message()

if __name__ == '__main__':
    loop = asyncio.get_event_loop()
    loop.run_until_complete(my_app())
    loop.run_forever()
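One caveat worth flagging (an assumption, since the dealer's hello-message format isn't shown in the question): websocket.recv() returns a str or bytes, so indexing hello_message['connectionId'] directly will raise a TypeError. If the server sends JSON, the relevant lines inside connect_to_dealer() would become something like:

import json

# parse the hello message before indexing it (assumes the payload is JSON)
hello_message = json.loads(await websocket.recv())
connection_id = hello_message['connectionId']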

Related

Keep indefinite connections open on TCP server with asyncio streams?

I'm trying to understand how to use asyncio streams for multiple connections that will keep sending messages until a predefined condition or a socket timeout. Looking at the Python docs, they provide the following example of a TCP server based on asyncio streams:
import asyncio

async def handle_echo(reader, writer):
    data = await reader.read(100)
    message = data.decode()
    addr = writer.get_extra_info('peername')
    print(f"Received {message!r} from {addr!r}")
    print(f"Send: {message!r}")
    writer.write(data)
    await writer.drain()
    print("Close the connection")
    writer.close()

async def main():
    server = await asyncio.start_server(
        handle_echo, '127.0.0.1', 8888)
    addrs = ', '.join(str(sock.getsockname()) for sock in server.sockets)
    print(f'Serving on {addrs}')
    async with server:
        await server.serve_forever()

asyncio.run(main())
What I'm trying to do is more complex and looks more like this (a lot of it is pseudocode, written in capital letters or with the implementation omitted):
import asyncio

async def io_control(queue):
    while True:
        ...
        # do I/O control in this function ...

async def data_processing(queue):
    while True:
        ...
        # perform data handling

async def handle_data(reader, writer):
    data = await reader.read()
    message = data.decode()
    addr = writer.get_extra_info('peername')
    print(f"Received {message!r} from {addr!r}")
    # do stuff with a queue - pass messages to other two async functions as needed
    # keep open until something happens
    if ERROR or SOCKET_TIMEOUT:
        writer.close()

async def server(queue):
    server = await asyncio.start_server(
        handle_data, '127.0.0.1', 8888)
    addrs = ', '.join(str(sock.getsockname()) for sock in server.sockets)
    print(f'Serving on {addrs}')
    async with server:
        await server.serve_forever()

async def main():
    queue_io = asyncio.Queue()
    queue_data = asyncio.Queue()
    asyncio.run(server(queue_data))
    asyncio.run(data_processing(queue_data))
    asyncio.run(io_control(queue_io))

asyncio.run(main())
Does this look feasible? I'm not used to working with coroutines (I'm coming from more of a multi-threading paradigm), so I'm not sure if what I'm doing is right or if I have to explicitly include yields or do any extra stuff.
If I understand correctly, you just need the TCP server to be able to handle multiple concurrent connections. The start_server function should already give you everything you need.
The first parameter client_connected_cb is a coroutine function called whenever a client establishes a connection. If you introduce a loop into that function (in your example code handle_data), you can keep the connection open until some criterion is met. What conditions exactly should lead to closing the connection is up to you, and the implementation details will obviously depend on that. The simplest approach I can imagine is something like this:
import asyncio
import logging

log = logging.getLogger(__name__)

async def handle_data(reader, writer):
    while True:
        data = (await reader.readline()).decode().strip()
        if not data:
            log.debug("client disconnected")
            break
        response = await your_data_processing_function(data)
        writer.write(response.encode())
        await writer.drain()
    ...

async def main():
    server = await asyncio.start_server(handle_data, '127.0.0.1', 8888)
    async with server:
        await server.serve_forever()

if __name__ == '__main__':
    asyncio.run(main())
There is theoretically no limit to the number of concurrent connections.
If your client_connected_cb is a coroutine function, each new connection will schedule a new task on the event loop. That is where the concurrency comes from. The magic then happens at the point of awaiting new data from the client; that is where the event loop can switch execution to another coroutine. All this happens behind the scenes, so to speak.
If you want to introduce a timeout, you could wrap the readline coroutine in asyncio.wait_for, for example, and catch the resulting TimeoutError to exit the loop.
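A minimal sketch of that variation, extending the snippet above and assuming a 10-second per-read limit is what you want:

async def handle_data(reader, writer):
    while True:
        try:
            # bound each read; wait_for raises TimeoutError if the client stays silent
            data = (await asyncio.wait_for(reader.readline(), timeout=10)).decode().strip()
        except asyncio.TimeoutError:
            log.debug("client timed out")
            break
        if not data:
            log.debug("client disconnected")
            break
        response = await your_data_processing_function(data)
        writer.write(response.encode())
        await writer.drain()
    writer.close()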
Hope this helps.

FastAPI websockets not working when using Redis pubsub functionality

Currently I'm using websockets to pass through data that I receive from a Redis queue (pub/sub), but for some reason the websocket doesn't send messages when using this Redis queue.
What my code looks like
My code works as follows:
I accept the socket connection
I connect to the Redis queue
For each message that I receive from the subscription, I send a message through the socket (at the moment only text, for testing).
@check_route.websocket_route("/check")
async def websocket_endpoint(websocket: WebSocket):
    await websocket.accept()
    redis = Redis(host='::1', port=6379, db=1)
    subscribe = redis.pubsub()
    subscribe.subscribe('websocket_queue')
    try:
        for result in subscribe.listen():
            await websocket.send_text('test')
            print('test send')
    except Exception as e:
        await websocket.close()
        raise e
The issue with the code
When I'm using this code it's just not sending the message through the socket. But when I accept the websocket within the subscribe.listen() loop it does work, though it reconnects every time (see code below).
@check_route.websocket_route("/check")
async def websocket_endpoint(websocket: WebSocket):
    redis = Redis(host='::1', port=6379, db=1)
    subscribe = redis.pubsub()
    subscribe.subscribe('websocket_queue')
    try:
        for result in subscribe.listen():
            await websocket.accept()
            await websocket.send_text('test')
            print('test send')
    except Exception as e:
        await websocket.close()
        raise e
I think the subscribe.listen() call causes some problems that make the websocket do nothing when websocket.accept() is outside the for loop.
I hope someone knows what's wrong with this.
I'm not sure if this will work, but you could try this:
async def websocket_endpoint(websocket: WebSocket):
    await websocket.accept()
    redis = Redis(host='::1', port=6379, db=1)
    subscribe = redis.pubsub()
    subscribe.subscribe('websocket_queue')
    try:
        results = await subscribe.listen()
        for result in results:
            await websocket.send_text('test')
            print('test send')
    except Exception as e:
        await websocket.close()
        raise e
After a few more days of research I found a solution for this issue. I solved it by using aioredis. This solution is based on the following GitHub Gist.
import json

import aioredis
from fastapi import APIRouter, WebSocket

from app.service.config_service import load_config

check_route = APIRouter()

@check_route.websocket("/check")
async def websocket_endpoint(websocket: WebSocket):
    await websocket.accept()
    # ---------------------------- REDIS REQUIREMENTS ---------------------------- #
    config = load_config()
    redis_uri: str = f"redis://{config.redis.host}:{config.redis.port}"
    redis_channel = config.redis.redis_socket_queue.channel
    redis = await aioredis.create_redis_pool(redis_uri)
    # ------------------ SEND SUBSCRIBE RESULT THROUGH WEBSOCKET ----------------- #
    (channel,) = await redis.subscribe(redis_channel)
    assert isinstance(channel, aioredis.Channel)
    try:
        while True:
            response_raw = await channel.get()
            response_str = response_raw.decode("utf-8")
            response = json.loads(response_str)
            if response:
                await websocket.send_json({
                    "event": 'NEW_CHECK_RESULT',
                    "data": response
                })
    except Exception as e:
        raise e
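For smoke-testing the endpoint, a hypothetical publisher might look like this; it's a sketch assuming the same aioredis 1.x API used above, a local Redis on the default port, and the 'websocket_queue' channel name from the question:

import asyncio
import json

import aioredis

async def publish_test_message():
    # assumes Redis is reachable at localhost:6379
    redis = await aioredis.create_redis_pool("redis://localhost:6379")
    # the consumer above json-decodes each message, so publish JSON
    await redis.publish("websocket_queue", json.dumps({"status": "ok"}))
    redis.close()
    await redis.wait_closed()

asyncio.run(publish_test_message())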

Async rabbitmq consumer with socket connect_ex

I want to write a Python program that gets an IP and TCP port from a RabbitMQ server and scans to check if the port is open. As these scans sometimes come in bulk (maybe 100 port/IP pairs are added to the queue at a time), I need to do the scans asynchronously to get all the results in time; even if I lower the timeout to 1 second, 30 closed ports will hold the scan for 30 seconds each time!
I tried asyncio and aio_pika to reach my goal but still the scans are being performed synchronously.
import asyncio
import socket

import aio_pika

async def tcp_check(host, port):
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    await asyncio.sleep(1)
    result = sock.connect_ex((host, port))
    print(str(result))

async def main(loop):
    connection = await aio_pika.connect_robust("amqp://user:password@192.168.1.100/")
    async with connection:
        queue_name = "tcp_scans"
        channel = await connection.channel()
        queue = await channel.declare_queue(queue_name, auto_delete=False, durable=True)
        async with queue.iterator() as queue_iter:
            async for message in queue_iter:
                async with message.process():
                    context = message.body.decode("utf-8").split(',')
                    await tcp_check(context[0], int(context[1]))

if __name__ == "__main__":
    loop = asyncio.get_event_loop()
    loop.run_until_complete(main(loop))
    loop.close()
UPDATE:
I used asyncio.open_connection too:
async def tcp_check(host, port):
    con = asyncio.open_connection(host, port, loop=loop)
    try:
        await asyncio.wait_for(con, timeout=1)
        print("{}:{} Connected".format(host, port))
    except asyncio.TimeoutError:
        print("{}:{} Closed".format(host, port))
Still it takes each item from the queue and tests them one by one...
Calling synchronous, long-running functions inside coroutines should be avoided. I'd suggest using the asyncio alternative to connect_ex, e.g.:
try:
    await asyncio.open_connection(host, port)
except Exception as e:
    print(e)
To execute some coroutines simultaneously "on the fly" you can use create_task, which will "wrap the coroutine into a Task and schedule its execution", as the docs put it. After that, the coroutine will be executed soon, e.g. after the next await or async for iteration, when control flow returns to the event loop.
create_task returns a Task object, which you can append to a list and then wait on collectively using asyncio.gather with the flag return_exceptions=True.
But in your case I think it will be sufficient to replace await tcp_check(...) with create_task(tcp_check(...)) and use gather at the end of your main() to guarantee all coroutines have finished; see the fuller sketch after the fragment below.
...
asyncio.create_task(tcp_check(context[0], int(context[1])))
...
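Putting those pieces together, a minimal sketch of the whole consumer (assuming the same "host,port" message format and a 1-second connect timeout; a sketch, not a drop-in implementation):

import asyncio

import aio_pika

async def tcp_check(host, port):
    try:
        # asyncio-native connect with a timeout, instead of the blocking connect_ex
        reader, writer = await asyncio.wait_for(
            asyncio.open_connection(host, port), timeout=1)
        print("{}:{} Connected".format(host, port))
        writer.close()
    except (asyncio.TimeoutError, OSError):
        print("{}:{} Closed".format(host, port))

async def main():
    connection = await aio_pika.connect_robust("amqp://user:password@192.168.1.100/")
    tasks = []
    async with connection:
        channel = await connection.channel()
        queue = await channel.declare_queue("tcp_scans", auto_delete=False, durable=True)
        async with queue.iterator() as queue_iter:
            async for message in queue_iter:
                async with message.process():
                    host, port = message.body.decode("utf-8").split(',')
                    # schedule the scan and keep consuming instead of awaiting each one
                    tasks.append(asyncio.create_task(tcp_check(host, int(port))))
    # wait for any scans still in flight once the consumer loop exits
    await asyncio.gather(*tasks, return_exceptions=True)

asyncio.run(main())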

asyncio tcp socket: How to cancel asyncio.sleep() when socket is closed?

I am currently implementing a TCP socket protocol. The protocol requires sending heartbeat messages every five minutes. I am implementing the protocol using asyncio in Python. The source code below is a program that connects to localhost:8889, sends hello, and disconnects the socket after 1 second. In this case, the connection is disconnected after one second (if this actually happens, the network is down or the server is disconnected). The problem is that the send_heartbeat function waits 5 minutes without knowing that the socket is down. I would like to cancel the coroutine immediately instead of waiting 5 minutes when the socket is disconnected. What's the best way to do it?
import asyncio

async def run(host: str, port: int):
    while True:
        try:
            reader, writer = await asyncio.open_connection(host, port)
        except OSError as e:
            print('connection failed:', e)
            await asyncio.sleep(0.5)
            continue
        await asyncio.wait([
            handle_stream(reader, writer),
            send_heartbeat(reader, writer),
        ], return_when=asyncio.FIRST_COMPLETED)  # will stop after 1 second
        writer.close()  # close socket after 1 second
        await writer.wait_closed()

async def handle_stream(reader, writer):
    writer.write(b'hello\n')  # will succeed because socket is alive
    await writer.drain()
    await asyncio.sleep(1)

async def send_heartbeat(reader, writer):
    while True:
        await asyncio.sleep(300)
        heartbeat_message = b'heartbeat\n'
        writer.write(heartbeat_message)  # will fail because socket is already closed after 1 second
        await writer.drain()

if __name__ == '__main__':
    asyncio.run(run('127.0.0.1', 8889))
You can cancel the sleep by canceling a task that executes it. Creating send_heartbeat as a separate task ensures that it runs in parallel to handle_stream while you await the latter:
async def run(host: str, port: int):
    while True:
        ...
        heartbeat = asyncio.create_task(send_heartbeat(reader, writer))
        try:
            await handle_stream(reader, writer)
        finally:
            heartbeat.cancel()
            writer.close()
            await writer.wait_closed()
BTW, since you're awaiting writer.drain() inside handle_stream, there is no guarantee that handle_stream will always complete in 1 second. This might be a place where you want to avoid the drain, or you can use asyncio.wait_for when awaiting handle_stream(...), as sketched below.
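A minimal sketch of that wait_for variant (the 2-second bound is an arbitrary assumption, not from the question):

async def run(host: str, port: int):
    while True:
        ...
        heartbeat = asyncio.create_task(send_heartbeat(reader, writer))
        try:
            # bound handle_stream so a stalled drain() can't hold the loop body up
            await asyncio.wait_for(handle_stream(reader, writer), timeout=2)
        except asyncio.TimeoutError:
            print('handle_stream took too long')
        finally:
            heartbeat.cancel()
            writer.close()
            await writer.wait_closed()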

How to implement timeout in asyncio server?

Below is a simple echo server. But if the client does not send anything for 10 seconds, I want to close the connection.
import asyncio

async def process(reader: asyncio.StreamReader, writer: asyncio.StreamWriter):
    print("awaiting for data")
    line = await reader.readline()
    print(f"received {line}")
    writer.write(line)
    print(f"sent {line}")
    await writer.drain()
    print(f"Drained")

async def timeout(task: asyncio.Task, duration):
    print("timeout started")
    await asyncio.sleep(duration)
    print("client unresponsive, cancelling")
    task.cancel()
    print("task cancelled")

async def new_session(reader, writer):
    print("new session started")
    task = asyncio.create_task(process(reader, writer))
    timer = asyncio.create_task(timeout(task, 10))
    await task
    print("task complete")
    timer.cancel()
    print("timer cancelled")
    writer.close()
    print("writer closed")

async def a_main():
    server = await asyncio.start_server(new_session, port=8088)
    await server.serve_forever()

if __name__ == '__main__':
    asyncio.run(a_main())
If the client sends a message, it works fine. But in the other case, when the client is silent, it does not work.
When the client sends a message:
new session started
awaiting for data
timeout started
received b'slkdfjsdlkfj\r\n'
sent b'slkdfjsdlkfj\r\n'
Drained
task complete
timer cancelled
writer closed
When the client is silent after opening the connection:
new session started
awaiting for data
timeout started
client unresponsive, cancelling
task cancelled
There is no "task complete", "timer cancelled", or "writer closed".
What is the issue with the above code?
Is there a better way to implement timeouts?
Update
Figured out the problem. It looks like the task was actually cancelled, but the resulting exception got silently ignored. Fixed the problem by catching CancelledError:
async def new_session(reader, writer):
    print("new session started")
    task = asyncio.create_task(process(reader, writer))
    timer = asyncio.create_task(timeout(task, 10))
    try:
        await task
    except asyncio.CancelledError:
        print(f"Task took too long and was cancelled by timer")
    print("task complete")
    timer.cancel()
    print("timer cancelled")
    writer.close()
    print("writer closed")
Second part still remains. Is there a better way to implement timeouts?
Update2
Complete code using wait_for. The timeout code is no longer needed. See the accepted solution below:
async def new_session(reader, writer):
    print("new session started")
    try:
        await asyncio.wait_for(process(reader, writer), timeout=5)
    except asyncio.TimeoutError as te:
        print(f'time is up!{te}')
    finally:
        writer.close()
        print("writer closed")
I use the following code when making a connection. I'd suggest using wait_for similarly for your code.
fut = asyncio.open_connection(self.host, self.port, loop=self.loop)
try:
    r, w = await asyncio.wait_for(fut, timeout=self.connection_timeout)
except asyncio.TimeoutError:
    pass
Is there a better way to implement timeouts?
You can use asyncio.wait_for instead of your timeout task. It has similar semantics but already comes with asyncio, and it raises asyncio.TimeoutError when the time runs out, so you can detect a timeout by catching that exception.
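If you want the timeout to apply to each read rather than to the whole session, a sketch along these lines should also work (the 10-second value mirrors the question; this is an illustration, not the accepted code):

async def process(reader: asyncio.StreamReader, writer: asyncio.StreamWriter):
    try:
        # give the client 10 seconds to send a line
        line = await asyncio.wait_for(reader.readline(), timeout=10)
    except asyncio.TimeoutError:
        print("client unresponsive")
        return
    writer.write(line)
    await writer.drain()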
