how to stop reading when not expecting any message - python

In the code below we asynchronously wait to receive data, but how do I stop waiting for a read when no more data is expected?
The reason I'm asking is that I want to implement a class which keeps listening for arriving messages in a separate thread using asyncio.run_coroutine_threadsafe, and I want to terminate the listening once my test is over (in order to release the socket connection).
import asyncio

@asyncio.coroutine
def tcp_echo_client(message, loop):
    reader, writer = yield from asyncio.open_connection('127.0.0.1', 8888,
                                                        loop=loop)

    print('Send: %r' % message)
    writer.write(message.encode())

    data = yield from reader.read(100)
    print('Received: %r' % data.decode())

    print('Close the socket')
    writer.close()

message = 'Hello World!'
loop = asyncio.get_event_loop()
loop.run_until_complete(tcp_echo_client(message, loop))
loop.close()

To stop waiting, cancel the task that runs the coroutine. For example:
# tcp_echo_client as in your code

message = 'Hello World!'
loop = asyncio.get_event_loop()
task = loop.create_task(tcp_echo_client(message, loop))
loop.call_later(5, task.cancel)  # cancel the task after five seconds
try:
    loop.run_until_complete(task)
except asyncio.CancelledError:
    pass
loop.close()
If you are using asyncio.run_coroutine_threadsafe, note that it returns a concurrent.futures.Future object, which itself has a cancel method. Cancellation of the future returned by run_coroutine_threadsafe will be noticed by asyncio and result in the cancellation of the task in the event loop thread.
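For example, a minimal sketch of that thread-based setup (assuming tcp_echo_client as above and a dedicated loop thread; the scaffolding here is illustrative, not your actual class):

import asyncio
import threading

loop = asyncio.new_event_loop()
threading.Thread(target=loop.run_forever, daemon=True).start()

# schedule the listening coroutine on the loop running in the other thread
future = asyncio.run_coroutine_threadsafe(tcp_echo_client('Hello World!', loop), loop)

# ... later, when the test is over and no more data is expected:
future.cancel()  # propagates to the asyncio task in the event loop thread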

StreamReader doesn't come with any timeout (https://github.com/python/asyncio/issues/96), so once you start reading you cannot stop.
In my case I sent a closing message and handled it in my client: on encountering the closing message it does not issue any further reads.
That's how I stopped reading.
Once reader.read is called it keeps waiting for a message (even after calling writer.close) and keeps the socket engaged until the object is destroyed.
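As a rough illustration of that approach (the sentinel value and function name below are made up, not part of the original code):

CLOSE_SENTINEL = b'__CLOSE__'  # agreed closing message sent by the peer

async def listen(reader):
    while True:
        data = await reader.read(100)
        if not data or data == CLOSE_SENTINEL:
            # EOF or the closing message: stop issuing further reads
            break
        print('Received: %r' % data.decode())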


python3 websocket in thread

I have a simple Python tkinter GUI with just a couple of buttons. When a button is pressed, all I want to do is start a websocket connection and start receiving. I can run the code normally, but as soon as I try to put it into a thread I get errors:
RuntimeError: There is no current event loop in thread
So first try:
import websockets
websocket = websockets.connect(uri, ssl = True)
websocket.recv()
I get the error
"Connect object has no attribute 'recv'"
Which is weird, because when I run it differently I don't get that error.
When I follow the documentation exactly:
def run_websockets2(self):
    async def hello():
        uri = Websocket_Feed
        # with websockets.connect(uri, ssl=True) as websocket:
        socket = await websockets.connect(uri, ssl=True)
        self.web_socket = socket
        while self.running:
            greeting = await socket.recv()
            print(f"< {greeting}")
    asyncio.get_event_loop().run_until_complete(hello())
It works as long as I just call "websockets2()". But if I try to do
self.websocket_thread = threading.Thread(target=self.run_websockets2, args=())
self.websocket_thread.start()
I get the error
RuntimeError: There is no current event loop in thread 'web_sockets'
And when I make the whole function non-async I get an error:
def run_websockets(self):
    uri = Websocket_Feed
    # with websockets.connect(uri, ssl=True) as websocket:
    socket = websockets.connect(uri, ssl=True)
    self.web_socket = socket
    while self.running:
        greeting = socket.recv()
        print(f"< {greeting}")
I get the error
RuntimeError: There is no current event loop in thread 'web_sockets'
on the line
socket = websockets.connect(uri, ssl=True)
I don't understand why I can't just simply run these non asynchronous in a thread. Any help is greatly appreciated
You have a couple of different errors here, which is confusing the picture somewhat. First, regarding:
"Connect object has no attribute 'recv'"
... this just says that the object returned by websockets.connect() (a Connect object) has no method called recv; you only get a connection you can call recv on after awaiting it.
The main problem you have is trying to invoke run_websockets2() from a spawned thread. That is, calling this method from the main thread works, but calling it from a new thread fails.
This is expected behaviour. In a spawned thread (i.e., a thread other than the main thread), there is no asyncio event loop defined, whereas one is created in the main thread for convenience. So asyncio is aware of whether you are invoking it from a spawned thread or the main thread, and behaves differently. See this answer for a detailed explanation: Why asyncio.get_event_loop method checks if the current thread is the main thread?
To solve your problem you could create a new event loop per spawned thread, so that the code would become:
event_loop = asyncio.new_event_loop()
event_loop.run_until_complete(hello())
instead of
asyncio.get_event_loop().run_until_complete(hello())
Or, you could store event_loop in a common place, and allow all spawned threads to reuse that event loop.
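As a rough sketch of that shared-loop alternative (illustrative only; Websocket_Feed is from your code, the loop-thread setup is an assumption):

import asyncio
import threading
import websockets

def start_loop(loop):
    asyncio.set_event_loop(loop)
    loop.run_forever()

# one event loop for the whole application, running in a dedicated thread
shared_loop = asyncio.new_event_loop()
threading.Thread(target=start_loop, args=(shared_loop,), daemon=True).start()

async def hello(uri):
    socket = await websockets.connect(uri, ssl=True)
    while True:
        print(f"< {await socket.recv()}")

# any thread (e.g. a tkinter button callback) can submit work to that loop
asyncio.run_coroutine_threadsafe(hello(Websocket_Feed), shared_loop)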
I wanted to post how I actually solved my problem, thanks to @Darren Smith. I simply added one line of code, asyncio.set_event_loop(asyncio.new_event_loop()), at the top:
def run_websockets2(self):
    asyncio.set_event_loop(asyncio.new_event_loop())
    async def hello():
        uri = Websocket_Feed
        # with websockets.connect(uri, ssl=True) as websocket:
        socket = await websockets.connect(uri, ssl=True)
        self.web_socket = socket
        while self.running:
            greeting = await socket.recv()
            print(f"< {greeting}")
    asyncio.get_event_loop().run_until_complete(hello())
I have to admit, I love Python, but having code that fundamentally runs differently depending on its context, where the fix is one line not obviously related to that context, seems like really bad form.

asyncio run_until_complete blocks after future has set result (autobahn websockets & threading)

TL;DR: Calling future.set_result doesn't immediately resolve loop.run_until_complete. Instead it blocks for an additional 5 seconds.
Full context:
In my project, I'm using autobahn and asyncio to send and receive messages with a websocket server. For my use case, I need a 2nd thread for websocket communication, since I have arbitrary blocking code that will be running in the main thread. The main thread also needs to be able to schedule messages for the communication thread to send back and forth with the server. My current goal is to send a message originating from the main thread and block until the response comes back, using the communication thread for all message passing.
Here is a snippet of my code:
import asyncio
import threading
from autobahn.asyncio.websocket import WebSocketClientFactory, WebSocketClientProtocol

CLIENT = None

class MyWebSocketClientProtocol(WebSocketClientProtocol):
    # -------------- Boilerplate --------------
    is_connected = False
    msg_queue = []
    msg_listeners = []

    def onOpen(self):
        self.is_connected = True
        for msg in self.msg_queue[::]:
            self.publish(msg)

    def onClose(self, wasClean, code, reason):
        is_connected = False

    def onMessage(self, payload, isBinary):
        for listener in self.msg_listeners:
            listener(payload)

    def publish(self, msg):
        if not self.is_connected:
            self.msg_queue.append(msg)
        else:
            self.sendMessage(msg.encode('utf-8'))
    # /----------------------------------------

    def send_and_wait(self):
        future = asyncio.get_event_loop().create_future()

        def listener(msg):
            print('set result')
            future.set_result(123)

        self.msg_listeners.append(listener)
        self.publish('hello')
        return future

def worker(loop, ready):
    asyncio.set_event_loop(loop)
    factory = WebSocketClientFactory('ws://127.0.0.1:9000')
    factory.protocol = MyWebSocketClientProtocol
    transport, protocol = loop.run_until_complete(loop.create_connection(factory, '127.0.0.1', 9000))
    global CLIENT
    CLIENT = protocol
    ready.set()
    loop.run_forever()

if __name__ == '__main__':
    # Set up communication thread to talk to the server
    threaded_loop = asyncio.new_event_loop()
    thread_is_ready = threading.Event()
    thread = threading.Thread(target=worker, args=(threaded_loop, thread_is_ready))
    thread.start()
    thread_is_ready.wait()

    # Send a message and wait for response
    print('starting')
    loop = asyncio.get_event_loop()
    result = loop.run_until_complete(CLIENT.send_and_wait())
    print('done')  # this line gets called 5 seconds after it should
I'm using the autobahn echo server example to respond to my messages.
Problem: The WebSocketClientProtocol receives the response to its outgoing message and calls set_result on its pending future, but loop.run_until_complete blocks an additional ~4.9 seconds until eventually resolving.
I understand that run_until_complete also processes other pending events on the event loop. Is it possible that the main thread has somehow queued up a bunch of events that have to now get processed once I start the loop? Also, if I move run_until_complete into the communications thread or move the create_connection into the main thread, then the event loop doesn't block me.
Lastly, I tried to recreate this problem without using autobahn, but I couldn't cause the extra delay. I'm curious if maybe this is an issue with the nature of autobahn's callback timing (onMessage for example).

How can I refactor to accept multiple clients?

I don't understand why server.py Version 1 allows a client to be keyboard-interrupted and restarted, while server.py Version 2 doesn't:
server.py Version1:
import asyncio

async def handle_client(reader, writer):
    while True:
        request = (await reader.read(128)).decode()
        writer.write('Received ok.'.encode())
        await writer.drain()

async def main():
    loop.create_task(asyncio.start_server(handle_client, 'localhost', 15555))

loop = asyncio.new_event_loop()
loop.create_task(main())
loop.run_forever()
server.py Version 2:
import asyncio

async def handle_client(reader, writer):
    while True:
        request = (await reader.read(128)).decode()
        if request == "hello":
            writer.write('Received ok.'.encode())
            await writer.drain()

async def main():
    loop.create_task(asyncio.start_server(handle_client, 'localhost', 15555))

loop = asyncio.new_event_loop()
loop.create_task(main())
loop.run_forever()
client.py:
import asyncio

async def make_connections():
    reader, writer = await asyncio.open_connection('localhost', 15555, loop=loop)
    loop.create_task(connect(reader, writer))

async def connect(reader, writer):
    writer.write("hello".encode())
    await writer.drain()
    result = await reader.read(128)
    print(result.decode())

loop = asyncio.new_event_loop()
loop.create_task(make_connections())
loop.run_forever()
Version 2 works fine for a single client, but if I send a keyboard interrupt to the client I can no longer connect after I restart the client. It's annoying to ssh in and kill/restart the server every time I alter code in the client. I don't see why the second version doesn't accept the client the second time it attempts to connect.
I don't understand why server.py Version 1 allows a client to be keyboard-interrupted and restarted, while server.py Version 2 doesn't
Both versions have a bug that they don't correctly check for the end-of-file condition. When you interrupt the client, the socket gets closed and reading from it returns EOF, while writing to it raises an exception. Awaiting writer.drain() in version 1 delivers the exception and interrupts the coroutine. (This exception is probably displayed on the server's standard error.)
Version 2 has a problem, though: the if request == "hello" test is false at EOF because reader.read() keeps returning an empty byte string to mark the EOF condition. This prevents await writer.drain() from executing and delivering the exception, so the coroutine remains stuck in an infinite loop. A simple fix is to add something like if not request: break after the read.
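For example, Version 2 with that EOF check added might look like this (a minimal sketch, not code from the original answer):

async def handle_client(reader, writer):
    while True:
        request = (await reader.read(128)).decode()
        if not request:  # empty string means EOF: the client hung up
            break
        if request == "hello":
            writer.write('Received ok.'.encode())
            await writer.drain()
    writer.close()  # release the connection once the client is gone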
Why version 2 gets stuck
But the above doesn't fully explain why in Version 2 the whole server is broken and new clients unable to connect. Surely one would expect await to either return a result or yield control to other coroutines. But the observed behavior is that, despite containing an await in the while loop, the coroutine doesn't allow other coroutines to run!
The problem is that await doesn't mean "pass control to the event loop", as it is often understood. It means "request value from the provided awaitable object, yielding control to the event loop if the object indicates that it does not have a value ready." The part after the if is crucial: if the object does have a value ready, this value will be used immediately without ever deferring to the event loop. In other words, await doesn't guarantee that the event loop will get a chance to run.
A stream at EOF always has data to return - the empty byte string that marks the EOF. As a result, the coroutine never gets suspended, and its loop ends up completely blocking the event loop. To guarantee that other tasks get a chance to run, you can add await asyncio.sleep(0) in a loop - but this should not be necessary in correctly written code, where requesting IO data will soon result in a wait, at which point the event loop will kick in. Once the EOF handling bug is corrected, the server will function correctly.
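To see this effect in isolation, here is a small illustrative snippet (not from the answer): awaiting an already-completed future never suspends, so the second task is starved exactly as new clients are starved by the stuck handler:

import asyncio

async def greedy():
    fut = asyncio.get_event_loop().create_future()
    fut.set_result(b'')   # like a stream at EOF: a value is always ready
    while True:
        await fut         # never suspends, so control never returns to the loop
        # adding `await asyncio.sleep(0)` here would let other tasks run

async def polite():
    print('never printed: greedy() monopolizes the event loop')

loop = asyncio.get_event_loop()
loop.create_task(greedy())
loop.create_task(polite())
loop.run_forever()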

How to schedule and cancel tasks with asyncio

I am writing a client-server application. While connected, the client sends the server a "heartbeat" signal, for example every second.
On the server side I need a mechanism where I can add tasks (or coroutines, or something else) to be executed asynchronously. Moreover, I want to cancel a client's tasks when it stops sending that "heartbeat" signal.
In other words, when the server starts a task it has a kind of timeout or TTL, for example 3 seconds. When the server receives the "heartbeat" signal it resets the timer for another 3 seconds, until the task is done or the client disconnects (stops sending the signal).
Here is an example of cancelling a task from the asyncio tutorial on pymotw.com, but here the task is cancelled before the event loop has started, which is not suitable for me.
import asyncio

async def task_func():
    print('in task_func')
    return 'the result'

event_loop = asyncio.get_event_loop()
try:
    print('creating task')
    task = event_loop.create_task(task_func())

    print('canceling task')
    task.cancel()

    print('entering event loop')
    event_loop.run_until_complete(task)
    print('task: {!r}'.format(task))
except asyncio.CancelledError:
    print('caught error from cancelled task')
else:
    print('task result: {!r}'.format(task.result()))
finally:
    event_loop.close()
You can use asyncio Task wrappers to execute a task via the ensure_future() function.
ensure_future will automatically wrap your coroutine in a Task wrapper and attach it to your event loop. The Task wrapper will then also ensure that the coroutine 'cranks over' from one await statement to the next (or until the coroutine finishes).
In other words, just pass a regular coroutine to ensure_future and assign the resultant Task object to a variable. You can then call Task.cancel() when you need to stop it.
import asyncio

async def task_func():
    print('in task_func')
    # if the task needs to run for a while you'll need an await statement
    # to provide a pause point so that other coroutines can run in the meantime
    await some_db_or_long_running_background_coroutine()
    # or if this is a once-off thing, then return the result,
    # but then you don't really need a Task wrapper...
    # return 'the result'

async def my_app():
    my_task = None
    while True:
        await asyncio.sleep(0)

        # listen for trigger / heartbeat
        if heartbeat and my_task is None:
            my_task = asyncio.ensure_future(task_func())

        # also listen for termination of heartbeat / connection
        elif not heartbeat and my_task:
            if not my_task.cancelled():
                my_task.cancel()
            else:
                my_task = None

run_app = asyncio.ensure_future(my_app())
event_loop = asyncio.get_event_loop()
event_loop.run_forever()
Note that tasks are meant for long-running tasks that need to keep working in the background without interrupting the main flow. If all you need is a quick once-off method, then just call the function directly instead.
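To get the "reset a 3-second TTL on every heartbeat" behaviour from the question, one option (a rough sketch, not from the answer; the class and method names are made up) is to keep a loop.call_later timer handle per client and reschedule it on each heartbeat:

import asyncio

TTL = 3  # seconds without a heartbeat before the client's task is cancelled

class ClientWatchdog:
    def __init__(self, loop, task):
        self.loop = loop
        self.task = task
        self.timeout_handle = loop.call_later(TTL, task.cancel)

    def on_heartbeat(self):
        # each heartbeat pushes the cancellation deadline back by TTL seconds
        self.timeout_handle.cancel()
        self.timeout_handle = self.loop.call_later(TTL, self.task.cancel)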

Asyncio: Start a non-blocking listening server

This is the basic TCP server from the asyncio tutorial:
import asyncio

class EchoServerClientProtocol(asyncio.Protocol):
    def connection_made(self, transport):
        peername = transport.get_extra_info('peername')
        print('Connection from {}'.format(peername))
        self.transport = transport

    def data_received(self, data):
        message = data.decode()
        print('Data received: {!r}'.format(message))

        print('Send: {!r}'.format(message))
        self.transport.write(data)

        print('Close the client socket')
        self.transport.close()

loop = asyncio.get_event_loop()
# Each client connection will create a new protocol instance
coro = loop.create_server(EchoServerClientProtocol, '127.0.0.1', 8888)
server = loop.run_until_complete(coro)

# Serve requests until CTRL+c is pressed
print('Serving on {}'.format(server.sockets[0].getsockname()))
try:
    loop.run_forever()
except KeyboardInterrupt:
    pass

# Close the server
server.close()
loop.run_until_complete(server.wait_closed())
loop.close()
Like all other examples I have found, it uses the blocking loop.run_forever().
How do I start a listening server and do something else in the meantime?
I have tried moving the server startup into a function and starting that function with asyncio.async(), but with no success.
What am I missing here?
You can schedule several concurrent asyncio tasks before calling loop.run_forever().
@asyncio.coroutine
def other_task_coroutine():
    pass  # do something

start_tcp_server_task = loop.create_task(loop.create_server(
    EchoServerClientProtocol, '127.0.0.1', 8888))
other_task = loop.create_task(other_task_coroutine())
loop.run_forever()
When you call loop.create_task(loop.create_server()) or loop.create_task(other_task_coroutine()), nothing is actually executed: a coroutine object is created and wrapped in a task (consider a task to be a shell and the coroutine an instance of the code that will be executed in the task). The tasks are scheduled on the loop when created.
The loop will execute start_tcp_server_task first (as it's scheduled first) until a blocking IO event is pending or the passive socket is ready to listen for incoming connections.
You can see asyncio as a non-preemptive scheduler running on one CPU: once the first task interrupts itself or is done, the second task will be executed. Hence, when one task is executing, the other one has to wait until the running task finishes or yields (or "awaits", with Python 3.5). "Yielding" (yield from client.read()) or "awaiting" (await client.read()) means that the task hands control back to the loop's scheduler until client.read() can complete (data is available on the socket).
Once the task has given control back to the loop, the loop can schedule the other pending tasks, process incoming events and schedule the tasks which were waiting for those events. Once there is nothing left to do, the loop performs the only blocking call of the process: sleeping until the kernel notifies it that events are ready to be processed.
In this context, you must understand that when using asyncio, everything running in the process must run asynchronously so the loop can do its work. You cannot use multiprocessing objects in the loop.
Note that asyncio.async(coroutine(), loop=loop) is equivalent to loop.create_task(coroutine()).
Additionally, you can consider running what you want in an executor.
For example:
coro = loop.create_server(EchoServerClientProtocol, '127.0.0.1', 8888)
server = loop.run_until_complete(coro)

async def execute(loop):
    # run a blocking function in the default executor so it doesn't block the loop
    await loop.run_in_executor(None, your_func_here, *args)

asyncio.async(execute(loop))
loop.run_forever()
This runs whatever function you want in an executor (a thread pool by default), which won't block your server.
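For instance, slow_report below is a made-up stand-in for whatever blocking function you want to hand to the executor:

import time

def slow_report():
    time.sleep(10)  # stands in for any blocking work
    return 'report done'

async def execute(loop):
    # runs in the default ThreadPoolExecutor; the event loop keeps serving clients
    result = await loop.run_in_executor(None, slow_report)
    print(result)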
