How to detect write failure in asyncio?

As a simple example, consider the network equivalent of /dev/zero, below. (Or more realistically, just a web server sending a large file.)
If a client disconnects early, you get a barrage of log messages:
WARNING:asyncio:socket.send() raised exception.
But I'm not finding any way to catch said exception. The hypothetical server continues reading gigabytes from disk and sending them to a dead socket, with no effort on the client's part, and you've got yourself a DoS attack.
The only thing I've found from the docs is to yield from a read, with an empty string indicating closure. But that's no good here because a normal client isn't going to send anything, blocking the write loop.
What's the right way to detect failed writes, or be notified that the TCP connection has been closed, with the streams API or otherwise?
Code:
from asyncio import *
import logging

@coroutine
def client_handler(reader, writer):
    while True:
        writer.write(bytes(1))
        yield from writer.drain()

logging.basicConfig(level=logging.INFO)
loop = get_event_loop()
coro = start_server(client_handler, '', 12345)
server = loop.run_until_complete(coro)
loop.run_forever()

I did some digging into the asyncio source to expand on dano's answer about why the exceptions aren't being raised without explicitly passing control to the event loop. Here's what I've found.
Calling yield from writer.drain() gives control over to the StreamWriter.drain coroutine. This coroutine checks for and raises any exception that the StreamReaderProtocol set on the StreamReader. But since we passed control straight to drain, the protocol hasn't had a chance to set the exception yet. drain then gives control over to the FlowControlMixin._drain_helper coroutine. This coroutine returns immediately because certain flags haven't been set yet, and control ends up back with the coroutine that called yield from writer.drain().
And so we have gone full circle without giving control to the event loop, which is what would let it run other coroutines and bubble the exception up to writer.drain().
Yielding before a drain() gives the transport/protocol a chance to set the appropriate flags and exceptions.
Here's a mock up of what's going on, with all the nested calls collapsed:
import asyncio as aio

def set_exception(ctx, exc):
    ctx["exc"] = exc

@aio.coroutine
def drain(ctx):
    if ctx["exc"] is not None:
        raise ctx["exc"]
    return

@aio.coroutine
def client_handler(ctx):
    i = 0
    while True:
        i += 1
        print("write", i)
        # yield  # Uncommenting this allows the loop.call_later call to be scheduled.
        yield from drain(ctx)

CTX = {"exc": None}

loop = aio.get_event_loop()
# Set the exception in 5 seconds
loop.call_later(5, set_exception, CTX, Exception("connection lost"))
loop.run_until_complete(client_handler(CTX))
loop.close()
This should probably be fixed upstream in the streams API by the asyncio developers.

This is a little bit strange, but you can actually allow an exception to reach the client_handler coroutine by forcing it to yield control to the event loop for one iteration:
import asyncio
import logging

@asyncio.coroutine
def client_handler(reader, writer):
    while True:
        writer.write(bytes(1))
        yield  # Yield control to the event loop
        yield from writer.drain()

logging.basicConfig(level=logging.INFO)
loop = asyncio.get_event_loop()
coro = asyncio.start_server(client_handler, '', 12345)
server = loop.run_until_complete(coro)
loop.run_forever()
If I do that, I get this output when I kill the client connection:
ERROR:asyncio:Task exception was never retrieved
future: <Task finished coro=<client_handler() done, defined at aio.py:4> exception=ConnectionResetError(104, 'Connection reset by peer')>
Traceback (most recent call last):
  File "/usr/lib/python3.4/asyncio/tasks.py", line 238, in _step
    result = next(coro)
  File "aio.py", line 9, in client_handler
    yield from writer.drain()
  File "/usr/lib/python3.4/asyncio/streams.py", line 301, in drain
    raise exc
  File "/usr/lib/python3.4/asyncio/selector_events.py", line 700, in write
    n = self._sock.send(data)
ConnectionResetError: [Errno 104] Connection reset by peer
I'm really not quite sure why you need to explicitly let the event loop get control for the exception to get through - I don't have time at the moment to dig into it. I assume some bit needs to get flipped to indicate the connection dropped, and calling yield from writer.drain() (which can short-circuit without going through the event loop) in a tight loop prevents that from happening, but I'm really not sure. If I get a chance to investigate, I'll update the answer with that info.

The stream-based API doesn't have a callback you can specify for when the connection is closed. But the Protocol API does, so use it instead: https://docs.python.org/3/library/asyncio-protocol.html#connection-callbacks
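As a rough sketch of that approach (the class name and prints here are mine, not from the docs): implement connection_lost on a Protocol, and you are told the moment the peer goes away, even if you never read from it:
import asyncio

class ZeroServer(asyncio.Protocol):
    # Illustrative protocol: writes a byte on connect and is notified
    # as soon as the TCP connection goes away.
    def connection_made(self, transport):
        self.transport = transport
        self.connected = True
        transport.write(bytes(1))

    def connection_lost(self, exc):
        # Called exactly once when the connection drops, whether cleanly
        # (exc is None) or with an error such as ECONNRESET.
        self.connected = False
        print("client disconnected:", exc)

loop = asyncio.get_event_loop()
server = loop.run_until_complete(loop.create_server(ZeroServer, '', 12345))
loop.run_forever()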

Related

Do un-awaited tasks always start when created via create_task?

I'm trying to create a server, and I'm having difficulty understanding how create_task sets a coroutine in motion. In the first test, create_task seems to start the task immediately. In the second test, though, it doesn't seem to start it until it's awaited.
import asyncio

async def task_test():
    async def delayed_print(delay, message):
        await asyncio.sleep(delay)
        print(message)

    print_task = asyncio.create_task(delayed_print(2, "Hello"))
    await asyncio.sleep(5)
    print("World")
    await print_task

asyncio.run(task_test())
Hello
World
If print_task only started when await print_task was reached, "World\nHello" would have been printed instead.
The problem is, this seems to contradict the behavior I'm seeing with AbstractServer's serve_forever function. If I set up a similar test for starting a server:
async def server_test():
    server: asyncio.AbstractServer = await asyncio.start_server(
        lambda r, w: print("conn"), "127.0.0.1", 5555)
    serve_task = asyncio.create_task(server.serve_forever())
    # await serve_task  # The pivotal part
    return server
The server only accepts incoming connections when the (currently commented-out) await line executes, which suggests serve_forever requires awaiting to work properly.
Evidence:
asyncio.run(server_test()) # With "await serve_task" commented out
# Returns
# --- In another REPL
rdr, wtr = asyncio.run(asyncio.open_connection("127.0.0.1", 5555))
Traceback (most recent call last):
  # Truncated - It's very long
    raise OSError(err, f'Connect call failed {address}')
ConnectionRefusedError: [Errno 10061] Connect call failed ('127.0.0.1', 5555)
It errors out due to the client being unable to connect to the server.
If I uncomment out that line though:
asyncio.run(server_test()) # With "await serve_task" executing
# Never returns
# --- In another REPL
rdr, wtr = asyncio.run(asyncio.open_connection("127.0.0.1", 5555))
(rdr, wtr)
(<StreamReader transport=<_SelectorSocketTransport fd=1064>>, <StreamWriter transport=<_SelectorSocketTransport fd=1064> reader=<StreamReader transport=<_SelectorSocketTransport fd=1064>>>)
It connects successfully ("conn" is printed out in the server REPL).
Can anybody explain why serve_forever only allows the server to accept connections when it's awaited? I would rather not have to explicitly await the serve_task, for two reasons:
I want to understand why there's a difference between these two fairly similar bits of code so I can avoid future pitfalls, but mostly...
Because I don't want to have to await serve_forever. That will create an effectively infinite blocking call preventing the server from doing anything new. Ideally, I'd like to be able to start the server in a REPL, and send commands to the server locally to carry out actions. With how it is right now, the REPL becomes blocked as soon as I start the server. The only workaround I've come up with is pre-creating tasks I'll want to run and delaying them, then giving them and serve_forever to gather. Something like:
asyncio.gather(server.serve_forever(),
               some_delayed_task,
               some_other_delayed_task)
Any clarity here would be appreciated.
You don't have to await the result of create_task(serve_forever()). You do, however, need to await something. In asyncio, only one thing can happen at a time, and it's only possible to switch between tasks with await. Until you reach an await, the serve_forever task isn't actually running.
The problem you're seeing in the REPL is that the REPL is not implemented in terms of asyncio, so while you're sitting at the REPL prompt no asyncio tasks can run. Try using aioconsole instead of the standard REPL.
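To illustrate the first point (a sketch of mine, not from the answer): awaiting anything at all inside the coroutine passed to asyncio.run hands control to the event loop and lets the serve_forever task actually run:
import asyncio

async def main():
    server = await asyncio.start_server(lambda r, w: print("conn"), "127.0.0.1", 5555)
    asyncio.create_task(server.serve_forever())
    # Awaiting an Event that is never set suspends main() indefinitely,
    # which keeps the loop alive and the server accepting connections.
    await asyncio.Event().wait()

asyncio.run(main())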
@Ben's right, but I just want to elaborate on what happened here, because it's interesting.
The first example works because await asyncio.sleep(5) passes off control and allows the delayed_print task to run.
The second example doesn't work because I never pass off control using await to let any other tasks run. Seemingly, the solution would then be to add a sleep or something to let other tasks run:
import asyncio as aio

async def server_test():
    server: aio.AbstractServer = await aio.start_server(
        lambda r, w: print("conn"), "127.0.0.1", 5555)
    serve_task = aio.create_task(server.serve_forever())
    await aio.sleep(10)
    return server
I thought that surely during the 10-second passoff, serve_forever would have had a chance to run. After the 10 seconds though, I still couldn't connect.
It turns out, serve_forever was running, and I could connect, but only during the 10 second window. The problem is, asyncio.run cancels all running tasks when it exits. In this case, that meant serve_forever was being cancelled. If I connect during the sleep though, it's fine:
async def server_test():
    server: aio.AbstractServer = await aio.start_server(
        lambda r, w: print("conn"), "127.0.0.1", 5555)
    aio.create_task(server.serve_forever())
    await aio.sleep(10)
    return server

server = aio.run(server_test())
# --- Quickly switch to other REPL
rdr, wtr = aio.run(aio.open_connection("127.0.0.1", 5555))
# "conn" gets printed in server REPL
(rdr, wtr)
(<StreamReader transport=<_SelectorSocketTransport fd=988>>, <StreamWriter transport=<_SelectorSocketTransport fd=988> reader=<StreamReader transport=<_SelectorSocketTransport fd=988>>>)
In "real code", run ending and killing tasks shouldn't be a problem. My example was too simple and contrived to be useful unfortunately.

How can I refactor to accept multiple clients?

I don't understand why server.py Version 1 allows a client to be keyboard-interrupted and restarted, while server.py Version 2 doesn't:
server.py Version1:
import asyncio

async def handle_client(reader, writer):
    while True:
        request = (await reader.read(128)).decode()
        writer.write('Received ok.'.encode())
        await writer.drain()

async def main():
    loop.create_task(asyncio.start_server(handle_client, 'localhost', 15555))

loop = asyncio.new_event_loop()
loop.create_task(main())
loop.run_forever()
server.py Version 2:
import asyncio

async def handle_client(reader, writer):
    while True:
        request = (await reader.read(128)).decode()
        if request == "hello":
            writer.write('Received ok.'.encode())
            await writer.drain()

async def main():
    loop.create_task(asyncio.start_server(handle_client, 'localhost', 15555))

loop = asyncio.new_event_loop()
loop.create_task(main())
loop.run_forever()
client.py:
import asyncio

async def make_connections():
    reader, writer = await asyncio.open_connection('localhost', 15555, loop=loop)
    loop.create_task(connect(reader, writer))

async def connect(reader, writer):
    writer.write("hello".encode())
    await writer.drain()
    result = await reader.read(128)
    print(result.decode())

loop = asyncio.new_event_loop()
loop.create_task(make_connections())
loop.run_forever()
Version 2 works fine for a single client, but if I send a keyboard interrupt to the client I can no longer connect after I restart the client. It's annoying to ssh in and kill/restart the server every time I alter code in the client. I don't see why the second version doesn't accept the client the second time it attempts to connect.
I don't understand why server.py Version 1 allows a client to be keyboard-interrupted and restarted, while server.py Version 2 doesn't
Both versions have the same bug: they don't correctly check for the end-of-file condition. When you interrupt the client, the socket gets closed and reading from it returns EOF, while writing to it raises an exception. Awaiting writer.drain() in Version 1 delivers the exception and interrupts the coroutine. (This exception is probably displayed on the server's standard error.)
Version 2 has a problem, though: the if request == "hello" test is false at EOF because reader.read() keeps returning an empty byte string to mark the EOF condition. This prevents await writer.drain() from executing and delivering the exception, so the coroutine remains stuck in an infinite loop. A simple fix is to add something like if not request: break after the read.
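Applied to Version 2's handler, that fix would look something like this (a minimal sketch):
async def handle_client(reader, writer):
    while True:
        request = (await reader.read(128)).decode()
        if not request:  # empty result means EOF: the client is gone
            break
        if request == "hello":
            writer.write('Received ok.'.encode())
            await writer.drain()
    writer.close()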
Why version 2 gets stuck
But the above doesn't fully explain why in Version 2 the whole server is broken and new clients unable to connect. Surely one would expect await to either return a result or yield control to other coroutines. But the observed behavior is that, despite containing an await in the while loop, the coroutine doesn't allow other coroutines to run!
The problem is that await doesn't mean "pass control to the event loop", as it is often understood. It means "request value from the provided awaitable object, yielding control to the event loop if the object indicates that it does not have a value ready." The part after the if is crucial: if the object does have a value ready, this value will be used immediately without ever deferring to the event loop. In other words, await doesn't guarantee that the event loop will get a chance to run.
A stream at EOF always has data to return - the empty string that marks the EOF. As a result, the read never suspends, and the loop ends up completely blocking the event loop. To guarantee that other tasks get a chance to run, you can add await asyncio.sleep(0) in a loop - but this should not be necessary in correctly written code, where requesting IO data will soon result in a wait, at which point the event loop will kick in. Once the EOF handling bug is corrected, the server will function correctly.
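A self-contained demonstration of this (my own sketch, not from the answer above): hog awaits on every iteration, yet other never runs, because hog's awaitable always has its value ready:
import asyncio

async def ready():
    return b''  # completes without suspending, like read() at EOF

async def other():
    print("other task ran")  # never prints while hog() monopolizes the loop

async def hog():
    while True:
        await ready()             # value is ready: the event loop is never visited
        # await asyncio.sleep(0)  # uncomment to force a real suspension

async def main():
    asyncio.ensure_future(other())
    await hog()

# asyncio.run(main())  # hangs with no output until hog() is fixed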

Pika channel.stop_consuming doesn't stop start_consuming loop

I have this piece of code; basically it runs channel.start_consuming().
I want it to stop after a while.
I think that channel.stop_consuming() is the right method:
def stop_consuming(self, consumer_tag=None):
    """ Cancels all consumers, signalling the `start_consuming` loop to
    exit.
But it doesn't work: start_consuming() never ends (execution doesn't exit from this call, "end" is never printed).
import unittest
import pika
import threading
import time

_url = "amqp://user:password@xxx.rabbitserver.com/aaa"

class Consumer_test(unittest.TestCase):

    def test_startConsuming(self):
        def callback(channel, method, properties, body):
            print("callback")
            print(body)

        def connectionTimeoutCallback():
            print("connectionTimeoutCallback")

        def _closeChannel(channel_):
            print("_closeChannel")
            time.sleep(1)
            print("close")
            if channel_.is_open:
                channel_.stop_consuming()
                print("stop_consuming")
            else:
                print("channel is closed")
            # channel_.close()

        params = pika.URLParameters(_url)
        params.socket_timeout = 5
        connection = pika.BlockingConnection(params)
        # connection.add_timeout(2, connectionTimeoutCallback)
        channel = connection.channel()
        channel.basic_consume(callback,
                              queue='test',
                              no_ack=True)

        t = threading.Thread(target=_closeChannel, args=[channel])
        t.start()

        print("start_consuming")
        channel.start_consuming()  # start consuming (loop never ends)

        connection.close()
        print("end")
connection.add_timeout solved my problem, and maybe calling basic_cancel would too, but I want to use the right method.
Thanks
Note:
I can't respond or add a comment to this (pika, stop_consuming does not work) due to my low reputation points.
Note 2:
I think that I'm not sharing a channel or connection across threads (which pika doesn't support), because I use "channel_", passed as a parameter, and not the "channel" instance of the class (am I wrong?).
I was having the same problem; pika is not thread-safe, i.e. connections and channels can't be safely shared across threads.
So I used a separate connection to send a shutdown message, then stopped consuming on the original channel from the callback function.
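A rough sketch of that approach (mine, not the original poster's code; the queue name 'test' and the sentinel body are placeholders), written against the same older pika API the question uses:
import pika

_url = "amqp://user:password@xxx.rabbitserver.com/aaa"
STOP = b"__STOP__"  # hypothetical shutdown sentinel

def callback(channel, method, properties, body):
    print("callback", body)
    if body == STOP:
        # We are on the consuming connection's own thread here,
        # so stopping this channel is safe.
        channel.stop_consuming()

def consume():
    connection = pika.BlockingConnection(pika.URLParameters(_url))
    channel = connection.channel()
    channel.basic_consume(callback, queue='test', no_ack=True)
    channel.start_consuming()  # returns after callback() calls stop_consuming()
    connection.close()

def request_stop():
    # A *separate* connection publishes the shutdown message, since pika
    # connections and channels must not be shared across threads.
    connection = pika.BlockingConnection(pika.URLParameters(_url))
    channel = connection.channel()
    channel.basic_publish(exchange='', routing_key='test', body=STOP)
    connection.close()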

python asyncio run event loop once?

I am trying to understand the asyncio library, specifically with using sockets. I have written some code in an attempt to gain understanding.
I wanted to run a sender and a receiver socket asynchronously. I got to the point where all the data is sent except the last piece, and then I have to run the loop one more time. Looking at how to do this, I found this link on Stack Overflow, which I implemented below -- but what is going on here? Is there a better/more sane way to do this than to call stop followed by run_forever?
The documentation for stop() in the event loop is:
Stop running the event loop.
Every callback scheduled before stop() is called will run. Callbacks scheduled after stop() is called will not run. However, those callbacks will run if run_forever() is called again later.
And run_forever()'s documentation is:
Run until stop() is called.
Questions:
why in the world is run_forever the only way to run_once? This doesn't even make sense
Is there a better way to do this?
Does my code look like a reasonable way to program with the asyncio library?
Is there a better way to add tasks to the event loop besides asyncio.async()? loop.create_task gives an error on my Linux system.
https://gist.github.com/cloudformdesign/b30e0860497f19bd6596
The stop(); run_forever() trick works because of how stop is implemented:
def stop(self):
    """Stop running the event loop.

    Every callback scheduled before stop() is called will run.
    Callback scheduled after stop() is called won't. However,
    those callbacks will run if run() is called again later.
    """
    self.call_soon(_raise_stop_error)

def _raise_stop_error(*args):
    raise _StopError
So, next time the event loop runs and executes pending callbacks, it's going to call _raise_stop_error, which raises _StopError. The run_forever loop will break only on that specific exception:
def run_forever(self):
    """Run until stop() is called."""
    if self._running:
        raise RuntimeError('Event loop is running.')
    self._running = True
    try:
        while True:
            try:
                self._run_once()
            except _StopError:
                break
    finally:
        self._running = False
So, by scheduling a stop() and then calling run_forever, you end up running one iteration of the event loop, then stopping once it hits the _raise_stop_error callback. You may have also noticed that _run_once is defined and called by run_forever. You could call that directly, but that can sometimes block if there aren't any callbacks ready to run, which may not be desirable. I don't think there's a cleaner way to do this currently - That answer was provided by Andrew Svetlov, who is an asyncio contributor; he would probably know if there's a better option. :)
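So the trick can be wrapped in a small helper (a sketch of mine, relying on the 3.4-era behavior quoted above):
def run_once(loop):
    # stop() schedules _raise_stop_error via call_soon, so run_forever()
    # makes exactly one pass over the ready callbacks before breaking.
    loop.stop()
    loop.run_forever()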
In general, your code looks reasonable, though I think that you shouldn't be using this run_once approach to begin with. It's not deterministic; if you had a longer list or a slower system, it might require more than two extra iterations to print everything. Instead, you should just send a sentinel that tells the receiver to shut down, and then wait on both the send and receive coroutines to finish:
import sys
import time
import socket
import asyncio

addr = ('127.0.0.1', 1064)
SENTINEL = b"_DONE_"

# ... (This stuff is the same)

@asyncio.coroutine
def sending(addr, dataiter):
    loop = asyncio.get_event_loop()
    for d in dataiter:
        print("Sending:", d)
        sock = socket.socket()
        yield from send_close(loop, sock, addr, str(d).encode())
    # Send a sentinel
    sock = socket.socket()
    yield from send_close(loop, sock, addr, SENTINEL)

@asyncio.coroutine
def receiving(addr):
    loop = asyncio.get_event_loop()
    sock = socket.socket()
    try:
        sock.setblocking(False)
        sock.bind(addr)
        sock.listen(5)
        while True:
            data = yield from accept_recv(loop, sock)
            if data == SENTINEL:  # Got a sentinel
                return
            print("Received:", data)
    finally:
        sock.close()

def main():
    loop = asyncio.get_event_loop()
    # Add both coroutines to the event loop as tasks
    recv = asyncio.async(receiving(addr), loop=loop)
    send = asyncio.async(sending(addr, range(10)), loop=loop)
    loop.run_until_complete(asyncio.wait([recv, send]))

main()
Finally, asyncio.async is the right way to add tasks to the event loop. create_task was added in Python 3.4.2, so if you have an earlier version it won't exist.
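As an aside (my note, not part of the original answer): asyncio.async was later renamed to asyncio.ensure_future, and because async became a reserved word the old spelling no longer even parses on modern Python. A shim that works across versions looks like this:
import asyncio

# ensure_future exists from Python 3.4.4 onward; getattr with a string
# avoids the reserved-word problem when falling back to the old name.
schedule = getattr(asyncio, "ensure_future", None) or getattr(asyncio, "async")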

Python asyncio force timeout

Using asyncio a coroutine can be executed with a timeout so it gets cancelled after the timeout:
import asyncio

@asyncio.coroutine
def coro():
    yield from asyncio.sleep(10)

loop = asyncio.get_event_loop()
loop.run_until_complete(asyncio.wait_for(coro(), 5))
The above example works as expected (it times out after 5 seconds).
However, when the coroutine doesn't use asyncio.sleep() (or other asyncio coroutines) it doesn't seem to time out. Example:
@asyncio.coroutine
def coro():
    import time
    time.sleep(10)

loop = asyncio.get_event_loop()
loop.run_until_complete(asyncio.wait_for(coro(), 1))
This takes more than 10 seconds to run because the time.sleep(10) isn't cancelled. Is it possible to enforce the cancellation of the coroutine in such a case?
If asyncio should be used to solve this, how could I do that?
No, you can't interrupt a coroutine unless it yields control back to the event loop, which means it needs to be inside a yield from call. asyncio is single-threaded, so when you're blocking on the time.sleep(10) call in your second example, there's no way for the event loop to run. That means when the timeout you set using wait_for expires, the event loop won't be able to take action on it. The event loop doesn't get an opportunity to run again until coro exits, at which point it's too late.
This is why in general, you should always avoid any blocking calls that aren't asynchronous; any time a call blocks without yielding to the event loop, nothing else in your program can execute, which is probably not what you want. If you really need to do a long, blocking operation, you should try to use BaseEventLoop.run_in_executor to run it in a thread or process pool, which will avoid blocking the event loop:
import asyncio
import time
from concurrent.futures import ProcessPoolExecutor

@asyncio.coroutine
def coro(loop):
    ex = ProcessPoolExecutor(2)
    yield from loop.run_in_executor(ex, time.sleep, 10)  # This can be interrupted.

loop = asyncio.get_event_loop()
loop.run_until_complete(asyncio.wait_for(coro(loop), 1))
Thx @dano for your answer. If running a coroutine is not a hard requirement, here is a reworked, more compact version:
import asyncio, time

timeout = 0.5
loop = asyncio.get_event_loop()
future = asyncio.wait_for(loop.run_in_executor(None, time.sleep, 2), timeout)
try:
    loop.run_until_complete(future)
    print('Thx for letting me sleep')
except asyncio.exceptions.TimeoutError:
    print('I need more sleep !')
For the curious, a little debugging in my Python 3.8.2 showed that passing None as an executor results in the creation of a _default_executor, as follows:
self._default_executor = concurrent.futures.ThreadPoolExecutor()
The examples I've seen for timeout handling are very trivial. In reality, my app is a bit more complex. The sequence is:
When a client connects to the server, have the server create another connection to an internal server
When the internal server connection is ok, wait for the client to send data. Based on this data we may make a query to the internal server.
When there is data to send to the internal server, send it. Since the internal server sometimes doesn't respond fast enough, wrap this request in a timeout.
If the operation times out, collapse all connections to signal the client about the error
To achieve all of the above while keeping the event loop running, the resulting code contains the following:
def connection_made(self, transport):
    self.client_lock_coro = self.client_lock.acquire()
    asyncio.ensure_future(self.client_lock_coro).add_done_callback(self._got_client_lock)

def _got_client_lock(self, task):
    task.result()  # True at this point, but the call will surface any exception
    coro = self.loop.create_connection(lambda: ClientProtocol(self),
                                       self.connect_info[0], self.connect_info[1])
    asyncio.ensure_future(asyncio.wait_for(coro,
                                           self.client_connect_timeout
                                           )).add_done_callback(self.connected_server)

def connected_server(self, task):
    transport, client_object = task.result()
    self.client_transport = transport
    self.client_lock.release()

def data_received(self, data_in):
    asyncio.ensure_future(self.send_to_real_server(data_in, self.client_send_timeout))

@asyncio.coroutine
def send_to_real_server(self, message, timeout=5.0):
    yield from self.client_lock.acquire()
    asyncio.ensure_future(asyncio.wait_for(self._send_to_real_server(message),
                                           timeout, loop=self.loop)
                          ).add_done_callback(self.sent_to_real_server)

@asyncio.coroutine
def _send_to_real_server(self, message):
    self.client_transport.write(message)

def sent_to_real_server(self, task):
    task.result()
    self.client_lock.release()
