I have a TCP server implemented in Python using asyncio's create_server.
I call the coroutine start_server with a connection_handler_cb.
Now my question is this: let's say my connection_handler_cb looks something
like this:
def connection_handler_cb(reader, writer):
    while True:
        data = yield from reader.read()
        # --do some computation--
I know that only the yield from parts run "concurrently" (I know it's not really concurrent); the whole "--do some computation--" part is executed sequentially and prevents everything else in the loop from running.
Let's say we are talking about a TCP server with multiple clients trying to send. Can this situation cause a send timeout on the other side, the client side?
If your clients are waiting for a response from the server, and that response isn't sent until the computation is done, then it's possible the clients could eventually time out, if the computations take long enough. More likely, though, the clients will just hang until the computations are done and the event loop gets unblocked.
In any case, if you're worried about timeouts or hangs, use loop.run_in_executor to run your computations in a background process (this is preferable) or a background thread (probably not a good choice since you're doing CPU-bound computations) without blocking the event loop:
import asyncio
import multiprocessing
from concurrent.futures import ProcessPoolExecutor

def comp_func(arg1, arg2):
    # Do computation here
    return output

@asyncio.coroutine
def connection_handler_cb(reader, writer):
    while True:
        data = yield from reader.read()
        # Do the computation in a background process.
        # This won't block the event loop.
        output = yield from loop.run_in_executor(None, comp_func, arg1, arg2)

if __name__ == "__main__":
    loop = asyncio.get_event_loop()
    loop.set_default_executor(
        ProcessPoolExecutor(multiprocessing.cpu_count()))
    asyncio.async(asyncio.start_server(connection_handler_cb, ...))
    loop.run_forever()
@asyncio.coroutine
def listener():
    while True:
        message = yield from websocket.receive_message()
        if message:
            yield from handle(message)

loop = asyncio.get_event_loop()
loop.run_until_complete(listener())
Let's say I'm using websockets with asyncio, meaning I receive messages from websockets. And when I receive a message, I want to handle it, but I'm losing all the async benefits in my code, because the yield from handle(message) is definitely blocking... How could I make it non-blocking? Like, handle multiple messages at the same time, without having to wait for one message to be handled before handling another.
Thanks.
If you don't care about the return value from handle(message), you can simply create a new Task for it, which will run in the event loop alongside your websocket reader. Here is a simple example:
@asyncio.coroutine
def listener():
    while True:
        message = yield from websocket.receive_message()
        if message:
            asyncio.ensure_future(handle(message))
ensure_future will create a task and attach it to the default event loop. Since the loop is already running, the task will get processed concurrently alongside your websocket reader. In fact, if it is a slow-running, I/O-bound task (like sending an email), you could easily have a few dozen handle(message) tasks running at once. They are created dynamically when needed and destroyed when finished (with much lower overhead than spawning threads).
If you want a bit more control, you could simply write to an asyncio.Queue in the reader and have a task pool of a fixed size that can consume the queue, a typical pattern in multi-threaded or multi-process programming.
@asyncio.coroutine
def consumer(queue):
    while True:
        message = yield from queue.get()
        yield from handle(message)

@asyncio.coroutine
def listener(queue):
    for i in range(5):
        asyncio.ensure_future(consumer(queue))
    while True:
        message = yield from websocket.receive_message()
        if message:
            yield from queue.put(message)

q = asyncio.Queue()
loop = asyncio.get_event_loop()
loop.run_until_complete(listener(q))
Should I replace the while True loop in my code (without asyncio), or should I use the asyncio event loop to accomplish the same result?
Currently I work on a kind of "worker" that is connected to ZeroMQ, receives some data, and then performs a request (HTTP) to an external tool (server). Everything is written with normal blocking IO. Does it make sense to use the asyncio event loop to get rid of while True: ...?
In the future it might be rewritten fully in asyncio, but for now I'm afraid to start with asyncio.
I'm new to asyncio and not all parts of this library are clear to me :)
Thanks :)
If you want to start writing asyncio code with a library that doesn't support it, you can use BaseEventLoop.run_in_executor.
This allows you to submit a callable to a ThreadPoolExecutor or a ProcessPoolExecutor and get the result asynchronously. The default executor is a thread pool of 5 threads.
Example:
# Python 3.4
@asyncio.coroutine
def some_coroutine(*some_args, loop=None):
    while True:
        [...]
        result = yield from loop.run_in_executor(
            None,  # Use the default executor
            some_blocking_io_call,
            *some_args)
        [...]

# Python 3.5
async def some_coroutine(*some_args, loop=None):
    while True:
        [...]
        result = await loop.run_in_executor(
            None,  # Use the default executor
            some_blocking_io_call,
            *some_args)
        [...]

loop = asyncio.get_event_loop()
coro = some_coroutine(*some_arguments, loop=loop)
loop.run_until_complete(coro)
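Applied to the worker described in the question, the blocking HTTP request to the external tool maps directly onto this pattern. A minimal sketch, assuming the requests package stands in for whatever blocking HTTP client you actually use:

import asyncio
import requests  # assumed blocking HTTP client

async def query_external_tool(loop, url):
    # requests.get blocks, so run it in the default executor;
    # the event loop stays free to process other work meanwhile.
    response = await loop.run_in_executor(None, requests.get, url)
    return response.text

loop = asyncio.get_event_loop()
result = loop.run_until_complete(
    query_external_tool(loop, "http://example.com"))

The ZeroMQ receive loop could be migrated the same way, one blocking call at a time, which makes the gradual transition to asyncio mentioned in the question possible.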
This is the basic TCP server from the asyncio tutorial:
import asyncio

class EchoServerClientProtocol(asyncio.Protocol):
    def connection_made(self, transport):
        peername = transport.get_extra_info('peername')
        print('Connection from {}'.format(peername))
        self.transport = transport

    def data_received(self, data):
        message = data.decode()
        print('Data received: {!r}'.format(message))
        print('Send: {!r}'.format(message))
        self.transport.write(data)
        print('Close the client socket')
        self.transport.close()

loop = asyncio.get_event_loop()
# Each client connection will create a new protocol instance
coro = loop.create_server(EchoServerClientProtocol, '127.0.0.1', 8888)
server = loop.run_until_complete(coro)

# Serve requests until Ctrl+C is pressed
print('Serving on {}'.format(server.sockets[0].getsockname()))
try:
    loop.run_forever()
except KeyboardInterrupt:
    pass

# Close the server
server.close()
loop.run_until_complete(server.wait_closed())
loop.close()
Like all the other examples I found, it uses the blocking loop.run_forever().
How do I start the listening server and do something else in the meantime?
I have tried moving the server startup into a function and starting that function with asyncio.async(), but with no success.
What am I missing here?
You can schedule several concurrent asyncio tasks before calling loop.run_forever().
@asyncio.coroutine
def other_task_coroutine():
    pass  # do something

start_tcp_server_task = loop.create_task(loop.create_server(
    EchoServerClientProtocol, '127.0.0.1', 8888))
other_task = loop.create_task(other_task_coroutine())

loop.run_forever()
When you call loop.create_task(loop.create_server()) or loop.create_task(other_task_coroutine()), nothing is actually executed yet: a coroutine object is created and wrapped in a task (consider the task to be a shell and the coroutine an instance of the code that will be executed in the task). The tasks are scheduled on the loop when created.
The loop will execute start_tcp_server_task first (as it's scheduled first) until a blocking IO event is pending or the passive socket is ready to listen for incoming connections.
You can see asyncio as a non-preemptible scheduler running on one CPU: once the first task interrupts itself or is done, the second task will be executed. Hence, when one task is executing, the other one has to wait until the running task finishes or yields (or "awaits", with Python 3.5). "Yielding" (yield from client.read()) or "awaiting" (await client.read()) means that the task gives control back to the loop's scheduler until client.read() can proceed (data is available on the socket).
Once the task has given control back to the loop, the loop can schedule the other pending tasks, process incoming events, and schedule the tasks that were waiting for those events. Once there is nothing left to do, the loop performs the only blocking call of the process: it sleeps until the kernel notifies it that events are ready to be processed.
In this context, you must understand that when using asyncio, everything running in the process must run asynchronously so the loop can do its work. You cannot use multiprocessing objects in the loop.
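To make the cooperative behavior concrete, here is one hedged way to fill in the other_task_coroutine stub from the example above: do a small chunk of work, then yield back to the loop so the server keeps serving (the printed message is just a placeholder):

@asyncio.coroutine
def other_task_coroutine():
    while True:
        # Placeholder for a small, non-blocking chunk of work.
        print('doing something else')
        # Give control back to the loop; the TCP server runs in the meantime.
        yield from asyncio.sleep(1)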
Note that asyncio.async(coroutine(), loop=loop) is equivalent to loop.create_task(coroutine()).
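For instance, on a Python 3.4.2+ loop these two lines schedule the same kind of task (asyncio.async was later renamed to asyncio.ensure_future in Python 3.4.4, so prefer create_task or ensure_future where available):

task_a = asyncio.async(other_task_coroutine(), loop=loop)  # older spelling
task_b = loop.create_task(other_task_coroutine())          # Python 3.4.2+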
Additionally, you can consider running whatever you want in an executor. For example:
coro = loop.create_server(EchoServerClientProtocol, '127.0.0.1', 8888)
server = loop.run_until_complete(coro)

async def execute(loop):
    await loop.run_in_executor(None, your_func_here, *args)

asyncio.async(execute(loop))
loop.run_forever()
run_in_executor will run the function you give it in an executor, which won't block your server.
I am trying to understand the asyncio library, specifically with regard to sockets. I have written some code in an attempt to gain understanding.
I wanted to run a sender and a receiver socket asynchronously. I got to the point where all data is sent up until the last item, but then I have to run one more loop iteration. Looking at how to do this, I found this link from Stack Overflow, which I implemented below -- but what is going on here? Is there a better/more sane way to do this than to call stop followed by run_forever?
The documentation for stop() in the event loop is:
Stop running the event loop.
Every callback scheduled before stop() is called will run. Callbacks scheduled after stop() is called will not run. However, those callbacks will run if run_forever() is called again later.
And run_forever()'s documentation is:
Run until stop() is called.
Questions:
why in the world is run_forever the only way to run_once? This doesn't even make sense
Is there a better way to do this?
Does my code look like a reasonable way to program with the asyncio library?
Is there a better way to add tasks to the event loop besides asyncio.async()? loop.create_task gives an error on my Linux system.
https://gist.github.com/cloudformdesign/b30e0860497f19bd6596
The stop(); run_forever() trick works because of how stop is implemented:
def stop(self):
    """Stop running the event loop.

    Every callback scheduled before stop() is called will run.
    Callbacks scheduled after stop() is called won't. However,
    those callbacks will run if run() is called again later.
    """
    self.call_soon(_raise_stop_error)

def _raise_stop_error(*args):
    raise _StopError
So, next time the event loop runs and executes pending callbacks, it's going to call _raise_stop_error, which raises _StopError. The run_forever loop will break only on that specific exception:
def run_forever(self):
    """Run until stop() is called."""
    if self._running:
        raise RuntimeError('Event loop is running.')
    self._running = True
    try:
        while True:
            try:
                self._run_once()
            except _StopError:
                break
    finally:
        self._running = False
So, by scheduling a stop() and then calling run_forever, you end up running one iteration of the event loop, then stopping once it hits the _raise_stop_error callback. You may have also noticed that _run_once is defined and called by run_forever. You could call that directly, but it can sometimes block if there aren't any callbacks ready to run, which may not be desirable. I don't think there's a cleaner way to do this currently; that answer was provided by Andrew Svetlov, who is an asyncio contributor, so he would probably know if there were a better option. :)
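Condensed, the trick looks like this (a minimal sketch; it assumes some callbacks are already scheduled on the loop):

loop = asyncio.get_event_loop()
loop.stop()         # schedules _raise_stop_error via call_soon
loop.run_forever()  # runs the already-scheduled callbacks, then breaks on _StopError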
In general, your code looks reasonable, though I think you shouldn't be using this run_once approach to begin with. It's not deterministic: with a longer list or a slower system, it might require more than two extra iterations to print everything. Instead, you should just send a sentinel that tells the receiver to shut down, and then wait on both the send and receive coroutines to finish:
import sys
import time
import socket
import asyncio
addr = ('127.0.0.1', 1064)
SENTINEL = b"_DONE_"
# ... (This stuff is the same)
@asyncio.coroutine
def sending(addr, dataiter):
    loop = asyncio.get_event_loop()
    for d in dataiter:
        print("Sending:", d)
        sock = socket.socket()
        yield from send_close(loop, sock, addr, str(d).encode())
    # Send a sentinel
    sock = socket.socket()
    yield from send_close(loop, sock, addr, SENTINEL)

@asyncio.coroutine
def receiving(addr):
    loop = asyncio.get_event_loop()
    sock = socket.socket()
    try:
        sock.setblocking(False)
        sock.bind(addr)
        sock.listen(5)
        while True:
            data = yield from accept_recv(loop, sock)
            if data == SENTINEL:  # Got a sentinel
                return
            print("Received:", data)
    finally:
        sock.close()

def main():
    loop = asyncio.get_event_loop()
    # Add these items to the event loop
    recv = asyncio.async(receiving(addr), loop=loop)
    send = asyncio.async(sending(addr, range(10)), loop=loop)
    loop.run_until_complete(asyncio.wait([recv, send]))

main()
Finally, asyncio.async is the right way to add tasks to the event loop. create_task was added in Python 3.4.2, so if you have an earlier version it won't exist.
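If your code needs to run on both sides of that boundary, a hedged compatibility shim could look like the following; schedule_task is a hypothetical helper name, and note that the asyncio.async spelling stopped parsing entirely once async became a keyword in Python 3.7, so this only makes sense for the 3.4-era code in this answer:

import sys
import asyncio

def schedule_task(loop, coro):
    # loop.create_task only exists on Python 3.4.2+;
    # fall back to asyncio.async on older versions.
    if sys.version_info >= (3, 4, 2):
        return loop.create_task(coro)
    return asyncio.async(coro, loop=loop)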
Using asyncio, a coroutine can be executed with a timeout so that it gets cancelled after the timeout:
@asyncio.coroutine
def coro():
    yield from asyncio.sleep(10)

loop = asyncio.get_event_loop()
loop.run_until_complete(asyncio.wait_for(coro(), 5))
The above example works as expected (it times out after 5 seconds).
However, when the coroutine doesn't use asyncio.sleep() (or other asyncio coroutines) it doesn't seem to time out. Example:
@asyncio.coroutine
def coro():
    import time
    time.sleep(10)

loop = asyncio.get_event_loop()
loop.run_until_complete(asyncio.wait_for(coro(), 1))
This takes more than 10 seconds to run because the time.sleep(10) isn't cancelled. Is it possible to enforce the cancellation of the coroutine in such a case?
If asyncio should be used to solve this, how could I do that?
No, you can't interrupt a coroutine unless it yields control back to the event loop, which means it needs to be inside a yield from call. asyncio is single-threaded, so when you're blocking on the time.sleep(10) call in your second example, there's no way for the event loop to run. That means when the timeout you set using wait_for expires, the event loop won't be able to take action on it. The event loop doesn't get an opportunity to run again until coro exits, at which point it's too late.
This is why in general, you should always avoid any blocking calls that aren't asynchronous; any time a call blocks without yielding to the event loop, nothing else in your program can execute, which is probably not what you want. If you really need to do a long, blocking operation, you should try to use BaseEventLoop.run_in_executor to run it in a thread or process pool, which will avoid blocking the event loop:
import asyncio
import time
from concurrent.futures import ProcessPoolExecutor

@asyncio.coroutine
def coro(loop):
    ex = ProcessPoolExecutor(2)
    yield from loop.run_in_executor(ex, time.sleep, 10)  # This can be interrupted.

loop = asyncio.get_event_loop()
loop.run_until_complete(asyncio.wait_for(coro(loop), 1))
Thanks @dano for your answer. If running a coroutine is not a hard requirement, here is a reworked, more compact version:
import asyncio, time

timeout = 0.5
loop = asyncio.get_event_loop()
future = asyncio.wait_for(loop.run_in_executor(None, time.sleep, 2), timeout)
try:
    loop.run_until_complete(future)
    print('Thx for letting me sleep')
except asyncio.exceptions.TimeoutError:
    print('I need more sleep!')
For the curious, a little debugging in my Python 3.8.2 showed that passing None as an executor results in the creation of a _default_executor, as follows:
self._default_executor = concurrent.futures.ThreadPoolExecutor()
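If you would rather not rely on that implicit default, you can install your own executor before the first run_in_executor(None, ...) call. A minimal sketch, where the max_workers value is an arbitrary assumption:

import asyncio
from concurrent.futures import ThreadPoolExecutor

loop = asyncio.get_event_loop()
# This executor is used whenever run_in_executor is passed None.
loop.set_default_executor(ThreadPoolExecutor(max_workers=4))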
The examples I've seen for timeout handling are very trivial. In reality, my app is a bit more complex. The sequence is:
When a client connects to the server, have the server create another connection to an internal server.
When the internal server connection is OK, wait for the client to send data. Based on this data we may make a query to the internal server.
When there is data to send to the internal server, send it. Since the internal server sometimes doesn't respond fast enough, wrap this request in a timeout.
If the operation times out, collapse all connections to signal the client about the error.
To achieve all of the above while keeping the event loop running, the resulting code contains the following:
def connection_made(self, transport):
    self.client_lock_coro = self.client_lock.acquire()
    asyncio.ensure_future(self.client_lock_coro).add_done_callback(
        self._got_client_lock)

def _got_client_lock(self, task):
    task.result()  # True at this point, but the call will trigger any exceptions
    coro = self.loop.create_connection(lambda: ClientProtocol(self),
                                       self.connect_info[0],
                                       self.connect_info[1])
    asyncio.ensure_future(asyncio.wait_for(
        coro, self.client_connect_timeout)).add_done_callback(
            self.connected_server)

def connected_server(self, task):
    transport, client_object = task.result()
    self.client_transport = transport
    self.client_lock.release()

def data_received(self, data_in):
    asyncio.ensure_future(
        self.send_to_real_server(data_in, self.client_send_timeout))

@asyncio.coroutine
def send_to_real_server(self, message, timeout=5.0):
    yield from self.client_lock.acquire()
    asyncio.ensure_future(asyncio.wait_for(
        self._send_to_real_server(message), timeout, loop=self.loop)
    ).add_done_callback(self.sent_to_real_server)

@asyncio.coroutine
def _send_to_real_server(self, message):
    self.client_transport.write(message)

def sent_to_real_server(self, task):
    task.result()
    self.client_lock.release()
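One thing this snippet leaves open is step 4 of the sequence: sent_to_real_server calls task.result(), so a timeout raises asyncio.TimeoutError inside the callback instead of collapsing the connections. A hedged sketch of how that callback could handle it, assuming connection_made also stored the client-facing transport as self.transport (not shown in the snippet above):

def sent_to_real_server(self, task):
    try:
        task.result()
    except asyncio.TimeoutError:
        # Step 4: collapse both connections so the client sees the error.
        self.client_transport.close()
        self.transport.close()
    finally:
        self.client_lock.release()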