Should I keep the plain while True loop in my code (without asyncio), or should I use an asyncio event loop to accomplish the same result?
Currently I'm working on a kind of "worker" that is connected to zeromq, receives some data, and then performs an HTTP request to an external tool (server). Everything is written with normal blocking IO. Does it make sense to use an asyncio event loop to get rid of the while True: ...?
In the future it might be rewritten fully in asyncio, but for now I'm hesitant to start with asyncio.
I'm new to asyncio and not all parts of the library are clear to me yet :)
Thanks :)
If you want to start writing asyncio code with a library that doesn't support it, you can use BaseEventLoop.run_in_executor.
This allows you to submit a callable to a ThreadPoolExecutor or a ProcessPoolExecutor and get the result asynchronously. The default executor is a thread pool with 5 worker threads.
Example:
# Python 3.4
@asyncio.coroutine
def some_coroutine(*some_args, loop=None):
    while True:
        [...]
        result = yield from loop.run_in_executor(
            None,  # Use the default executor
            some_blocking_io_call,
            *some_args)
        [...]
# Python 3.5
async def some_coroutine(*some_args, loop=None):
    while True:
        [...]
        result = await loop.run_in_executor(
            None,  # Use the default executor
            some_blocking_io_call,
            *some_args)
        [...]
loop = asyncio.get_event_loop()
coro = some_coroutine(*some_arguments, loop=loop)
loop.run_until_complete(coro)
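Applied to the worker described in the question, a rough sketch (Python 3.5+) could look like the following; receive_from_zmq and call_external_tool are hypothetical stand-ins for your existing blocking zeromq and HTTP code:
import asyncio
import time

def receive_from_zmq():
    # hypothetical stand-in for the blocking zeromq receive
    time.sleep(1)
    return b"some data"

def call_external_tool(data):
    # hypothetical stand-in for the blocking HTTP request
    time.sleep(1)
    return "response for %r" % data

async def worker(loop):
    while True:
        # both blocking calls run in the default thread pool executor,
        # so the event loop stays free while they wait on IO
        data = await loop.run_in_executor(None, receive_from_zmq)
        result = await loop.run_in_executor(None, call_external_tool, data)
        print(result)

loop = asyncio.get_event_loop()
loop.run_until_complete(worker(loop))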
I have worked through most of the examples but am still learning async in Python. I am having trouble understanding why this example code will not print "i am async!".
import asyncio
from threading import Thread

async def cor1():
    print("i am async!")

def myasync(loop):
    print("Async running")
    loop.run_forever()
    print("Async ended?")

def main():
    this_threads_event_loop = asyncio.get_event_loop()
    t_1 = Thread(target=myasync, args=(this_threads_event_loop,))
    t_1.start()
    print("beginning the async loop")
    t1 = this_threads_event_loop.create_task(cor1())
    print("Finished cor1")

main()
Your code attempts to submit tasks to the event loop from a different thread. To do that, you must use run_coroutine_threadsafe:
def main():
    loop = asyncio.get_event_loop()
    # start the event loop in a separate thread
    t_1 = Thread(target=myasync, args=(loop,))
    t_1.start()
    # submit the coroutine to the event loop running in the
    # other thread
    f1 = asyncio.run_coroutine_threadsafe(cor1(), loop)
    # wait for the coroutine to finish, by asking for its result
    f1.result()
    print("Finished cor1")
Please note that combining asyncio and threads should only be done in special circumstances, such as when introducing asyncio to a legacy application where the new functionality needs to be added gradually. If you are writing new code, you almost certainly want main() to be a coroutine, run from the top level using asyncio.run(main()).
To run a legacy synchronous function from asyncio code, you can always use run_in_executor.
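As a minimal sketch of that recommended shape (legacy_fetch here is a hypothetical stand-in for a blocking legacy function):
import asyncio
import time

def legacy_fetch():
    # hypothetical blocking legacy function
    time.sleep(1)
    return "data"

async def main():
    loop = asyncio.get_running_loop()
    # off-load the blocking call to the default thread pool executor
    result = await loop.run_in_executor(None, legacy_fetch)
    print(result)

asyncio.run(main())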
I am using asyncio for a network framework.
In the code below (low_level is our low-level function, the main block is our program entry point, and user_func is a user-defined function):
import asyncio

loop = asyncio.get_event_loop()
""":type :asyncio.AbstractEventLoop"""

def low_level():
    yield from asyncio.sleep(2)

def user_func():
    yield from low_level()

if __name__ == '__main__':
    co = user_func()
    loop.run_until_complete(co)
I want to wrap low_level as a normal function rather than a coroutine (for compatibility etc.), but low_level runs in the event loop. How can I wrap it as a normal function?
Because low_level is a coroutine, it can only be used by running an asyncio event loop. If you want to be able to call it from synchronous code that isn't running an event loop, you have to provide a wrapper that actually launches an event loop and runs the coroutine until completion:
def sync_low_level():
    loop = asyncio.get_event_loop()
    loop.run_until_complete(low_level())
If you want to be able to call low_level() from a function that is part of the running event loop, have it block for two seconds, but not have to use yield from, the answer is that you can't. The event loop is single-threaded; whenever execution is inside one of your functions, the event loop is blocked, and no other events or callbacks can be processed. The only ways for a function running in the event loop to give control back to the event loop are to 1) return or 2) use yield from. The asyncio.sleep call in low_level will never be able to complete unless you do one of those two things.
Now, I suppose you could create an entirely new event loop, and use that to run the sleep synchronously from a coroutine running as part of the default event loop:
import asyncio

loop = asyncio.get_event_loop()

@asyncio.coroutine
def low_level(loop=None):
    yield from asyncio.sleep(2, loop=loop)

def sync_low_level():
    new_loop = asyncio.new_event_loop()
    new_loop.run_until_complete(low_level(loop=new_loop))

@asyncio.coroutine
def user_func():
    sync_low_level()

if __name__ == "__main__":
    loop.run_until_complete(user_func())
But I'm really not sure why you'd want to do that.
If you just want to be able to make low_level act like a method returning a Future, so you can attach callbacks, etc. to it, just wrap it in asyncio.async():
loop = asyncio.get_event_loop()

def sleep_done(fut):
    print("Done sleeping")
    loop.stop()

@asyncio.coroutine
def low_level(loop=None):
    yield from asyncio.sleep(2, loop=loop)

def user_func():
    fut = asyncio.async(low_level())
    fut.add_done_callback(sleep_done)

if __name__ == "__main__":
    loop.call_soon(user_func)
    loop.run_forever()
Output:
<2 second delay>
"Done sleeping"
Also, in your example code, you should use the @asyncio.coroutine decorator for both low_level and user_func, as stated in the asyncio docs:
A coroutine is a generator that follows certain conventions. For documentation purposes, all coroutines should be decorated with @asyncio.coroutine, but this cannot be strictly enforced.
Edit:
Here's how a user from a synchronous web framework could call into your application without blocking other requests:
import threading

@asyncio.coroutine
def low_level(loop=None):
    yield from asyncio.sleep(2, loop=loop)

def thr_low_level():
    loop = asyncio.new_event_loop()
    t = threading.Thread(target=loop.run_until_complete, args=(low_level(loop=loop),))
    t.start()
    t.join()
If a request being handled by Flask calls thr_low_level, it will block until the request is done, but the GIL should be released for all of the asynchronous I/O going on in low_level, allowing other requests to be handled in separate threads.
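As an illustration only (the route and app setup below are assumptions, not part of the original answer), a Flask handler could call it like this:
from flask import Flask

app = Flask(__name__)

@app.route("/slow")
def slow_endpoint():
    # this request's thread blocks for about two seconds,
    # but other requests keep being served in their own threads
    thr_low_level()
    return "done"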
I have a TCP server implemented in Python using asyncio's create_server.
I call the coroutine start_server with a connection_handler_cb.
Now my question is this: let's say my connection_handler_cb looks something
like this:
def connection_handler_cb(reader, writer):
    while True:
        yield from reader.read()
        # --do some computation--
I know that only the yield from parts run "concurrently" (I know it's not really concurrent); all of the "--do some computation--" work is executed sequentially and prevents everything else from running in the loop.
Let's say we are talking about a TCP server with multiple clients trying to send. Can this situation cause a send timeout on the other side, the client side?
If your clients are waiting for a response from the server, and that response isn't sent until the computation is done, then it's possible the clients could eventually timeout, if the computations took long enough. More likely, though, is that the clients will just hang until the computations are done and the event loop gets unblocked.
In any case, if you're worried about timeouts or hangs, use loop.run_in_executor to run your computations in a background process (this is preferable), or thread (probably not a good choice since you're doing CPU-bound computations) without blocking the event loop:
import asyncio
import multiprocessing
from concurrent.futures import ProcessPoolExecutor

def comp_func(arg1, arg2):
    # Do computation here
    return output

def connection_handler_cb(reader, writer):
    while True:
        yield from reader.read()
        # Do computation in a background process
        # This won't block the event loop.
        output = yield from loop.run_in_executor(None, comp_func, arg1, arg2)

if __name__ == "__main__":
    loop = asyncio.get_event_loop()
    loop.set_default_executor(
        ProcessPoolExecutor(multiprocessing.cpu_count()))
    asyncio.async(asyncio.start_server(connection_handler_cb, ...))
    loop.run_forever()
Using asyncio, a coroutine can be executed with a timeout so that it gets cancelled after the timeout:
@asyncio.coroutine
def coro():
    yield from asyncio.sleep(10)

loop = asyncio.get_event_loop()
loop.run_until_complete(asyncio.wait_for(coro(), 5))
The above example works as expected (it times out after 5 seconds).
However, when the coroutine doesn't use asyncio.sleep() (or other asyncio coroutines) it doesn't seem to time out. Example:
@asyncio.coroutine
def coro():
    import time
    time.sleep(10)

loop = asyncio.get_event_loop()
loop.run_until_complete(asyncio.wait_for(coro(), 1))
This takes more than 10 seconds to run because the time.sleep(10) isn't cancelled. Is it possible to enforce the cancellation of the coroutine in such a case?
If asyncio should be used to solve this, how could I do that?
No, you can't interrupt a coroutine unless it yields control back to the event loop, which means it needs to be inside a yield from call. asyncio is single-threaded, so when you're blocking on the time.sleep(10) call in your second example, there's no way for the event loop to run. That means when the timeout you set using wait_for expires, the event loop won't be able to take action on it. The event loop doesn't get an opportunity to run again until coro exits, at which point it's too late.
This is why in general, you should always avoid any blocking calls that aren't asynchronous; any time a call blocks without yielding to the event loop, nothing else in your program can execute, which is probably not what you want. If you really need to do a long, blocking operation, you should try to use BaseEventLoop.run_in_executor to run it in a thread or process pool, which will avoid blocking the event loop:
import asyncio
import time
from concurrent.futures import ProcessPoolExecutor

@asyncio.coroutine
def coro(loop):
    ex = ProcessPoolExecutor(2)
    yield from loop.run_in_executor(ex, time.sleep, 10)  # This can be interrupted.

loop = asyncio.get_event_loop()
loop.run_until_complete(asyncio.wait_for(coro(loop), 1))
Thanks @dano for your answer. If running a coroutine is not a hard requirement, here is a reworked, more compact version:
import asyncio, time

timeout = 0.5
loop = asyncio.get_event_loop()
future = asyncio.wait_for(loop.run_in_executor(None, time.sleep, 2), timeout)
try:
    loop.run_until_complete(future)
    print('Thx for letting me sleep')
except asyncio.exceptions.TimeoutError:
    print('I need more sleep !')
For the curious, a little debugging in my Python 3.8.2 showed that passing None as an executor results in the creation of a _default_executor, as follows:
self._default_executor = concurrent.futures.ThreadPoolExecutor()
The examples I've seen for timeout handling are very trivial. In reality, my app is a bit more complex. The sequence is:
1. When a client connects to the server, have the server create another connection to an internal server.
2. When the internal server connection is OK, wait for the client to send data. Based on this data we may make a query to the internal server.
3. When there is data to send to the internal server, send it. Since the internal server sometimes doesn't respond fast enough, wrap this request in a timeout.
4. If the operation times out, collapse all connections to signal the error to the client.
To achieve all of the above while keeping the event loop running, the resulting code contains the following:
def connection_made(self, transport):
    self.client_lock_coro = self.client_lock.acquire()
    asyncio.ensure_future(self.client_lock_coro).add_done_callback(self._got_client_lock)

def _got_client_lock(self, task):
    task.result()  # True at this point, but the call will trigger any exceptions
    coro = self.loop.create_connection(lambda: ClientProtocol(self),
                                       self.connect_info[0], self.connect_info[1])
    asyncio.ensure_future(asyncio.wait_for(coro,
                                           self.client_connect_timeout
                                           )).add_done_callback(self.connected_server)

def connected_server(self, task):
    transport, client_object = task.result()
    self.client_transport = transport
    self.client_lock.release()

def data_received(self, data_in):
    asyncio.ensure_future(self.send_to_real_server(message, self.client_send_timeout))

def send_to_real_server(self, message, timeout=5.0):
    yield from self.client_lock.acquire()
    asyncio.ensure_future(asyncio.wait_for(self._send_to_real_server(message),
                                           timeout, loop=self.loop)
                          ).add_done_callback(self.sent_to_real_server)

@asyncio.coroutine
def _send_to_real_server(self, message):
    self.client_transport.write(message)

def sent_to_real_server(self, task):
    task.result()
    self.client_lock.release()