My code execution does not reach the print statement: print("I want to display after MyClass has started")
Why is this? I thought the purpose of await asyncio.sleep() was to unblock execution so that subsequent lines of code can run. Is that not the case?
import asyncio

class MyClass:
    def __init__(self):
        self.input = False
        asyncio.run(self.start())
        print("I want to display after MyClass has started")  # This line is never reached.

    async def start(self):
        while True:
            print("Changing state...")
            if self.input:
                print("I am on.")
                break
            await asyncio.sleep(1)

m = MyClass()
m.input = True  # This line is never reached! Why?
print("I want to display after MyClass is started")
When I execute, it keeps printing "Changing state...". Even when I press Ctrl+C to quit, execution continues. How can I properly terminate it? Sorry, I am new to Python.
EDIT:
I appreciate that the common use of asyncio is for running two or more separate functions asynchronously. However, my class is one which will be responding to changes in its state. For example, I intend to write code in the setters to do stuff when the class object's attributes change, WHILE still having a while True event loop running in the background. Is there any way to permit this? I have tried running the event loop in its own thread. However, that thread then dominates and the class object's response times run into several seconds. This may be due to the GIL (Global Interpreter Lock), which we can do nothing about. I have also tried using multiprocessing, but then I lose access to the properties and methods of the object, as parallel processes run in their own memory spaces.
In the __init__ method of MyClass you invoke asyncio.run() - this call executes the given coroutine until it terminates. In your case, since start() contains a while True loop, it never terminates: self.input can only be set to True on a line that runs after asyncio.run() returns, which never happens.
Here is a slight modification of your code that perhaps shows the concurrency effect you're after -
import asyncio

class MyClass:
    def __init__(self):
        self.input = False
        asyncio.run(self.main())
        print("I want to display after MyClass has been initialized.")  # This line is never reached.

    async def main(self):
        work1 = self.work1()
        work2 = self.work2()
        await asyncio.gather(work1, work2)

    async def work1(self):
        for idx in range(5):
            print('doing some work 1...')
            await asyncio.sleep(1)

    async def work2(self):
        for idx in range(5):
            print('doing some work 2...')
            await asyncio.sleep(1)

m = MyClass()
print("I want to display after MyClass is terminated")
Using: Python 3.8 on Windows 10.
I am working on a script that runs in a while loop.
At the end of the loop I want it to wait for 5 seconds for the user to give input and then restart, or exit if they do not give input.
I've never actually used it before, but I assumed asyncio would be useful because it allows for the definition of awaitables. However, bringing things together is proving more difficult than I'd anticipated.
import keyboard, asyncio, time as t

async def get():
    keyboard.record('enter', True)
    return True

async def countdown(time):
    while time > 0:
        print(f'This program will exit in {int(time)} seconds. Press enter to start over.', end='\r')
        await asyncio.sleep(1)
        # t.sleep(1)
        time -= 1
    print(f'This program will exit in 0 seconds. Press enter to start over.', end='\r')
    return False
async def main():
    running = True
    while running:
        # other code
        clock = asyncio.create_task(countdown(5))
        check = asyncio.create_task(get())
        done, pending = await asyncio.wait({clock, check}, return_when=asyncio.FIRST_COMPLETED)
        running = next(iter(done)).result()
        print(running, end='\r')
    print('bye')

asyncio.run(main())
As it stands, the process doesn't end if I wait for five seconds. It also doesn't visibly count down (my best, and most ridiculous, guess is that it may be looping too fast in main? Try holding down the "enter" key - with and without printing running).
Also, when I switch to time.sleep, the display works fine, but it doesn't seem as though the countdown function ever returns.
Previously I'd tried using an input statement instead of keyboard.record; that blocks. I had also tried using asyncio.wait_for; but the timeout never came, though again, the "enter" key was registered (that said, it wouldn't have printed the countdown even if it had worked). I also tried asyncio.as_completed but I was unable to parse anything useful from the iterable. Happy to be wrong there though!
Also, bonus points if you can generalize the countdown to non-integer time spans <3
The wait_for approach:
async def main():
    running = True
    while running:
        try:
            running = await asyncio.wait_for(get(), 5)
        except asyncio.TimeoutError:
            running = False
        print(running)
    print('bye')

asyncio.run(main())
The as_completed approach:
async def main():
    running = True
    while running:
        ## other code
        clock = asyncio.create_task(countdown(5))
        check = asyncio.create_task(get())
        for f in asyncio.as_completed({check, clock}):
            # print(f.cr_frame)
            # print(f.cr_await, f.cr_running, f.cr_origin, f.cr_frame, f.cr_code)
            print(f.cr_await, f.cr_running, f.cr_origin)
    print('bye')
Also, bonus points if you can generalize the countdown to non-integer time :P
cheers!
If you look in the docs for keyboard.record it says:
Note: this is a blocking function.
This is why your process didn't end after 5 seconds. It was blocking at keyboard.record('enter', True). If you are going to stick with the keyboard module, what you need to do is create a hook on the 'enter' key. I put a quick demo together with your code:
import asyncio
import keyboard

class Program:
    def __init__(self):
        self.enter_pressed = asyncio.Event()
        self._loop = asyncio.get_event_loop()

    async def get(self):
        await self.enter_pressed.wait()
        return True

    @staticmethod
    async def countdown(time):
        while time > 0:
            print(f'This program will exit in {int(time)} seconds. Press enter to start over.')
            await asyncio.sleep(1)
            # t.sleep(1)
            time -= 1
        print(f'This program will exit in 0 seconds. Press enter to start over.')
        return False

    def notify_enter(self, *args, **kwargs):
        self._loop.call_soon_threadsafe(self.enter_pressed.set)

    async def main(self):
        running = True
        while running:
            # other code
            keyboard.on_press_key('enter', self.notify_enter)
            self.enter_pressed.clear()
            clock = asyncio.create_task(self.countdown(5))
            check = asyncio.create_task(self.get())
            done, pending = await asyncio.wait({clock, check}, return_when=asyncio.FIRST_COMPLETED)
            keyboard.unhook_key('enter')  # remove the hook on the enter key again
            for task in pending:
                task.cancel()
            running = next(iter(done)).result()
            print(running)
        print('bye')

async def main():
    program = Program()
    await program.main()

asyncio.run(main())
There is a callback, notify_enter, that sets an asyncio.Event whenever it fires. The get() task waits for this event to trigger before it exits. Since I didn't know what your other code is doing, we don't bind a hook to the enter key's key-down event until right before you await both tasks, and we unbind it right after one of the tasks completes. I wrapped everything up in a class so the event is accessible in the callback, since there isn't a way to pass parameters in.
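As for the bonus points - generalizing the countdown to non-integer time spans - here is a hedged sketch that drives the display off the event loop's clock instead of counting whole seconds (the 0.1-second refresh step is an arbitrary choice):

import asyncio

async def countdown(duration, step=0.1):
    loop = asyncio.get_running_loop()
    end = loop.time() + duration
    remaining = duration
    while remaining > 0:
        print(f'This program will exit in {remaining:.1f} seconds. Press enter to start over.', end='\r')
        await asyncio.sleep(min(step, remaining))
        remaining = end - loop.time()  # recompute from the clock, so drift doesn't accumulate
    print('This program will exit in 0.0 seconds. Press enter to start over.')
    return False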
I have an application that needs to perform some processor-intensive work based on a websocket stream. I want to parallelize the CPU-intensive bits with multiprocessing, but I still need the async interface to handle the streaming parts of the application. To solve this problem I was hoping to make an awaitable version of multiprocessing.AsyncResult (the result of a multiprocessing.pool.Pool.apply_async call). However, I've run into some strange behavior.
My new awaitable pool result (which is a subclass of asyncio.Future) works fine as long as the pool result comes back before I start awaiting it. However, if I try to await the pool result before it has come back, then the program appears to stall indefinitely on the await statement.
I've checked the async-iterator behavior with next(future.__await__()): the iterator returns the future instance itself before the pool processing completes, and raises StopIteration after, as I'd expect.
Code is below.
import multiprocessing
import multiprocessing.pool
import asyncio
import time

class Awaitable_Multiprocessing_Pool(multiprocessing.pool.Pool):
    def __init__(self, *args, **kwargs):
        multiprocessing.pool.Pool.__init__(self, *args, **kwargs)

    def apply_awaitably(self, func, args=list(), kwargs=dict()):
        return Awaitable_Multiprocessing_Pool_Result(
            self,
            func,
            args,
            kwargs)

class Awaitable_Multiprocessing_Pool_Result(asyncio.Future):
    def __init__(self, pool, func, args=list(), kwargs=dict()):
        asyncio.Future.__init__(self)
        self.pool_result = pool.apply_async(
            func,
            args,
            kwargs,
            self.set_result,
            self.set_exception)

    def result(self):
        return self.pool_result.get()

    def done(self):
        return self.pool_result.ready()

def dummy_processing_fun():
    import time
    print('start processing')
    time.sleep(4)
    print('finished processing')
    return 'result'

if __name__ == '__main__':
    async def main():
        pool = Awaitable_Multiprocessing_Pool(2)
        while True:
            future = pool.apply_awaitably(dummy_processing_fun, [])
            # print(next(future.__await__()))  # would show same as print(future)
            # print(await future)  # would stall indefinitely because pool result isn't in
            time.sleep(10)  # NOTE: you may have to make this longer to account for pool startup time on the first iteration
            # print(next(future.__await__()))  # would raise StopIteration
            print(await future)  # prints 'result'

    asyncio.run(main())
Am I missing something obvious here? I think I have all the essential elements of an awaitable working correctly in part because of the fact that I can await successfully in some circumstances. Anyone have any insight?
I am not sure why you make it so complex. The underlying problem, by the way, is that multiprocessing invokes your callbacks on the pool's internal result-handler thread, and asyncio.Future is not thread-safe: calling set_result() from another thread never wakes up the event loop, which is why your await stalls when the result arrives after you start awaiting. How about the following code, which lets asyncio.wrap_future handle the thread-safety for you?
from concurrent.futures import ProcessPoolExecutor
import asyncio
import time

def dummy_processing_fun():
    import time
    print('start processing')
    time.sleep(4)
    print('finished processing')
    return 'result'

if __name__ == '__main__':
    async def main():
        pool = ProcessPoolExecutor(2)
        while True:
            future = pool.submit(dummy_processing_fun)
            future = asyncio.wrap_future(future)
            # print(next(future.__await__()))  # would show same as print(future)
            # print(await future)  # would stall indefinitely because pool result isn't in
            # time.sleep(5)
            # print(next(future.__await__()))  # would raise StopIteration
            print(await future)  # prints 'result'

    asyncio.run(main())
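If you don't need the executor future itself, loop.run_in_executor collapses the submit-and-wrap pair into a single call. A minimal sketch, reusing dummy_processing_fun from above:

import asyncio
from concurrent.futures import ProcessPoolExecutor

async def main():
    loop = asyncio.get_running_loop()
    with ProcessPoolExecutor(2) as pool:
        # run_in_executor submits to the pool and returns an awaitable directly
        result = await loop.run_in_executor(pool, dummy_processing_fun)
        print(result)  # prints 'result'

if __name__ == '__main__':
    asyncio.run(main())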
I have a coroutine which I'd like to run as a "background job" in a Jupyter notebook. I've seen ways to accomplish this using threading, but I'm wondering whether it's also possible to hook into the notebook's event loop.
For example, say I have the following class:
import asyncio

class Counter:
    def __init__(self):
        self.counter = 0

    async def run(self):
        while True:
            self.counter += 1
            await asyncio.sleep(1.0)

t = Counter()
and I'd like to execute the run method (which loops indefinitely), while still being able to check the t.counter variable at any point. Any ideas?
The following basically does what I want, I think, but it does use a separate thread. However, I can still use the async primitives.
def run_loop():
    loop = asyncio.new_event_loop()
    run_loop.loop = loop
    asyncio.set_event_loop(loop)
    task = loop.create_task(t.run())
    loop.run_until_complete(task)

from IPython.lib import backgroundjobs as bg
jobs = bg.BackgroundJobManager()
jobs.new('run_loop()')
loop = run_loop.loop  # to access the loop outside
I think your code works perfectly; you just need to create a task to wrap the coroutine, i.e.:
import asyncio

class Counter:
    def __init__(self):
        self.counter = 0

    async def run(self):
        while True:
            self.counter += 1
            await asyncio.sleep(1.0)

t = Counter()
asyncio.create_task(t.run())
Wait 10 seconds, then check:

t.counter

You should get roughly:

10
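One practical note (an assumption about how you'd manage the job, not part of the original answer): keep a reference to the task so you can stop the background counter later, from another cell:

task = asyncio.create_task(t.run())
# ... later, in another cell ...
task.cancel()  # raises CancelledError inside run() at the next await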
Here's a simplified version of what Mark proposed:
from IPython.lib import backgroundjobs as bg
jobs = bg.BackgroundJobManager()
jobs.new(asyncio.get_event_loop().run_forever)
If you need to, you can access the loop with asyncio.get_event_loop().
I was wondering if there's any library for asynchronous method calls in Python. It would be great if you could do something like
@async
def longComputation():
    <code>

token = longComputation()
token.registerCallback(callback_function)
# alternative, polling
while not token.finished():
    doSomethingElse()
if token.finished():
    result = token.result()
Or to call a non-async routine asynchronously
def longComputation():
    <code>

token = asynccall(longComputation())
It would be great to have a more refined strategy as native in the language core. Was this considered?
Something like:
import threading

thr = threading.Thread(target=foo, args=(), kwargs={})
thr.start()  # Will run "foo"
....
thr.is_alive()  # Will return whether foo is running currently
....
thr.join()  # Will wait till "foo" is done
See the documentation at https://docs.python.org/library/threading.html for more details.
You can use the multiprocessing module added in Python 2.6. You can use pools of processes and then get results asynchronously with:
apply_async(func[, args[, kwds[, callback]]])
E.g.:
from multiprocessing import Pool

def f(x):
    return x*x

def callback(result):
    print(result)  # called with the result once "f(10)" completes

if __name__ == '__main__':
    pool = Pool(processes=1)  # Start a worker process.
    result = pool.apply_async(f, [10], callback=callback)  # Evaluate "f(10)" asynchronously, calling callback when finished.
    pool.close()
    pool.join()  # wait so the callback has a chance to run before the program exits
This is only one alternative. This module provides lots of facilities to achieve what you want. Also it will be really easy to make a decorator from this.
As of Python 3.4 you can use asyncio with generator-based coroutines, and as of Python 3.5 the dedicated async/await syntax.
import asyncio
import datetime
Enhanced generator syntax (deprecated since Python 3.8 and removed in 3.11):
@asyncio.coroutine
def display_date(loop):
    end_time = loop.time() + 5.0
    while True:
        print(datetime.datetime.now())
        if (loop.time() + 1.0) >= end_time:
            break
        yield from asyncio.sleep(1)

loop = asyncio.get_event_loop()
# Blocking call which returns when the display_date() coroutine is done
loop.run_until_complete(display_date(loop))
loop.close()
New async/await syntax:
async def display_date(loop):
    end_time = loop.time() + 5.0
    while True:
        print(datetime.datetime.now())
        if (loop.time() + 1.0) >= end_time:
            break
        await asyncio.sleep(1)

loop = asyncio.get_event_loop()
# Blocking call which returns when the display_date() coroutine is done
loop.run_until_complete(display_date(loop))
loop.close()
It's not in the language core, but a very mature library that does what you want is Twisted. It introduces the Deferred object, to which you can attach callbacks or error handlers ("errbacks"). A Deferred is basically a "promise" that a function will eventually have a result.
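For flavor, a minimal sketch of the Deferred pattern (the function names are illustrative, and deferLater is just one of many ways Twisted produces a Deferred):

from twisted.internet import reactor, task

def long_computation():
    return 42

def on_result(value):
    print('got', value)
    reactor.stop()

d = task.deferLater(reactor, 0, long_computation)  # a Deferred for the eventual result
d.addCallback(on_result)                           # fires when the result is ready
d.addErrback(lambda failure: print(failure))       # error handler ("errback")
reactor.run()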
You can implement a decorator to make your functions asynchronous, though that's a bit tricky. The multiprocessing module is full of little quirks and seemingly arbitrary restrictions - all the more reason to encapsulate it behind a friendly interface, though.
from inspect import getmodule
from multiprocessing import Pool

# "async" became a reserved keyword in Python 3.7, so the decorator is named
# "async_" here; in older Python (as originally written) plain "async" works.
def async_(decorated):
    r'''Wraps a top-level function around an asynchronous dispatcher.

    When the decorated function is called, a task is submitted to a
    process pool, and a future object is returned, providing access to an
    eventual return value.

    The future object has a blocking get() method to access the task
    result: it will return immediately if the job is already done, or block
    until it completes.

    This decorator won't work on methods, due to limitations in Python's
    pickling machinery (in principle methods could be made pickleable, but
    good luck on that).
    '''
    # Keeps the original function visible from the module global namespace,
    # under a name consistent with its __name__ attribute. This is necessary
    # for the multiprocessing pickling machinery to work properly.
    module = getmodule(decorated)
    decorated.__name__ += '_original'
    setattr(module, decorated.__name__, decorated)

    def send(*args, **opts):
        return async_.pool.apply_async(decorated, args, opts)

    return send
The code below illustrates usage of the decorator:
@async_
def printsum(uid, values):
    summed = 0
    for value in values:
        summed += value
    print("Worker %i: sum value is %i" % (uid, summed))
    return (uid, summed)

if __name__ == '__main__':
    from random import sample

    # The process pool must be created inside __main__.
    async_.pool = Pool(4)

    p = range(0, 1000)
    results = []
    for i in range(4):
        result = printsum(i, sample(p, 100))
        results.append(result)

    for result in results:
        print("Worker %i: sum value is %i" % result.get())
In a real-world case I would elaborate a bit more on the decorator, providing some way to turn it off for debugging (while keeping the future interface in place), or maybe a facility for dealing with exceptions; but I think this demonstrates the principle well enough.
Just:

import threading, time

def f():
    print("f started")
    time.sleep(3)
    print("f finished")

threading.Thread(target=f).start()
My solution is:
import threading

class TimeoutError(RuntimeError):
    pass

class AsyncCall(object):
    def __init__(self, fnc, callback=None):
        self.Callable = fnc
        self.Callback = callback

    def __call__(self, *args, **kwargs):
        self.Thread = threading.Thread(target=self.run, name=self.Callable.__name__, args=args, kwargs=kwargs)
        self.Thread.start()
        return self

    def wait(self, timeout=None):
        self.Thread.join(timeout)
        if self.Thread.is_alive():  # isAlive() was renamed is_alive() and removed in Python 3.9
            raise TimeoutError()
        else:
            return self.Result

    def run(self, *args, **kwargs):
        self.Result = self.Callable(*args, **kwargs)
        if self.Callback:
            self.Callback(self.Result)

class AsyncMethod(object):
    def __init__(self, fnc, callback=None):
        self.Callable = fnc
        self.Callback = callback

    def __call__(self, *args, **kwargs):
        return AsyncCall(self.Callable, self.Callback)(*args, **kwargs)

def Async(fnc=None, callback=None):
    if fnc is None:
        def AddAsyncCallback(fnc):
            return AsyncMethod(fnc, callback)
        return AddAsyncCallback
    else:
        return AsyncMethod(fnc, callback)
And it works exactly as requested:

@Async
def fnc():
    pass
You could use eventlet. It lets you write what appears to be synchronous code, but have it operate asynchronously over the network.
Here's an example of a super minimal crawler (note this snippet is Python 2 era - it uses urllib2 and the print statement; eventlet also supports Python 3):
urls = ["http://www.google.com/intl/en_ALL/images/logo.gif",
        "https://wiki.secondlife.com/w/images/secondlife.jpg",
        "http://us.i1.yimg.com/us.yimg.com/i/ww/beta/y3.gif"]

import eventlet
from eventlet.green import urllib2

def fetch(url):
    return urllib2.urlopen(url).read()

pool = eventlet.GreenPool()
for body in pool.imap(fetch, urls):
    print "got body", len(body)
Something like this works for me; you can then call the function, and it will dispatch itself onto a new thread.
import time
from _thread import start_new_thread  # the "thread" module was renamed "_thread" in Python 3

def dowork(asynchronous=True):
    if asynchronous:
        args = (False,)  # note the trailing comma: (False) alone is not a tuple
        start_new_thread(dowork, args)  # Call itself on a new thread.
    else:
        while True:
            # do something...
            time.sleep(60)  # sleep for a minute
    return
You can use concurrent.futures (added in Python 3.2).
import time
from concurrent.futures import ThreadPoolExecutor

def long_computation(duration):
    for x in range(0, duration):
        print(x)
        time.sleep(1)
    return duration * 2

print('Use polling')
with ThreadPoolExecutor(max_workers=1) as executor:
    future = executor.submit(long_computation, 5)
    while not future.done():
        print('waiting...')
        time.sleep(0.5)
    print(future.result())

print('Use callback')
executor = ThreadPoolExecutor(max_workers=1)
future = executor.submit(long_computation, 5)
future.add_done_callback(lambda f: print(f.result()))

print('waiting for callback')
executor.shutdown(False)  # non-blocking
print('shutdown invoked')
The newer way to run asyncio code, in Python 3.7 and later, is to use asyncio.run() instead of creating a loop, calling loop.run_until_complete() and closing it yourself:
import asyncio
import datetime

async def display_date(delay):
    loop = asyncio.get_running_loop()
    end_time = loop.time() + delay
    while True:
        print("Blocking...", datetime.datetime.now())
        await asyncio.sleep(1)
        if loop.time() > end_time:
            print("Done.")
            break

asyncio.run(display_date(5))
Is there any reason not to use threads? You can use the threading.Thread class. Instead of a finished() method, use is_alive(); the result() method could join() the thread and retrieve the result. And, if you can, override the run() and __init__ methods to call the function specified in the constructor and save the return value somewhere on the instance.
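A hedged sketch of what that could look like (the class and method names here are illustrative, not a standard API):

import threading

class AsyncTask(threading.Thread):
    def __init__(self, target, *args, **kwargs):
        super().__init__()
        self._target_fn = target
        self._args = args
        self._kwargs = kwargs
        self._result = None

    def run(self):
        # runs on the new thread and stashes the return value
        self._result = self._target_fn(*self._args, **self._kwargs)

    def result(self):
        self.join()  # block until the thread is done, then hand back the value
        return self._result

task = AsyncTask(pow, 2, 10)
task.start()
print(task.is_alive())  # poll instead of a finished() method
print(task.result())    # prints 1024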
The native Python way for asynchronous calls in 2021, with Python 3.9, is also suitable for the Jupyter / IPython kernel.
Camabeh's answer is the way to go since Python 3.3.
import asyncio
import datetime

async def display_date(loop):
    end_time = loop.time() + 5.0
    while True:
        print(datetime.datetime.now())
        if (loop.time() + 1.0) >= end_time:
            break
        await asyncio.sleep(1)

loop = asyncio.get_event_loop()
# Blocking call which returns when the display_date() coroutine is done
loop.run_until_complete(display_date(loop))
loop.close()
This will work in Jupyter Notebook / Jupyter Lab, but it throws an error:

RuntimeError: This event loop is already running

Because IPython already runs its own event loop, we need nested event loops, which plain asyncio does not support. Luckily there is nest_asyncio to deal with the issue. All you need to do is:
!pip install nest_asyncio # use ! within Jupyter Notebook, else pip install in shell
import nest_asyncio
nest_asyncio.apply()
(Based on this thread)
Only when you call loop.close() does it throw another error, as it probably refers to IPython's main loop:
RuntimeError: Cannot close a running event loop
I'll update this answer as soon as someone answers this GitHub issue.
You can use a process. If you want it to run forever, use a while loop (as for networking) in your function:

from multiprocessing import Process

def foo():
    while 1:
        pass  # Do something

p = Process(target=foo)
p.start()
If you just want to run it one time, do it like that:

from multiprocessing import Process

def foo():
    pass  # Do something

p = Process(target=foo)
p.start()
p.join()
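A Process won't hand the function's return value back directly; if you need it (like token.result() in the question), one option is to pass in a multiprocessing.Queue. A minimal sketch:

from multiprocessing import Process, Queue

def foo(q):
    q.put('result')  # send the return value back to the parent

if __name__ == '__main__':
    q = Queue()
    p = Process(target=foo, args=(q,))
    p.start()
    # ... do other work here ...
    print(q.get())  # blocks until foo has put its result
    p.join()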