Avoid busy wait in event processing thread - Python

How can I avoid a busy wait in the event consumer thread using asyncio?
I have a main thread that generates events, which are processed by another thread. My event thread busy-waits, repeatedly checking whether the event queue has an item in it...
from queue import Queue
from threading import Thread
import threading
import time

def do_work(p):
    print("print p - %s %s" % (p, threading.current_thread()))

def worker():
    print("starting %s" % threading.current_thread())
    while True:  # <------------ busy wait
        item = q.get()
        do_work(item)
        time.sleep(1)
        q.task_done()

q = Queue()
t = Thread(target=worker)
t.daemon = True
t.start()

for item in range(20):
    q.put(item)

q.join()  # block until all tasks are done
How can I achieve something similar to the above code using asyncio?

asyncio makes sense only if you are working with I/O, for example running an HTTP server or client. In the following example, asyncio.sleep() simulates the I/O calls. If you have a bunch of I/O-bound tasks, it can be as simple as:
import asyncio
import random

async def do_work(i):
    print("[#{}] work part 1".format(i))
    await asyncio.sleep(random.uniform(0.5, 2))
    print("[#{}] work part 2".format(i))
    await asyncio.sleep(random.uniform(0.1, 1))
    print("[#{}] work part 3".format(i))
    return "#{}".format(i)

loop = asyncio.get_event_loop()
tasks = [do_work(item + 1) for item in range(20)]
print("Start...")
results = loop.run_until_complete(asyncio.gather(*tasks))
print("...Done!")
print(results)
loop.close()
See also asyncio.ensure_future and asyncio.Queue.
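For completeness, here is a minimal sketch (my own illustration, not part of the original answer) of the threaded worker above rewritten around asyncio.Queue: await q.get() suspends the consumer until an item arrives, so there is no polling at all.
import asyncio

async def do_work(item):
    print("processing", item)
    await asyncio.sleep(1)  # stand-in for real I/O

async def worker(q):
    while True:
        item = await q.get()  # suspends here instead of busy-waiting
        await do_work(item)
        q.task_done()

async def main():
    q = asyncio.Queue()
    consumer = asyncio.ensure_future(worker(q))
    for item in range(20):
        q.put_nowait(item)
    await q.join()      # block until all tasks are done
    consumer.cancel()   # the worker loops forever, so cancel it explicitly

asyncio.run(main())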

Related

Starting a new process from an asyncio loop

I want to start a new Process (Pricefeed) from my Executor class and then have the Executor class keep running in its own event loop (the shoot method). In my current attempt, the asyncio loop gets blocked on the line p.join(). However, without that line, my code just exits. How do I do this properly?
Note: fh.run() blocks as well.
import asyncio
from multiprocessing import Process, Queue

from cryptofeed import FeedHandler
from cryptofeed.defines import L2_BOOK
from cryptofeed.exchanges.ftx import FTX

class Pricefeed(Process):
    def __init__(self, queue: Queue):
        super().__init__()
        self.coin_symbol = 'SOL-USD'
        self.fut_symbol = 'SOL-USD-PERP'
        self.queue = queue

    async def _book_update(self, feed, symbol, book, timestamp, receipt_timestamp):
        self.queue.put(book)

    def run(self):
        fh = FeedHandler()
        fh.add_feed(FTX(symbols=[self.fut_symbol, self.coin_symbol], channels=[L2_BOOK],
                        callbacks={L2_BOOK: self._book_update}))
        fh.run()

class Executor:
    def __init__(self):
        self.q = Queue()

    async def shoot(self):
        print('in shoot')
        for i in range(5):
            msg = self.q.get()
            print(msg)
            await asyncio.sleep(1)  # do some stuff

    async def run(self):
        asyncio.create_task(self.shoot())
        p = Pricefeed(self.q)
        p.start()
        p.join()

async def main():
    g = Executor()
    await g.run()

if __name__ == '__main__':
    asyncio.run(main())
Since you're using a queue to communicate, this is a somewhat tricky problem. To answer your first question, as to why removing join makes the program work: join blocks until the process finishes, and in asyncio you can't do anything blocking in a function marked async, or it will freeze the event loop. To do this properly you'll need to run your process with the event loop's run_in_executor method, which runs things in a process pool and returns an awaitable that is compatible with the asyncio event loop.
Secondly, you'll need to use a multiprocessing Manager, which creates shared state that can be used by multiple processes, to properly share your queue. Managers directly support creation of a shared queue. Using these two bits of knowledge, you can adapt your code to something like the following, which works:
import asyncio
import functools
import time
from multiprocessing import Manager
from concurrent.futures import ProcessPoolExecutor

def run_pricefeed(queue):
    i = 0
    while True:  # simulate putting an item on the queue every 250 ms
        queue.put(f'test-{i}')
        i += 1
        time.sleep(.25)

class Executor:
    async def shoot(self, queue):
        print('in shoot')
        for i in range(5):
            while not queue.empty():
                msg = queue.get(block=False)
                print(msg)
            await asyncio.sleep(1)  # do some stuff

    async def run(self):
        with ProcessPoolExecutor() as pool:
            with Manager() as manager:
                queue = manager.Queue()
                asyncio.create_task(self.shoot(queue))
                await asyncio.get_running_loop().run_in_executor(
                    pool, functools.partial(run_pricefeed, queue))

async def main():
    g = Executor()
    await g.run()

if __name__ == '__main__':
    asyncio.run(main())
This code has a drawback: you need to empty the queue in a non-blocking fashion from your asyncio process and then wait a while for new items to come in before emptying it again, effectively implementing a polling mechanism. If you don't wait after emptying, you'll wind up with blocking code and freeze the event loop again. This isn't as good as blocking until the queue has an item in it, but it may suit your needs. If possible, I would avoid asyncio here and use multiprocessing entirely, for example by implementing queue processing as a separate process, as sketched below.
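For illustration only, a minimal sketch of that multiprocessing-only alternative. The simulated run_pricefeed producer is borrowed from above, and the None sentinel is an assumption of this sketch; the point is that the consumer is an ordinary process blocking on a shared queue, so there is no event loop to freeze.
import time
from multiprocessing import Process, Queue

def run_pricefeed(queue):
    # simulate putting an item on the queue every 250 ms
    for i in range(5):
        queue.put(f'test-{i}')
        time.sleep(.25)
    queue.put(None)  # sentinel (an assumption of this sketch): no more items

def consume(queue):
    while True:
        msg = queue.get()  # blocking is fine here: no event loop in this process
        if msg is None:
            break
        print(msg)

if __name__ == '__main__':
    q = Queue()
    producer = Process(target=run_pricefeed, args=(q,))
    consumer = Process(target=consume, args=(q,))
    producer.start()
    consumer.start()
    producer.join()
    consumer.join()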

Start asyncio event loop in separate thread and consume queue items

I am writing a Python program that runs tasks taken from a queue concurrently, to learn asyncio.
Items will be put onto the queue by interacting with the main thread (within a REPL).
Whenever a task is put onto the queue, it should be consumed and executed immediately.
My approach is to kick off a separate thread and pass a queue to the event loop within that thread.
The tasks are running, but only sequentially, and I am not clear on how to run them concurrently. My attempt is as follows:
import asyncio
import time
import queue
import threading

def do_it(task_queue):
    '''Process tasks in the queue until the sentinel value is received'''
    _sentinel = 'STOP'

    def clock():
        return time.strftime("%X")

    async def process(name, total_time):
        status = f'{clock()} {name}_{total_time}:'
        print(status, 'START')
        current_time = time.time()
        end_time = current_time + total_time
        while current_time < end_time:
            print(status, 'processing...')
            await asyncio.sleep(1)
            current_time = time.time()
        print(status, 'DONE.')

    async def main():
        while True:
            item = task_queue.get()
            if item == _sentinel:
                break
            await asyncio.create_task(process(*item))

    print('event loop start')
    asyncio.run(main())
    print('event loop end')

if __name__ == '__main__':
    tasks = queue.Queue()
    th = threading.Thread(target=do_it, args=(tasks,))
    th.start()
    tasks.put(('abc', 5))
    tasks.put(('def', 3))
Any advice pointing me in the direction of running these tasks concurrently would be greatly appreciated!
Thanks
UPDATE
Thank you Frank Yellin and cynthi8! I have reworked main() according to your advice:
removed the await before asyncio.create_task - fixed the concurrency
added a wait loop so that main would not return prematurely
used the non-blocking mode of Queue.get()
The program now works as expected 👍
UPDATE 2
user4815162342 has offered further improvements; I have annotated his suggestions below.
'''
Starts an auxiliary thread which establishes a queue and consumes tasks within
that queue.
Allows enqueueing of tasks from within __main__ and termination of the aux thread.
'''
import asyncio
import time
import threading

def do_it(started):
    '''Process tasks in the queue until the sentinel value is received'''
    _sentinel = 'STOP'

    def clock():
        return time.strftime("%X")

    async def process(name, total_time):
        print(f'{clock()} {name}_{total_time}:', 'Started.')
        current_time = time.time()
        end_time = current_time + total_time
        while current_time < end_time:
            print(f'{clock()} {name}_{total_time}:', 'Processing...')
            await asyncio.sleep(1)
            current_time = time.time()
        print(f'{clock()} {name}_{total_time}:', 'Done.')

    async def main():
        # get_running_loop() gets the running event loop in the current OS
        # thread; expose it (and the queue) to the __main__ thread
        started.loop = asyncio.get_running_loop()
        started.queue = task_queue = asyncio.Queue()
        started.set()
        while True:
            item = await task_queue.get()
            if item == _sentinel:
                # task_done is used to tell join when the work in the queue is
                # actually finished. A queue length of zero does not mean work
                # is complete.
                task_queue.task_done()
                break
            task = asyncio.create_task(process(*item))
            # Add a callback to be run when the Task is done.
            # It indicates that a formerly enqueued task is complete. For each
            # get() used to fetch a task, a subsequent call to task_done()
            # tells the queue that processing on the task is complete.
            task.add_done_callback(lambda _: task_queue.task_done())
        # keep the loop going until all the work has completed;
        # when the count of unfinished tasks drops to zero, join() unblocks
        await task_queue.join()

    print('event loop start')
    asyncio.run(main())
    print('event loop end')

if __name__ == '__main__':
    # the started Event is used for communication with thread th
    started = threading.Event()
    th = threading.Thread(target=do_it, args=(started,))
    th.start()
    # started.wait() blocks until started.set(), ensuring that the queue and
    # loop attributes are available to the main thread
    started.wait()
    tasks, loop = started.queue, started.loop
    # call_soon schedules a callback to be called with the given arguments at
    # the next iteration of the event loop;
    # call_soon_threadsafe is required to schedule callbacks from another thread.
    # put_nowait enqueues items in non-blocking fashion, == put(block=False)
    loop.call_soon_threadsafe(tasks.put_nowait, ('abc', 5))
    loop.call_soon_threadsafe(tasks.put_nowait, ('def', 3))
    loop.call_soon_threadsafe(tasks.put_nowait, 'STOP')
As others pointed out, the problem with your code is that it uses a blocking queue, which halts the event loop while waiting for the next item. The problem with the proposed solution, however, is that it introduces latency, because it must occasionally sleep to allow other tasks to run. In addition to introducing latency, it prevents the program from ever going to sleep, even when there are no items in the queue.
An alternative is to switch to an asyncio.Queue, which is designed for use with asyncio. This queue must be created inside the running loop, so you can't pass it to do_it; you must retrieve it. Also, since it's an asyncio primitive, its put method must be invoked through call_soon_threadsafe to ensure that the event loop notices it.
One final issue is that your main() function uses another busy loop to wait for all the tasks to complete. This can be avoided by using Queue.join, which is explicitly designed for this use case.
Here is your code adapted to incorporate all of the above suggestions, with the process function remaining unchanged from your original:
import asyncio
import time
import threading

def do_it(started):
    '''Process tasks in the queue until the sentinel value is received'''
    _sentinel = 'STOP'

    def clock():
        return time.strftime("%X")

    async def process(name, total_time):
        status = f'{clock()} {name}_{total_time}:'
        print(status, 'START')
        current_time = time.time()
        end_time = current_time + total_time
        while current_time < end_time:
            print(status, 'processing...')
            await asyncio.sleep(1)
            current_time = time.time()
        print(status, 'DONE.')

    async def main():
        started.loop = asyncio.get_running_loop()
        started.queue = task_queue = asyncio.Queue()
        started.set()
        while True:
            item = await task_queue.get()
            if item == _sentinel:
                task_queue.task_done()
                break
            task = asyncio.create_task(process(*item))
            task.add_done_callback(lambda _: task_queue.task_done())
        await task_queue.join()

    print('event loop start')
    asyncio.run(main())
    print('event loop end')

if __name__ == '__main__':
    started = threading.Event()
    th = threading.Thread(target=do_it, args=(started,))
    th.start()
    started.wait()
    tasks, loop = started.queue, started.loop
    loop.call_soon_threadsafe(tasks.put_nowait, ('abc', 5))
    loop.call_soon_threadsafe(tasks.put_nowait, ('def', 3))
    loop.call_soon_threadsafe(tasks.put_nowait, 'STOP')
Note: an unrelated issue with your code was that it awaited the result of create_task(), which nullified the usefulness of create_task() because the task wasn't allowed to run in the background. (It would be equivalent to immediately joining a thread you've just started - you can do it, but it doesn't make much sense.) This issue is fixed both in the above code and in your edit to the question; the sketch below makes the difference concrete.
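A small self-contained sketch of that point (my own illustration, with a trivial stand-in process coroutine): the first variant serializes the work by awaiting each task as it is created; the second creates both tasks first, so they overlap.
import asyncio

async def process(name, total_time):
    await asyncio.sleep(total_time)
    print(name, 'done')

async def sequential():
    # awaiting each task immediately serializes the work (about 8 s total)
    await asyncio.create_task(process('abc', 5))
    await asyncio.create_task(process('def', 3))

async def concurrent():
    # create both tasks first, then wait: they overlap (about 5 s total)
    t1 = asyncio.create_task(process('abc', 5))
    t2 = asyncio.create_task(process('def', 3))
    await asyncio.gather(t1, t2)

asyncio.run(sequential())
asyncio.run(concurrent())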
There are two problems with your code.
First, you should not have the await before the asyncio.create_task. This is possibly what is causing your code to run synchronously.
Then, once you've made your code run asynchronously, you need something after the while loop in main so that the code doesn't return immediately, but instead waits for all the jobs to finish. Another Stack Overflow answer recommends a polling loop along these lines (asyncio.Task.all_tasks() was removed in Python 3.9; asyncio.all_tasks() is the current spelling):
while len(asyncio.all_tasks()) > 1:  # Any task besides main() itself?
    await asyncio.sleep(0.2)
Alternatively there are versions of Queue that can keep track of running tasks.
As an additional problem: if a queue.Queue is empty, get() blocks by default and does not return a sentinel string (see https://docs.python.org/3/library/queue.html). A micro-example of the non-blocking alternative follows.
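A micro-example of the non-blocking mode mentioned in the update above (a sketch; queue.Empty is the exception the stdlib raises in this case):
import queue

q = queue.Queue()
try:
    item = q.get_nowait()  # equivalent to q.get(block=False)
except queue.Empty:
    item = None  # nothing queued yet; do something else instead of blocking
print(item)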

Quit signal when waiting for blocking read from queue.Queue

In many cases I have a worker thread that pops data from a Queue and acts on it. When some event occurs, I want my worker thread to stop. The simple solution is to add a timeout to the get call and check the Event/flag every time the get times out. This, however, has two problems:
Causes unnecessary context switches
Delays the shutdown until a timeout occurs
Is there a better way to listen for both a stop event and new data in the Queue? Is it possible to listen to two Queues at the same time and block until there's data in either one? (In that case one could use a second Queue just to trigger the shutdown.)
The solution I'm currently using:
from queue import Queue, Empty
from threading import Event, Thread
from time import sleep

def worker(exit_event, queue):
    print("Worker started.")
    while not exit_event.is_set():
        try:
            data = queue.get(timeout=10)
            print("got {}".format(data))
        except Empty:
            pass
    print("Worker quit.")

if __name__ == "__main__":
    exit_event = Event()
    queue = Queue()
    th = Thread(target=worker, args=(exit_event, queue))
    th.start()
    queue.put("Testing")
    queue.put("Hello!")
    sleep(2)
    print("Asking worker to quit")
    exit_event.set()
    th.join()
    print("All done..")
I guess you may easily reduce the timeout to 0.1...0.01 sec. A slightly different solution is to use the queue itself to send both data and control commands to the thread:
import queue
import threading
import time

THREADSTOP = 0

class ThreadControl:
    def __init__(self, command):
        self.command = command

def worker(q):
    print("Worker started.")
    while True:
        data = q.get()
        if isinstance(data, ThreadControl):
            if data.command == THREADSTOP:
                break
        print("got {}".format(data))
    print("Worker quit.")

if __name__ == '__main__':
    q = queue.Queue()
    th = threading.Thread(target=worker, args=(q,))
    th.start()
    q.put("Testing")
    q.put("Hello!")
    time.sleep(2)
    print("Asking worker to quit")
    q.put(ThreadControl(command=THREADSTOP))  # sending the stop command
    th.join()
    print("All done..")
Another option is to use sockets instead of queues; a sketch of that idea follows.
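A minimal sketch of the socket approach (my own illustration, not the answerer's code): socket.socketpair gives the worker two file descriptors it can block on simultaneously with a selector, one for data and one for shutdown, so the stop signal is noticed immediately without any timeout. The framing here is simplistic; real code would need message boundaries.
import selectors
import socket
import threading
import time

def worker(data_sock, stop_sock):
    sel = selectors.DefaultSelector()
    sel.register(data_sock, selectors.EVENT_READ, 'data')
    sel.register(stop_sock, selectors.EVENT_READ, 'stop')
    print("Worker started.")
    while True:
        for key, _ in sel.select():  # blocks on both sockets at once
            if key.data == 'stop':
                print("Worker quit.")
                return
            print("got {}".format(data_sock.recv(4096).decode()))

if __name__ == "__main__":
    data_r, data_w = socket.socketpair()
    stop_r, stop_w = socket.socketpair()
    th = threading.Thread(target=worker, args=(data_r, stop_r))
    th.start()
    data_w.sendall(b"Testing")
    time.sleep(2)
    print("Asking worker to quit")
    stop_w.sendall(b"x")  # any byte wakes the selector immediately
    th.join()
    print("All done..")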

async queue hangs when used with background thread

It seems asyncio.Queue can only be pushed to by the same thread that reads it? For instance:
import asyncio
from threading import Thread
import time

q = asyncio.Queue()

def produce():
    for i in range(100):
        q.put_nowait(i)
        time.sleep(0.1)

async def consume():
    while True:
        i = await q.get()
        print('consumed', i)

Thread(target=produce).start()
asyncio.get_event_loop().run_until_complete(consume())
only prints
consumed 0
and then hangs. What am I missing?
You can't call asyncio methods from another thread directly.
Either use loop.call_soon_threadsafe:
loop.call_soon_threadsafe(q.put_nowait, i)
Or asyncio.run_coroutine_threadsafe:
future = asyncio.run_coroutine_threadsafe(q.put(i), loop)
where loop is the loop returned by asyncio.get_event_loop() in your main thread.
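Applied to the example above, the first fix might look like this (a sketch; it keeps the original's get_event_loop()/run_until_complete structure, and the consumer still loops forever by design, just as in the question):
import asyncio
import time
from threading import Thread

q = asyncio.Queue()
loop = asyncio.get_event_loop()  # captured in the main thread

def produce():
    for i in range(100):
        # hand the put over to the loop's own thread
        loop.call_soon_threadsafe(q.put_nowait, i)
        time.sleep(0.1)

async def consume():
    while True:
        i = await q.get()
        print('consumed', i)

Thread(target=produce).start()
loop.run_until_complete(consume())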

In Producer/Consumer pattern, how could I kill the consumer thread?

I run the consumer in another worker thread; the code is as follows:
def Consumer(self):
    while True:
        condition.acquire()
        if not queue:
            condition.wait()
        json = queue.pop()
        clients[0].write_message(json)
        condition.notify()
        condition.release()

t = threading.Thread(target=self.Consumer)
t.start()
However, I find that I cannot kill this worker thread; it just sits in wait() forever after the job is done...
I would like to send a signal from the Producer to the Consumer whenever the producer's work is finished; when the consumer receives the signal, the worker thread should exit. Is it possible to do that?
My standard way to notify a consumer thread that it should stop its work is to send a fake ("sentinel") message (I rewrote your code to make it runnable):
import threading
import time

condition = threading.Condition()
queue = []

class Client():
    def write_message(self, msg):
        print(msg)

clients = [Client()]
jobdone = object()

def Consumer():
    while True:
        condition.acquire()
        try:
            if not queue:
                condition.wait()
            json = queue.pop()
            if json is jobdone:
                break
            clients[0].write_message(json)
        finally:
            condition.release()

t = threading.Thread(target=Consumer)
t.start()

time.sleep(2)
condition.acquire()
queue.append(jobdone)
condition.notify()
condition.release()
Anyway, consider using queue.Queue, which is standard and makes the synchronization simple. Here is how my example becomes:
import threading
import queue
import time

queue = queue.Queue()

class Client():
    def write_message(self, msg):
        print(msg)

clients = [Client()]
jobdone = object()

def Consumer():
    while True:
        json = queue.get()
        if json is jobdone:
            break
        clients[0].write_message(json)

t = threading.Thread(target=Consumer)
t.start()

queue.put("Hello")
queue.put("Word")
time.sleep(2)
queue.put(jobdone)
t.join()
# You can also use queue.join()
print("Job Done")
