Python - Threading and a While True Loop

I have a thread that appends rows to self.output and a loop that runs until self.done is True (or the max execution time is reached).
Is there a more efficient way to do this than using a while loop that constantly checks to see if it's done? The while loop causes the CPU to spike to 100% while it's running.
time.clock()
while True:
    if len(self.output):
        yield self.output.pop(0)
    elif self.done or 15 < time.clock():
        if 15 < time.clock():
            yield "Maximum Execution Time Exceeded %s seconds" % time.clock()
        break

Are your threads appending to self.output here, with your main task consuming them? If so, this is a tailor-made job for Queue.Queue. Your code should become something like:
import Queue

# Initialise queue as:
queue = Queue.Queue()
Finished = object()   # Unique marker the producer will put in the queue when finished

# Consumer:
try:
    while True:
        next_item = self.queue.get(timeout=15)
        if next_item is Finished:
            break
        yield next_item
except Queue.Empty:
    print "Timeout exceeded"
Your producer threads add items to the queue with queue.put(item).
Edit: The original code has a race issue when checking self.done (for example, multiple items may be appended to the queue before the flag is set, causing the code to bail out at the first one). Updated with a suggestion from ΤΖΩΤΖΙΟΥ - the producer thread should instead append a special token (Finished) to the queue to indicate it is complete.
Note: If you have multiple producer threads, you'll need a more general approach to detecting when they're all finished. You could accomplish this with the same strategy - each thread appends a Finished marker, and the consumer terminates when it has seen num_threads markers.
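As a rough illustration of that multi-producer variant (a sketch in Python 3, where the module is named queue rather than Queue; the producer count and work items here are made up):

import queue
import threading

Finished = object()   # unique sentinel shared by all producers
NUM_PRODUCERS = 3     # hypothetical number of producer threads

def producer(q, items):
    for item in items:
        q.put(item)
    q.put(Finished)   # each producer marks its own completion

def consume(q, num_producers):
    finished = 0
    while finished < num_producers:
        item = q.get()
        if item is Finished:
            finished += 1   # one producer done; keep consuming until all are
        else:
            yield item

q = queue.Queue()
threads = [threading.Thread(target=producer, args=(q, range(i * 10, (i + 1) * 10)))
           for i in range(NUM_PRODUCERS)]
for t in threads:
    t.start()
for item in consume(q, NUM_PRODUCERS):
    print(item)
for t in threads:
    t.join()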

Use a semaphore; have the worker thread release it when it's finished, and block your consuming thread until the worker is done with the semaphore.
I.e. in the worker, do something like self.done = threading.Semaphore(0) at the beginning of work (a semaphore starting at 0, so that acquiring it actually blocks), and self.done.release() when finished. In the code you noted above, instead of the busy loop, simply do self.done.acquire(); when the worker thread is finished, control will return.
Edit: I'm afraid I don't address the timeout value you need, though; this issue describes the need for a semaphore timeout in the standard library.
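A minimal sketch of that pattern, for reference (note the semaphore must be created with an initial value of 0 so the first acquire actually blocks; and since Python 3.2, Semaphore.acquire accepts a timeout argument, which addresses the timeout concern above):

import threading
import time

done = threading.Semaphore(0)   # starts at 0, so acquire() blocks until release()

def worker():
    time.sleep(2)     # stand-in for the real work
    done.release()    # signal completion

threading.Thread(target=worker).start()

# Block until the worker finishes, or give up after 15 seconds (Python 3.2+)
if done.acquire(timeout=15):
    print("worker finished")
else:
    print("timed out")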

Use time.sleep(seconds) to create a brief pause after each iteration of the while loop to relinquish the CPU. You will have to set the time you sleep during each iteration based on how important it is that you catch the job quickly after it's complete.
Example:
time.clock()
while True:
    if len(self.output):
        yield self.output.pop(0)
    elif self.done or 15 < time.clock():
        if 15 < time.clock():
            yield "Maximum Execution Time Exceeded %s seconds" % time.clock()
        break
    time.sleep(0.01)  # sleep for 10 milliseconds

Use the mutex module, or an event/semaphore.

You have to use a synchronization primitive here. Look here: http://docs.python.org/library/threading.html.
Event objects seem very simple and should solve your problem. You can also use a condition object or a semaphore.
I don't post an example because I've never used Event objects, and the alternatives are probably less simple.
Edit: I'm not really sure I understood your problem. If a thread can wait until some condition is satisfied, use synchronization. Otherwise the sleep() solution that someone posted is about the best you can do without taking too much CPU time.
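For what it's worth, a minimal sketch of the Event approach (not from the answer above; the worker and the timeout value are illustrative) might look like:

import threading
import time

done = threading.Event()

def worker():
    time.sleep(2)   # stand-in for the real work
    done.set()      # signal completion

threading.Thread(target=worker).start()

# Blocks without busy-waiting; wait() returns False if the timeout expires first
if done.wait(timeout=15):
    print("worker finished")
else:
    print("Maximum execution time exceeded")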

Related

Python multiprocessing - main process won't continue when spawned process terminated

I want to run a function in Python in a new process, do some work, return progress to the main process using a queue, wait in the main process for the spawned process to terminate, and then continue execution of the main process.
I got the following code, which runs the function foo in a new process and returns progress using a queue:
import multiprocessing as mp
import time

def foo(queue):
    for i in range(10):
        queue.put(i)
        time.sleep(1)

if __name__ == '__main__':
    mp.set_start_method('spawn')
    queue = mp.Queue()
    p = mp.Process(target=foo, args=(queue,))
    p.start()
    while p.is_alive():
        print("ALIVE")
        print(queue.get())
        time.sleep(0.01)
    print("Process finished")
The output is:
ALIVE
0
ALIVE
1
ALIVE
2
ALIVE
3
ALIVE
4
ALIVE
5
ALIVE
6
ALIVE
7
ALIVE
8
ALIVE
9
ALIVE
At some point neither "ALIVE" nor "Process finished" is printed. How can I continue execution when the spawned process stops running?
Edit:
The problem was that I didn't know that queue.get() blocks until an item is put into the queue if the queue is empty. I fixed it by changing
while p.is_alive():
    print(queue.get())
    time.sleep(0.01)
to
while p.is_alive():
    if not queue.empty():
        print(queue.get())
    time.sleep(0.01)
Your code has a race condition. After the last number is put into the queue, the child process sleeps one more time before it exits. That gives the parent process enough time to fetch that value, sleep for a shorter time, and then conclude that the child is still alive before waiting for an 11th item that never comes.
Note that you get more ALIVE reports in your output than you do numbers. That tells you where the parent process is deadlocked.
There are a few possible ways you could fix the issue. You could change the foo function to sleep first, and put the item into the queue afterwards. That would make it so that it could quit running immediately after sending the 9 to its parent, which would probably allow it to avoid the race condition (since the parent does sleep for a short time after receiving each item). There would still be a small possibility of the race happening if things behaved very strangely, but it's quite unlikely.
A better approach might be to prevent the possibility of the race from occurring at all. For example, you might change the queue.get call to have a timeout set, so that it will give up (with a queue.Empty exception) if there's nothing to retrieve for too long. You could catch that exception immediately, or even use it as a planned method of breaking out of the loop rather than testing if the child is still alive or not, and catching it at a higher level.
A final option might be to send a special sentinel value from the child to the parent in the queue to signal when there will be no further values coming. For instance, you might send None as the last value, just before the foo function ends. The parent code could check for that specific value and break out of its loop, rather than treating it like a normal value (and e.g. printing it). This sort of positive signal that the child code is done might be better than the negative signal of a timeout, since it's less likely that something going wrong (e.g. the child crashing) will be misinterpreted as the expected shutdown.
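A minimal sketch of that sentinel approach, adapted from the question's code (the None marker is illustrative):

import multiprocessing as mp
import time

def foo(queue):
    for i in range(10):
        queue.put(i)
        time.sleep(1)
    queue.put(None)   # sentinel: no further values are coming

if __name__ == '__main__':
    mp.set_start_method('spawn')
    queue = mp.Queue()
    p = mp.Process(target=foo, args=(queue,))
    p.start()
    while True:
        item = queue.get()    # blocks until the child sends something
        if item is None:      # sentinel received: the child is done
            break
        print(item)
    p.join()
    print("Process finished")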

Detect if main process has been quit from background process

I have a single background process running alongside the main one, where it uses Queue to communicate (using multiprocessing, not multithreading). The main process runs constantly, and the background process runs once per queue item so that if it gets backlogged, it can still catch up. Instead of closing with the main script (I've enabled daemon for that), I would prefer it to run until the queue is empty, then save and quit.
It's started like this:
q_send = Queue()
q_recv = Queue()
p1 = Process(target=background_process, args=(q_send, q_recv))
p1.daemon = True
p1.start()
Here's how the background process currently runs:
while True:
    received_data = q_recv.get()
    # do stuff
One way I've considered is to switch the loop to run all the time, but check the size of the queue before trying to read it, and wait a few seconds if it's empty before trying again. There are a couple of problems though. The whole point is it'll run once per item, so if there are 1000 queued commands, it seems a little inefficient to check the queue size before each one. Also, there's no real limit on how long the main process can go without sending an update, so I'd have to set the timeout quite high, as opposed to instantly exiting when the connection is broken and the queue emptied. With the background process using up to 2 GB of RAM, it could probably do with exiting as soon as possible.
It'd also make it look a lot more messy:
afk_time = 0
while True:
    if afk_time > 300:
        return
    if not q_recv.qsize():
        time.sleep(2)
        afk_time += 2
    else:
        received_data = q_recv.get()
        # do stuff
I came across is_alive(), and thought perhaps getting the main process from current_process() might work, but it gave a pickling error when I tried to send it to the queue.
Queue.get has a keyword argument timeout which determines the time to wait for an item if the queue is empty. If no item is available when the timeout elapses, an Empty exception is raised.
Remove and return an item from the queue. If optional args block is true and timeout is None (the default), block if necessary until an item is available. If timeout is a positive number, it blocks at most timeout seconds and raises the Empty exception if no item was available within that time. Otherwise (block is false), return an item if one is immediately available, else raise the Empty exception (timeout is ignored in that case).
So you can except that error and break out of the loop:
try:
    received_data = q_recv.get(timeout=300)
except queue.Empty:
    return
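Put together with the question's loop, the background process might look like this (a sketch; the 300-second timeout is the value the question suggested, and multiprocessing queues raise the Empty exception from the standard queue module):

import queue

def background_process(q_send, q_recv):
    while True:
        try:
            # Block up to 300 s for the next item, then give up and exit
            received_data = q_recv.get(timeout=300)
        except queue.Empty:
            return   # save and quit here
        # do stuff with received_data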

Interrupting a Queue.get

How can I interrupt a blocking Queue.get() in Python 3.X?
In Python 2.X setting a long timeout seems to work but the same cannot be said for Python 3.5.
Running on Windows 7, CPython 3.5.1, 64 bit both machine and Python.
Seems like it does not behave the same on Ubuntu.
The reason it works on Python 2 is that Queue.get with a timeout on Python 2 is implemented incredibly poorly, as a polling loop with increasing sleeps between non-blocking attempts to acquire the underlying lock; Python 2 doesn't actually feature a lock primitive that supports a timed blocking acquire (which is what a Queue internal Condition variable needs, but lacks, so it uses the busy loop). When you're trying this on Python 2, all you're checking is whether the Ctrl-C is processed after one of the (short) time.sleep calls finishes, and the longest sleep in Condition is only 0.05 seconds, which is so short you probably wouldn't notice even if you hit Ctrl-C the instant a new sleep started.
Python 3 has true timed lock acquire support (thanks to narrowing the number of target OSes to those which feature a native timed mutex or semaphore of some sort). As such, you're actually blocking on the lock acquisition for the whole timeout period, not blocking for 0.05s at a time between polling attempts.
It looks like Windows allows for registering handlers for Ctrl-C that mean that Ctrl-C doesn't necessarily generate a true signal, so the lock acquisition isn't interrupted to handle it. Python is informed of the Ctrl-C when the timed lock acquisition eventually fails, so if the timeout is short, you'll eventually see the KeyboardInterrupt, but it won't be seen until the timeout lapses. Since Python 2 Condition is only sleeping 0.05 seconds at a time (or less) the Ctrl-C is always processed quickly, but Python 3 will sleep until the lock is acquired.
Ctrl-Break is guaranteed to behave as a signal, but it also can't be handled by Python properly (it just kills the process) which probably isn't what you want either.
If you want Ctrl-C to work, you're stuck polling to some extent, but at least (unlike Python 2) you can effectively poll for Ctrl-C while live blocking on the queue the rest of the time (so you're alerted to an item becoming free immediately, which is the common case).
import time
import queue

def get_timed_interruptable(q, timeout):
    stoploop = time.monotonic() + timeout - 1
    while time.monotonic() < stoploop:
        try:
            return q.get(timeout=1)  # Allow check for Ctrl-C every second
        except queue.Empty:
            pass
    # Final wait for last fraction of a second
    return q.get(timeout=max(0, stoploop + 1 - time.monotonic()))
This blocks for a second at a time until:
- The time remaining is less than a second (it blocks for the remaining time, then allows the Empty to propagate normally)
- Ctrl-C was pressed during the one second interval (after the remainder of that second elapses, KeyboardInterrupt is raised)
- An item is acquired (if Ctrl-C was pressed, it will raise at this point too)
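Usage is the same as a plain Queue.get with a timeout, e.g.:

import queue

q = queue.Queue()
try:
    item = get_timed_interruptable(q, 30)   # waits up to ~30 s, Ctrl-C responsive
except queue.Empty:
    print("Timed out")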
As mentioned in the comment thread to the great answer @ShadowRanger provided above, here is an alternate simplified form of his function:
import queue

def get_timed_interruptable(in_queue, timeout):
    '''
    Perform a queue.get() with a short timeout to avoid
    blocking SIGINT on Windows.
    '''
    while True:
        try:
            # Allow check for Ctrl-C every second
            return in_queue.get(timeout=min(1, timeout))
        except queue.Empty:
            if timeout < 1:
                raise
            else:
                timeout -= 1
And as @Bharel pointed out in the comments, this could run a few milliseconds longer than the absolute timeout, which may be undesirable. As such here is a version with significantly better precision:
import time
import queue

def get_timed_interruptable_precise(in_queue, timeout):
    '''
    Perform a queue.get() with a short timeout to avoid
    blocking SIGINT on Windows. Track the time closely
    for high precision on the timeout.
    '''
    timeout += time.monotonic()
    while True:
        try:
            # Allow check for Ctrl-C every second
            return in_queue.get(timeout=max(0, min(1, timeout - time.monotonic())))
        except queue.Empty:
            if time.monotonic() > timeout:
                raise
Just use get_nowait which won't block.
import time
...
while True:
    if not q.empty():
        q.get_nowait()
        break
    time.sleep(1)  # optional timeout
This is obviously busy waiting, but q.get() does basically the same thing.

Python semaphore "hangs" in tight loops

I have been having an issue with Python semaphores appearing to "lock" for an unbounded amount of time when there is a tight relationship between acquire/release. I do not have this issue with Lock/RLock.
Below is code distilled to the simplest case which exhibits the concerning behavior.
import threading
import time

sem = threading.Semaphore()
#sem = threading.RLock()
exit = False

def spinner():
    while not exit:
        sem.acquire()
        time.sleep(1)
        sem.release()

t = threading.Thread(target=spinner)
t.start()

print time.strftime("%H:%M:%S", time.gmtime())
for i in range(0, 10):
    sem.acquire()
    print "Accessed!"
    sem.release()
print time.strftime("%H:%M:%S", time.gmtime())

exit = True
t.join()
When I use the semaphore method, this takes an unpredictable amount of time (sometimes 20 minutes!)
When I use a Lock or RLock, this completes quickly as I expect.
Am I missing something? It seems like semaphore with the default value=1 should behave the same as Lock.
According to the documentation I'm looking at, calling release on one thread should unblock an indeterminate other blocked thread. However what I think is happening is that the thread which calls the release is free to keep running, then re-acquire if it is still within its time-slice. When hitting acquire again it sees an unblocked semaphore and gets access again. Bad luck thus forces the waiting thread to keep waiting a long time.
Am I missing something? Why would Lock/RLock work any better?
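No definitive answer is given here, but if the diagnosis above is right (the spinner re-acquires within its own time slice before the waiter ever wakes), one common mitigation is to yield between the release and the next acquire so the waiting thread gets a chance to run. Whether a zero-length sleep forces a switch is platform-dependent, so a tiny nonzero sleep is the safer sketch:

def spinner():
    while not exit:
        sem.acquire()
        time.sleep(1)
        sem.release()
        time.sleep(0.001)   # yield briefly so a waiting thread can grab the semaphore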

How do I feed an infinite generator to eventlet (or gevent)?

The docs of both eventlet and gevent have several examples on how to asynchronously spawn IO tasks and get the results later.
But so far, in all the examples where a value should be returned from the async call, I always find a blocking call after all the calls to spawn(): either join(), joinall(), wait(), or waitall().
This assumes that calling the functions that use IO is immediate and we can jump right into the point where we are waiting for the results.
But in my case I want to get the jobs from a generator that can be slow and/or arbitrarily large, or even infinite.
I obviously can't do this
pile = eventlet.GreenPile(pool)
for url in mybiggenerator():
    pile.spawn(fetch_title, url)
titles = '\n'.join(pile)
because mybiggenerator() can take a long time before it is exhausted. So I have to start consuming the results while I am still spawning async calls.
This is probably usually done with recourse to queues, but I'm not really sure how. Say I create a queue to hold jobs, push a bunch of jobs from a greenlet called P and pop them from another greenlet C.
When in C, if I find that the queue is empty, how do I know if P has pushed every job it had to push or if it is just in the middle of an iteration?
Alternatively, eventlet allows me to loop through a pile to get the return values, but can I start doing this without having spawned all the jobs I have to spawn? How? This would be a simpler alternative.
You don't need any pool or pile by default. They're just convenient wrappers to implement a particular strategy. First you should get an idea of how exactly your code must work under all circumstances, that is: when and why you start another greenthread, and when and why you wait for something.
When you have some answers to these questions and doubts about others, ask away. In the meanwhile, here's a prototype that processes an infinite "generator" (actually a queue).
import eventlet
import eventlet.queue
import eventlet.semaphore

queue = eventlet.queue.Queue(10000)
wait = eventlet.semaphore.CappedSemaphore(1000)

def fetch(url):
    # httplib2.Http().request
    # or requests.get
    # or urllib.urlopen
    # or whatever API you like
    return response

def crawl(url):
    with wait:
        response = fetch(url)
        links = parse(response)
        for url in links:
            queue.put(url)

def spawn_crawl_next():
    try:
        url = queue.get(block=False)
    except eventlet.queue.Empty:
        return False
    # use another CappedSemaphore here to limit number of outstanding connections
    eventlet.spawn(crawl, url)
    return True

def crawler():
    while True:
        if spawn_crawl_next():
            continue
        while wait.balance != 0:
            eventlet.sleep(1)
        # if last spawned `crawl` enqueued more links -- process them
        if not spawn_crawl_next():
            break

def main():
    queue.put('http://initial-url')
    crawler()
Re: "concurrent.futures from Python3 does not really apply to "eventlet or gevent" part."
In fact, eventlet can be combined to deploy the concurrent.futures ThreadPoolExecutor as a GreenThread executor.
See: https://github.com/zopefiend/green-concurrent.futures-with-eventlet/commit/aed3b9f17ac27eeaf8c56210e0c8e4aff2ecbdb5
I had the same problem and it has been super difficult to find any answers.
I think I managed to get something working by having a consumer running on a separate thread and using Event for synchronization. Seems to work fine.
Only caveat is that you have to be careful with monkey-patching. If you monkey-patch threading facilities this will probably not work.
import gevent
import gevent.queue
import threading
import time

q = gevent.queue.JoinableQueue()
queue_not_empty = threading.Event()

def run_task(task):
    print(f"Started task {task} # {time.time()}")
    # Use whatever has been monkey-patched with gevent here
    gevent.sleep(1)
    print(f"Finished task {task} # {time.time()}")

def consumer():
    while True:
        print("Waiting for item in queue")
        queue_not_empty.wait()
        try:
            task = q.get()
            print(f"Dequed task {task} for consumption # {time.time()}")
        except gevent.exceptions.LoopExit:
            queue_not_empty.clear()
            continue
        try:
            gevent.spawn(run_task, task)
        finally:
            q.task_done()
        gevent.sleep(0)  # Kickstart task

def enqueue(item):
    q.put(item)
    queue_not_empty.set()

# Run consumer on separate thread
consumer_thread = threading.Thread(target=consumer, daemon=True)
consumer_thread.start()

# Add some tasks
for i in range(5):
    enqueue(i)

time.sleep(2)
Output:
Waiting for item in queue
Dequed task 0 for consumption # 1643232632.0220542
Started task 0 # 1643232632.0222237
Waiting for item in queue
Dequed task 1 for consumption # 1643232632.0222733
Started task 1 # 1643232632.0222948
Waiting for item in queue
Dequed task 2 for consumption # 1643232632.022315
Started task 2 # 1643232632.02233
Waiting for item in queue
Dequed task 3 for consumption # 1643232632.0223525
Started task 3 # 1643232632.0223687
Waiting for item in queue
Dequed task 4 for consumption # 1643232632.022386
Started task 4 # 1643232632.0224123
Waiting for item in queue
Finished task 0 # 1643232633.0235817
Finished task 1 # 1643232633.0236874
Finished task 2 # 1643232633.0237293
Finished task 3 # 1643232633.0237558
Finished task 4 # 1643232633.0237799
Waiting for item in queue
With the new concurrent.futures module in Py3k, I would say (assuming that the processing you want to do is actually something more complex than join):
import concurrent.futures

with concurrent.futures.ThreadPoolExecutor(max_workers=foo) as wp:
    res = [wp.submit(fetchtitle, url) for url in mybiggenerator()]
    ans = '\n'.join(a.result() for a in concurrent.futures.as_completed(res))
This will allow you to start processing results before all of your fetchtitle calls complete. However, it will require you to exhaust mybiggenerator before you continue -- it's not clear how you want to get around this, unless you want to set some max_urls parameter or similar. That would still be something you could do with your original implementation, though.
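One way to avoid exhausting mybiggenerator up front (a sketch, not part of the answer above) is to keep only a bounded window of outstanding futures, submitting more as earlier ones complete:

import concurrent.futures

def fetch_titles_bounded(urls, fetchtitle, max_outstanding=100):
    # Yield results while never holding more than max_outstanding futures
    with concurrent.futures.ThreadPoolExecutor() as wp:
        pending = set()
        for url in urls:
            pending.add(wp.submit(fetchtitle, url))
            if len(pending) >= max_outstanding:
                # Wait for at least one future to finish before submitting more
                done, pending = concurrent.futures.wait(
                    pending, return_when=concurrent.futures.FIRST_COMPLETED)
                for fut in done:
                    yield fut.result()
        for fut in concurrent.futures.as_completed(pending):
            yield fut.result()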
