Detect if main process has been quit from background process - python

I have a single background process running alongside the main one, communicating through a Queue (using multiprocessing, not multithreading). The main process runs constantly, and the background process runs once per queue item so that if it gets backlogged, it can still catch up. Instead of closing with the main script (I've enabled daemon for that), I would prefer it to run until the queue is empty, then save and quit.
It's started like this:
from multiprocessing import Process, Queue

q_send = Queue()
q_recv = Queue()
p1 = Process(target=background_process, args=(q_send, q_recv))
p1.daemon = True
p1.start()
Here's how the background process currently runs:
while True:
    received_data = q_recv.get()
    # do stuff
One way I've considered is to switch the loop to run all the time, but check the size of the queue before trying to read it, and wait a few seconds if it's empty before trying again. There are a couple of problems, though. The whole point is that it runs once per item, so if there are 1000 queued commands, it seems inefficient to check the queue size before each one. Also, there's no real limit on how long the main process can go without sending an update, so I'd have to set the timeout quite high, as opposed to exiting instantly once the connection is broken and the queue emptied. With the background process using up to 2 GB of RAM, it could do with exiting as soon as possible.
It'd also make it look a lot more messy:
afk_time = 0
while True:
    if afk_time > 300:
        return
    if not q_recv.qsize():
        time.sleep(2)
        afk_time += 2
    else:
        afk_time = 0  # reset the idle timer once data arrives
        received_data = q_recv.get()
        # do stuff
I came across is_alive(), and thought perhaps getting the main process from current_process() might work, but it gave a pickling error when I tried to send it through the queue.

Queue.get has a keyword argument timeout that determines the time to wait for an item if the queue is empty. If no item is available when the timeout elapses, an Empty exception is raised.
Remove and return an item from the queue. If optional args block is true and timeout is None (the default), block if necessary until an item is available. If timeout is a positive number, it blocks at most timeout seconds and raises the Empty exception if no item was available within that time. Otherwise (block is false), return an item if one is immediately available, else raise the Empty exception (timeout is ignored in that case).
So you can catch that exception and break out of the loop:
import queue  # multiprocessing raises the standard queue.Empty

try:
    received_data = q_recv.get(timeout=300)
except queue.Empty:
    return
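Combined with the loop from the question, a minimal sketch of the background process might look like the following; the 300-second timeout and the final save step are placeholders standing in for your own values and logic, not something prescribed by the answer:

import queue  # the Empty exception raised by multiprocessing queues

def background_process(q_send, q_recv):
    while True:
        try:
            received_data = q_recv.get(timeout=300)
        except queue.Empty:
            break  # nothing for 5 minutes; assume the main process is gone
        # do stuff with received_data
    # save state here, then fall off the end so the process exits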

Related

Python multiprocessing - main process won't continue when spawned process terminated

I want to run a function in Python in a new process, do some work, return progress to the main process using a queue, wait in the main process for the spawned process to terminate, and then continue execution of the main process.
I got the following code, which runs the function foo in a new process and returns progress using a queue:
import multiprocessing as mp
import time

def foo(queue):
    for i in range(10):
        queue.put(i)
        time.sleep(1)

if __name__ == '__main__':
    mp.set_start_method('spawn')
    queue = mp.Queue()
    p = mp.Process(target=foo, args=(queue,))
    p.start()
    while p.is_alive():
        print("ALIVE")
        print(queue.get())
        time.sleep(0.01)
    print("Process finished")
The output is:
ALIVE
0
ALIVE
1
ALIVE
2
ALIVE
3
ALIVE
4
ALIVE
5
ALIVE
6
ALIVE
7
ALIVE
8
ALIVE
9
ALIVE
At some point, neither "ALIVE" nor "Process finished" is printed. How can I continue execution when the spawned process stops running?
Edit:
The problem was that I didn't know that queue.get() blocks until an item is put into the queue if the queue is empty. I fixed it by changing
while p.is_alive():
    print(queue.get())
    time.sleep(0.01)
to
while p.is_alive():
    if not queue.empty():
        print(queue.get())
    time.sleep(0.01)
Your code has a race condition. After the last number is put into the queue, the child process sleeps one more time before it exits. That gives the parent process enough time to fetch that item, sleep for a shorter time, and then conclude that the child is still alive before waiting for an 11th item that never comes.
Note that you get more ALIVE reports in your output than you do numbers. That tells you where the parent process is deadlocked.
There are a few possible ways you could fix the issue. You could change the foo function to sleep first, and put the item into the queue afterwards. That would make it so that it could quit running immediately after sending the 9 to its parent, which would probably allow it to avoid the race condition (since the parent does sleep for a short time after receiving each item). There would still be a small possibility of the race happening if things behaved very strangely, but it's quite unlikely.
A better approach might be to prevent the possibility of the race from occurring at all. For example, you might change the queue.get call to have a timeout set, so that it will give up (with a queue.Empty exception) if there's nothing to retrieve for too long. You could catch that exception immediately, or even use it as a planned method of breaking out of the loop rather than testing if the child is still alive or not, and catching it at a higher level.
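For illustration, a sketch of the parent loop using the timeout as the planned way out; the 5-second value is an arbitrary choice, not something from the question, and queue here is the mp.Queue from the code above (the exception class is imported under a different name to avoid shadowing it):

from queue import Empty  # multiprocessing queues raise this same exception

try:
    while True:
        # Give up once the child sends nothing for 5 seconds
        print(queue.get(timeout=5))
except Empty:
    pass
print("Process finished")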
A final option might be to send a special sentinel value from the child to the parent via the queue to signal when there will be no further values coming. For instance, you might send None as the last value, just before the foo function ends. The parent code could check for that specific value and break out of its loop, rather than treating it like a normal value (and e.g. printing it). This sort of positive signal that the child code is done might be better than the negative signal of a timeout, since it's less likely that something going wrong (e.g. the child crashing) will be misinterpreted as the expected shutdown.
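A sketch of that sentinel approach applied to the question's code, using None as the final value as suggested above:

def foo(queue):
    for i in range(10):
        queue.put(i)
        time.sleep(1)
    queue.put(None)  # sentinel: no further values are coming

# ...and in the parent, instead of checking p.is_alive():
while True:
    item = queue.get()
    if item is None:
        break  # child signalled completion
    print(item)
print("Process finished")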

Interrupting a Queue.get

How can I interrupt a blocking Queue.get() in Python 3.X?
In Python 2.X setting a long timeout seems to work but the same cannot be said for Python 3.5.
Running on Windows 7, CPython 3.5.1; both the machine and Python are 64-bit.
Seems like it does not behave the same on Ubuntu.
The reason it works on Python 2 is that Queue.get with a timeout on Python 2 is implemented surprisingly poorly: as a polling loop with increasing sleeps between non-blocking attempts to acquire the underlying lock. Python 2 doesn't actually feature a lock primitive that supports a timed blocking acquire (which is what a Queue's internal Condition variable needs but lacks, hence the busy loop). When you try this on Python 2, all you're checking is whether the Ctrl-C is processed after one of the (short) time.sleep calls finishes, and the longest sleep in Condition is only 0.05 seconds, which is so short you probably wouldn't notice even if you hit Ctrl-C the instant a new sleep started.
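For reference, the polling loop in question, lightly condensed from Python 2.7's threading.py (Condition.wait); note the delay doubling up to the 0.05-second cap described above:

# From Python 2.7's threading.py, Condition.wait (condensed)
endtime = _time() + timeout
delay = 0.0005  # 500 us -> initial delay of 1 ms
while True:
    gotit = waiter.acquire(0)  # non-blocking attempt on the lock
    if gotit:
        break
    remaining = endtime - _time()
    if remaining <= 0:
        break
    delay = min(delay * 2, remaining, .05)  # never sleep more than 0.05s
    _sleep(delay)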
Python 3 has true timed lock acquire support (thanks to narrowing the number of target OSes to those which feature a native timed mutex or semaphore of some sort). As such, you're actually blocking on the lock acquisition for the whole timeout period, not blocking for 0.05s at a time between polling attempts.
It looks like Windows allows handlers to be registered for Ctrl-C, meaning Ctrl-C doesn't necessarily generate a true signal, so the lock acquisition isn't interrupted to handle it. Python is informed of the Ctrl-C when the timed lock acquisition eventually fails, so if the timeout is short you'll eventually see the KeyboardInterrupt, but it won't be seen until the timeout lapses. Since Python 2's Condition only sleeps 0.05 seconds at a time (or less), the Ctrl-C is always processed quickly, but Python 3 will sleep until the lock is acquired.
Ctrl-Break is guaranteed to behave as a signal, but it also can't be handled by Python properly (it just kills the process) which probably isn't what you want either.
If you want Ctrl-C to work, you're stuck polling to some extent, but at least (unlike Python 2) you can effectively poll for Ctrl-C while live blocking on the queue the rest of the time (so you're alerted to an item becoming free immediately, which is the common case).
import time
import queue

def get_timed_interruptable(q, timeout):
    stoploop = time.monotonic() + timeout - 1
    while time.monotonic() < stoploop:
        try:
            return q.get(timeout=1)  # Allow check for Ctrl-C every second
        except queue.Empty:
            pass
    # Final wait for last fraction of a second
    return q.get(timeout=max(0, stoploop + 1 - time.monotonic()))
This blocks for a second at a time until:
The time remaining is less than a second (it blocks for the remaining time, then allows the Empty to propagate normally)
Ctrl-C was pressed during the one second interval (after the remainder of that second elapses, KeyboardInterrupt is raised)
An item is acquired (if Ctrl-C was pressed, it will raise at this point too)
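Hypothetical usage, as a drop-in for a plain q.get(timeout=...) call:

import queue

q = queue.Queue()
try:
    # Behaves like q.get(timeout=300), but Ctrl-C is handled within ~1 second
    item = get_timed_interruptable(q, 300)
except queue.Empty:
    item = None  # timed out with nothing to process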
As mentioned in the comment thread on the great answer @ShadowRanger provided above, here is an alternate, simplified form of his function:
import queue

def get_timed_interruptable(in_queue, timeout):
    '''
    Perform a queue.get() with a short timeout to avoid
    blocking SIGINT on Windows.
    '''
    while True:
        try:
            # Allow check for Ctrl-C every second
            return in_queue.get(timeout=min(1, timeout))
        except queue.Empty:
            if timeout < 1:
                raise
            else:
                timeout -= 1
And as @Bharel pointed out in the comments, this could run a few milliseconds longer than the absolute timeout, which may be undesirable. As such, here is a version with significantly better precision:
import time
import queue

def get_timed_interruptable_precise(in_queue, timeout):
    '''
    Perform a queue.get() with a short timeout to avoid
    blocking SIGINT on Windows. Track the time closely
    for high precision on the timeout.
    '''
    timeout += time.monotonic()
    while True:
        try:
            # Allow check for Ctrl-C every second
            return in_queue.get(timeout=max(0, min(1, timeout - time.monotonic())))
        except queue.Empty:
            if time.monotonic() > timeout:
                raise
Just use get_nowait which won't block.
import time
...
while True:
    if not q.empty():
        q.get_nowait()
        break
    time.sleep(1)  # optional timeout
This is obviously busy waiting, but q.get() does basically the same thing.

Python Gevent Shared Queue (Listener Process)

I am trying to get some code working where I can implement logging into a multi-threaded program using gevent. What I'd like to do is set up custom logging handlers to put log events into a Queue, while a listener process is continuously watching for new log events to handle appropriately. I have done this in the past with Multiprocessing, but never with Gevent.
I'm having an issue where the program is getting caught up in the infinite loop (listener process), and not allowing the other threads to "do work"...
Ideally, after the worker processes have finished, I can pass an arbitrary value to the listener process to tell it to break the loop, and then join all the processes together. Here's what I have so far:
import gevent
from gevent.pool import Pool
import Queue
import random
import time

def listener(q):
    while True:
        if not q.empty():
            num = q.get()
            print "The number is: %s" % num
            if num <= 100:
                print q.get()
            # got passed 101, break out
            else:
                break
        else:
            continue

def worker(pid, q):
    if pid == 0:
        listener(q)
    else:
        gevent.sleep(random.randint(0, 2) * 0.001)
        num = random.randint(1, 100)
        q.put(num)

def main():
    q = Queue.Queue()
    all_threads = []
    all_threads = [gevent.spawn(worker, pid, q) for pid in xrange(10)]
    gevent.wait(all_threads[1:])
    q.put(101)
    gevent.joinall(all_threads)

if __name__ == '__main__':
    main()
As I said, the program seems to be getting hung up on that first process and does not allow the other workers to do their thing. I have also tried spawning the listener process completely separately itself (which is actually how I would rather do it), but that didn't seem to work either so I tried this way.
Any help would be appreciated; I feel like I am probably just missing something obvious about gevent's back end.
Thanks
The first problem is that your listener never yields if the queue is initially empty. The first task you spawn is your listener. When it starts, it enters the while True: loop; the q is empty, so it takes the else branch, which just continues, looping back to the start of the while loop, where the q is still empty. So you sit in the first greenlet, constantly checking that the q is empty.
The key thing here is that gevent does not use "native" threads or processes. Unlike "real" threads, which can be switched to at any time by something behind the scenes (like your OS scheduler), gevent uses 'greenlets', which require that you do something to "yield control" to another task. That something is an operation gevent knows would block, such as reading from the network or disk, or using one of the blocking gevent operations.
One crude fix would be to start your listener when pid == 9 rather than 0. By making it spawn last, there will be items in the q, and it will go into the main if branch. The downside is that this doesn't fix the logic problem, so the first time the queue is empty, you'll get stuck in your infinite loop again.
A more correct fix would be to put gevent.sleep() instead of continue. sleep is a blocking operation, so your other tasks will get a chance to run. Without arguments, it waits for no time, but still gives gevent the chance to decide to switch to another task if it is ready to run. This still isn't very efficient, though, as if the Queue is empty, it's going to spend a lot of pointless time checking that over and over and asking to run again as soon as it can. sleep'ing for longer than the default of 0 will be more efficient, but would delay processing your log messages.
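A minimal sketch of that fix, assuming the rest of the question's code stays the same (the double q.get() from the original listener is dropped here for clarity):

def listener(q):
    while True:
        if not q.empty():
            num = q.get()
            print "The number is: %s" % num
            if num > 100:
                break  # got passed 101, break out
        else:
            gevent.sleep(0)  # yield so other greenlets get a chance to run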
However, you can instead take advantage of the fact that many of gevent's types, such as Queue, can be used in more Pythonic ways and make your code a lot simpler and easier to understand, as well as more efficient.
import random

import gevent
from gevent.queue import Queue

def listener(q):
    for msg in q:
        print "the number is %d" % msg

def worker(pid, q):
    gevent.sleep(random.randint(0, 2) * 0.001)
    num = random.randint(1, 100)
    q.put(num)

def main():
    q = Queue()
    listener_task = gevent.spawn(listener, q)
    worker_tasks = [gevent.spawn(worker, pid, q) for pid in xrange(1, 10)]
    gevent.wait(worker_tasks)
    q.put(StopIteration)
    listener_task.join()  # join the greenlet itself; gevent has no top-level join()

if __name__ == '__main__':
    main()
Here, Queue can operate as an iterator in a for loop. As long as there are messages, it will get an item, run the loop, and then wait for another item. If there are no items, it will just block and hang around until the next one arrives. Since it blocks, though, gevent will switch to one of your other tasks to run, avoiding the infinite loop problem your example code has.
Because this version is using the Queue as a for loop iterator, there's also automatically a nice sentinel value we can put in the queue to make the listener task quit. If a for loop gets StopIteration from its iterator, it will exit cleanly. So when our for loop that's reading from q gets StopIteration from the q, it exits, and then the function exits, and the spawned task is finished.

efficient python raw_input and serial port polling

I am working on a python project that is polling for data on a COM port and also polling for user input. As of now, the program is working flawlessly but seems to be inefficient. I have the serial port polling occurring in a while loop running in a separate thread and sticking data into a Queue. The user input polling is also occurring in a while loop running in a separate thread sticking input into a Queue. Unfortunately I have too much code and posting it would take away from the point of the question.
So is there a more efficient way to poll a serial or raw_input() without sticking them in an infinite loop and running them in their own thread?
I have been doing a lot of research on this topic and keep coming across the "separate thread and Queue" paradigm. However, when I run this program I am using nearly 30% of my CPU resources on a quad-core i7. There has to be a better way.
I have worked with ISRs in C and was hoping there is something similar to interrupts that I could be using. My recent research has uncovered a lot of "Event" libraries with callbacks, but I can't seem to wrap my head around how they would fit my situation. I am developing on a Windows 7 (64-bit) machine but will be moving the finished product to an RPi when I am finished. I'm not looking for code, I just need to be pointed in the right direction. Thank you for any info.
You're seeing the high CPU usage because your main thread is using the non-blocking get_nowait call to poll two different queues in an infinite loop, which means the loop spins constantly even when there is no data to process. Constantly running through the loop burns CPU cycles, just as any tight infinite loop does. To avoid using lots of CPU, you want your infinite loops to use blocking I/O, so that they wait until there's actually data to process before continuing. This way, you're not constantly running through the loop, and therefore not using CPU.
So, user input thread:
while True:
    data = raw_input()  # This blocks, and won't use CPU while doing so
    queue.put({'type': 'input', 'data': data})
COM thread:
while True:
    data = com.get_com_data()  # This blocks, and won't use CPU while doing so
    queue.put({'type': 'COM', 'data': data})
main thread:
while True:
    data = queue.get()  # This call will block, and won't use CPU while doing so
    # process data
The blocking get call will just wait until it's woken up by a put in another thread, using a threading.Condition object. It's not repeatedly polling. From Queue.py:
# Notify not_empty whenever an item is added to the queue; a
# thread waiting to get is notified then.
self.not_empty = _threading.Condition(self.mutex)

...

def get(self, block=True, timeout=None):
    self.not_empty.acquire()
    try:
        if not block:
            if not self._qsize():
                raise Empty
        elif timeout is None:
            while not self._qsize():
                self.not_empty.wait()  # This is where the code blocks
        elif timeout < 0:
            raise ValueError("'timeout' must be a non-negative number")
        else:
            endtime = _time() + timeout
            while not self._qsize():
                remaining = endtime - _time()
                if remaining <= 0.0:
                    raise Empty
                self.not_empty.wait(remaining)
        item = self._get()
        self.not_full.notify()
        return item
    finally:
        self.not_empty.release()

def put(self, item, block=True, timeout=None):
    self.not_full.acquire()
    try:
        if self.maxsize > 0:
            if not block:
                if self._qsize() == self.maxsize:
                    raise Full
            elif timeout is None:
                while self._qsize() == self.maxsize:
                    self.not_full.wait()
            elif timeout < 0:
                raise ValueError("'timeout' must be a non-negative number")
            else:
                endtime = _time() + timeout
                while self._qsize() == self.maxsize:
                    remaining = endtime - _time()
                    if remaining <= 0.0:
                        raise Full
                    self.not_full.wait(remaining)
        self._put(item)
        self.unfinished_tasks += 1
        self.not_empty.notify()  # This is what wakes up `get`
    finally:
        self.not_full.release()
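For completeness, a hypothetical version of the main thread's "# process data" step, dispatching on the 'type' field used in the snippets above; the two handler functions are made-up names, not part of the original answer:

while True:
    data = queue.get()  # blocks without burning CPU
    if data['type'] == 'input':
        handle_user_input(data['data'])   # hypothetical handler
    elif data['type'] == 'COM':
        handle_serial_data(data['data'])  # hypothetical handler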

Output Queue of a Python multiprocessing is providing more results than expected

From the following code I would expect the length of the resulting list to be the same as that of the range of items with which the worker processes are fed:
import multiprocessing as mp

def worker(working_queue, output_queue):
    while True:
        if working_queue.empty() is True:
            break  # this is supposed to end the process.
        else:
            picked = working_queue.get()
            if picked % 2 == 0:
                output_queue.put(picked)
            else:
                working_queue.put(picked + 1)
    return

if __name__ == '__main__':
    static_input = xrange(100)
    working_q = mp.Queue()
    output_q = mp.Queue()
    for i in static_input:
        working_q.put(i)
    processes = [mp.Process(target=worker, args=(working_q, output_q)) for i in range(mp.cpu_count())]
    for proc in processes:
        proc.start()
    for proc in processes:
        proc.join()
    results_bank = []
    while True:
        if output_q.empty() is True:
            break
        else:
            results_bank.append(output_q.get())
    # The length of this list should equal len(static_input), the range used to
    # populate the input queue; in other words, this tells whether all the items
    # placed for processing were actually processed.
    print len(results_bank)
    results_bank.sort()
    print results_bank
Does anyone have any idea how to make this code run properly?
This code will never stop:
Each worker gets an item from the queue as long as it is not empty:
picked = working_queue.get()
and puts a new one for each that it got:
working_queue.put(picked+1)
As a result the queue will never be empty, except when the timing between the processes happens to be such that the queue is empty at the moment one of them calls empty(). Because the queue length is initially 100 and you have as many processes as cpu_count(), I would be surprised if this ever stopped on any realistic system.
Executing the code with a slight modification proves me wrong: it does stop at some point, which actually surprises me. Executing the code with one process exposes what looks like a bug, because after some time the process freezes but does not return. With multiple processes the result varies.
Adding a short sleep period in each loop iteration makes the code behave as I expected and explained above. There seems to be some timing issue between Queue.put, Queue.get and Queue.empty, although they are supposed to be thread-safe. Removing the empty test also gives the expected result (without ever getting stuck at an empty queue).
Found the reason for the varying behaviour: the objects put on the queue are not flushed immediately. Therefore empty() might return True although there are items in the queue still waiting to be flushed.
From the documentation:
Note: When an object is put on a queue, the object is pickled and a background thread later flushes the pickled data to an underlying pipe. This has some consequences which are a little surprising, but should not cause any practical difficulties – if they really bother you then you can instead use a queue created with a manager.
After putting an object on an empty queue there may be an infinitesimal delay before the queue’s empty() method returns False and get_nowait() can return without raising Queue.Empty.
If multiple processes are enqueuing objects, it is possible for the objects to be received at the other end out-of-order. However, objects enqueued by the same process will always be in the expected order with respect to each other.
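One way to sidestep the empty() race entirely (an illustration, not part of the original answer) is to drop the empty() check and let each worker exit when get() times out, echoing the first answer on this page; the 1-second timeout is an arbitrary choice:

import Queue  # Python 2 stdlib module providing the Empty exception

def worker(working_queue, output_queue):
    while True:
        try:
            picked = working_queue.get(timeout=1)  # arbitrary idle cutoff
        except Queue.Empty:
            return  # nothing arrived for a second; assume the work is done
        if picked % 2 == 0:
            output_queue.put(picked)
        else:
            working_queue.put(picked + 1)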
