How to handle exceptions in threading with a queue in Python?

These lines are never printed:
"Exception in threadfuncqueue handled by threadfuncqueue",
"Exception in threadfuncqueue handled by main thread" and
"thread test with queue passed". The program never quits!
from threading import Thread
from Queue import Queue
import time

class ImRaiseError():
    def __init__(self):
        time.sleep(1)
        raise Exception(self.__class__.__name__)

# place to paste the working code example from below

print "begin thread test with queue"

def threadfuncqueue(q):
    print "\n"+str(q.get())
    while not q.empty():
        try:
            testthread = ImRaiseError()
        finally:
            print "Exception in threadfuncqueue handled by threadfuncqueue"

q = Queue()
items = [1,2]
for i in range(len(items)):
    t = Thread(target=threadfuncqueue, args=(q,))
    if(1 == i):
        t.daemon = False
    else:
        t.daemon = True
    t.start()

for item in items:
    q.put("threadfuncqueue"+str(item))

try:
    q.join() # block until all tasks are done
finally:
    print "Exception in threadfuncqueue handled by main thread"

print "thread test with queue passed"
quit()
How do I handle this exception?
Example of working code, but without a queue:
print "=========== procedure style test"
def threadfunc(q):
print "\n"+str(q)
while True:
try:
testthread = ImRaiseError()
finally:
print str(q)+" handled by process"
try:
threadfunc('testproc')
except Exception as e:
print "error!",e
print "procedure style test ==========="
print "=========== simple thread tests"
testthread = Thread(target=threadfunc,args=('testthread',))
testthread.start()
try:
testthread.join()
finally:
print "Exception in testthread handled by main thread"
testthread1 = Thread(target=threadfunc,args=('testthread1',))
testthread1.start()
try:
testthread1.join()
finally:
print "Exception in testthread1 handled by main thread"
print "simple thread tests ==========="

Short Answer
You're putting things in a queue and retrieving them, but if you're going to join a queue, you need to mark tasks as done as you pull them out of the queue and process them. According to the docs, every time you enqueue an item, a counter is incremented, and you need to call q.task_done() to decrement that counter. q.join() will block until that counter reaches zero. Add this immediately after your q.get() call to prevent main from being blocked:
q.task_done()
Also, I find it odd that you're checking q for emptiness after you've retrieved something from it. I'm not sure exactly what you're trying to achieve with that so I don't have any recommendations for you, but I would suggest reconsidering your design in that area.
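In your Python 2 style, a minimal sketch of the worker with the counter balanced looks like this (only the q.task_done() line is new; everything else is unchanged):

def threadfuncqueue(q):
    print "\n"+str(q.get())
    q.task_done() # decrement the unfinished-task counter so q.join() can return
    while not q.empty():
        try:
            testthread = ImRaiseError()
        finally:
            print "Exception in threadfuncqueue handled by threadfuncqueue"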
Other Thoughts
Once you get this code working you should take it over to Code Review because it is a bit of a mess. Here are a few thoughts for you:
Exception Handling
You're not actually "handling" the exception in threadfuncqueue(q). All a finally clause does is let you execute cleanup code whether or not an exception occurs. It does not catch and handle the exception; the exception still travels up the call stack. Consider this example, test.py:
try:
    raise Exception
finally:
    print("Yup!")
print("Nope!")
Output:
Yup!
Traceback (most recent call last):
  File "test.py", line 2, in <module>
    raise Exception
Exception
Notice that "Yup!" got printed while "Nope!" didn't. The code in the finally block was executed, but that didn't stop the exception from propagating up the stack and halting the interpreter. You need the except statement for that:
try:
    raise Exception
except Exception: # only catch the exceptions you expect
    print("Yup!")
print("Nope!")
Output:
Yup!
Nope!
This time both are printed, because we caught and handled the exception.
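If you want both the cleanup and the handling, except and finally can be combined in one statement; a small illustrative sketch:

try:
    raise Exception('boom')
except Exception as e: # handle the error
    print("Handled:", e)
finally: # runs whether or not an exception occurred
    print("Cleanup always runs")

Both lines print, and execution continues normally afterwards.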
Exception Raising
Your current method of raising the exception in your thread is needlessly complicated. Instead of creating the whole ImRaiseError class, just raise the exception you want with a string:
raise Exception('Whatever error message I want')
If you find yourself manually manipulating special double-underscore attributes (like self.__class__.__name__) just to raise an error, you're usually doing something wrong.
Extra Parentheses
Using parentheses around conditional expressions is generally frowned upon in Python:
if(1 == i): # unnecessary extra characters
Try to break the C/C++/Java habit and get rid of them:
if 1 == i:
Other
I've already gone beyond the scope of this question, so I'm going to cut this off now, but there are a few other things you could clean up and make more idiomatic. Head over to Code Review when you're done here and see what else can be improved.

Related

Python: if I wrap pool.apply_async commands in try...except, are they still executed in parallel?

I've taken over some code a former colleague wrote, which was frequently getting stuck when one or more parallelised functions threw a NameError exception, which wasn't caught. (The parallelisation is handled by multiprocessing.Pool.) Because the exception is due to certain arguments not being defined, the only way I've been able to catch this exception is to put the pool.apply_async commands into try...except blocks, like so:
from multiprocessing import Pool

# Define worker functions
def workerfn1(args1):
    #commands

def workerfn2(args2):
    #more commands

def workerfn3(args3):
    #even more commands

# Execute worker functions in parallel
with Pool(processes=os.cpu_count()-1) as pool:
    try:
        r1 = pool.apply_async(workerfn1, args1)
    except NameError as e:
        print("Worker function r1 failed")
        print(e)
    try:
        r2 = pool.apply_async(workerfn2, args2)
    except NameError as e:
        print("Worker function r2 failed")
        print(e)
    try:
        r3 = pool.apply_async(workerfn3, args3)
    except NameError as e:
        print("Worker function r3 failed")
        print(e)
Obviously, the try...except blocks are not parallelised, but the interpreter has to read the apply_async commands sequentially anyway while it assigns them to different CPUs...so will these three functions still be executed in parallel (if they don't throw the NameError exception), or does the use of try...except prevent this from happening?
First, you need to be more careful to post code that is not full of spelling and other errors.
Method multiprocessing.pool.Pool.apply_async (not apply_sync) returns a multiprocessing.pool.AsyncResult instance. It is only when you call method get on this instance that you get either the return value from your worker function, or any exception that occurred in your worker function is re-raised. So:
from multiprocessing import Pool

# Define worker functions
def workerfn1(args1):
    ...

def workerfn2(args2):
    ...

def workerfn3(args3):
    raise NameError('Some name goes here.')

# Required for Windows:
if __name__ == '__main__':
    # Execute worker functions in parallel
    with Pool(processes=3) as pool:
        result1 = pool.apply_async(workerfn1, args=(1,))
        result2 = pool.apply_async(workerfn2, args=(1,))
        result3 = pool.apply_async(workerfn3, args=(1,))
        try:
            return_value1 = result1.get()
        except NameError as e:
            print("Worker function workerfn1 failed:", e)
        try:
            return_value2 = result2.get()
        except NameError as e:
            print("Worker function workerfn2 failed:", e)
        try:
            return_value3 = result3.get()
        except NameError as e:
            print("Worker function workerfn3 failed:", e)
Prints:
Worker function workerfn3 failed: Some name goes here.
Note
Without calling get on the AsyncResult returned from apply_async, you are not waiting for the completion of the submitted task, and there is no point in surrounding the call with try/except. When you then fall through the with block, an implicit call to terminate is made on the pool instance, which immediately kills all running pool processes: any running tasks are halted and any tasks waiting to run are purged. You can call pool.close() followed by pool.join() within the block, and that sequence will wait for all submitted tasks to complete. But without explicitly calling get on the AsyncResult instances, you will not be able to retrieve return values or exceptions.
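A sketch of that close/join pattern, reusing the worker functions from the example above:

with Pool(processes=3) as pool:
    results = [pool.apply_async(fn, args=(1,)) for fn in (workerfn1, workerfn2, workerfn3)]
    pool.close() # no more tasks will be submitted
    pool.join()  # wait for all submitted tasks to complete
    for i, result in enumerate(results, start=1):
        try:
            print('return value %d:' % i, result.get()) # returns immediately after join
        except NameError as e:
            print('worker function workerfn%d failed:' % i, e)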

Python: one particular thread does not return when it hits an exception

I have this code in my main script for starting threads from an array. I have different code in Python scripts for different threads to work from. There is one particular piece of code where I don't understand why, when it hits an exception, it seems to hang my entire program/script...
import threading #(of course)

def start_threads(threads):
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return

threads = []
thread1 = threading.Thread(target=start_test1, args=(self,))
thread1.daemon = True
threads.append(thread1)
thread2 = threading.Thread(target=start_test2, args=(self,))
thread2.daemon = True
threads.append(thread2)
start_threads(threads)
The code for thread1 never causes my main program/script to halt when it hits any exceptions, but for some reason the code in thread2 does. I am doing a try/except in the code for both threads, so I don't know why thread2 won't return properly.
the code inside thread2 is similar to following:
try:
    print('starting thread2')
    some_function(options, parameters, etc)
    print('thread2 complete')
except Exception as e:
    print('error running thread2')
    print('msg: %s' % e)
I even added a return after the last print line in the except block. When I debug past that last print line, or if I place a return there, nothing happens; stepping through the lines seems fine, but control never returns to my main code to exit the program/script. I see the print messages on the console, but my code should be exiting the main program with another message.

How to catch Exceptions from thread

I've got this piece of code that starts a thread. It then waits for a few seconds and checks whether it got an Event. If it did, the thread is 'canceled'; otherwise an Exception is raised.
I want to know how to catch this Exception, because I've searched for a long time and did not find a clear answer.
import threading
import time

def thread(args1, stop_event):
    print "starting thread"
    stop_event.wait(10)
    if not stop_event.is_set():
        raise Exception("signal!")
    pass

try:
    t_stop = threading.Event()
    t = threading.Thread(target=thread, args=(1, t_stop))
    t.start()
    time.sleep(11)
    #normally this should not be executed
    print "stopping thread!"
    t_stop.set()
except Exception as e:
    print "action took too long, bye!"
First I tried this concept with a Python signal, but when performing more than one signal.alarm it just got stuck for no reason at all (probably a bug).
EDIT:
I don't want to extend the already existing class; I want to work with the natively defined class.
Also, I do not want to loop continuously to check whether exceptions occurred. I want the thread to pass the exception to its parent method; hence the time.sleep(11) action in my code.
Created from my comments on your question.
Try something like this.
import sys
import threading
import time
import Queue

def thread(args1, stop_event, queue_obj):
    print "starting thread"
    stop_event.wait(10)
    if not stop_event.is_set():
        try:
            raise Exception("signal!")
        except Exception:
            queue_obj.put(sys.exc_info())
    pass

try:
    queue_obj = Queue.Queue()
    t_stop = threading.Event()
    t = threading.Thread(target=thread, args=(1, t_stop, queue_obj))
    t.start()
    time.sleep(11)
    # normally this should not be executed
    print "stopping thread!"
    t_stop.set()
    try:
        exc = queue_obj.get(block=False)
    except Queue.Empty:
        pass
    else:
        exc_type, exc_obj, exc_trace = exc
        print exc_obj
except Exception as e:
    print "action took too long, bye!"
When run, the Exception "signal!" is raised, and is printed by print exc_obj.
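If you want the parent to actually re-raise the captured exception instead of just printing it, a minimal sketch in the same Python 2 style:

exc_type, exc_obj, exc_trace = exc
raise exc_type, exc_obj, exc_trace # re-raise with the original traceback (Python 2 syntax)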

How to detect exceptions in concurrent.futures in Python3?

I have just moved on to Python 3 as a result of its concurrent.futures module. I was wondering if I could get it to detect errors. I want to use concurrent.futures for parallel programming; if there are more efficient modules, please let me know.
I do not like multiprocessing, as it is too complicated and there is not much documentation out there. It would be great, however, if someone could write a Hello World, without classes, only functions, using multiprocessing to compute in parallel, so that it is easy to understand.
Here is a simple script:
from concurrent.futures import ThreadPoolExecutor

def pri():
    print("Hello World!!!")

def start():
    try:
        while True:
            pri()
    except KeyboardInterrupt:
        print("YOU PRESSED CTRL+C")

with ThreadPoolExecutor(max_workers=3) as exe:
    exe.submit(start)
The above code is just a demo of how CTRL+C will not work to print the statement.
What I want is to be able to call a function if an error is present. This error detection must come from the function itself.
Another example
import socket
import time
from concurrent.futures import ThreadPoolExecutor

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

def con():
    try:
        s.connect((x, y))
        main()
    except socket.gaierror:
        err()

def err():
    time.sleep(1)
    con()

def main():
    s.send("[+] Hello")

with ThreadPoolExecutor(max_workers=3) as exe:
    exe.submit(con)
Way too late to the party, but maybe it'll help someone else...
I'm pretty sure the original question was not really answered. Folks got hung up on the fact that user5327424 was using a keyboard interrupt to raise an exception when the point was that the exception (however it was caused) was not raised. For example:
import concurrent.futures

def main():
    numbers = range(10)
    with concurrent.futures.ThreadPoolExecutor() as executor:
        results = {executor.submit(raise_my_exception, number): number for number in numbers}

def raise_my_exception(number):
    print('Proof that this function is getting called. %s' % number)
    raise Exception('This never sees the light of day...')

main()
When the example code above is executed, you will see the text inside the print statement displayed on the screen, but you will never see the exception. This is because the result of each thread is held in the results object; you need to iterate it and call result() on each future to get to your exceptions. The following example shows how to access the results.
import concurrent.futures

def main():
    numbers = range(10)
    with concurrent.futures.ThreadPoolExecutor() as executor:
        results = {executor.submit(raise_my_exception, number): number for number in numbers}
        for result in results:
            # This will cause the exception to be raised (but only the first one)
            print(result.result())

def raise_my_exception(number):
    print('Proof that this function is getting called. %s' % number)
    raise Exception('This will be raised once the results are iterated.')

main()
I'm not sure whether I like this behavior or not, but it does allow the threads to execute fully, regardless of the exceptions encountered inside the individual threads.
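To surface every exception rather than only the first, one option is to wrap each result() call in its own try/except, for example with concurrent.futures.as_completed; a sketch along the lines of the example above:

import concurrent.futures

def raise_my_exception(number):
    raise Exception('Error in worker %s' % number)

def main():
    with concurrent.futures.ThreadPoolExecutor() as executor:
        futures = {executor.submit(raise_my_exception, n): n for n in range(10)}
        for future in concurrent.futures.as_completed(futures):
            try:
                future.result()
            except Exception as e:
                print('worker %s raised: %s' % (futures[future], e))

main()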
Here's a solution. I'm not sure you'll like it, but I can't think of any other. I've modified your code to make it work.
from concurrent.futures import ThreadPoolExecutor
import time

quit = False

def pri():
    print("Hello World!!!")

def start():
    while quit is not True:
        time.sleep(1)
        pri()

try:
    pool = ThreadPoolExecutor(max_workers=3)
    pool.submit(start)
    while quit is not True:
        print("hei")
        time.sleep(1)
except KeyboardInterrupt:
    quit = True
Here are the points:
When you use with ThreadPoolExecutor(max_workers=3) as exe, it waits until all tasks have been done. Have a look at the docs:
If wait is True then this method will not return until all the pending futures are done executing and the resources associated with the executor have been freed. If wait is False then this method will return immediately and the resources associated with the executor will be freed when all pending futures are done executing. Regardless of the value of wait, the entire Python program will not exit until all pending futures are done executing.
You can avoid having to call this method explicitly if you use the with statement, which will shutdown the Executor (waiting as if Executor.shutdown() were called with wait set to True)
It's like calling join() on a thread.
That's why I replaced it with:
pool = ThreadPoolExecutor(max_workers=3)
pool.submit(start)
The main thread must be doing "work" to be able to catch a Ctrl+C, so you can't just leave the main thread there and exit; the simplest way is to run an infinite loop.
Now that you have a loop running in the main thread, when you hit CTRL+C the program will enter the except KeyboardInterrupt block and set quit = True. Then your worker thread can exit.
Strictly speaking, this is only a workaround. It seems to me it's impossible to have another way for this.
Edit
I'm not sure what's bothering you, but you can catch an exception in another thread without problems:
import socket
import time
from concurrent.futures import ThreadPoolExecutor

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

def con():
    try:
        raise socket.gaierror # simulate the connection error
        main()
    except socket.gaierror:
        print("gaierror occurred")
        err()

def err():
    print("err invoked")
    time.sleep(1)
    con()

def main():
    s.send("[+] Hello")

with ThreadPoolExecutor(3) as exe:
    exe.submit(con)
Output
gaierror occurred
err invoked
gaierror occurred
err invoked
gaierror occurred
err invoked
gaierror occurred
...
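As an aside, the question also asked for a function-only multiprocessing "Hello World"; a minimal sketch (not from the original answers, Python 3):

from multiprocessing import Pool

def greet(name):
    return 'Hello %s!' % name

if __name__ == '__main__':
    with Pool(processes=2) as pool: # two worker processes
        for message in pool.map(greet, ['World', 'Python']):
            print(message)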

Python multiprocessing: handle exception in parent process and make all children die gracefully

I have the following code.
It uses a Python module called decorator.
from multiprocessing import Pool
from random import randint
import traceback
import decorator
import time

def test_retry(number_of_retry_attempts=1, **kwargs):
    timeout = kwargs.get('timeout', 2.0) # seconds

    @decorator.decorator
    def tryIt(func, *fargs, **fkwargs):
        for _ in xrange(number_of_retry_attempts):
            try: return func(*fargs, **fkwargs)
            except:
                tb = traceback.format_exc()
                if timeout is not None:
                    time.sleep(timeout)
                print 'Catching exception %s. Attempting retry: ' % (tb)
        raise
    return tryIt
The decorator module helps me decorate my data-warehouse call functions, so I don't need to take care of connection dropping and various connection-based issues; it allows me to reset the connection and try again after some timeout. I decorate all my functions which do data-warehouse reads with this method, so I get retry for free.
I have the following methods.
def process_generator(data):
    # Process the generated data
    pass

def generator():
    data = data_warhouse_fetch_method() # This is the actual method which needs retry
    yield data

@test_retry(number_of_retry_attempts=2, timeout=1.0)
def data_warhouse_fetch_method():
    # Fetch the data from the data-warehouse
    pass
I try to multiprocess my code using the multiprocessing module, like this:
try:
    pool = Pool(processes=2)
    result = pool.imap_unordered(process_generator, generator())
except Exception as exception:
    print 'Do some post processing stuff'
    tb = traceback.format_exc()
    print tb
Things are normal when everything is successful, and things are also normal when the problem fixes itself within the number of retries. But once the number of retries is exceeded, I raise the exception in the test_retry method, and that exception is not caught in the main process. The process dies and the processes forked by the main process are left as orphans. Maybe I am doing something wrong here. I am looking for some help to fix the following problem: propagate the exception to the parent process so that I can handle the exception and make my children die gracefully. I also want to know how to inform the child processes to die gracefully. Thanks in advance for the help.
Edit: Added more code to explain.
def test_retry(number_of_retry_attempts=1, **kwargs):
    timeout = kwargs.get('timeout', 2.0) # seconds

    @decorator.decorator
    def tryIt(func, *fargs, **fkwargs):
        for _ in xrange(number_of_retry_attempts):
            try: return func(*fargs, **fkwargs)
            except:
                tb = traceback.format_exc()
                if timeout is not None:
                    time.sleep(timeout)
                print 'Catching exception %s. Attempting retry: ' % (tb)
        raise
    return tryIt

@test_retry(number_of_retry_attempts=2, timeout=1.0)
def bad_method():
    sample_list = []
    return sample_list[0] # This will result in an exception

def process_generator(number):
    if isinstance(number, int):
        return number + 1
    else:
        raise

def generator():
    for i in range(20):
        if i % 10 == 0:
            yield bad_method()
        else:
            yield i

try:
    pool = Pool(processes=2)
    result = pool.imap_unordered(process_generator, generator())
    pool.close()
    #pool.join()
    for r in result:
        print r
except Exception, e: # Hoping the except clause will catch the generator's exception. But it does not.
    print 'got exception: %r, terminating the pool' % (e,)
    pool.terminate()
    print 'pool is terminated'
finally:
    print 'joining pool processes'
    pool.join()
    print 'join complete'
print 'the end'
The actual problem boils down to this: if the generator throws an exception, I am unable to catch that exception in the except clause wrapped around the pool.imap_unordered() call. So after the exception is thrown, the main process is stuck and the child processes wait forever. I am not sure what I am doing wrong here.
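One way to make the generator's exception visible in the parent, sketched under the assumption that the input fits in memory (names reused from the code above): materialize the generator before handing it to the pool, so any exception it raises occurs in the main process rather than inside the pool's internal task-feeding thread.

try:
    items = list(generator()) # generator exceptions are raised here, in the parent
    pool = Pool(processes=2)
    result = pool.imap_unordered(process_generator, items)
    pool.close()
    for r in result:
        print r
except Exception, e:
    print 'got exception: %r' % (e,)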
I don't fully understand the code that was shared here, as I am not an expert. Also, the question is nearly a year old. But I had the same requirement as explained in the topic, and I managed to find a solution:
import multiprocessing
import time

def dummy(flag):
    try:
        if flag:
            print('Sleeping for 2 secs')
            time.sleep(2) # So that it can be terminated
        else:
            raise Exception('Exception from ', flag) # To simulate termination
        return flag # To check that the sleeping process never returns this
    except Exception as e:
        print('Exception inside dummy', e)
        raise e
    finally:
        print('Entered finally', flag)

if __name__ == '__main__':
    pool = multiprocessing.Pool(processes=multiprocessing.cpu_count())
    args_list = [(1,), (0,)]
    # Call dummy for each tuple inside args_list.
    # Use error_callback to terminate the pool
    results = pool.starmap_async(dummy, args_list,
                                 error_callback=lambda e, mp_pool=pool: mp_pool.terminate())
    pool.close()
    pool.join()
    try:
        # Try to see the results.
        # If there was an exception in any process, results.get() throws an exception
        for result in results.get():
            # Never executed because of the exception
            print('Printing result ', result)
    except Exception as e:
        print('Exception inside main', e)
    print('Reached the end')
This produces the following output:
Sleeping for 2 secs
Exception inside dummy ('Exception from ', 0)
Entered finally 0
Exception inside main ('Exception from ', 0)
Reached the end
This is pretty much the first time I am answering a question so I apologise in advance if I have violated any rules or made any mistakes.
I had tried to do the following without success:
Use apply_async. But that just hung the main process after the exception was thrown.
Try killing the processes and their children using the pid in error_callback.
Use a multiprocessing.Event to track exceptions and check it in all processes after each step before proceeding. Not a good approach, but that didn't work either: "Condition objects should only be shared between processes through inheritance". (A sketch of the usual way around that error follows below.)
I honestly wish it wasn't so hard to terminate all processes in the same pool if one of the processes threw an exception.
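For reference, the usual way around that sharing error is to hand the primitive to the workers through the pool's initializer; a hedged sketch (a Manager-backed Event is used because its proxy can be passed around freely):

import multiprocessing

worker_event = None # set in each worker process by the initializer

def init_worker(shared_event):
    global worker_event
    worker_event = shared_event

def step(n):
    if worker_event.is_set(): # another worker already failed
        return None
    try:
        if n == 5:
            raise ValueError('boom')
        return n * n
    except Exception:
        worker_event.set() # signal the other workers to stop early
        raise

if __name__ == '__main__':
    manager = multiprocessing.Manager()
    event = manager.Event()
    with multiprocessing.Pool(2, initializer=init_worker, initargs=(event,)) as pool:
        try:
            print(pool.map(step, range(10)))
        except Exception as e:
            print('caught in parent:', e)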
