How to catch exceptions from a thread - Python

I've got this piece of code that starts a thread, waits a few seconds, and then checks whether it got an Event. If the event was set, the thread is 'cancelled'; otherwise an exception is raised.
I want to know how to catch this exception, because I've searched for a long time and have not found a clear answer.
import threading
import time

def thread(args1, stop_event):
    print "starting thread"
    stop_event.wait(10)
    if not stop_event.is_set():
        raise Exception("signal!")
try:
    t_stop = threading.Event()
    t = threading.Thread(target=thread, args=(1, t_stop))
    t.start()
    time.sleep(11)
    # normally this should not be executed
    print "stopping thread!"
    t_stop.set()
except Exception as e:
    print "action took too long, bye!"
First I tried this concept with Python's signal module, but when performing more than one signal.alarm it simply hung for no apparent reason (probably a bug).
EDIT:
I don't want to extend the already existing class; I want to work with the natively defined class. I also do not want to loop continuously to check whether exceptions occurred: I want the thread to pass the exception to its parent. Hence the time.sleep(11) in my code.

Created from my comments on your question.
Try something like this.
import sys
import threading
import time
import Queue

def thread(args1, stop_event, queue_obj):
    print "starting thread"
    stop_event.wait(10)
    if not stop_event.is_set():
        try:
            raise Exception("signal!")
        except Exception:
            queue_obj.put(sys.exc_info())
try:
    queue_obj = Queue.Queue()
    t_stop = threading.Event()
    t = threading.Thread(target=thread, args=(1, t_stop, queue_obj))
    t.start()
    time.sleep(11)
    # normally this should not be executed
    print "stopping thread!"
    t_stop.set()
    try:
        exc = queue_obj.get(block=False)
    except Queue.Empty:
        pass
    else:
        exc_type, exc_obj, exc_trace = exc
        print exc_obj
except Exception as e:
    print "action took too long, bye!"
When run, the Exception("signal!") is raised in the worker and then printed by print exc_obj.
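If you would rather have the main thread re-raise the worker's exception instead of just printing it, a minimal sketch (using Python 2's three-argument raise on the tuple already stored in the queue) would be:

    try:
        exc = queue_obj.get(block=False)
    except Queue.Empty:
        pass
    else:
        exc_type, exc_obj, exc_trace = exc
        # re-raise with the original traceback so the failure points at the worker code
        raise exc_type, exc_obj, exc_trace

The re-raised exception is then caught by the surrounding except Exception block, so "action took too long, bye!" is printed from the parent's handler.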

Related

Raise Exception if at least 1 of N threads raises Exception, killing other ones

I'm using several threads to compute a task faster.
I've noticed that if one of the threads I launch raises an exception, all the other threads continue to work and the code doesn't raise that exception.
I'd like all the other threads to be killed as soon as one thread fails, and the main script to raise the same exception as the thread.
My thread file is this:
from threading import Thread

class MyThread(Thread):
    def __init__(self, ...):
        Thread.__init__(self)
        self.my_variables = ...
    def run(self):
        # some code that can raise an exception
My main is:
import MyThread

threads = []
my_list = ["a_string", "another_string", "..."]
for idx in range(len(my_list)):
    threads.append(MyThread(idx=idx, ...))
for t in threads:
    t.start()
for t in threads:
    t.join()
I know that there are methods to propagate an exception between a parent and a child thread, as here: https://stackoverflow.com/a/2830127/12569908. But in that discussion there is only one thread, while I have many. In addition, I don't want to wait for all of them to end if one of them fails early on. I tried to adapt that code to my case, but I still have problems.
How can I do this?
You can use the PyThreadState_SetAsyncExc function from here. Also look at this link. We can raise an exception in a target thread with the ctypes.pythonapi.PyThreadState_SetAsyncExc() function; the target thread can catch this exception and do some cleanup work.
When you look at the code below, you will see that the f and g functions run in separate threads. We raise a ThreadKill exception in f when a ZeroDivisionError occurs, then we catch this exception in the myThread class and kill the other thread(s) using the PyThreadState_SetAsyncExc function.
Note: if the target thread has no control over the interpreter (a syscall, time.sleep(), a blocking I/O operation), it will not get killed until it regains control of the interpreter.
I modified your code a little.
import ctypes
import threading
import time

class ThreadKill(Exception):  # our special exception class, analogous to ZeroDivisionError
    pass

def f():
    try:
        for i in range(20):
            print("hello")
            time.sleep(1)
            if i == 2:
                4 / 0
    except ZeroDivisionError:
        # your cleanup: close the file, flush the buffer, etc.
        raise ThreadKill  # we do that because we catch this in the myThread class

def g():
    try:
        for i in range(20):
            print("world")
            time.sleep(1)
    except ThreadKill:
        # your cleanup: close the file, flush the buffer
        print("i am killing")

class myThread(threading.Thread):
    def __init__(self, func):
        threading.Thread.__init__(self)
        self.func = func
    def run(self):
        try:
            self.func()
        except Exception:  # catches the "raise ThreadKill" exception
            my_ident = threading.get_ident()
            for thread in threads:
                if thread.ident != my_ident:  # don't kill yourself before all other threads are signaled
                    ctypes.pythonapi.PyThreadState_SetAsyncExc(ctypes.c_long(thread.ident),
                                                               ctypes.py_object(ThreadKill))
            ctypes.pythonapi.PyThreadState_SetAsyncExc(ctypes.c_long(threading.main_thread().ident),
                                                       ctypes.py_object(ThreadKill))
threads = []
threads.append(myThread(func=f))
threads.append(myThread(func=g))

try:
    for t in threads:
        t.start()
    for t in threads:
        t.join()
except ThreadKill:
    print("ThreadKill Exception")

Python multiprocessing: handle an exception in the parent process and make all children die gracefully

I have the following code.
It uses a Python module called decorator.
from multiprocessing import Pool
from random import randint
import traceback
import decorator
import time

def test_retry(number_of_retry_attempts=1, **kwargs):
    timeout = kwargs.get('timeout', 2.0)  # seconds
    @decorator.decorator
    def tryIt(func, *fargs, **fkwargs):
        for _ in xrange(number_of_retry_attempts):
            try:
                return func(*fargs, **fkwargs)
            except:
                tb = traceback.format_exc()
                if timeout is not None:
                    time.sleep(timeout)
                print 'Catching exception %s. Attempting retry: ' % (tb)
        raise
    return tryIt
The decorator module helps me decorate my data-warehouse call functions, so I don't need to take care of connection drops and various connection-based issues, and it allows me to reset the connection and try again after some timeout. I decorate all my functions that do data-warehouse reads with this method, so I get retries for free.
I have the following methods:
def process_generator(data):
    # Process the generated data
    pass

def generator():
    data = data_warhouse_fetch_method()  # the actual method which needs retry
    yield data

@test_retry(number_of_retry_attempts=2, timeout=1.0)
def data_warhouse_fetch_method():
    # Fetch the data from the data warehouse
    pass
I try to multiprocess my code using the multiprocessing module like this:
try:
    pool = Pool(processes=2)
    result = pool.imap_unordered(process_generator, generator())
except Exception as exception:
    print 'Do some post processing stuff'
    tb = traceback.format_exc()
    print tb
Things are normal when everything is successful, and also when the call fixes itself within the number of retries. But once the number of retries is exceeded, I raise the exception in the test_retry method, and it is not caught in the main process. The process dies and the processes forked by the main process are left as orphans. Maybe I am doing something wrong here. I am looking for help with the following problem: propagate the exception to the parent process so that I can handle it and make my children die gracefully. I also want to know how to inform the child processes to die gracefully. Thanks in advance for the help.
Edit: Added more code to explain.
def test_retry(number_of_retry_attempts=1, **kwargs):
    timeout = kwargs.get('timeout', 2.0)  # seconds
    @decorator.decorator
    def tryIt(func, *fargs, **fkwargs):
        for _ in xrange(number_of_retry_attempts):
            try:
                return func(*fargs, **fkwargs)
            except:
                tb = traceback.format_exc()
                if timeout is not None:
                    time.sleep(timeout)
                print 'Catching exception %s. Attempting retry: ' % (tb)
        raise
    return tryIt
@test_retry(number_of_retry_attempts=2, timeout=1.0)
def bad_method():
    sample_list = []
    return sample_list[0]  # This will result in an exception

def process_generator(number):
    if isinstance(number, int):
        return number + 1
    else:
        raise

def generator():
    for i in range(20):
        if i % 10 == 0:
            yield bad_method()
        else:
            yield i
try:
    pool = Pool(processes=2)
    result = pool.imap_unordered(process_generator, generator())
    pool.close()
    #pool.join()
    for r in result:
        print r
except Exception, e:  # hoping to catch the exception from the generator here, but it is never caught
    print 'got exception: %r, terminating the pool' % (e,)
    pool.terminate()
    print 'pool is terminated'
finally:
    print 'joining pool processes'
    pool.join()
    print 'join complete'
print 'the end'
The actual problem boils down to this: if the generator throws an exception, I am unable to catch it in the except clause wrapped around the pool.imap_unordered() call. So after the exception is thrown, the main process is stuck and the child processes wait forever. Not sure what I am doing wrong here.
I don't fully understand the code that was shared here, as I am not an expert. Also, the question is nearly a year old. But I had the same requirement as explained in the topic, and I managed to find a solution:
import multiprocessing
import time

def dummy(flag):
    try:
        if flag:
            print('Sleeping for 2 secs')
            time.sleep(2)  # So that it can be terminated
        else:
            raise Exception('Exception from ', flag)  # To simulate termination
        return flag  # To check that the sleeping process never returns this
    except Exception as e:
        print('Exception inside dummy', e)
        raise e
    finally:
        print('Entered finally', flag)

if __name__ == '__main__':
    pool = multiprocessing.Pool(processes=multiprocessing.cpu_count())
    args_list = [(1,), (0,)]
    # Call dummy for each tuple inside args_list.
    # Use error_callback to terminate the pool.
    results = pool.starmap_async(dummy, args_list,
                                 error_callback=lambda e, mp_pool=pool: mp_pool.terminate())
    pool.close()
    pool.join()
    try:
        # Try to see the results.
        # If there was an exception in any process, results.get() throws it.
        for result in results.get():
            # Never executed because of the exception
            print('Printing result ', result)
    except Exception as e:
        print('Exception inside main', e)
    print('Reached the end')
This produces the following output:
Sleeping for 2 secs
Exception inside dummy ('Exception from ', 0)
Entered finally 0
Exception inside main ('Exception from ', 0)
Reached the end
This is pretty much the first time I am answering a question, so I apologise in advance if I have violated any rules or made any mistakes.
I had tried to do the following without success:
Use apply_async. But that just hung the main process after the exception was thrown.
Try killing the processes and their children using the pid in error_callback.
Use a multiprocessing.Event to track exceptions and check it in all processes after each step before proceeding. Not a good approach, but that didn't work either: "Condition objects should only be shared between processes through inheritance".
I honestly wish it weren't so hard to terminate all processes in the same pool if one of the processes throws an exception.
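For the imap_unordered hang described in the question, one workaround (a sketch, reusing bad_method, process_generator and generator from the question's edit) is to consume the generator in the parent before handing it to the pool, so that exceptions the generator raises surface in the parent's except clause instead of inside the pool's internal task-handler thread:

pool = Pool(processes=2)
try:
    items = list(generator())  # generator exceptions now surface here
    for r in pool.imap_unordered(process_generator, items):
        print r
    pool.close()
except Exception, e:
    print 'got exception: %r, terminating the pool' % (e,)
    pool.terminate()
finally:
    pool.join()

The cost is that the whole input is materialized up front, which may not be acceptable for a truly streaming workload.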

How to handle exception in threading with queue in Python?

These lines are never printed:
"Exception in threadfuncqueue handled by threadfuncqueue",
"Exception in threadfuncqueue handled by main thread" and
"thread test with queue passed". The program never quits!
from threading import Thread
from Queue import Queue
import time

class ImRaiseError():
    def __init__(self):
        time.sleep(1)
        raise Exception(self.__class__.__name__)

# place to paste the working code example from below

print "begin thread test with queue"

def threadfuncqueue(q):
    print "\n" + str(q.get())
    while not q.empty():
        try:
            testthread = ImRaiseError()
        finally:
            print "Exception in threadfuncqueue handled by threadfuncqueue"

q = Queue()
items = [1, 2]
for i in range(len(items)):
    t = Thread(target=threadfuncqueue, args=(q,))
    if 1 == i:
        t.daemon = False
    else:
        t.daemon = True
    t.start()

for item in items:
    q.put("threadfuncqueue" + str(item))

try:
    q.join()  # block until all tasks are done
finally:
    print "Exception in threadfuncqueue handled by main thread"
print "thread test with queue passed"
quit()
How do I handle this exception?
Here is an example of working code, but without the queue:
print "=========== procedure style test"
def threadfunc(q):
print "\n"+str(q)
while True:
try:
testthread = ImRaiseError()
finally:
print str(q)+" handled by process"
try:
threadfunc('testproc')
except Exception as e:
print "error!",e
print "procedure style test ==========="
print "=========== simple thread tests"
testthread = Thread(target=threadfunc,args=('testthread',))
testthread.start()
try:
testthread.join()
finally:
print "Exception in testthread handled by main thread"
testthread1 = Thread(target=threadfunc,args=('testthread1',))
testthread1.start()
try:
testthread1.join()
finally:
print "Exception in testthread1 handled by main thread"
print "simple thread tests ==========="
Short Answer
You're putting things in a queue and retrieving them, but if you're going to join a queue, you need to mark tasks as done as you pull them out of the queue and process them. According to the docs, every time you enqueue an item, a counter is incremented, and you need to call q.task_done() to decrement that counter. q.join() will block until that counter reaches zero. Add this immediately after your q.get() call to prevent main from being blocked:
q.task_done()
Also, I find it odd that you're checking q for emptiness after you've retrieved something from it. I'm not sure exactly what you're trying to achieve with that so I don't have any recommendations for you, but I would suggest reconsidering your design in that area.
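Putting both points together, a minimal sketch of the reworked worker (get the item first, then call task_done() once it has been processed; the except clause is explained in the next section) might look like this:

def threadfuncqueue(q):
    while not q.empty():
        item = q.get()
        try:
            print "\n" + str(item)
            testthread = ImRaiseError()
        except Exception as e:
            print "Exception in threadfuncqueue handled by threadfuncqueue:", e
        finally:
            q.task_done()  # lets q.join() in the main thread return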
Other Thoughts
Once you get this code working you should take it over to Code Review because it is a bit of a mess. Here are a few thoughts for you:
Exception Handling
You're not actually "handling" the exception in threadfuncqueue(q). All the finally statement does is allow you to execute cleanup code in the event of an exception. It does not actually catch and handle the exception. The exception will still travel up the call stack. Consider this example, test.py:
try:
    raise Exception
finally:
    print("Yup!")
print("Nope!")
Output:
Yup!
Traceback (most recent call last):
  File "test.py", line 2, in <module>
    raise Exception
Exception
Notice that "Yup!" got printed while "Nope!" didn't. The code in the finally block was executed, but that didn't stop the exception from propagating up the stack and halting the interpreter. You need the except statement for that:
try:
    raise Exception
except Exception:  # only catch the exceptions you expect
    print("Yup!")
print("Nope!")
Output:
Yup!
Nope!
This time both are printed, because we caught and handled the exception.
Exception Raising
Your current method of raising the exception in your thread is needlessly complicated. Instead of creating the whole ImRaiseError class, just raise the exception you want with a string:
raise Exception('Whatever error message I want')
If you find yourself manually reaching into dunder attributes (like self.__class__.__name__), you're usually doing something wrong.
Extra Parentheses
Using parentheses around conditional expressions is generally frowned upon in Python:
if(1 == i): # unnecessary extra characters
Try to break the C/C++/Java habit and get rid of them:
if 1 == i:
Other
I've already gone beyond the scope of this question, so I'm going to cut this off now, but there are a few other things you could clean up and make more idiomatic. Head over to Code Review when you're done here and see what else can be improved.

KeyboardInterrupt does not work in multithreaded Python

I am trying to use multithreading to check the network connection. My code is:
import threading
import time
import urllib2
import lxml.html
# gr is provided elsewhere in the application

exitFlag = 0
lst_doxygen = []
lst_sphinx = []

class myThread(threading.Thread):
    def __init__(self, counter):
        threading.Thread.__init__(self)
        self.counter = counter
    def run(self):
        print "Starting thread"
        link_urls(self.counter)

def link_urls(delay):
    global lst_doxygen
    global lst_sphinx
    global exitFlag
    while exitFlag == 0:
        try:
            if network_connection() is True:
                try:
                    links = lxml.html.parse(gr.prefs().get_string('grc', 'doxygen_base_uri', '').split(',')[1] + "annotated.html").xpath("//a/@href")
                    for url in links:
                        lst_doxygen.append(url)
                    links = lxml.html.parse(gr.prefs().get_string('grc', 'sphinx_base_uri', '').split(',')[1] + "genindex.html").xpath("//a/@href")
                    for url in links:
                        lst_sphinx.append(url)
                    exitFlag = 1
                except (IOError, AttributeError):
                    pass
            time.sleep(delay)
            print "my"
        except KeyboardInterrupt:
            exitFlag = 1

def network_connection():
    network = False
    try:
        response = urllib2.urlopen("http://google.com", None, 2.5)
        network = True
    except urllib2.URLError, e:
        pass
    return network
I have set a flag to stop the thread inside the while loop. I also want to exit the thread by pressing Ctrl-C, so I have used try-except, but the thread keeps working and does not exit. If I use
if KeyboardInterrupt:
    exitFlag = 1
instead of try-except, the thread only works for the first execution of the while loop and then exits.
P.S. I have created the instance of the myThread class in another module.
Finally, I got the answer to my question. I need to flag my thread as a daemon. So when I create the instance of the myThread class, I add one more line:
thread1 = myThread(2)
thread1.setDaemon(True)  # or, with the newer attribute spelling: thread1.daemon = True
thread1.start()
You only get signals or KeyboardInterrupt on the main thread. There are various ways to handle it, but perhaps you could make exitFlag a global and move the exception handler to your main thread.
Here is how I catch a Ctrl-C in general:
import signal
import sys
import threading
import time

stop = False

def run():
    while not stop:
        print 'I am alive'
        time.sleep(3)

def signal_handler(signal, frame):
    global stop
    print 'You pressed Ctrl+C!'
    stop = True

t1 = threading.Thread(target=run)
t1.start()
signal.signal(signal.SIGINT, signal_handler)
print 'Press Ctrl+C'
signal.pause()
output:
python threads.py
Press Ctrl+C
I am alive
I am alive
^CYou pressed Ctrl+C!

threading ignores KeyboardInterrupt exception

I'm running this simple code:
import threading, time

class reqthread(threading.Thread):
    def run(self):
        for i in range(0, 10):
            time.sleep(1)
            print('.')

try:
    thread = reqthread()
    thread.start()
except (KeyboardInterrupt, SystemExit):
    print('\n! Received keyboard interrupt, quitting threads.\n')
But when I run it, it prints
$ python prova.py
.
.
^C.
.
.
.
.
.
.
.
Exception KeyboardInterrupt in <module 'threading' from '/usr/lib/python2.6/threading.pyc'> ignored
In fact, the Python thread ignores my Ctrl+C keyboard interrupt and never prints Received Keyboard Interrupt. Why? What is wrong with this code?
Try
try:
    thread = reqthread()
    thread.daemon = True
    thread.start()
    while True:
        time.sleep(100)
except (KeyboardInterrupt, SystemExit):
    print '\n! Received keyboard interrupt, quitting threads.\n'
Without the call to time.sleep, the main process is jumping out of the try...except block too early, so the KeyboardInterrupt is not caught. My first thought was to use thread.join, but that seems to block the main process (ignoring KeyboardInterrupt) until the thread is finished.
thread.daemon=True causes the thread to terminate when the main process ends.
To summarize the changes recommended in the comments, the following works well for me:
import sys

try:
    thread = reqthread()
    thread.start()
    while thread.isAlive():
        thread.join(1)  # not sure if there is an appreciable cost to this
except (KeyboardInterrupt, SystemExit):
    print '\n! Received keyboard interrupt, quitting threads.\n'
    sys.exit()
A slight modification of ubuntu's solution: removing thread.daemon = True as suggested by Eric, and replacing the sleeping loop with signal.pause():
import signal

try:
    thread = reqthread()
    thread.start()
    signal.pause()  # instead of: while True: time.sleep(100)
except (KeyboardInterrupt, SystemExit):
    print '\n! Received keyboard interrupt, quitting threads.\n'
My (hacky) solution is to monkey-patch Thread.join() like this:
def initThreadJoinHack():
    import threading, thread
    mainThread = threading.currentThread()
    assert isinstance(mainThread, threading._MainThread)
    mainThreadId = thread.get_ident()
    join_orig = threading.Thread.join
    def join_hacked(threadObj, timeout=None):
        """
        :type threadObj: threading.Thread
        :type timeout: float|None
        """
        if timeout is None and thread.get_ident() == mainThreadId:
            # This is a HACK for Thread.join() if we are in the main thread.
            # In that case, a Thread.join(timeout=None) would hang and even not respond to signals,
            # because signals get delivered to other threads and Python would forward
            # them for delayed handling to the main thread, which hangs.
            # See CPython signalmodule.c.
            # Currently the best solution I can think of:
            while threadObj.isAlive():
                join_orig(threadObj, timeout=0.1)
        else:
            # In all other cases, we can use the original.
            join_orig(threadObj, timeout=timeout)
    threading.Thread.join = join_hacked
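Usage would just be installing the patch once at startup, before any joins happen (worker here is a hypothetical thread target):

initThreadJoinHack()  # install the patch early, before any Thread.join() calls
t = threading.Thread(target=worker)  # worker is a hypothetical target function
t.start()
t.join()  # now loops in 0.1s slices on the main thread, so Ctrl-C gets through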
Putting the try ... except in each thread, plus a signal.pause() in the real main(), works for me (see the sketch below).
Watch out for the import lock, though. I am guessing this is why Python doesn't solve Ctrl-C by default.
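A minimal sketch of that pattern (Unix-only, because of signal.pause()): the worker catches its own exceptions, and the main thread just waits for a signal:

import signal
import threading
import time

def worker():
    try:
        for i in range(10):
            time.sleep(1)
            print '.'
    except Exception as e:
        # handle per-thread errors inside the thread itself
        print 'worker error: %r' % (e,)

t = threading.Thread(target=worker)
t.daemon = True  # let the process exit even if the worker is still sleeping
t.start()
try:
    signal.pause()  # the main thread stays responsive to Ctrl-C
except (KeyboardInterrupt, SystemExit):
    print '\n! Received keyboard interrupt, quitting threads.\n'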
