I have a parent class with a try clause in it, and a child class that overrides a method called inside that try clause. Normal exceptions raised by the child are caught, but KeyboardInterrupt is not. Moreover, it can be caught inside the child method, but not in the parent method.
Please see the example code below, where the interrupt cannot be caught in bar1, but can be caught once bar2 converts it into an AssertionError. I reproduced this problem with Python 3.6 on both Linux and Windows.
import time

class foo:
    def body(self):
        pass

    def run(self):
        try:
            self.body()
        except Exception as ex:
            print("caught exception:", str(ex))

class bar1(foo):
    def body(self):
        while True:
            print(1)
            time.sleep(0.1)

class bar2(foo):
    def body(self):
        interrupted = False
        while True:
            assert not interrupted, "assert not interrupted"
            try:
                print(1)
                time.sleep(0.1)
            except KeyboardInterrupt:
                print("received interrupt")
                interrupted = True
Interrupting the run method of bar1 gives:
1
....
1
Traceback (most recent call last):
File "tmp.py", line 34, in <module>
b.run()
File "tmp.py", line 7, in run
self.body()
File "tmp.py", line 15, in body
time.sleep(0.1)
KeyboardInterrupt
However, interrupting bar2 gives:
1
...
1
received interrupt
caught exception: assert not interrupted
I have searched Stack Overflow and found some questions about keyboard interrupt handling with threads and stdio, but I did not find any problem like this.
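For what it's worth, the difference can be reproduced without classes or a real Ctrl-C: KeyboardInterrupt derives from BaseException, not Exception, so an `except Exception` clause never matches it. A minimal sketch (simulating the interrupt with `raise`):

```python
def run(body):
    try:
        body()
    except Exception as ex:          # does NOT match KeyboardInterrupt
        print("caught exception:", ex)
    except BaseException as ex:      # this broader clause does match it
        print("caught base exception:", type(ex).__name__)

def body():
    raise KeyboardInterrupt          # simulate Ctrl-C inside the overridden method

run(body)  # prints: caught base exception: KeyboardInterrupt
```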
Related
I'm using some threads to compute a task in a faster way.
I've seen that if one of the threads I launch raises an exception, all the other threads continue to work and the code doesn't raise that exception.
I'd like that as soon as one thread fails, all the other threads are killed and the main file raises the same exception of the thread.
My thread file is this:
from threading import Thread

class myThread(Thread):
    def __init__(self, ...):
        Thread.__init__(self)
        self.my_variables = ...

    def run(self):
        # some code that can raise Exception
My main is:

import MyThread

threads = []
my_list = ["a_string", "another_string", "..."]
for idx in range(len(my_list)):
    threads.append(MyThread(idx=idx, ...))

for t in threads:
    t.start()
for t in threads:
    t.join()
I know that there are methods to propagate an exception between a parent and a child thread, as here: https://stackoverflow.com/a/2830127/12569908. But in that discussion there is only one thread, while I have many. In addition, I don't want to wait for all of them to finish if one of them fails at the beginning. I tried to adapt that code to my case, but I still have problems.
How can I do this?
You can use the PyThreadState_SetAsyncExc function from the CPython C API. We can raise an exception in a target thread with the ctypes.pythonapi.PyThreadState_SetAsyncExc() function; the target thread can catch this exception and do some cleanup work.
In the code below, the f and g functions run in separate threads. We raise a ThreadKill exception in f when a ZeroDivisionError occurs, then catch this exception in the myThread class and kill the other thread(s) using the PyThreadState_SetAsyncExc function.
Note: if the target thread does not have control of the interpreter (e.g. it is blocked in a syscall, time.sleep(), or an I/O operation), it will not receive the exception until it regains control.
I modified your code a little.
import threading
import time, ctypes

class ThreadKill(Exception):  # our special exception class, analogous to ZeroDivisionError
    pass

def f():
    try:
        for i in range(20):
            print("hello")
            time.sleep(1)
            if i == 2:
                4/0
    except ZeroDivisionError:
        # your cleanup: close the file, flush the buffer, etc.
        raise ThreadKill  # re-raised so that myThread.run can catch it

def g():
    try:
        for i in range(20):
            print("world")
            time.sleep(1)
    except ThreadKill:
        # your cleanup: close the file, flush the buffer
        print("i am killing")

class myThread(threading.Thread):
    def __init__(self, func):
        threading.Thread.__init__(self)
        self.func = func

    def run(self):
        try:
            self.func()
        except Exception:  # catch the raised ThreadKill exception
            my_ident = threading.get_ident()
            for thread in threads:
                if thread.ident != my_ident:  # don't kill yourself until all other threads are signaled
                    ctypes.pythonapi.PyThreadState_SetAsyncExc(ctypes.c_long(thread.ident),
                                                               ctypes.py_object(ThreadKill))
            ctypes.pythonapi.PyThreadState_SetAsyncExc(ctypes.c_long(threading.main_thread().ident),
                                                       ctypes.py_object(ThreadKill))

threads = []
threads.append(myThread(func=f))
threads.append(myThread(func=g))

try:
    for t in threads:
        t.start()
    for t in threads:
        t.join()
except ThreadKill:
    print("ThreadKill Exception")
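If injecting exceptions via the C API feels too fragile, a shared threading.Event achieves the "stop everyone as soon as one thread fails" behavior cooperatively, with no ctypes tricks. A minimal sketch (the worker function and the stop flag are my own names, not part of the question's code):

```python
import threading
import time

stop = threading.Event()   # shared cancellation flag, checked cooperatively

def worker(fail):
    for i in range(50):
        if stop.is_set():          # another worker failed: exit early
            return
        time.sleep(0.01)
        if fail and i == 3:
            stop.set()             # tell every other worker to stop, then die
            raise ZeroDivisionError("worker failed")

threads = [threading.Thread(target=worker, args=(f,)) for f in (True, False)]
for t in threads:
    t.start()
for t in threads:
    t.join()                       # both return promptly once stop is set
print("stopped early:", stop.is_set())
```

The failing thread's traceback is still printed to stderr by the default thread exception handling; the flag only handles the "kill the others" part.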
I am using the Python APScheduler library and I wish to catch exceptions raised from the callback method, but I am unsure how to do this. I have provided my code so far, but I am stuck on how to do this appropriately. Any help would be appreciated.
import time
from apscheduler.schedulers.background import BackgroundScheduler

def expiry_callback():
    raise ValueError
    print("inside job")

sched = BackgroundScheduler(daemon=True)
sched.add_job(expiry_callback, 'interval', seconds=1)

try:
    sched.start()
except ValueError as e:
    print(f'ran into an issue!! {e}')

try:
    while True:
        time.sleep(5)
except (KeyboardInterrupt, SystemExit):
    sched.shutdown()
stacktrace:
/Users/me/Documents/environments/my_env/bin/python3.9 /Users/me/PycharmProjects/pythonProject2/run.py
Job "expiry_callback (trigger: interval[0:00:01], next run at: 2021-08-24 22:33:26 MDT)" raised an exception
Traceback (most recent call last):
File "/Users/me/Documents/environments/my_env/lib/python3.9/site-packages/apscheduler/executors/base.py", line 125, in run_job
retval = job.func(*job.args, **job.kwargs)
File "/Users/me/PycharmProjects/pythonProject2/run.py", line 6, in expiry_callback
raise ValueError
ValueError
Process finished with exit code 0
Calling sched.start() only starts the background thread that executes the callback function; it does not call the callback itself, so it is never going to produce the exception in the main thread.
If you are looking to handle exceptions from callback functions in a consistent way, you can instead call the callback via a wrapper function that catches a given exception and reports the error in a well-defined manner:
# insert definition of expiry_callback before this line

def catch_exception(func, exception):
    def wrapper():
        try:
            func()
        except exception as e:
            print(f'ran into an issue!! {e}')
    return wrapper

sched = BackgroundScheduler(daemon=True)
sched.add_job(catch_exception(expiry_callback, ValueError), 'interval', seconds=1)
sched.start()

# insert idle code after this line
Demo: https://replit.com/#blhsing/BlueCreepyMethod
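The same wrapper can also be written as a parameterized decorator, so each job opts in at definition time instead of at add_job time. This spelling is my own generalization of the answer's wrapper, not APScheduler API; it needs nothing beyond the standard library to demonstrate:

```python
import functools

def catch_exception(exception):
    """Decorator form of the wrapper above (hypothetical helper)."""
    def decorate(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            try:
                return func(*args, **kwargs)
            except exception as e:
                print(f'ran into an issue!! {e}')
        return wrapper
    return decorate

@catch_exception(ValueError)
def expiry_callback():
    raise ValueError("token expired")

expiry_callback()  # prints: ran into an issue!! token expired
```

The decorated function can then be passed to sched.add_job directly.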
I'm using GLib.MainLoop() from PyGObject in my Python application and have a question.
Is it possible to handle a Python exception that is raised inside loop.run()?
For example I'm calling some function using GLib.MainContext.invoke_full():
import traceback, gi
from gi.repository import GLib

try:
    loop = GLib.MainLoop()

    def handler(self):
        print('handler')
        raise Exception('from handler with love')

    loop.get_context().invoke_full(GLib.PRIORITY_DEFAULT, handler, None)
    loop.run()
except Exception:
    print('catched!')
I thought that handler() would be called somewhere inside loop.run(), so raise Exception('from handler with love') should be caught by the except Exception: clause. However, it is not:
$ python test.py
handler
Traceback (most recent call last):
File "test.py", line 9, in handler
raise Exception('from handler with love')
Exception: from handler with love
It seems that handler() is called from somewhere else entirely (from GLib's C code?), and is not caught by the except Exception: clause.
Is it possible to catch all Python exceptions raised inside GLib.MainLoop.run()? I have a dozen handlers called like that, so otherwise I would have to add the same try: ... except OneException: ... except AnotherException: ... wrapper to each handler.
No, the exception is not propagated: it is caught and printed by the main loop itself, so an exception in a Python callback never causes the loop to exit.
You can handle these kinds of errors through sys.excepthook.
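One way to avoid repeating the try/except in every handler is a small wrapper that forwards any exception to sys.excepthook, which you then configure once. A sketch (the report_errors name is my own, and GLib itself is not needed to illustrate the idea):

```python
import functools
import sys

def report_errors(func):
    # route exceptions from a callback to sys.excepthook instead of
    # letting the C main loop print and swallow them
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except Exception:
            sys.excepthook(*sys.exc_info())
    return wrapper

caught = []
sys.excepthook = lambda *exc: caught.append(exc[0].__name__)  # stand-in hook

@report_errors
def handler(user_data):
    raise Exception('from handler with love')

handler(None)
print("excepthook saw:", caught)  # excepthook saw: ['Exception']
```

Each GLib callback would then be registered as `report_errors(handler)` (or decorated) instead of bare `handler`.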
I am trying to debug a multi-threaded script. Once an exception is
raised I want to:
report it to a monitoring system (just print in the following example)
stop the whole script (including all other threads)
drop into a post-mortem debugger prompt at the point where the exception was raised
I prepared a fairly complicated example to show how I tried to solve it:
#!/usr/bin/env python
import threading
import inspect
import traceback
import sys
import os
import time

def POST_PORTEM_DEBUGGER(type, value, tb):
    traceback.print_exception(type, value, tb)
    print
    if hasattr(sys, 'ps1') or not sys.stderr.isatty():
        import rpdb
        rpdb.pdb.pm()
    else:
        import pdb
        pdb.pm()

sys.excepthook = POST_PORTEM_DEBUGGER

class MyThread(threading.Thread):
    def __init__(self):
        threading.Thread.__init__(self)
        self.exception = None
        self.info = None
        self.the_calling_script_name = os.path.abspath(inspect.currentframe().f_back.f_code.co_filename)

    def main(self):
        "Virtual method to be implemented by inherited worker"
        return self

    def run(self):
        try:
            self.main()
        except Exception as exception:
            self.exception = exception
            self.info = traceback.extract_tb(sys.exc_info()[2])[-1]
            # because of bug http://bugs.python.org/issue1230540
            # I cannot use just "raise" under threading.Thread
            sys.excepthook(*sys.exc_info())

    def __del__(self):
        print 'MyThread via {} catch "{}: {}" in {}() from {}:{}: {}'.format(self.the_calling_script_name, type(self.exception).__name__, str(self.exception), self.info[2], os.path.basename(self.info[0]), self.info[1], self.info[3])

class Worker(MyThread):
    def __init__(self):
        super(Worker, self).__init__()

    def main(self):
        """ worker job """
        counter = 0
        while True:
            counter += 1
            print self
            time.sleep(1.0)
            if counter == 3:
                pass  # print 1/0

def main():
    Worker().start()
    counter = 1
    while True:
        counter += 1
        time.sleep(1.0)
        if counter == 3:
            pass  # print 1/0

if __name__ == '__main__':
    main()
The trick with
sys.excepthook = POST_PORTEM_DEBUGGER
works perfectly if no threads are involved. I found that in the case of a
multi-threaded script I can use rpdb for debugging by calling:
import rpdb; rpdb.set_trace()
This works perfectly for a defined breakpoint, but I want to debug the
multi-threaded script post mortem (after an uncaught exception is
raised). When I try to use rpdb in the POST_PORTEM_DEBUGGER function
with a multi-threaded application I get the following:
Exception in thread Thread-1:
Traceback (most recent call last):
File "/usr/lib/python2.7/threading.py", line 552, in __bootstrap_inner
self.run()
File "./demo.py", line 49, in run
sys.excepthook(*sys.exc_info())
File "./demo.py", line 22, in POST_PORTEM_DEBUGGER
pdb.pm()
File "/usr/lib/python2.7/pdb.py", line 1270, in pm
post_mortem(sys.last_traceback)
AttributeError: 'module' object has no attribute 'last_traceback'
It looks like
sys.excepthook(*sys.exc_info())
does not set up everything that the raise statement does.
I want the same behavior whether the exception is raised in main() or
inside a started thread.
(I haven't tested my answer, but it seems to me that...)
The call to pdb.pm (pm = "post mortem") fails simply because there has been no "mortem" prior to it; i.e. the program is still running.
Looking at the pdb source code, you find the implementation of pdb.pm:
def pm():
    post_mortem(sys.last_traceback)
which makes me guess that what you actually want to do is call pdb.post_mortem() with no args. It looks like the default behavior does exactly what you need.
Some more source code (notice the t = sys.exc_info()[2] line):
def post_mortem(t=None):
    # handling the default
    if t is None:
        # sys.exc_info() returns (type, value, traceback) if an exception is
        # being handled, otherwise it returns None
        t = sys.exc_info()[2]
        if t is None:
            raise ValueError("A valid traceback must be passed if no "
                             "exception is being handled")
    p = Pdb()
    p.reset()
    p.interaction(None, t)
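For the multi-threaded case specifically, one workable pattern is to capture sys.exc_info() inside run() and hand the traceback to pdb.post_mortem() from the main thread after join(). A minimal sketch (DebuggableThread is my own name; the actual pdb.post_mortem call is left commented out because it is interactive):

```python
import sys
import threading

class DebuggableThread(threading.Thread):
    def __init__(self, target):
        threading.Thread.__init__(self)
        self.target = target
        self.exc_info = None

    def run(self):
        try:
            self.target()
        except Exception:
            # keep (type, value, traceback) so the MAIN thread can debug it
            self.exc_info = sys.exc_info()

def job():
    1 / 0  # the failing worker

t = DebuggableThread(job)
t.start()
t.join()
if t.exc_info is not None:
    # import pdb; pdb.post_mortem(t.exc_info[2])  # interactive prompt in the main thread
    print("captured:", t.exc_info[0].__name__)   # captured: ZeroDivisionError
```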
Building on @shx2's answer above, I now use the following pattern in the context of multithreading.
import pdb

try:
    ...  # logic that may fail
except Exception as exc:
    pdb.post_mortem(exc.__traceback__)
Here is a more verbose alternative:
import sys, pdb

try:
    ...  # logic that may fail
except Exception as exc:
    if hasattr(sys, "last_traceback"):
        pdb.pm()
    else:
        pdb.post_mortem(exc.__traceback__)
This can help:
import sys
from IPython.core import ultratb

sys.excepthook = ultratb.FormattedTB(mode='Verbose', color_scheme='Linux',
                                     call_pdb=True, ostream=sys.__stdout__)
When an exception is raised inside a thread without catching it anywhere else, will it then kill the whole application/interpreter/process? Or will it only kill the thread?
Let's try it:
import threading
import time

class ThreadWorker(threading.Thread):
    def run(self):
        print "Statement from a thread!"
        raise Dead

class Main:
    def __init__(self):
        print "initializing the thread"
        t = ThreadWorker()
        t.start()
        time.sleep(2)
        print "Did it work?"

class Dead(Exception): pass

Main()
The code above yields the following results:
> initializing the thread
> Statement from a thread!
> Exception in thread Thread-1:
> Traceback (most recent call last):
>   File "C:\Python27\lib\threading.py", line 551, in __bootstrap_inner
>     self.run()
>   File ".\pythreading.py", line 8, in run
>     raise Dead
> Dead
> ----- here the interpreter sleeps for 2 seconds -----
> Did it work?
So, the answer to your question is that a raised Exception crashes only the thread it is in, not the whole program.
From the threading documentation:
Once the thread’s activity is started, the thread is considered
‘alive’. It stops being alive when its run() method terminates –
either normally, or by raising an unhandled exception. The is_alive()
method tests whether the thread is alive.
And also:
join(timeout=None)
Wait until the thread terminates. This blocks the calling thread until the thread whose join() method is called terminates – either
normally or through an unhandled exception –, or until the optional
timeout occurs.
In other words, an uncaught exception is one way for a thread to end: the parent's join() call simply unblocks once the thread has terminated, and the exception itself is not re-raised in the joining thread.
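To make that last point concrete, here is a minimal sketch showing that join() returns normally after a thread dies from an unhandled exception; nothing is re-raised in the caller (in Python 3 the default threading.excepthook still prints the traceback to stderr):

```python
import threading

def boom():
    raise RuntimeError("this thread dies alone")

t = threading.Thread(target=boom)
t.start()
t.join()   # unblocks once the thread has died; the RuntimeError is not re-raised here
print("thread alive after join:", t.is_alive())
print("main thread still running")
```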