Catch all exceptions in a class - python

I'm wondering if anybody has an idea of how to catch all exceptions in a running thread. My program is started as follows, by a service:
def main():
    global RUNNING
    signal.signal(signal.SIGINT, stopHandler)
    signal.signal(signal.SIGTERM, stopHandler)
    projectAlice = ProjectAlice()
    try:
        while RUNNING:
            time.sleep(0.1)
    except KeyboardInterrupt:
        pass
    finally:
        projectAlice.onStop()
        _logger.info('Project Alice stopped, see you soon!')
So a CTRL-C or a signal can stop it. ProjectAlice runs forever and answers MQTT topics sent by Snips. It uses paho-mqtt with loop_forever. As it's pretty large, errors can occur, even though they shouldn't. I cover as many as I can, but today, as an example, google-translate started throwing errors because it can't use the service anymore (free...). Unhandled errors... So the thread crashes and ProjectAlice is left hanging. I would like, as is possible in Java for example, to catch all exceptions at the top level and continue working from there.

Here's a simple solution to override the Python exception hook, thus enabling you to handle uncaught exceptions:
import sys

def my_custom_exception_hook(exctype, value, tb):
    print('Yo, do stuff here, handle specific exceptions and raise others or whatever')
and before your actual code starts do:
sys.excepthook = my_custom_exception_hook
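Note that sys.excepthook only fires for uncaught exceptions in the main thread. Since the question is about a running thread, it's worth noting that Python 3.8 added threading.excepthook for that case; a minimal sketch, assuming Python 3.8+:
import threading

def my_thread_exception_hook(args):
    # args is a named tuple: exc_type, exc_value, exc_traceback, thread
    print('Uncaught in thread', args.thread.name, ':', args.exc_value)

threading.excepthook = my_thread_exception_hook  # Python 3.8+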

A simple except Exception: will catch all exceptions except KeyboardInterrupt and SystemExit within the same thread.
You'll have to put the try/except block inside the code that runs in the thread in order to catch exceptions occurring in that thread.
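For the ProjectAlice case, that might look something like the following sketch, where do_work() is a hypothetical stand-in for one unit of the thread's real work:
import logging
import threading
import time

_logger = logging.getLogger(__name__)

def worker():
    while True:
        try:
            do_work()  # hypothetical unit of work; may raise anything
        except Exception:
            # Log the full traceback and keep the thread alive.
            _logger.exception('Unhandled error in worker; carrying on')
        time.sleep(0.1)

threading.Thread(target=worker, daemon=True).start()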

Related

Interrupting Python blocks with exception catchers

This appeared to be a very obvious yet annoying issue.
Consider this code block:
for i in tqdm.notebook.tqdm(range(int(3 * len(structures) / 4))):
    try:
        is_mal = np.array([1.]) if 'Malware' in structure_info[i] else np.array([0.])
        target = parse_Structure(file=structures[i])
        target = np.reshape(target.get_vector(), (-1, 1))
        is_mal = np.reshape(is_mal, (-1, 1))
        vectors = np.concatenate((vectors, target), axis=1)
        labels = np.concatenate((labels, is_mal), axis=1)
    except:
        print(i)
The code does not matter anyway, but I have a simple question.
While running this in my Colab notebook environment online, I wanted to debug something in the middle of the loop, so I simply tried to interrupt execution.
This resulted in printing the index i the loop was at; the interrupt was obviously being treated as an exception. While I agree that the try/except block is working exactly as written, I also badly want to interrupt the execution.
How do I interrupt execution of this block without restarting the runtime?
You can raise a new exception inside the except block to pass it onwards:
try:
    <code>
except:
    raise Exception
If you want to reraise the same exception that was caught:
try:
    <code>
except Exception as E:
    raise E
This will pass the exception on to the next handler. If there are no other try/except blocks above it, it will halt the whole script.
If you are interrupting with something that is not caught by Exception (for example Ctrl-C), you can replace Exception with either BaseException or KeyboardInterrupt. Note that these latter two should rarely be blanket-caught without re-raising in a production environment, as that could make it a hassle to actually exit the program again.
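Applied to the loop in the question, a minimal sketch of the complementary fix: catch only Exception instead of using a bare except, so that KeyboardInterrupt propagates and stops the loop (process(i) and total are hypothetical stand-ins for the real loop body and bound):
for i in range(total):
    try:
        process(i)         # hypothetical loop body
    except Exception:      # does NOT swallow Ctrl-C / KeyboardInterrupt
        print(i)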
More info on exceptions: https://docs.python.org/3/library/exceptions.html

Does 'finally' always execute in Python?

For any possible try-finally block in Python, is it guaranteed that the finally block will always be executed?
For example, let’s say I return while in an except block:
try:
    1/0
except ZeroDivisionError:
    return
finally:
    print("Does this code run?")
Or maybe I re-raise an Exception:
try:
    1/0
except ZeroDivisionError:
    raise
finally:
    print("What about this code?")
Testing shows that finally does get executed for the above examples, but I imagine there are other scenarios I haven't thought of.
Are there any scenarios in which a finally block can fail to execute in Python?
"Guaranteed" is a much stronger word than any implementation of finally deserves. What is guaranteed is that if execution flows out of the whole try-finally construct, it will pass through the finally to do so. What is not guaranteed is that execution will flow out of the try-finally.
A finally in a generator or async coroutine might never run, if the object never executes to conclusion. There are a lot of ways that could happen; here's one:
def gen(text):
    try:
        for line in text:
            try:
                yield int(line)
            except:
                # Ignore blank lines - but catch too much!
                pass
    finally:
        print('Doing important cleanup')

text = ['1', '', '2', '', '3']
if any(n > 1 for n in gen(text)):
    print('Found a number')
print('Oops, no cleanup.')
Note that this example is a bit tricky: when the generator is garbage collected, Python attempts to run the finally block by throwing in a GeneratorExit exception, but here we catch that exception and then yield again, at which point Python prints a warning ("generator ignored GeneratorExit") and gives up. See PEP 342 (Coroutines via Enhanced Generators) for details.
Other ways a generator or coroutine might not execute to conclusion include if the object is just never GC'ed (yes, that's possible, even in CPython), or if an async with awaits in __aexit__, or if the object awaits or yields in a finally block. This list is not intended to be exhaustive.
A finally in a daemon thread might never execute if all non-daemon threads exit first (see the sketch after this list).
os._exit will halt the process immediately without executing finally blocks.
os.fork may cause finally blocks to execute twice. As well as just the normal problems you'd expect from things happening twice, this could cause concurrent access conflicts (crashes, stalls, ...) if access to shared resources is not correctly synchronized.
Since multiprocessing uses fork-without-exec to create worker processes when using the fork start method (the default on Unix), and then calls os._exit in the worker once the worker's job is done, finally and multiprocessing interaction can be problematic (example).
A C-level segmentation fault will prevent finally blocks from running.
kill -SIGKILL will prevent finally blocks from running. SIGTERM and SIGHUP will also prevent finally blocks from running unless you install a handler to control the shutdown yourself; by default, Python does not handle SIGTERM or SIGHUP.
An exception in finally can prevent cleanup from completing. One particularly noteworthy case is if the user hits control-C just as we're starting to execute the finally block. Python will raise a KeyboardInterrupt and skip every line of the finally block's contents. (KeyboardInterrupt-safe code is very hard to write).
If the computer loses power, or if it hibernates and doesn't wake up, finally blocks won't run.
The finally block is not a transaction system; it doesn't provide atomicity guarantees or anything of the sort. Some of these examples might seem obvious, but it's easy to forget such things can happen and rely on finally for too much.
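To make the daemon-thread case concrete, here is a minimal sketch (the sleep is just a stand-in for real work):
import threading
import time

def worker():
    try:
        time.sleep(10)  # still 'working' when the main thread exits
    finally:
        print('worker cleanup')  # never printed

threading.Thread(target=worker, daemon=True).start()
print('main thread exiting')  # process ends here; the daemon is killed abruptly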
Yes. Finally always wins.
The only way to defeat it is to halt execution before finally: gets a chance to execute (e.g. crash the interpreter, turn off your computer, suspend a generator forever).
I imagine there are other scenarios I haven't thought of.
Here are a couple more you may not have thought about:
def foo():
    # finally always wins
    try:
        return 1
    finally:
        return 2

def bar():
    # even if it has to eat an unhandled exception, finally wins
    try:
        raise Exception('boom')
    finally:
        return 'no boom'
Depending on how you quit the interpreter, sometimes you can "cancel" finally, but not like this:
>>> import sys
>>> try:
...     sys.exit()
... finally:
...     print('finally wins!')
...
finally wins!
$
Using the precarious os._exit (this falls under "crash the interpreter" in my opinion):
>>> import os
>>> try:
...     os._exit(1)
... finally:
...     print('finally!')
...
$
I'm currently running this code, to test if finally will still execute after the heat death of the universe:
from time import sleep

try:
    while True:
        sleep(1)
finally:
    print('done')
However, I'm still waiting on the result, so check back here later.
According to the Python documentation:
No matter what happened previously, the final-block is executed once the code block is complete and any raised exceptions handled. Even if there's an error in an exception handler or the else-block and a new exception is raised, the code in the final-block is still run.
It should also be noted that if there are multiple return statements, including one in the finally block, then the finally block return is the only one that will execute.
Well, yes and no.
What is guaranteed is that Python will always try to execute the finally block. In the case where you return from the block or raise an uncaught exception, the finally block is executed just before actually returning or raising the exception.
(which you could have verified yourself simply by running the code in your question)
The only case I can imagine where the finally block will not be executed is when the Python interpreter itself crashes, for example inside C code, or because of a power outage.
I found this one without using a generator function:
import multiprocessing
import time

def fun(arg):
    try:
        print("tried " + str(arg))
        time.sleep(arg)
    finally:
        print("finally cleaned up " + str(arg))
    return foo  # deliberately undefined: raises NameError after the try/finally

args = [1, 2, 3]
multiprocessing.Pool().map(fun, args)
The sleep can be any code that might run for inconsistent amounts of time.
What appears to be happening here is that the first parallel process to finish leaves the try block successfully, but then attempts to return from the function a value (foo) that hasn't been defined anywhere, which causes an exception. That exception kills the map without allowing the other processes to reach their finally blocks.
Also, if you add the line bar = bazz just after the sleep() call in the try block, then the first process to reach that line throws an exception (because bazz isn't defined), which causes its own finally block to be run, but then kills the map, causing the other try blocks to disappear without reaching their finally blocks, and the first process not to reach its return statement, either.
What this means for Python multiprocessing is that you can't trust the exception-handling mechanism to clean up resources in all processes if even one of the processes can have an exception. Additional signal handling or managing the resources outside the multiprocessing map call would be necessary.
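A sketch of that last point, keeping the cleanup in the parent process instead of trusting the workers' finally blocks (acquire_resource and release_resource are hypothetical helpers):
import multiprocessing

resource = acquire_resource()  # hypothetical setup, done in the parent
try:
    with multiprocessing.Pool() as pool:
        pool.map(fun, [1, 2, 3])  # fun as defined above; a worker may die
finally:
    release_resource(resource)  # hypothetical; always runs in the parent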
You can use finally together with a flag variable. The example below checks for a network connection; if it's connected, the finally block runs the main work:
try:
    reader1, writer1 = loop.run_until_complete(self.init_socket(loop))
    x = 'connected'
except:
    print("cant connect server transfer")  # open popup
    x = 'failed'
finally:
    if x == 'connected':
        with open('text_file1.txt', "r") as f:
            file_lines = eval(str(f.read()))
    else:
        print("not connected")

Robust endless loop for server written in Python

I'm writing a server which handles events, and uncaught exceptions during event handling must not terminate the server.
The server is a single non-threaded python process.
I want to terminate on these errors types:
KeyboardInterrupt
MemoryError
...
The list of built-in exceptions is long: https://docs.python.org/2/library/exceptions.html
I don't want to re-invent this exception handling, since I guess it was done several times before.
How to proceed?
Have a white-list: A list of exceptions which are ok and processing the next event is the right choice
Have a black-list: A list of exceptions which indicate that terminating the server is the right choice.
Hint: This question is not about running a unix daemon in background. It is not about double fork and not about redirecting stdin/stdout :-)
I would do this in a similar way to what you're thinking of: use the 'you shall not pass' Gandalf exception handler except Exception to catch all non-system-exiting exceptions, while creating a black-listed set of exceptions that should pass through and be re-raised.
Using the Gandalf handler will make sure GeneratorExit, SystemExit and KeyboardInterrupt (all system-exiting exceptions) pass through and terminate the program if no other handlers are present higher in the call stack. Here is where you can check with type(e) whether the class of a caught exception e actually belongs to the set of black-listed exceptions, and re-raise it.
As a small demonstration:
import exceptions # Py2.x only
# dictionary holding {exception_name: exception_class}
excptDict = vars(exceptions)
exceptionNames = ['MemoryError', 'OSError', 'SystemError'] # and others
# set containing black-listed exceptions
blackSet = {excptDict[exception] for exception in exceptionNames}
Now blackSet = {OSError, SystemError, MemoryError} holds the classes of the non-system-exiting exceptions we do not want to handle.
A try-except block can now look like this:
try:
    # calls that raise exceptions:
except Exception as e:
    if type(e) in blackSet: raise e  # re-raise
    # else just handle it
An example which catches all exceptions using BaseException can help illustrate what I mean. (This is done for demonstration purposes only, in order to see how this raising will eventually terminate your program.) Do note: I'm not suggesting you use BaseException; I'm using it in order to demonstrate which exceptions will actually 'pass through' and cause termination (i.e. everything that BaseException catches):
for i, j in excptDict.iteritems():
    if i.startswith('__'): continue  # __doc__ and other dunders
    try:
        try:
            raise j
        except Exception as ex:
            # print "Handler 'Exception' caught " + str(i)
            if type(ex) in blackSet:
                raise ex
    except BaseException:
        print "Handler 'BaseException' caught " + str(i)
# prints exceptions that would cause the system to exit
Handler 'BaseException' caught GeneratorExit
Handler 'BaseException' caught OSError
Handler 'BaseException' caught SystemExit
Handler 'BaseException' caught SystemError
Handler 'BaseException' caught KeyboardInterrupt
Handler 'BaseException' caught MemoryError
Handler 'BaseException' caught BaseException
Finally, in order to make this Python 2/3 agnostic, you can try to import exceptions and, if that fails (which it does in Python 3), fall back to importing builtins, which contains all exceptions; we search the dictionary by name so it makes no difference:
try:
    import exceptions
    excptDict = vars(exceptions)
except ImportError:
    import builtins
    excptDict = vars(builtins)
I don't know if there's a smarter way to actually do this; another solution, instead of having a try-except with a single except, is having 2 handlers, one for the black-listed exceptions and the other for the general case:
try:
    # calls that raise exceptions:
except tuple(blackSet) as be:  # Must go first, of course.
    raise be
except Exception as e:
    # handle the rest
The top-most exception is BaseException. There are two groups under that:
Exception derived
everything else
Things like StopIteration, ValueError, TypeError, etc., are all examples of Exception.
Things like GeneratorExit, SystemExit and KeyboardInterrupt are not descended from Exception.
So the first step is to catch Exception and not BaseException which will allow you to easily terminate the program. I recommend also catching GeneratorExit as 1) it should never actually be seen unless it is raised manually; 2) you can log it and restart the loop; and 3) it is intended to signal a generator has exited and can be cleaned up, not that the program should exit.
The next step is to log each exception with enough detail that you have the possibility of figuring out what went wrong (when you later get around to debugging).
Finally, you have to decide for yourself which, if any, of the Exception derived exceptions you want to terminate on: I would suggest RuntimeError and MemoryError, although you may be able to get around those by simply stopping and restarting your server loop.
So, really, it's up to you.
If there is some other error (such as IOError when trying to load a config file) that is serious enough to quit on, then the code responsible for loading the config file should be smart enough to catch that IOError and raise SystemExit instead.
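A sketch of that conversion (load_config is a hypothetical helper):
def load_config(path):
    try:
        with open(path) as f:
            return f.read()
    except IOError as e:
        # Serious enough to quit on: convert to SystemExit so the
        # server loop's `except Exception` won't swallow it.
        raise SystemExit('cannot load config %s: %s' % (path, e))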
As far as whitelist/blacklist -- use a black list, as there should only be a handful, if any, Exception-based exceptions that you need to actually terminate the server on.
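Putting the black-list advice together, a minimal sketch of such a server loop (handle_event and the exact contents of FATAL are illustrative choices):
import logging

log = logging.getLogger(__name__)

# Black list: Exception subclasses that should still end the server.
FATAL = (MemoryError, RuntimeError)

def serve_forever(events):
    for event in events:
        try:
            handle_event(event)  # hypothetical per-event worker
        except FATAL:
            raise  # black-listed: terminate the server
        except Exception:
            # Log the full traceback and keep serving. KeyboardInterrupt
            # and SystemExit are not caught here, so they still stop the loop.
            log.exception('error handling %r; continuing', event)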

Python + PyQt + Threads | Destructor never called, no other references left

In this pseudo code, the destructor of o is never called.
class MyObj():
    def __del__(self):
        print "Destroyed!"
        do_upon_death()

def parse():
    o = MyObj()
    print "Ok so far..."
    raise Exception

run_in_QThread(parse)
do_other_things()
I see the exception printed on the terminal, but o does not seem to be garbage-collected. The interpreter does not exit upon the exception (the main thread keeps running).
The whole purpose behind this is to be able to do certain cleanup in the destructor when exceptions occur.
Another way to explain this is with a different pattern:
try:
    run_in_thread(parse)
except:
    print "Will I ever get printed?"
Exceptions happening in parse will not get caught.
Even further:
with MyContextManager() as manager:
    run_in_thread(lambda: do_something(manager))
Here manager.__exit__() will get called with all None arguments as if the code completed cleanly, even if there is an exception in do_something.
The reason behind this is that I'm running certain tasks which can fail for too many different reasons, and these tasks can be done partly in the main thread and partly in a worker thread. They can fail in either.
So my initial idea was to keep references to an object while the task was running; when the task completed, by exiting the context or through an exception, the references would be lost and eventually the destructor would be run.
I'm trying to keep track of these tasks and keep a visual indicator on a GUI of which tasks are still doing their job.
You can catch the exception after it is raised and do what you want:
try:
    o = MyObj()
    print "Ok so far..."
    raise Exception
except Exception:
    del o
or use this form if it makes no difference whether an exception was raised or not:
finally:
    del o
If you just need to keep the thread alive, maybe you should use time.sleep().
As I understand it, only parse() runs in the thread, so you need to emit() a signal about the deletion and send it to the main thread.
I believe that relying on the destructor is not the best decision; it would be better to catch a signal. Please add more info about why you are doing all this.
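For completeness, a minimal sketch of that signal-based approach, assuming PyQt5 (the Worker class and the signal names are illustrative, not part of the question's code):
from PyQt5.QtCore import QObject, pyqtSignal

class Worker(QObject):
    failed = pyqtSignal(str)  # carries the error message to the main thread
    finished = pyqtSignal()   # emitted whether or not parse() raised

    def run(self):
        try:
            parse()  # the function from the question
        except Exception as e:
            self.failed.emit(str(e))
        finally:
            self.finished.emit()
Move a Worker instance to a QThread, connect failed and finished to slots in the main thread, and update the GUI task indicator there instead of relying on __del__.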

Catching KeyboardInterrupt in Python during program shutdown

I'm writing a command line utility in Python which, since it is production code, ought to be able to shut down cleanly without dumping a bunch of stuff (error codes, stack traces, etc.) to the screen. This means I need to catch keyboard interrupts.
I've tried using both a try/except block like:
if __name__ == '__main__':
    try:
        main()
    except KeyboardInterrupt:
        print 'Interrupted'
        sys.exit(0)
and catching the signal itself (as in this post):
import signal
import sys

def sigint_handler(signum, frame):
    print 'Interrupted'
    sys.exit(0)

signal.signal(signal.SIGINT, sigint_handler)
Both methods seem to work quite well during normal operation. However, if the interrupt comes during cleanup code at the end of the application, Python seems to always print something to the screen. Catching the interrupt gives
^CInterrupted
Exception KeyboardInterrupt in <bound method MyClass.__del__ of <path.to.MyClass object at 0x802852b90>> ignored
whereas handling the signal gives either
^CInterrupted
Exception SystemExit: 0 in <Finalize object, dead> ignored
or
^CInterrupted
Exception SystemExit: 0 in <bound method MyClass.__del__ of <path.to.MyClass object at 0x802854a90>> ignored
Not only are these errors ugly, they're not very helpful (especially to an end user with no source code)!
The cleanup code for this application is fairly big, so there's a decent chance that this issue will be hit by real users. Is there any way to catch or block this output, or is it just something I'll have to deal with?
Check out this thread; it has some useful information about exiting and tracebacks.
If you are more interested in just killing the program, try something like this (this will take the legs out from under the cleanup code as well):
if __name__ == '__main__':
    try:
        main()
    except KeyboardInterrupt:
        print('Interrupted')
        try:
            sys.exit(130)
        except SystemExit:
            os._exit(130)
[Edited to change the exit code as suggested in comments. 130 is the code typically returned on Linux for a script terminated by Ctrl-C. We may not be on Linux, but the important thing is to return a non-zero value, and 130 is as good as any.]
You could ignore SIGINTs after shutdown starts by calling signal.signal(signal.SIGINT, signal.SIG_IGN) before you start your cleanup code.
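A sketch of that ordering (run and cleanup are hypothetical stand-ins for the real work and shutdown code):
import signal

def main():
    run()  # hypothetical normal operation; Ctrl-C behaves as usual here
    # Ignore any further Ctrl-C so the cleanup below can finish quietly.
    signal.signal(signal.SIGINT, signal.SIG_IGN)
    cleanup()  # hypothetical shutdown code, no longer interruptible by SIGINT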
