This is a hard question to phrase, but here's a stripped-down version of the situation. I'm using some library code that accepts a callback. It has its own error-handling, and raises an error if anything goes wrong while executing the callback.
class LibraryException(Exception):
    pass

def library_function(callback, string):
    try:
        # (does other stuff here)
        callback(string)
    except:
        raise LibraryException('The library code hit a problem.')
I'm using this code inside an input loop. I know of potential errors that could arise in my callback function, depending on values in the input. If that happens, I'd like to reprompt, after getting helpful feedback from its error message. I imagine it looking something like this:
class MyException(Exception):
    pass

def my_callback(string):
    raise MyException("Here's some specific info about my code hitting a problem.")

while True:
    something = input('Enter something: ')
    try:
        library_function(my_callback, something)
    except MyException as e:
        print(e)
        continue
Of course, this doesn't work, because MyException will be caught within library_function, which will raise its own (much less informative) Exception and halt the program.
The obvious thing to do would be to validate my input before calling library_function, but that's a circular problem, because parsing is what I'm using the library code for in the first place. (For the curious, it's Lark, but I don't think my question is specific enough to Lark to warrant cluttering it with all the specific details.)
One alternative would be to alter my code to catch any error (or at least the type of error the library generates), and directly print the inner error message:
def my_callback(string):
    error_str = "Here's some specific info about my code hitting a problem."
    print(error_str)
    raise MyException(error_str)

while True:
    something = input('Enter something: ')
    try:
        library_function(my_callback, something)
    except LibraryException:
        continue
But I see two issues with this. One is that I'm throwing a wide net, potentially catching and ignoring errors other than in the scope I'm aiming at. Beyond that, it just seems... inelegant, and unidiomatic, to print the error message, then throw the exception itself into the void. Plus the command line event loop is only for testing; eventually I plan to embed this in a GUI application, and without printed output, I'll still want to access and display the info about what went wrong.
What's the cleanest and most Pythonic way to achieve something like this?
There seem to be many ways to achieve what you want, though I can't say which one is the most robust. I'll try to explain all the methods that seemed apparent to me; perhaps you'll find one of them useful.
I'll be using the example code you provided to demonstrate these methods. Here's a refresher on how it looks-
class MyException(Exception):
    pass

def my_callback(string):
    raise MyException("Here's some specific info about my code hitting a problem.")

def library_function(callback, string):
    try:
        # (does other stuff here)
        callback(string)
    except:
        raise Exception('The library code hit a problem.')
The simplest approach - traceback.format_exc
import traceback

try:
    library_function(my_callback, 'boo!')
except:
    # NOTE: Remember to keep the `chain` parameter of `format_exc` set to `True` (default)
    tb_info = traceback.format_exc()
This does not require much know-how about exceptions and stack traces themselves, nor does it require you to pass any special frame/traceback/exception to the library function. But look at what this returns (as in, the value of tb_info)-
'''
Traceback (most recent call last):
  File "path/to/test.py", line 14, in library_function
    callback(string)
  File "path/to/test.py", line 9, in my_callback
    raise MyException("Here's some specific info about my code hitting a problem.")
MyException: Here's some specific info about my code hitting a problem.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "path/to/test.py", line 19, in <module>
    library_function(my_callback, 'boo!')
  File "path/to/test.py", line 16, in library_function
    raise Exception('The library code hit a problem.')
Exception: The library code hit a problem.
'''
That's a string, the same thing you'd see printed if you let the exception propagate uncaught. Notice the exception chaining here: the exception at the top is the one that happened before the exception at the bottom. You could parse out all the exception names and messages-
import re

exception_list = re.findall(r'^(\w+): (.+)$', tb_info, flags=re.M)
With that, you'll get [('MyException', "Here's some specific info about my code hitting a problem."), ('Exception', 'The library code hit a problem.')] in exception_list
Although this is the easiest way out, it's not very context aware. I mean, all you get are exception class names and messages as strings. Regardless, if that is what suits your needs - I don't particularly see a problem with this.
The "robust" approach - recursing through __context__/__cause__
Python itself keeps track of the exception trace history, the exception currently at hand, the exception that caused this exception and so on. You can read about the intricate details of this concept in PEP 3134
Whether or not you go through the entirety of the PEP, I urge you to at least familiarize yourself with implicitly chained exceptions and explicitly chained exceptions. Perhaps this SO thread will be useful for that.
As a small refresher, raise ... from is for explicitly chaining exceptions; the pattern in your example is implicit chaining.
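To see the difference concretely, here is a minimal sketch (using plain built-in exceptions rather than the classes from the question) of how the two styles populate __cause__ and __context__ on the exception objects themselves:
# Explicit chaining: "raise ... from" sets __cause__ (and __context__ as well).
try:
    try:
        raise ValueError('inner problem')
    except ValueError as inner:
        raise RuntimeError('outer problem') from inner
except RuntimeError as outer:
    print(repr(outer.__cause__))    # ValueError('inner problem')
    print(repr(outer.__context__))  # ValueError('inner problem')

# Implicit chaining: raising inside an except block sets only __context__.
try:
    try:
        raise ValueError('inner problem')
    except ValueError:
        raise RuntimeError('outer problem')
except RuntimeError as outer:
    print(repr(outer.__cause__))    # None
    print(repr(outer.__context__))  # ValueError('inner problem')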
Now, you need to make a mental note - TracebackException#__cause__ is for explicitly chained exceptions and TracebackException#__context__ is for implicitly chained exceptions. Since your example uses implicit chaining, you can simply follow __context__ backwards and you'll reach MyException. In fact, since this is only one level of nesting, you'll reach it instantly!
import sys
import traceback

try:
    library_function(my_callback, 'boo!')
except:
    previous_exc = traceback.TracebackException(*sys.exc_info()).__context__
This first constructs the TracebackException from sys.exc_info. sys.exc_info returns a tuple of (exc_type, exc_value, exc_traceback) for the exception currently being handled (if any). Notice that those 3 values, in that specific order, are exactly what you need to construct a TracebackException - so you can simply unpack the tuple with * and pass it to the class constructor.
This returns a TracebackException object about the current exception. The exception that it is implicitly chained from is in __context__, the exception that it is explicitly chained from is in __cause__.
Note that both __cause__ and __context__ will return either a TracebackException object, or None (if you're at the end of the chain). This means, you can call __cause__/__context__ again on the return value and basically keep going till you reach the end of the chain.
Printing a TracebackException object just prints the message of the exception; if you want the class itself (the actual class, not a string), you can use .exc_type
print(previous_exc)
# prints "Here's some specific info about my code hitting a problem."
print(previous_exc.exc_type)
# prints <class '__main__.MyException'>
Here's an example of recursing through .__context__ and printing the types of all exceptions in the implicit chain. (You can do the same for .__cause__)
def classes_from_excs(exc: traceback.TracebackException):
    print(exc.exc_type)
    if not exc.__context__:
        # chain exhausted
        return
    classes_from_excs(exc.__context__)
Let's use it!
try:
    library_function(my_callback, 'boo!')
except:
    classes_from_excs(traceback.TracebackException(*sys.exc_info()))
That will print-
<class 'Exception'>
<class '__main__.MyException'>
Once again, the point of this is to be context aware. Ideally, printing isn't what you'd do in a practical environment; you have the class objects themselves in hand, with all the info!
NOTE: For implicitly chained exceptions, if the context has been explicitly suppressed (raise ... from None), it'll be a bad day trying to recover the chain - regardless, you might give __suppress_context__ a shot.
The painful way - walking through traceback.walk_tb
This is probably the closest you can get to the low level stuff of exception handling. If you want to capture entire frames of information instead of just the exception classes and messages and such, you might find walk_tb useful....and a bit painful.
import sys
import traceback

try:
    library_function(my_callback, 'foo')
except:
    tb_gen = traceback.walk_tb(sys.exc_info()[2])
There is....entirely too much to discuss here. .walk_tb takes a traceback object; you may remember from the previous method that index 2 of the tuple returned by sys.exc_info is exactly that. It then returns a generator of tuples of frame object and line number (Iterator[Tuple[FrameType, int]]).
These frame objects have all kinds of intricate information. Though, whether or not you'll actually find exactly what you're looking for, is another story. They may be complex, but they aren't exhaustive unless you play around with a lot of frame inspection. Regardless, this is what the frame objects represent.
What you do with the frames is up to you. They can be passed to many functions: you can pass the entire generator to StackSummary.extract to get FrameSummary objects, you can iterate through each frame to have a look at [0].f_locals (the [0] on Tuple[FrameType, int] returns the actual frame object), and so on.
for tb in tb_gen:
    print(tb[0].f_locals)
That will give you a dict of the locals for each frame. Within the first tb from tb_gen, you'll see MyException as part of the locals....among a load of other stuff.
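For instance, if frame summaries are all you need, the same kind of generator can be handed to StackSummary.extract. Here's a small sketch, reusing the library_function/my_callback setup from above:
import sys
import traceback

try:
    library_function(my_callback, 'foo')
except:
    # StackSummary.extract consumes the (frame, lineno) pairs from walk_tb and
    # turns them into FrameSummary objects (filename, lineno, name, line).
    summary = traceback.StackSummary.extract(traceback.walk_tb(sys.exc_info()[2]))
    for frame_summary in summary:
        print(frame_summary.filename, frame_summary.lineno, frame_summary.name)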
I have a creeping feeling I have overlooked some methods, most probably with inspect. But I hope the above methods will be good enough so that no one has to go through the jumble that is inspect :P
Chase's answer above is phenomenal. For completeness's sake, here's how I implemented their second approach in this situation. First, I made a function that can search the stack for the specified error type. Even though the chaining in my example is implicit, this should be able to follow implicit and/or explicit chaining:
import sys
import traceback

def find_exception_in_trace(exc_type):
    """Return latest exception of exc_type, or None if not present"""
    tb = traceback.TracebackException(*sys.exc_info())
    prev_exc = tb.__context__ or tb.__cause__
    while prev_exc:
        if prev_exc.exc_type == exc_type:
            return prev_exc
        prev_exc = prev_exc.__context__ or prev_exc.__cause__
    return None
With that, it's as simple as:
while True:
    something = input('Enter something: ')
    try:
        library_function(my_callback, something)
    except LibraryException as exc:
        if (my_exc := find_exception_in_trace(MyException)):
            print(my_exc)
            continue
        raise exc
That way I can access my inner exception (and print it for now, although eventually I may do other things with it) and continue. But if my exception wasn't in there, I simply reraise whatever the library raised. Perfect!
Let's say I have this function:
def a():
    try:
        b()
    except Error:
        raise Error
    return True
Is it considered good practice to only return True if the function succeeds, and otherwise stop execution of the block by raising an error?
Think about your function and its error processing as separate concerns. A function should either return an object as its result (or implicitly return None when there is no return statement) or perform some action without returning an explicit result. Avoid mixing the two.
If an error that your function code can detect happens, raise an exception at the appropriate abstraction level, so that the client code can wrap the call in a try block and treat the error case properly, if it wishes to do so.
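As a rough sketch of that separation (the names here are made up purely for illustration):
class UserNotFoundError(Exception):
    """Error at the abstraction level the caller cares about."""

_USERS = {1: 'alice'}   # stand-in for a real data source

def load_user(user_id):
    # Return the user record, or raise at this function's own abstraction level.
    try:
        return _USERS[user_id]
    except KeyError as exc:
        raise UserNotFoundError(f'no user with id {user_id!r}') from exc

# The client code chooses whether to handle the error case:
try:
    user = load_user(42)
except UserNotFoundError as exc:
    print(exc)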
I use the following code to call an arbitrary callable f() with appropriate number of parameters:
try:
    res = f(arg1)
except TypeError:
    res = f(arg1, arg2)
If f() is a two parameter function, calling it with just one parameter raises TypeError, so the function can be called properly in the except branch.
The problem is when f() is a one-parameter function and the exception is raised in the body of f() (possibly because of a bad call to some function), for example:
def f(arg):
    map(arg)  # raises TypeError
The control flow goes to the except branch because of the internal error of f(). Of course, calling f() with two arguments raises a new TypeError. Then, instead of a traceback to the original error, I get a traceback to the call of f() with two parameters, which is much less helpful when debugging.
How can my code recognize exceptions raised not in the current scope to reraise them?
I want to write the code like this:
try:
    res = f(arg1)
except TypeError:
    if exceptionRaisedNotInTheTryBlockScope():  # <-- subject of the question
        raise
    res = f(arg1, arg2)
I know I can use a workaround by adding exc_info = sys.exc_info() in the except block.
One of my assumptions is that I have no control over f(), since it is supplied by the user of my module. Also, its __name__ attribute may be something other than 'f'. The internal exception may be raised by a bad recursive call to f().
The workaround is unsuitable since it complicates debugging for the author of f().
You can capture the exception object and examine it.
try:
    res = f(arg1)
except TypeError as e:
    if "f() missing 1 required positional argument" in e.args[0]:
        res = f(arg1, arg2)
    else:
        raise
Frankly, though, not going the extra length to classify the exception should work fine as you should be getting both the original traceback and the secondary traceback if the error originated inside f() -- debugging should not be a problem.
Also, if you have control over f() you can make second argument optional and not have to second-guess it:
def f(a, b=None):
    pass
Now you can call it either way.
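For example, using the argument names from the question:
f(arg1)        # b defaults to None
f(arg1, arg2)  # both arguments supplied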
How about this:
import inspect

# Note: inspect.getargspec is deprecated (and removed in Python 3.11);
# len(inspect.signature(f).parameters) is the modern equivalent.
if len(inspect.getargspec(f).args) == 1:
    res = f(arg1)
else:
    res = f(arg1, arg2)
Finally figured it out:
import sys

def exceptionRaisedNotInTheTryBlockScope():
    return sys.exc_info()[2].tb_next is not None
sys.exc_info() returns a 3-element tuple. Its last element is the traceback of the last exception. If the traceback object is the only one in the traceback chain, then the exception was raised in the scope of the try block (https://docs.python.org/2/reference/datamodel.html).
According to https://docs.python.org/2/library/sys.html#sys.exc_info, it should be avoided to store the traceback value.
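A quick way to convince yourself of the tb_next behaviour (a small standalone sketch, not the original f()):
import sys

def exceptionRaisedNotInTheTryBlockScope():
    return sys.exc_info()[2].tb_next is not None

def deeper():
    raise TypeError('raised one call below the try block')

try:
    deeper()
except TypeError:
    print(exceptionRaisedNotInTheTryBlockScope())  # True: raised inside deeper()

try:
    raise TypeError('raised directly in the try block')
except TypeError:
    print(exceptionRaisedNotInTheTryBlockScope())  # False: only one traceback entry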
I have an outer function that calls an inner function by passing the arguments along. Is it possible to test that both functions throw the same exception/error without knowing the exact error type?
I'm looking for something like:
def test_invalidInput_throwsSameError(self):
    arg = 'invalidarg'
    self.assertRaisesSameError(
        innerFunction(arg),
        outerFunction(arg)
    )
Assuming you're using unittest (and python2.7 or newer) and that you're not doing something pathological like raising old-style class instances as errors, you can get the exception from the error context if you use assertRaises as a context manager.
with self.assertRaises(Exception) as err_context1:
    innerFunction(arg)

with self.assertRaises(Exception) as err_context2:
    outerFunction(arg)

# Or some other measure of "sameness"
self.assertEqual(
    type(err_context1.exception),
    type(err_context2.exception))
First call the first function and capture what exception it raises:
try:
    innerfunction(arg)
except Exception as e:
    pass
else:
    e = None
Then assert that the other function raises the same exception:
self.assertRaises(type(e), outerfunction, arg)
I believe this will do what you want; just add it as another method of your TestCase class:
def assertRaisesSameError(self, *funcs, bases=(Exception,)):
    exceptions = []
    for func in funcs:
        with self.assertRaises(bases) as error_context:
            func()
        exceptions.append(error_context.exception)
    for exc in exceptions:
        self.assertEqual(type(exc), type(exceptions[-1]))
I haven't tested this, but it should work. For simplicity this only takes functions with no arguments, so if your function does have arguments, you'll have to make a wrapper (even just with a lambda), as in the example below. It would not be hard at all to expand this to allow for passing in *args and **kwargs.
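For example, a hypothetical test using lambdas to defer the calls (innerFunction/outerFunction are the names from the question):
def test_invalidInput_throwsSameError(self):
    arg = 'invalidarg'
    self.assertRaisesSameError(
        lambda: innerFunction(arg),
        lambda: outerFunction(arg),
    )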
Let me know of any changes anyone thinks should be made.
I am writing a class in Python and part of the code deals with a server. Therefore I need to deal with exceptions like ExpiredSession or ConnectionError.
Instead of writing exception handling code for every try/except block, I have a single function in the class to deal with the exceptions. something like this (inside the class definition)
def job_a(self):
    try:
        ...  # do something
    except Exception as e:
        # maybe print the error on screen.
        self.exception_handling(e)

def job_b(self):
    try:
        ...  # do something else
    except Exception as e:
        # maybe print the error on screen.
        self.exception_handling(e)

def exception_handling(self, e):
    if isinstance(e, ExpiredSession):
        # deal with expired session.
        self.reconnect()
    elif isinstance(e, ConnectionError):
        ...  # deal with connection error
    else:
        ...  # other exceptions
I am not sure whether this kind of code causes any problems, because I haven't seen code do this. Could it, for example, leak memory? (I notice the memory usage grows, though slowly, as more errors/exceptions occur, and eventually I have to restart the process before it eats all my memory.) I'm not sure this is the cause.
Is it a good practice to pass exceptions to a single function?
This is a good use case for a context manager. You can see some examples of using context managers for error handling here. The contextmanager decorator allows you to write context managers concisely in the form of a single function. Here's a simple example:
import contextlib

class Foo(object):
    def meth1(self):
        with self.errorHandler():
            1/0

    def meth2(self):
        with self.errorHandler():
            2 + ""

    def meth3(self):
        with self.errorHandler():
            # an unhandled ("unexpected") kind of exception
            ""[3]

    @contextlib.contextmanager
    def errorHandler(self):
        try:
            yield
        except TypeError:
            print("A TypeError occurred")
        except ZeroDivisionError:
            print("Divide by zero occurred")
Then:
>>> x = Foo()
>>> x.meth1()
Divide by zero occurred
>>> x.meth2()
A TypeError occurred
The with statement allows you to "offload" the error handling into a separate function where you catch the exceptions and do what you like with them. In your "real" functions (i.e., the functions that do the work but may raise the exceptions), you just need a with statement instead of an entire block of complicated try/except statements.
An additional advantage of this approach is that if an unforeseen exception is raised, it will propagate up normally with no extra effort:
>>> x.meth3()
Traceback (most recent call last):
File "<pyshell#394>", line 1, in <module>
x.meth3()
File "<pyshell#389>", line 12, in meth3
""[3]
IndexError: string index out of range
In your proposed solution, on the other hand, the exception is already caught in each function and the actual exception object is passed to the handler. If the handler gets an unexpected error, it has to re-raise that error explicitly. Using a context manager, unexpected exceptions get their ordinary behavior with no extra work required.
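For comparison, a handler-function version that doesn't swallow unexpected errors would need to re-raise them itself. A sketch of that pattern, using the names from the question, might look like:
def exception_handling(self, e):
    if isinstance(e, ExpiredSession):
        # deal with the expired session
        self.reconnect()
    elif isinstance(e, ConnectionError):
        # deal with the connection error
        pass
    else:
        # Anything we did not anticipate must be re-raised explicitly,
        # otherwise it is silently swallowed here.
        raise e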