I use the following code to call an arbitrary callable f() with the appropriate number of parameters:
try:
    res = f(arg1)
except TypeError:
    res = f(arg1, arg2)
If f() is a two-parameter function, calling it with just one parameter raises TypeError, so the function can be called properly in the except branch.
The problem arises when f() is a one-parameter function and the exception is raised in the body of f() (possibly because of a bad call to some function), for example:
def f(arg):
    map(arg)  # raises TypeError
The control flow goes to the except branch because of the internal error of f(). Of course, calling f() with two arguments raises a new TypeError. Then, instead of a traceback to the original error, I get a traceback to the call of f() with two parameters, which is much less helpful when debugging.
How can my code recognize exceptions that were raised outside the scope of the try block, so it can reraise them?
I want to write the code like this:
try:
    res = f(arg1)
except TypeError:
    if exceptionRaisedNotInTheTryBlockScope():  # <-- subject of the question
        raise
    res = f(arg1, arg2)
I know I can use a workaround by adding exc_info = sys.exc_info() in the except block.
One of my assumptions is that I have no control over f(), since it is given by the user of my module. Also, its __name__ attribute may be something other than 'f'. The internal exception may be raised by a bad recursive call to f().
The workaround is unsuitable since it complicates debugging by the author of f().
You can capture the exception object and examine it.
try:
    res = f(arg1)
except TypeError as e:
    if "f() missing 1 required positional argument" in e.args[0]:
        res = f(arg1, arg2)
    else:
        raise
Frankly, though, not going to the extra length of classifying the exception should work fine, as you should get both the original traceback and the secondary traceback if the error originated inside f() -- debugging should not be a problem.
Also, if you have control over f(), you can make the second argument optional and not have to second-guess it:
def f(a, b=None):
    pass
Now you can call it either way.
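For example, with b optional, both call styles work:
f(1)     # b defaults to None
f(1, 2)  # both arguments supplied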
How about this:
import inspect

if len(inspect.getargspec(f).args) == 1:
    res = f(arg1)
else:
    res = f(arg1, arg2)
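Note that inspect.getargspec was deprecated in Python 3 and removed in Python 3.11; a rough modern equivalent (a sketch, assuming f is a plain function) uses inspect.signature:

import inspect

if len(inspect.signature(f).parameters) == 1:
    res = f(arg1)
else:
    res = f(arg1, arg2)

Either way, this inspects only the declared parameters, so callables taking *args or keyword-only arguments need extra care.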
Finally figured it out:
import sys

def exceptionRaisedNotInTheTryBlockScope():
    return sys.exc_info()[2].tb_next is not None
sys.exc_info() returns a 3-element tuple. Its last element is the traceback of the last exception. If that traceback object is the only one in the traceback chain, then the exception was raised in the scope of the try block (https://docs.python.org/2/reference/datamodel.html).
According to https://docs.python.org/2/library/sys.html#sys.exc_info, storing the traceback value should be avoided.
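A quick demonstration of the distinction (a sketch; one_arg and buggy are hypothetical examples, with the helper above assumed to be defined):

def one_arg(x):
    return x

def buggy(x):
    map(x)  # raises TypeError inside buggy's own frame

try:
    one_arg(1, 2)  # TypeError attributed to the calling frame (bad call)
except TypeError:
    print(exceptionRaisedNotInTheTryBlockScope())  # False

try:
    buggy(1)
except TypeError:
    print(exceptionRaisedNotInTheTryBlockScope())  # True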
I have a decorator:
def remediation_decorator(dec_mthd):
    def new_func(*args, **kwargs):
        try:
            return dec_mthd(*args, **kwargs)
        except (KeyError, HTTPError) as err:
            print(f'error = {err}... call the remediation function')
    return new_func
Inside the generator function, another function is called to raise specific exceptions under certain conditions:
def check(number):
    if number == 1:
        raise HTTPError
    if number == 2:
        raise KeyError
This generator function is decorated like so:
@remediation_decorator
def dec_mthd_b(number):
    check(number)
    for i in range(0, 3):
        yield i + 1
When an exception is raised by the check function, the decorator's except is not hit.
[ins] In [16]: dec_mthd_b(1)
Out[16]: <generator object dec_mthd_b at 0x10e79cc80>
It appears to behave like this because it's a generator function - from Yield expressions:
When a generator function is called, it returns an iterator known as a generator.
(I wonder whether to take this in the literal sense 'it returns the iterator first irrespective of other logic in the function', hence why check() does not raise the exception?)
and,
By suspended, we mean that all local state is retained, including the current bindings of local variables, the instruction pointer, the internal evaluation stack, and the state of any exception handling.
Have I understood this correctly? Please can anyone explain this further?
Yes, you got it.
@remediation_decorator is syntactic sugar in Python for decorators. I'm going to use the verbose form:
def dec_mthd_b(number):
    check(number)
    for i in range(0, 3):
        yield i + 1

dec_mthd_b = remediation_decorator(dec_mthd_b)
What does this line do? remediation_decorator is your decorator; it gives you back the inner function, in your case new_func.
What is new_func? It is a normal function; when you call it, it runs the body of the function.
What will new_func return? dec_mthd(*args, **kwargs).
Here dec_mthd points to dec_mthd_b, and it is a function again. But when you call it, since dec_mthd_b has the yield keyword inside, it gives you back a generator object.
Now here is the point. The body of your inner function, here new_func, is executed without any problem. You got your generator object back. No error is raised...
# this is the result of calling the inner function, which gives you the generator object
gen = dec_mthd_b(1)

# here is where you're going to face the exceptions
for i in gen:
    print(i)
What will happen in the for loop? Python runs the body of dec_mthd_b, and the error is raised from there...
So in order to catch the exceptions, you have two options: either catch them inside dec_mthd_b, or in the last for loop.
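If you want the decorator itself to handle the errors, one option (a sketch, not from the original post) is to make the wrapper a generator too, so its try block is active while the body of dec_mthd_b actually runs:

import functools

def remediation_decorator(dec_mthd):
    @functools.wraps(dec_mthd)
    def new_func(*args, **kwargs):
        try:
            # iterating here keeps the try block active while the
            # generator body executes, one yield at a time
            yield from dec_mthd(*args, **kwargs)
        except (KeyError, HTTPError) as err:  # HTTPError as imported in the question
            print(f'error = {err}... call the remediation function')
    return new_func

Since new_func now contains yield from, calling the decorated function still returns a generator, but exceptions raised by check() surface inside the wrapper's try block when the generator is consumed.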
You have understood it correctly. The code below will throw an exception: when a generator is created, nothing gets executed. You need to fetch the next element, and hence get a value from the generator; then it will raise the exception.
g = dec_mthd_b(1)
next(g)  # raises HTTPError
In fact, that's how iteration is done: we repeatedly call next() until a StopIteration exception is raised.
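Under the hood, a for loop over the generator is roughly equivalent to this sketch:

g = dec_mthd_b(1)
while True:
    try:
        i = next(g)  # may raise HTTPError or KeyError from the generator body
    except StopIteration:
        break  # normal end of iteration
    print(i)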
I'm trying to define my own IFERROR function in Python, like in Excel. (Yes, I know I can write try/except; I'm just trying to create an inline shorthand for a try/except pattern I often use.) The current use case is trying to get several attributes of some remote tables. The module used to connect to them gives a variety of errors, and if that happens I simply want to record that an error was hit when attempting to get that attribute.
What I've tried:
A search revealed a number of threads, the most helpful of which were:
Frequently repeated try/except in Python
Python: try-except as an Expression?
After reading these threads, I tried writing the following:
>>> def iferror(success, failure, *exceptions):
...     try:
...         return success
...     except exceptions or Exception:
...         return failure
...
>>> iferror(1/0,0)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ZeroDivisionError: division by zero
I also tried using a context manager (new to me):
>>> from contextlib import contextmanager as cm
>>> @cm
... def iferror(failure, *exceptions):
...     try:
...         yield
...     except exceptions or Exception:
...         return failure
...
>>> with iferror(0, ZeroDivisionError) as x:
...     x = 1/0
...
>>> print(x)
None
Is there a way to define a function which will perform a predefined try/except pattern like IFERROR?
The problem with iferror(1/0,0) is that function arguments are evaluated before the function is entered (this is the case in most programming languages, the one big exception being Haskell). No matter what iferror does, 1/0 runs first and throws an error.
We must somehow delay the evaluation of 1/0 so it happens inside the function, in the context of a try block. One way is to use a string (iferror('1/0', 1)) that iferror can then eval. But eval should be avoided where possible, and there is a more light-weight alternative: Function bodies are not evaluated until the function is called, so we can just wrap our expression in a function and pass that:
def iferror(success, failure, *exceptions):
    try:
        return success()
    except exceptions or Exception:
        return failure

def my_expr():
    return 1/0

print(iferror(my_expr, 42))
42
The crucial part here is that we don't call my_expr directly. We pass it as a function into iferror, which then invokes success(), which ends up executing return 1/0.
The only problem is that we had to pull the function argument (1/0) out of the normal flow of code and into a separate function definition, which we had to give a name (even though it's only used once).
These shortcomings can be avoided by using lambda, which lets us define single-expression functions inline:
def iferror(success, failure, *exceptions):
    try:
        return success()
        #             ^^
    except exceptions or Exception:
        return failure

print(iferror(lambda: 1/0, 42))
#             ^^^^^^^
42
Compared to your original attempt, only two changes were necessary: Wrap the expression in a lambda:, which delays evaluation, and use () in try: return success() to call the lambda, which triggers evaluation of the function body.
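The *exceptions parameter then works as intended; for example, reusing the iferror defined above:

print(iferror(lambda: 1/0, 42, ZeroDivisionError))  # prints 42
iferror(lambda: 1/0, 42, KeyError)  # ZeroDivisionError propagates: it is not in the list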
Well, I found a way, but I'm not quite sure it's what you are looking for.
First of all, the error occurs the moment you call the function, so the body of iferror never even starts! So I made the first parameter of the function a string.
The function gets your "test" as a string and evaluates it with eval(). Here is my code:
def iferror(success: str, failure, *exceptions):
    try:
        # test
        return eval(success)
    except exceptions or Exception:
        return failure

iferror("1/0", "Hi there!")
I hope I could help you.
I have an outer function that calls an inner function by passing the arguments along. Is it possible to test that both functions throw the same exception/error without knowing the exact error type?
I'm looking for something like:
def test_invalidInput_throwsSameError(self):
    arg = 'invalidarg'
    self.assertRaisesSameError(
        innerFunction(arg),
        outerFunction(arg)
    )
Assuming you're using unittest (and Python 2.7 or newer), and that you're not doing something pathological like raising old-style class instances as errors, you can get the exception from the error context if you use assertRaises as a context manager.
with self.assertRaises(Exception) as err_context1:
    innerFunction(arg)

with self.assertRaises(Exception) as err_context2:
    outerFunction(arg)

# Or some other measure of "sameness"
self.assertEqual(
    type(err_context1.exception),
    type(err_context2.exception))
First call the first function and capture the type of exception it raises:
try:
    innerfunction(arg)
except Exception as e:
    expected_error = type(e)  # store the class; the name e is unbound after the except block in Python 3
else:
    expected_error = None
Then assert that the other function raises the same type of exception:
self.assertRaises(expected_error, outerfunction, arg)
I believe this will do what you want; just add it as another method of your TestCase class:
def assertRaisesSameError(self, *funcs, bases=(Exception,)):
    exceptions = []
    for func in funcs:
        with self.assertRaises(bases) as error_context:
            func()
        exceptions.append(error_context.exception)
    for exc in exceptions:
        self.assertEqual(type(exc), type(exceptions[-1]))
I haven't tested this, but it should work. For simplicity, this only takes functions with no arguments, so if your function does have arguments, you'll have to make a wrapper (even just with a lambda). It would not be hard at all to expand this to allow for passing in *args and **kwargs.
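For example, a call using it might look like this (a sketch, reusing the names from the first test):

def test_invalidInput_throwsSameError(self):
    arg = 'invalidarg'
    self.assertRaisesSameError(
        lambda: innerFunction(arg),
        lambda: outerFunction(arg),
    )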
Let me know of any changes anyone thinks should be made.
I am writing a class in Python, and part of the code deals with a server. Therefore I need to deal with exceptions like ExpiredSession or ConnectionError.
Instead of writing exception-handling code for every try/except block, I have a single function in the class to deal with the exceptions, something like this (inside the class definition):
def job_a(self):
    try:
        ...  # do something
    except Exception as e:
        # maybe print the error on screen
        self.exception_handling(e)

def job_b(self):
    try:
        ...  # do something else
    except Exception as e:
        # maybe print the error on screen
        self.exception_handling(e)

def exception_handling(self, e):
    if isinstance(e, ExpiredSession):
        # deal with expired session
        self.reconnect()
    elif isinstance(e, ConnectionError):
        # deal with connection error
        ...
    else:
        # other exceptions
        ...
I am not sure if this kind of code causes any problems, because I haven't seen code that does this. For example, a possible memory leak? (I notice the memory usage grows, though slowly, as more and more errors/exceptions occur, and eventually I have to restart the process before it eats all my memory. I'm not sure this is the cause.)
Is it a good practice to pass exceptions to a single function?
This is a good use case for a context manager. You can see some examples of using context managers for error handling here. The contextmanager decorator allows you to write context managers concisely in the form of a single function. Here's a simple example:
import contextlib

class Foo(object):
    def meth1(self):
        with self.errorHandler():
            1/0

    def meth2(self):
        with self.errorHandler():
            2 + ""

    def meth3(self):
        with self.errorHandler():
            # an unhandled ("unexpected") kind of exception
            ""[3]

    @contextlib.contextmanager
    def errorHandler(self):
        try:
            yield
        except TypeError:
            print "A TypeError occurred"
        except ZeroDivisionError:
            print "Divide by zero occurred"
Then:
>>> x = Foo()
>>> x.meth1()
Divide by zero occurred
>>> x.meth2()
A TypeError occurred
The with statement allows you to "offload" the error handling into a separate function where you catch the exceptions and do what you like with them. In your "real" functions (i.e., the functions that do the work but may raise the exceptions), you just need a with statement instead of an entire block of complicated try/except statements.
An additional advantage of this approach is that if an unforeseen exception is raised, it will propagate up normally with no extra effort:
>>> x.meth3()
Traceback (most recent call last):
File "<pyshell#394>", line 1, in <module>
x.meth3()
File "<pyshell#389>", line 12, in meth3
""[3]
IndexError: string index out of range
In your proposed solution, on the other hand, the exception is already caught in each function and the actual exception object is passed to the handler. If the handler gets an unexpected error, it would have to manually reraise it. Using a context manager, unexpected exceptions keep their ordinary behavior with no extra work required.
The documentation for the raise statement with no arguments says
If no expressions are present, raise re-raises the last exception that was active in the current scope.
I used to think that meant that the current function had to be executing an except clause. After reading this question and experimenting a little, I think it means that any function on the stack has to be executing an except clause, but I'm not sure. Also, I've realized I have no idea how the stack trace works with a no-arg raise:
def f():
    try:
        raise Exception
    except:
        g()

def g():
    raise

f()
produces
Traceback (most recent call last):
File "foo", line 10, in <module>
f()
File "foo", line 5, in f
g()
File "foo", line 3, in f
raise Exception
Exception
That doesn't look like the stack at the time of the initial raise, or the stack at the time of the re-raise, or the concatenation of both stacks, or anything I can make sense of.
Am I right about a no-arg raise looking for any function on the stack executing an except clause? Also, how does the stack trace work on a reraise?
When you raise without arguments, the interpreter looks for the last exception raised and handled. It then acts the same as if you used raise with the most recent exception type, value and traceback.
This is stored in the interpreter state for the current thread, and the same information can be retrieved using sys.exc_info(). By 'handled' I mean that an except clause caught the exception. Quoting the try statement documentation:
Before an except clause's suite is executed, details about the exception are assigned to three variables in the sys module: sys.exc_type receives the object identifying the exception; sys.exc_value receives the exception's parameter; sys.exc_traceback receives a traceback object (see section The standard type hierarchy) identifying the point in the program where the exception occurred. These details are also available through the sys.exc_info() function, which returns a tuple (exc_type, exc_value, exc_traceback).
See the implementation notes in the Python evaluation loop (C code), specifically:
The second bullet was for backwards compatibility: it was (and is) common to have a function that is called when an exception is caught, and to have that function access the caught exception via sys.exc_ZZZ. (Example: traceback.print_exc()).
The traceback reflects how you came to the re-raise accurately. It is the current stack (line 10 calling f(), line 5 calling g()) plus the original location of the exception raised: line 3.
It turns out Python uses a surprising way of building tracebacks. Rather than building the whole stack trace on exception creation (like Java) or when an exception is raised (like I used to think), Python builds up a partial traceback one frame at a time as the exception bubbles up.
Every time an exception bubbles up to a new stack frame, as well as when an exception is raised with the one-argument form of raise (or the two-argument form, on Python 2), the Python bytecode interpreter loop executes PyTraceBack_Here to add a new head to the linked list of traceback objects representing the stack trace. (0-argument raise, and 3-argument raise on Python 2, skip this step.)
Python maintains a per-thread stack of exceptions (and tracebacks) suspended by except and finally blocks that haven't finished executing. 0-argument raise restores the exception (and traceback) represented by the top entry on this stack, even if the except or finally is in a different function.
When f executes its raise:
raise Exception
Python builds a traceback corresponding to just that line:
File "foo", line 3, in f
raise Exception
When g executes 0-argument raise, this traceback is restored, but no entry is added for the 0-argument raise line.
Afterwards, as the exception bubbles up through the rest of the stack, entries for the g() and f() calls are added to the stack trace, resulting in the final stack trace that gets displayed:
Traceback (most recent call last):
File "foo", line 10, in <module>
f()
File "foo", line 5, in f
g()
File "foo", line 3, in f
raise Exception
Exception
The following piece of code might help you understand how the raise keyword works:
def fun(n):
    try:
        return 0 / n
    except:
        print('an exception raised:')
        raise

try:
    fun(0)
except:
    print('cannot divide by zero')

try:
    fun('0')
except:
    print('cannot divide by a string')
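Running it prints:

an exception raised:
cannot divide by zero
an exception raised:
cannot divide by a string

Both calls land in fun's except clause first; the bare raise then re-raises the exception currently being handled (ZeroDivisionError for fun(0), TypeError for fun('0')), which the outer try/except blocks catch.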