I'm trying to catch an exception thrown in the caller of a generator:
class MyException(Exception):
    pass

def gen():
    for i in range(3):
        try:
            yield i
        except MyException:
            print 'handled exception'

for i in gen():
    print i
    raise MyException
This outputs
$ python x.py
0
Traceback (most recent call last):
  File "x.py", line 14, in <module>
    raise MyException
__main__.MyException
when I was intending for it to output
$ python x.py
0
handled exception
1
handled exception
2
handled exception
In retrospect, I think this is because the caller has a different stack from the generator, so the exception isn't bubbled up to the generator. Is that correct? Is there some other way to catch exceptions raised in the caller?
Aside: I can make it work using generator.throw(), but that requires modifying the caller:
def gen():
    for i in range(3):
        try:
            yield i
        except MyException:
            print 'handled exception'
            yield

import sys

g = gen()
for i in g:
    try:
        print i
        raise MyException
    except:
        g.throw(*sys.exc_info())
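In Python 3, generator.throw() also accepts a single exception instance, so the sys.exc_info() unpacking isn't needed. A self-contained Python 3 sketch of the same workaround:

# Python 3 variant: pass the caught exception instance straight to throw().
class MyException(Exception):
    pass

def gen():
    for i in range(3):
        try:
            yield i
        except MyException:
            print('handled exception')
            yield  # give throw() something to return so the next i isn't consumed here

g = gen()
for i in g:
    try:
        print(i)
        raise MyException
    except MyException as e:
        g.throw(e)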
You may be thinking that when execution hits yield in the generator, the generator executes the body of the for loop, sort of like a Ruby function with yield and a block. That's not how things work in Python.
When execution hits yield, the generator's stack frame is suspended and removed from the stack, and control returns to the code that (implicitly) called the generator's next method. That code then enters the loop body. At the time the exception is raised, the generator's stack frame is not on the stack, and the exception does not go through the generator as it bubbles up.
The generator has no way to respond to this exception.
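You can see this for yourself by printing the call stack from inside the loop body: the generator's frame does not appear there. A small sketch using the standard traceback module (the generator here is a trivial stand-in):

import traceback

def countdown():
    for i in range(3):
        yield i

for i in countdown():
    # countdown()'s suspended frame does not appear in the stack printed
    # here, which is why an exception raised in this loop body never
    # passes through the generator.
    traceback.print_stack()
    break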
You might also have gotten confused - as I was just now when arriving here - by the yield used in context managers (contextlib.contextmanager).
Their usage can be:
from contextlib import contextmanager

@contextmanager
def mycontext():
    try:
        yield
    except MyException:
        print 'handled exception'
So my solution to a case similar to the one described above is:
def gen():
    for i in range(3):
        yield i

for ii in gen():
    with mycontext():
        print ii
        raise MyException
Which gives the expected output and uses all of the yields.
A bit late to the party here, but maybe someone with similar knots in their minds will find it helpful.
Note: putting the context manager INSIDE the generator would be the same mistake as using try ... except there!
For example, my function calls may look like the following:
def parse(self, text):
    ...
    return self.parse_helper(text)

@staticmethod
def parse_helper(text):
    ...
    return normalize(text)

@staticmethod
def normalize(text):
    ...
    try:
        ...
    except:
        raise ValueError('normalize failed.')
If 'parse' is the function provided for users to call, and an exception occurs in normalize(), the whole program terminates. To avoid this, and to let users decide what to do when an exception occurs, do I have to add try ... except blocks to both 'parse_helper' and 'parse', and have users wrap their calls to 'parse' in try ... except as well?
What's the normal practice for handling this? If there are a few more layers of function calls than the three shown above, do I have to use a try ... except block in each layer in order to pass the handling of the exception up to the end users at the very top?
Passing a parameter that decides whether to raise or stay silent about an exception is an established practice; os.makedirs does this with its exist_ok parameter, for example.
You could do
def parse(self, text, error_silent=False):
    ...
    try:
        return self.parse_helper(text)
    except ValueError as e:
        if error_silent:
            return None
        raise e
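A self-contained sketch of how a caller might use such a flag (the Parser class here is a simplified stand-in, and the normalize rule is made up for illustration):

class Parser(object):
    def parse_helper(self, text):
        return normalize(text)

    def parse(self, text, error_silent=False):
        try:
            return self.parse_helper(text)
        except ValueError:
            if error_silent:
                return None
            raise

def normalize(text):
    # Hypothetical rule: blank input is an error.
    if not text.strip():
        raise ValueError('normalize failed.')
    return text.lower()

parser = Parser()
print(parser.parse("Hello"))                   # 'hello'
print(parser.parse("   ", error_silent=True))  # None, no traceback for the caller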
Is it possible to tell if there was an exception once you're in the finally clause? Something like:
try:
    funky code
finally:
    if ???:
        print('the funky code raised')
I'm looking to make something like this more DRY:
try:
    funky code
except HandleThis:
    # handle it
    raised = True
except DontHandleThis:
    raised = True
    raise
else:
    raised = False
finally:
    logger.info('funky code raised %s', raised)
I don't like that it requires catching an exception you don't intend to handle, just to set a flag.
Since some comments are asking for less "M" in the MCVE, here is some more background on the use-case. The actual problem is about escalation of logging levels.
The funky code is third party and can't be changed.
The failure exception and stack trace does not contain any useful diagnostic information, so using logger.exception in an except block is not helpful here.
If the funky code raised then some information which I need to see has already been logged, at level DEBUG. We do not and can not handle the error, but want to escalate the DEBUG logging because the information needed is in there.
The funky code does not raise, most of the time. I don't want to escalate logging levels for the general case, because it is too verbose.
Hence, the code runs under a log capture context (which sets up custom handlers to intercept log records) and some debug info gets re-logged retrospectively:
try:
    with LogCapture() as log:
        funky_code()  # <-- third party badness
finally:
    # log events are buffered in memory. if there was an exception,
    # emit everything that was captured at a WARNING level
    for record in log.captured:
        if <there was an exception>:
            log_fn = mylogger.warning
        else:
            log_fn = getattr(mylogger, record.levelname.lower())
        log_fn(record.msg, record.args)
Using a contextmanager
You could use a custom contextmanager, for example:
class DidWeRaise:
    __slots__ = ('exception_happened', )  # instances will take less memory

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        # If no exception happened the `exc_type` is None
        self.exception_happened = exc_type is not None
And then use that inside the try:
try:
    with DidWeRaise() as error_state:
        # funky code
finally:
    if error_state.exception_happened:
        print('the funky code raised')
It's still an additional variable but it's probably a lot easier to reuse if you want to use it in multiple places. And you don't need to toggle it yourself.
Using a variable
In case you don't want the context manager, I would reverse the logic of the trigger and toggle it only in case no exception has happened. That way you don't need an except case for exceptions that you don't want to handle. The most appropriate place is the else clause, which is entered in case the try didn't throw an exception:
exception_happened = True
try:
    # funky code
except HandleThis:
    # handle this kind of exception
else:
    exception_happened = False
finally:
    if exception_happened:
        print('the funky code raised')
And as already pointed out, instead of having a "toggle" variable you could replace it (in this case) with the desired logging function:
mylog = mylogger.warning
try:
    with LogCapture() as log:
        funky_code()
except HandleThis:
    # handle this kind of exception
else:
    # In case absolutely no exception was thrown in the try we can log at debug level
    mylog = mylogger.debug
finally:
    for record in log.captured:
        mylog(record.msg, record.args)
Of course it would also work if you put it at the end of your try (as other answers here suggested) but I prefer the else clause because it has more meaning ("that code is meant to be executed only if there was no exception in the try block") and may be easier to maintain in the long run. Although it's still more to maintain than the context manager because the variable is set and toggled in different places.
Using sys.exc_info (works only for unhandled exceptions)
The last approach I want to mention is probably not useful for you but maybe useful for future readers who only want to know if there's an unhandled exception (an exception that was not caught in any except block or has been raised inside an except block). In that case you can use sys.exc_info:
import sys

try:
    # funky code
except HandleThis:
    pass
finally:
    if sys.exc_info()[0] is not None:
        # only entered if there's an *unhandled* exception, e.g. NOT a HandleThis exception
        print('funky code raised')
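A quick self-contained check of that behaviour in Python 3, using ValueError and KeyError as stand-ins for the handled and unhandled cases:

import sys

def probe(raise_unhandled):
    try:
        if raise_unhandled:
            raise KeyError('not handled below')
        raise ValueError('handled below')
    except ValueError:
        pass
    finally:
        # (None, None, None) when the exception was handled above;
        # the live exception info while one is still propagating.
        print('in finally:', sys.exc_info()[0])

probe(raise_unhandled=False)     # in finally: None
try:
    probe(raise_unhandled=True)  # in finally: <class 'KeyError'>
except KeyError:
    pass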
raised = True
try:
    funky code
    raised = False
except HandleThis:
    # handle it
finally:
    logger.info('funky code raised %s', raised)
Given the additional background information added to the question about selecting a log level, this seems very easily adapted to the intended use-case:
mylog = WARNING
try:
    funky code
    mylog = DEBUG
except HandleThis:
    # handle it
finally:
    mylog(...)
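Spelled out with the standard logging module, under the assumption that "escalation" just means picking between logger.debug and logger.warning (the LogCapture details from the question are left out of this sketch):

import logging

logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger(__name__)

def run(funky_code):
    log_fn = logger.warning       # assume the worst up front
    try:
        funky_code()
        log_fn = logger.debug     # only reached if nothing was raised
    except ValueError:            # stand-in for HandleThis
        pass                      # handle it
    finally:
        log_fn('funky code raised %s', log_fn is logger.warning)

def ok():
    pass

def bad():
    raise ValueError('boom')

run(ok)   # logged at DEBUG
run(bad)  # logged at WARNING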
You can easily assign your caught exception to a variable and use it in the finally block, e.g.:
>>> x = 1
>>> error = None
>>> try:
...     x.foo()
... except Exception as e:
...     error = e
... finally:
...     if error is not None:
...         print(error)
...
'int' object has no attribute 'foo'
Okay, so what it sounds like you actually just want to either modify your existing context manager, or use a similar approach: logbook actually has something called a FingersCrossedHandler that would do exactly what you want. But you could do it yourself, like:
import logging
from contextlib import contextmanager

@contextmanager
def LogCapture():
    # your existing buffer code here
    level = logging.WARN
    try:
        yield
    except UselessException:
        level = logging.DEBUG
        raise  # Or don't, if you just want it to go away
    finally:
        pass  # emit logs here
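For completeness, here is one way the whole idea might be fleshed out using only the standard library; the BufferHandler, the logger names, and the escalate-to-WARNING policy are all assumptions for this sketch, not the question's actual LogCapture:

import logging
from contextlib import contextmanager

class BufferHandler(logging.Handler):
    """Collects log records in memory instead of emitting them."""
    def __init__(self):
        super().__init__()
        self.records = []

    def emit(self, record):
        self.records.append(record)

@contextmanager
def log_capture(captured_logger, output_logger):
    """Buffer everything logged inside the block; if the block raises,
    re-emit every buffered record at WARNING, otherwise at its own level."""
    handler = BufferHandler()
    captured_logger.addHandler(handler)
    old_propagate = captured_logger.propagate
    captured_logger.propagate = False  # keep captured records out of the normal handlers
    escalate = False
    try:
        yield
    except Exception:
        escalate = True
        raise
    finally:
        captured_logger.propagate = old_propagate
        captured_logger.removeHandler(handler)
        for record in handler.records:
            level = logging.WARNING if escalate else record.levelno
            output_logger.log(level, record.getMessage())

logging.basicConfig(level=logging.WARNING)  # DEBUG re-emits stay quiet, escalated ones show
noisy = logging.getLogger("third_party")
noisy.setLevel(logging.DEBUG)
mylogger = logging.getLogger("mine")

def funky_code():
    noisy.debug("diagnostic detail you normally never see")
    raise RuntimeError("third party badness")

try:
    with log_capture(noisy, mylogger):
        funky_code()
except RuntimeError:
    pass  # the buffered DEBUG line was re-logged at WARNING and is now visible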
Original Response
You're thinking about this a bit sideways.
You do intend to handle the exception - you're handling it by setting a flag. Maybe you don't care about anything else (which seems like a bad idea), but if you care about doing something when an exception is raised, then you want to be explicit about it.
The fact that you're setting a variable, but you want the exception to continue on means that what you really want is to raise your own specific exception, from the exception that was raised:
class MyPkgException(Exception): pass
class MyError(MyPkgException): pass  # If there's another exception type, you can also inherit from that

def do_the_badness():
    try:
        raise FileNotFoundError('Or some other code that raises an error')
    except FileNotFoundError as e:
        raise MyError('File was not found, doh!') from e
    finally:
        do_some_cleanup()

try:
    do_the_badness()
except MyError as e:
    print('The error? Yeah, it happened')
This solves:
Explicitly handling the exception(s) that you're looking to handle
Making the stack traces and original exceptions available
Allowing your code that's going to handle the original exception somewhere else to handle your exception that's thrown
Allowing some top-level exception handling code to just catch MyPkgException to catch all of your exceptions so it can log something and exit with a nice status instead of an ugly stack trace
If it was me, I'd do a little re-ordering of your code.
raised = False
try:
    # funky code
except HandleThis:
    # handle it
    raised = True
except Exception as ex:
    # Don't Handle This
    raise ex
finally:
    if raised:
        logger.info('funky code was raised')
I've placed the raised boolean assignment outside of the try statement to ensure scope and made the final except statement a general exception handler for exceptions that you don't want to handle.
This style determines whether your code failed. Another approach might be to determine when your code succeeds.
success = False
try:
    # funky code
    success = True
except HandleThis:
    # handle it
    pass
except Exception as ex:
    # Don't Handle This
    raise ex
finally:
    if success:
        logger.info('funky code was successful')
    else:
        logger.info('funky code was raised')
If the exception happened --> put this logic in the except block(s).
If the exception did not happen --> put this logic in the try block, after the point in the code where the exception can occur.
finally blocks should be reserved for "cleanup actions," according to the Python language reference. When a finally clause is present and an exception occurs, the interpreter proceeds as follows: the exception is saved, the finally block is executed first, and then the exception is re-raised.
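A quick illustration of that ordering; the cleanup runs before the exception reaches the caller:

def demo():
    try:
        raise ValueError("boom")
    finally:
        print("cleanup runs first")  # executed before the exception propagates

try:
    demo()
except ValueError as e:
    print("then the caller sees:", e)

# Output:
# cleanup runs first
# then the caller sees: boom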
I am writing in Python 2.7 and have encountered the following situation. I would like to try calling a function up to three times. If all three calls raise errors, I will re-raise the last error I got. If any one of the calls succeeds, I will stop trying and continue immediately.
Here is what I have right now:
output = None
error = None
for _e in range(3):
    error = None
    try:
        print 'trial %d!' % (_e + 1)
        output = trial_function()
    except Exception as e:
        error = e
    if error is None:
        break
if error is not None:
    raise error
Is there a better snippet that achieves the same use case?
Use a decorator:
from functools import wraps

def retry(times):
    def wrapper_fn(f):
        @wraps(f)
        def new_wrapper(*args, **kwargs):
            for i in range(times):
                try:
                    print 'try %s' % (i + 1)
                    return f(*args, **kwargs)
                except Exception as e:
                    error = e
            raise error
        return new_wrapper
    return wrapper_fn
@retry(3)
def foo():
    return 1/0

print foo()
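To see the early-exit behaviour, here is a hypothetical flaky function that fails twice and then succeeds; with the retry decorator defined above it returns on the third attempt instead of raising:

# Assumes the retry decorator above is in scope.
attempts = {'count': 0}

@retry(3)
def flaky():
    attempts['count'] += 1
    if attempts['count'] < 3:
        raise IOError('transient failure')
    return 'succeeded on attempt %d' % attempts['count']

print(flaky())  # the first two failures are swallowed, the third result is returned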
Here is one possible approach:
def attempt(func, times=3):
    for _ in range(times):
        try:
            return func()
        except Exception as err:
            pass
    raise err
A demo, with a print statement added inside the loop:
>>> attempt(lambda: 1/0)
Attempt 1
Attempt 2
Attempt 3
Traceback (most recent call last):
  File "<pyshell#18>", line 1, in <module>
    attempt(lambda: 1/0)
  File "<pyshell#17>", line 8, in attempt
    raise err
ZeroDivisionError: integer division or modulo by zero
If you're using Python 3.x and get an UnboundLocalError, you can adapt as follows:
def attempt(func, times=3):
    to_raise = None
    for _ in range(times):
        try:
            return func()
        except Exception as err:
            to_raise = err
    raise to_raise
This is because err is cleared at the end of the except clause; per the docs:
When an exception has been assigned using as target, it is cleared
at the end of the except clause.
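A minimal demonstration of that clearing behaviour in Python 3:

try:
    raise ValueError('boom')
except ValueError as err:
    pass

print(err)  # raises NameError: name 'err' is not defined -- the name was deleted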
Ignoring the debug output and the ancient Python dialect, this looks good. The only thing I would change is to put it into a function; you could then simply return the result of trial_function(). The error = None then also becomes unnecessary, including the associated checks: if the loop terminates, error must have been set, so you can just raise it. If you don't want a function, consider using else in combination with the for loop and breaking after the first result.
for i in range(3):
    try:
        result = foo()
        break
    except Exception as error:
        pass
else:
    raise error

use_somehow(result)
Of course, the suggestion to use a decorator for the function still holds. You can also apply this locally; the decorator syntax is only syntactic sugar, after all:
# retry from powerfj's answer below
rfoo = retry(3)(foo)
result = rfoo()
I came across a clean way of doing retries: there is a module called retry.
First install the module using
pip install retry
Then import the decorator in your code:
from retry import retry
Use the @retry decorator above the method. We can pass parameters to the decorator; some of them are tries, delay, and the exception type to retry on.
Example
from retry import retry

@retry(AssertionError, tries=3, delay=2)
def retryfunc():
    try:
        ret = False
        assert ret, "Failed"
    except Exception as ex:
        print(ex)
        raise ex
The above code asserts and fails every time, but the retry decorator retries it three times with a delay of 2 seconds between retries. Also, this only retries on assertion failures, since we specified the error type as AssertionError; on any other error the function won't retry.
Is there a way I can catch exceptions in the __enter__ method of a context manager without wrapping the whole with block inside a try?
class TstContx(object):
    def __enter__(self):
        raise Exception("I'd like to catch this exception")

    def __exit__(self, e_typ, e_val, trcbak):
        pass

with TstContx():
    raise Exception("I don't want to catch this exception")
    pass
I know that I can catch the exception within __enter__() itself, but can I access that error from the function that contains the with statement?
On the surface the question Catching exception in context manager __enter__() seems to be about the same thing, but that question is actually about making sure that __exit__ gets called, not about treating the __enter__ code differently from the block that the with statement encloses.
Evidently the motivation should be clearer: the with statement is setting up some logging for a fully automated process. If the program fails before the logging is set up, then I can't rely on the logging to notify me, so I have to do something special. And I'd rather achieve that without adding more indentation, like this:
try:
    with TstContx():
        try:
            print "Do something"
        except Exception:
            print "Here's where I would handle exception generated within the body of the with statement"
except Exception:
    print "Here's where I'd handle an exception that occurs in __enter__ (and I suppose also __exit__)"
Another downside to using two try blocks is that the code that handles the exception from __enter__ comes after the code that handles an exception from the subsequent body of the with block.
You can catch the exception using try/except inside of __enter__, then save the exception instance as an instance variable of the TstContx class, allowing you to access it inside of the with block:
class TstContx(object):
    def __enter__(self):
        self.exc = None
        try:
            raise Exception("I'd like to catch this exception")
        except Exception as e:
            self.exc = e
        return self

    def __exit__(self, e_typ, e_val, trcbak):
        pass

with TstContx() as tst:
    if tst.exc:
        print("We caught an exception: '%s'" % tst.exc)
    raise Exception("I don't want to catch this exception")
Output:
We caught an exception: 'I'd like to catch this exception'
Traceback (most recent call last):
  File "./torn.py", line 20, in <module>
    raise Exception("I don't want to catch this exception")
Exception: I don't want to catch this exception
Not sure why you'd want to do this, though....
You can use contextlib.ExitStack as outlined in this doc example in order to check for __enter__ errors separately:
from contextlib import ExitStack

stack = ExitStack()
try:
    stack.enter_context(TstContx())
except Exception:  # `__enter__` produced an exception.
    pass
else:
    with stack:
        ...  # Here goes the body of the `with`.
I was reading this article and it was showing this interesting bit of code:
class Car(object):
    def _factory_error_handler(self):
        try:
            yield
        except FactoryColorError, err:
            stacktrace = sys.exc_info()[2]
            raise ValidationError(err.message), None, stacktrace

    def _create_customizer_error_handler(self, vin):
        try:
            yield
        except CustomizerError, err:
            self._factory.remove_car(vin)
            stacktrace = sys.exc_info()[2]
            raise ValidationError(err.message), None, stacktrace

    def _update_customizer_error_handler(self, vin):
        try:
            yield
        except CustomizerError, err:
            stacktrace = sys.exc_info()[2]
            raise ValidationError(err.message), None, stacktrace

    def create(self, color, stereo):
        with self._factory_error_handler():
            vin = self._factory.make_car(color)
        with self._create_customizer_error_handler(vin):
            self._customizer.update_car(vin, stereo)
        return vin

    def update(self, vin, color, stereo):
        with self._factory_error_handler():
            self._factory.update_car_color(vin, color)
        with self._update_customizer_error_handler(vin):
            self._customizer.update_car(vin, stereo)
        return
Here, they are yielding with no argument inside a try block, and then using those methods in with blocks. Can someone please explain what is going on here?
The code in the post appears to be wrong. It would make sense if the various error handlers were decorated with contextmanager. Note that, in the post, the code imports contextmanager but doesn't use it. This makes me think the person just made a mistake in creating the post and left contextmanager out of that example. (The later examples in the post do use contextmanager.) I think the code as posted will cause an AttributeError, because the various _error_handler functions aren't context managers and don't have the right __enter__ and __exit__ methods.
With contextmanager, the code makes sense based on the documentation:
At the point where the generator yields, the block nested in the with statement is executed. The generator is then resumed after the block is exited. If an unhandled exception occurs in the block, it is reraised inside the generator at the point where the yield occurred.
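For what it's worth, here is a self-contained Python 3 sketch of what one of those handlers presumably looks like once the @contextmanager decorator is applied; FactoryColorError, ValidationError, and make_car are stand-ins here, and raise ... from replaces the old Python 2 three-argument raise:

from contextlib import contextmanager

class FactoryColorError(Exception): pass  # stand-in for the article's exception
class ValidationError(Exception): pass    # stand-in for the article's exception

@contextmanager
def factory_error_handler():
    try:
        yield  # the body of the `with` statement runs here
    except FactoryColorError as err:
        # `raise ... from` preserves the original exception and traceback,
        # much like the old three-argument raise in the article's code.
        raise ValidationError(str(err)) from err

def make_car(color):
    if color not in ('red', 'blue'):
        raise FactoryColorError('no such color: %s' % color)
    return 'VIN-123'

try:
    with factory_error_handler():
        vin = make_car('plaid')  # raises FactoryColorError inside the block
except ValidationError as exc:
    print('caught:', exc)
    print('chained from:', repr(exc.__cause__))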