Generic Exception Handling in Python the "Right Way"

Sometimes I find myself in the situation where I want to execute several sequential commands like such:
try:
    foo(a, b)
except Exception, e:
    baz(e)

try:
    bar(c, d)
except Exception, e:
    baz(e)
...
This same pattern occurs when exceptions simply need to be ignored.
This feels redundant, and the extra syntax makes the code surprisingly difficult to follow when reading it.
In C, I would have solved this kind of problem easily with a macro, but unfortunately this cannot be done in plain Python.
Question: How can I best reduce the code footprint and increase code readability when coming across this pattern?

You could use the with statement if you have python 2.5 or above:
from __future__ import with_statement
import contextlib

@contextlib.contextmanager
def handler():
    try:
        yield
    except Exception, e:
        baz(e)
Your example now becomes:
with handler():
    foo(a, b)

with handler():
    bar(c, d)
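As an aside: if the goal is purely to ignore the exception (the case mentioned in the question) rather than pass it to a handler, Python 3.4+ ships contextlib.suppress, so no custom context manager is needed. A minimal sketch; the exception types here are just examples:
from contextlib import suppress  # Python 3.4+

# Ignore only the exception types you expect; anything else still propagates.
with suppress(ValueError, KeyError):
    foo(a, b)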

If this is always, always the behaviour you want when a particular function raises an exception, you could use a decorator:
def handle_exception(handler):
    def decorate(func):
        def call_function(*args, **kwargs):
            try:
                func(*args, **kwargs)
            except Exception, e:
                handler(e)
        return call_function
    return decorate

def baz(e):
    print(e)

@handle_exception(baz)
def foo(a, b):
    return a + b

@handle_exception(baz)
def bar(c, d):
    return c.index(d)
Usage:
>>> foo(1, '2')
unsupported operand type(s) for +: 'int' and 'str'
>>> bar('steve', 'cheese')
substring not found

If they're simple one-line commands, you can wrap them in lambdas:
for cmd in [
    (lambda: foo(a, b)),
    (lambda: bar(c, d)),
]:
    try:
        cmd()
    except StandardError, e:
        baz(e)
You could wrap that whole thing up in a function, so it looked like this:
ignore_errors(baz, [
    (lambda: foo(a, b)),
    (lambda: bar(c, d)),
])
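The ignore_errors helper isn't defined in the answer; a minimal sketch of what it might look like, matching the handler-plus-thunks shape used above (Python 2 syntax, to stay consistent with the snippet):
def ignore_errors(handler, thunks):
    # Call each zero-argument callable in turn, routing any failure to the handler.
    for thunk in thunks:
        try:
            thunk()
        except StandardError, e:
            handler(e)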

The best approach I have found is to define a function such as this:
def handle_exception(function, reaction, *args, **kwargs):
    try:
        result = function(*args, **kwargs)
    except Exception, e:
        result = reaction(e)
    return result
But that just doesn't feel or look right in practice:
handle_exception(foo, baz, a, b)
handle_exception(bar, baz, c, d)

You could try something like this. This is vaguely C macro-like.
class TryOrBaz(object):
    def __init__(self, that):
        self.that = that
    def __call__(self, *args):
        try:
            return self.that(*args)
        except Exception, e:
            baz(e)

TryOrBaz(foo)(a, b)
TryOrBaz(bar)(c, d)

In your specific case, you can do this:
try:
    foo(a, b)
    bar(c, d)
except Exception, e:
    baz(e)
Or, you can catch the exception one step above:
try:
    foo_bar()  # This function can throw at several places
except Exception, e:
    baz(e)

Related

How to perform error handling for 200 functions?

Following is a sample function
def temp(data):
    a = sample_function_1(data)
    a = a[0][0]
    b = sample_function_2(data, 'no', 0, 30)
    b = b[-1]
    c = sample_function_3(data, 'thank', 0, None, 15, 5)
    c = c[-2]
    print(a, b, c)
    return a, b, c
Error handling is done inside sample_function_1, sample_function_2, and sample_function_3, so the values returned by these functions cause no problem, but the calculations done on the variables a, b, and c can throw errors.
There are more than 200 functions like this and I need to manage the errors effectively. I know that try/except is an option, but are there other approaches that can be taken to solve this problem?
With 200 functions to change, this is going to be some hard work, but you could try a decorator. You will only be able to intercept one error per function, but it may be a good compromise for your case.
def exception_logger(func):
    def wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except IndexError:
            ...doSomething...
            raise
    return wrapper

@exception_logger
def temp(data):
    ...doSomething...
I think you can just define a function and call this exception_logger function in the except block of any of your functions.
import traceback
from datetime import datetime

def exception_logger():
    exception_log = traceback.format_exc().splitlines()
    exception_log.append(str(datetime.now()))
    print(exception_log)
    return exception_log

def hello():
    try:
        print("Hello World!")
    except Exception as e:
        exception_logger()

Repetitive wrapper functions for logging

I am writing a script that needs to use a class from an external library, do some
operations on instances of that class, and then repeat with some more instances.
Something like this:
import some_library

work_queue = get_items()

for item in work_queue:
    some_object = some_library.SomeClass(item)
    operation_1(some_object)
    # ...
    operation_N(some_object)
However, each of the operations in the loop body can raise some different exceptions.
When these happen I need to log them and skip to the next item. If they raise
some unexpected exception I need to log that before crashing.
I could catch all the exceptions in the main loop, but that would obscure what it does.
So I find myself writing a bunch of wrapper functions that all look kind of similar:
def wrapper_op1(some_object):
    try:
        some_object.method_1()
    except (some_library.SomeOtherError, ValueError) as error_message:
        logger.error("Op1 error on {}".format(some_object.friendly_name))
        return False
    except Exception as error_message:
        logger.error("Unknown error during op1 on {} - crashing: {}".format(some_object.friendly_name, error_message))
        raise
    else:
        return True

# Notice there is a different tuple of anticipated exceptions
# and the message formatting is different
def wrapper_opN(some_object):
    try:
        some_function(some_object.some_attr)
    except (RuntimeError, AttributeError) as error_message:
        logger.error("OpN error on {} with {}".format(some_object.friendly_name, some_object.some_attr, error_message))
        return False
    except Exception as error_message:
        logger.error("Unknown error during opN on {} with {} - crashing: {}".format(some_object.friendly_name, some_object.some_attr, error_message))
        raise
    else:
        return True
And modifying my main loop to be:
for item in work_queue:
    some_object = some_library.SomeClass(item)
    if not wrapper_op1(some_object):
        continue
    # ...
    if not wrapper_opN(some_object):
        continue
This does the job, but it feels like a lot of copy and paste programming with
the wrappers. What would be great is to write a decorator function that could
do all that try...except...else stuff so I could do:
@logged_call(known_exception, known_error_message, unknown_error_message)
def wrapper_op1(some_object):
    some_object.method_1()
The wrapper would return True if the operation succeeds, catch the known exceptions
and log with a specified format, and catch any unknown exceptions for logging before re-raising.
However, I can't seem to fathom how to make the error messages work - I can do it with fixed strings:
from functools import wraps

def logged_call(known_exceptions, s_err, s_fatal):
    def decorate(f):
        @wraps(f)
        def wrapper(*args, **kwargs):
            try:
                f(*args, **kwargs)
                # How to get parameters from args in the message?
            except known_exceptions as error:
                print(s_err.format(error))
                return False
            except Exception as error:
                print(s_fatal.format(error))
                raise
            else:
                return True
        return wrapper
    return decorate
However, my error messages need to get attributes that belong to the decorated function.
Is there some Pythonic way to make this work? Or a different pattern to be using
when dealing with might-fail-in-known-ways functions?
The below tryblock function can be used for this purpose, to implement your logged_call concept and reduce the total amount of code, assuming you have enough checks to overcome the decorator implementation. Folks not used to functional programming in Python may actually find this more difficult to understand than simply writing out the try blocks as you have done. Simplicity, like so many things, is in the eye of the beholder.
Python 2.7 using no imports. This uses the exec statement to create a customized try block that works in a functional pattern.
def tryblock(tryf, *catchclauses, **otherclauses):
    u'return a general try-catch-else-finally block as a function'
    elsef = otherclauses.get('elsef', None)
    finallyf = otherclauses.get('finallyf', None)
    namespace = {'tryf': tryf, 'elsef': elsef, 'finallyf': finallyf, 'func': []}
    for pair in enumerate(catchclauses):
        namespace['e%s' % (pair[0],)] = pair[1][0]
        namespace['f%s' % (pair[0],)] = pair[1][1]
    source = []
    add = lambda indent, line: source.append(' ' * indent + line)
    add(0, 'def block(*args, **kwargs):')
    add(1, "u'generated function executing a try block'")
    add(1, 'try:')
    add(2, '%stryf(*args, **kwargs)' % ('return ' if otherclauses.get('returnbody', elsef is None) else '',))
    for index in xrange(len(catchclauses)):
        add(1, 'except e%s as ex:' % (index,))
        add(2, 'return f%s(ex, *args, **kwargs)' % (index,))
    if elsef is not None:
        add(1, 'else:')
        add(2, 'return elsef(*args, **kwargs)')
    if finallyf is not None:
        add(1, 'finally:')
        add(2, '%sfinallyf(*args, **kwargs)' % ('return ' if otherclauses.get('returnfinally', False) else '',))
    add(0, 'func.append(block)')
    exec '\n'.join(source) in namespace
    return namespace['func'][0]
This tryblock function is general enough to go into a common library, as it is not specific to the logic of your checks. Add to it your logged_call decorator, implemented as (one import here):
import functools

resultof = lambda func: func()  # @ token must be followed by an identifier
@resultof
def logged_call():
    truism = lambda *args, **kwargs: True
    def raisef(ex, *args, **kwargs):
        raise ex
    def impl(exlist, err, fatal):
        return lambda func: \
            functools.wraps(func)(tryblock(func,
                                           (exlist, lambda ex, *args, **kwargs: err(ex, *args, **kwargs) and False),
                                           (Exception, lambda ex, *args, **kwargs: fatal(ex, *args, **kwargs) and raisef(ex)),
                                           elsef=truism))
    return impl  # impl therefore becomes logged_call
Using logged_call as implemented, your two sanity checks look like:
op1check = logged_call((some_library.SomeOtherError, ValueError),
                       lambda _, obj: logger.error("Op1 error on {}".format(obj.friendly_name)),
                       lambda ex, obj: logger.error("Unknown error during op1 on {} - crashing: {}".format(obj.friendly_name, ex.message)))

opNcheck = logged_call((RuntimeError, AttributeError),
                       lambda ex, obj: logger.error("OpN error on {} with {}".format(obj.friendly_name, obj.some_attr, ex.message)),
                       lambda ex, obj: logger.error("Unknown error during opN on {} with {} - crashing: {}".format(obj.friendly_name, obj.some_attr, ex.message)))

@op1check
def wrapper_op1(obj):
    return obj.method_1()

@opNcheck
def wrapper_opN(obj):
    return some_function(obj.some_attr)
Neglecting blank lines, this is more compact than your original code by 10 lines, though at the sunk cost of the tryblock and logged_call implementations; whether it is now more readable is a matter of opinion.
You also have the option to define logged_call itself and all distinct decorators derived from it in a separate module, if that's sensible for your code; and therefore to use each derived decorator multiple times.
You may also find more of the logic structure that you can factor into logged_call by tweaking the actual checks.
But in the worst case, where each check has logic that no other does, you may find that it's more readable to just write out each one like you have already. It really depends.
For completeness, here's a unit test for the tryblock function:
import examplemodule as ex
from unittest import TestCase

class TestTryblock(TestCase):
    def test_tryblock(self):
        def tryf(a, b):
            if a % 2 == 0:
                raise ValueError
            return a + b
        def proc_ve(ex, a, b):
            self.assertIsInstance(ex, ValueError)
            if a % 3 == 0:
                raise ValueError
            return a + b + 10
        def elsef(a, b):
            return a + b + 20
        def finallyf(a, b):
            return a + b + 30
        block = ex.tryblock(tryf, (ValueError, proc_ve))
        self.assertRaises(ValueError, block, 0, 4)
        self.assertRaises(ValueError, block, 6, 4)
        self.assertEqual([5, 16, 7, 18, 9], map(lambda v: block(v, 4), xrange(1, 6)))
        block = ex.tryblock(tryf, (ValueError, proc_ve), elsef=elsef)
        self.assertEqual([25, 16, 27, 18, 29], map(lambda v: block(v, 4), xrange(1, 6)))
        block = ex.tryblock(tryf, (ValueError, proc_ve), elsef=elsef, returnbody=True)
        self.assertEqual([5, 16, 7, 18, 9], map(lambda v: block(v, 4), xrange(1, 6)))
        block = ex.tryblock(tryf, (ValueError, proc_ve), finallyf=finallyf)
        self.assertEqual([5, 16, 7, 18, 9], map(lambda v: block(v, 4), xrange(1, 6)))
        block = ex.tryblock(tryf, (ValueError, proc_ve), finallyf=finallyf, returnfinally=True)
        self.assertEqual([35, 36, 37, 38, 39], map(lambda v: block(v, 4), xrange(1, 6)))

Multiple try codes in one block

I have a problem with my code in the try block.
To make it easy this is my code:
try:
    code a
    code b  # if b fails, it should ignore it and go to c.
    code c  # if c fails, go to d
    code d
except:
    pass
Is something like this possible?
You'll have to make these separate try blocks:
try:
    code a
except ExplicitException:
    pass

try:
    code b
except ExplicitException:
    try:
        code c
    except ExplicitException:
        try:
            code d
        except ExplicitException:
            pass
This assumes you want to run code c only if code b failed.
If you need to run code c regardless, you need to put the try blocks one after the other:
try:
    code a
except ExplicitException:
    pass

try:
    code b
except ExplicitException:
    pass

try:
    code c
except ExplicitException:
    pass

try:
    code d
except ExplicitException:
    pass
I'm using except ExplicitException here because it is never a good practice to blindly ignore all exceptions. You'll be ignoring MemoryError, KeyboardInterrupt and SystemExit as well otherwise, which you normally do not want to ignore or intercept without some kind of re-raise or conscious reason for handling those.
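To illustrate the point with a runnable sketch (not part of the original answer): catching Exception rather than using a bare except still lets KeyboardInterrupt and SystemExit propagate, because they derive from BaseException, not Exception.
import time

try:
    time.sleep(10)        # press Ctrl-C while this sleeps
except Exception:         # ordinary errors would be swallowed here...
    pass                  # ...but KeyboardInterrupt still propagates and stops the program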
You can use the fuckit module.
Wrap your code in a function with the @fuckit decorator:
import fuckit

@fuckit
def func():
    code a
    code b  # if b fails, it should ignore it and go to c.
    code c  # if c fails, go to d
    code d
Extract (refactor) your statements. And use the magic of and and or to decide when to short-circuit.
def a():
    try: # a code
    except: pass # or raise
    else: return True

def b():
    try: # b code
    except: pass # or raise
    else: return True

def c():
    try: # c code
    except: pass # or raise
    else: return True

def d():
    try: # d code
    except: pass # or raise
    else: return True

def main():
    try:
        a() and b() or c() or d()
    except:
        pass
If you don't want to chain (a huge number of) try-except clauses, you may try your codes in a loop and break on the first success.
Example with codes which can be put into functions:
for code in (
    lambda: a / b,
    lambda: a / (b + 1),
    lambda: a / (b + 2),
):
    try:
        print(code())
    except Exception as ev:
        continue
    break
else:
    print("it failed: %s" % ev)
Example with arbitrary codes (statements) directly in the current scope:
for i in 2, 1, 0:
    try:
        if i == 2: print(a / b)
        elif i == 1: print(a / (b + 1))
        elif i == 0: print(a / (b + 2))
        break
    except Exception as ev:
        if i:
            continue
        print("it failed: %s" % ev)
Let's say each code block is already written as a function; then the following can be used to iterate through your list of functions and exit the for-loop, using break, as soon as one of them executes without error.
def a(): code a
def b(): code b
def c(): code c
def d(): code d

for func in [a, b, c, d]:  # change list order to change execution order.
    try:
        func()
        break
    except Exception as err:
        print(err)
        continue
I used "Exception " here so you can see any error printed. Turn-off the print if you know what to expect and you're not caring (e.g. in case the code returns two or three list items (i,j = msg.split('.')).
You could try a for loop
for func, args, kwargs in zip([a, b, c, d],
                              [args_a, args_b, args_c, args_d],
                              [kw_a, kw_b, kw_c, kw_d]):
    try:
        func(*args, **kwargs)
        break
    except:
        pass
This way you can loop over as many functions as you want without making the code look ugly.
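An equivalent, arguably tidier arrangement keeps each function's arguments next to it in a single list of (func, args, kwargs) tuples; the argument values below are placeholders for illustration, not from the original answer:
calls = [
    (a, (1, 2), {}),
    (b, ("data.txt",), {"mode": "strict"}),
    (c, (), {"retries": 3}),
    (d, (), {}),
]

for func, args, kwargs in calls:
    try:
        func(*args, **kwargs)
        break          # stop at the first call that succeeds
    except Exception:
        pass           # otherwise move on to the next one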
I use a different way, with a new variable:
continue_execution = True
try:
    command1
    continue_execution = False
except:
    pass

if continue_execution:
    try:
        command2
    except:
        command3
To add more commands, you just have to add more blocks like this:
try:
    commandn
    continue_execution = False
except:
    pass
I ran into this problem, but in my case the work was being done in a loop, which made it a simple case of issuing continue on success. I think the technique can be reused outside a loop, at least in some cases:
while True:
    try:
        code_a
        break
    except:
        pass

    try:
        code_b
        break
    except:
        pass

    # etc.

    raise NothingSuccessfulError
Like Elazar suggested:
"I think a decorator would fit here."
# decorator
def test(func):
    def inner(*args, **kwargs):
        try:
            func(*args, **kwargs)
        except: pass
    return inner

# code blocks as functions
@test
def code_a(x):
    print(1/x)

@test
def code_b(x):
    print(1/x)

@test
def code_c(x):
    print(1/x)

@test
def code_d(x):
    print(1/x)

# call functions
code_a(0)
code_b(1)
code_c(0)
code_c(4)
output:
1.0
0.25

How to write multiple try statements in one block in python?

I want to do:
try:
    do()
except:
    do2()
except:
    do3()
except:
    do4()
If do() fails, execute do2(); if do2() fails too, execute do3(), and so on.
Best regards
If you really don't care about the exceptions, you could loop over cases until you succeed:
for fn in (do, do2, do3, do4):
    try:
        fn()
        break
    except:
        continue
This at least avoids having to indent once for every case. If the different functions need different arguments you can use functools.partial to 'prime' them before the loop.
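For instance, a sketch of how functools.partial could pre-bind the differing arguments before the loop (the argument values are made up for illustration):
from functools import partial

# Each candidate becomes a zero-argument callable, even though the
# underlying functions take different arguments.
candidates = (partial(do, 1, 2),
              partial(do2, "x"),
              do3,
              partial(do4, retry=True))

for fn in candidates:
    try:
        fn()
        break
    except Exception:
        continue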
I'd write a quick wrapper function first() for this.
usage: value = first([f1, f2, f3, ..., fn], default='All failed')
#!/usr/bin/env python
def first(flist, default=None):
    """Try each function in `flist` until one does not throw an exception, and
    return the return value of that function. If all functions throw exceptions,
    return `default`.

    Args:
        flist - list of functions to try
        default - value to return if all functions fail

    Returns:
        return value of the first function that does not throw an exception, or
        `default` if all throw exceptions.

    TODO: Also accept a list of (f, (exceptions)) tuples, where f is the
    function as above and (exceptions) is a tuple of exceptions that f should
    expect. This allows you to still re-raise unexpected exceptions.
    """
    for f in flist:
        try:
            return f()
        except:
            continue
    else:
        return default

# Testing.
def f():
    raise TypeError

def g():
    raise IndexError

def h():
    return 1

# We skip two exception-throwing functions and return the value of the last.
assert first([f, g, h]) == 1
assert first([f, g, f], default='monty') == 'monty'
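As a sketch of the docstring's TODO (not part of the original answer), each entry could optionally be an (f, expected_exceptions) tuple, with anything unexpected re-raised:
def first_expecting(flist, default=None):
    """Like first(), but an entry may be (f, expected_exceptions);
    exceptions outside the expected tuple are re-raised."""
    for entry in flist:
        f, expected = entry if isinstance(entry, tuple) else (entry, Exception)
        try:
            return f()
        except expected:
            continue
    return default

# g's IndexError is expected and skipped; h then supplies the value.
# A TypeError from f, by contrast, would propagate if f were listed with (f, IndexError).
assert first_expecting([(g, IndexError), h]) == 1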
It seems like a really odd thing to want to do, but I would probably loop over the functions and break out when there were no exception raised:
for func in [do, do2, do3]:
    try:
        func()
    except Exception:
        pass
    else:
        break
Here is the simplest way I found: just embed the next try under the previous except.
try:
    do()
except:
    try:
        do2()
    except:
        do3()
You should specify the type of the exception you are trying to catch each time.
try:
    do()
except TypeError:  # for example, the first one raises TypeError
    do_2()
except KeyError:  # for example, the second one raises KeyError
    do_3()
and so on.
If you want multiple try statements, you can do it like this, including the except statement: extract (refactor) your statements, and use the magic of and and or to decide when to short-circuit.
def a():
    try: # a code
    except: pass # or raise
    else: return True

def b():
    try: # b code
    except: pass # or raise
    else: return True

def c():
    try: # c code
    except: pass # or raise
    else: return True

def d():
    try: # d code
    except: pass # or raise
    else: return True

def main():
    try:
        a() and b() or c() or d()
    except:
        pass
import sys

try:
    f = open('myfile.txt')
    s = f.readline()
    i = int(s.strip())
except OSError as err:
    print("OS error: {0}".format(err))
except ValueError:
    print("Could not convert data to an integer.")
except:
    print("Unexpected error:", sys.exc_info()[0])
    raise

Python Exceptions Control/Flow Issue

I've been working in Python and ran into something that must be a common occurrence.
I have five statements that all fall into a common pitfall of raising
FooException and BarException. I want to run each of them, guarding against
these exceptions but continuing to process even if an exception is raised after
some handling is done. Now, I could do this like so:
try:
    foo()
except (FooException, BarException):
    pass

try:
    bar()
except (FooException, BarException):
    pass

try:
    baz()
except (FooException, BarException):
    pass

try:
    spam()
except (FooException, BarException):
    pass

try:
    eggs()
except (FooException, BarException):
    pass
but that is really verbose and in extreme violation of DRY. A rather brute-force
and obvious solution is something like this:
import string

def wish_i_had_macros_for_this(statements, exceptions, gd, ld):
    """ execute statements inside try/except handling exceptions with gd and ld
    as global dictionary and local dictionary

    statements is a list of strings to be executed as statements
    exceptions is a list of strings that resolve to Exceptions
    gd is a globals() context dictionary
    ld is a locals() context dictionary

    a list containing None or an Exception if an exception that wasn't
    guarded against was raised during execution of the statement for each
    statement is returned
    """
    s = """
try:
    $STATEMENT
except (%s):
    pass
""" % ','.join(exceptions)
    t = string.Template(s)
    code = [t.substitute({'STATEMENT': statement}) for statement in statements]
    elist = list()
    for c in code:
        try:
            exec c in gd, ld
            elist.append(None)
        except Exception, e:
            elist.append(e)
    return elist
With usage along the lines of:
>>> results = wish_i_had_macros_for_this(
        ['foo()', 'bar()', 'baz', 'spam()', 'eggs()'],
        ['FooException', 'BarException'],
        globals(),
        locals())
[None, None, None, SpamException, None]
Is there a better way?
def execute_silently(fn, exceptions=(FooException, BarException)):
    try:
        fn()
    except Exception as e:
        if not isinstance(e, exceptions):
            raise

execute_silently(foo)
execute_silently(bar)
# ...

# or even:
for fn in (foo, bar, ...):
    execute_silently(fn)
What about this?
#!/usr/bin/env python
def foo():
    print "foo"

def bar():
    print "bar"

def baz():
    print "baz"

for f in [foo, bar, baz]:
    try:
        f()
    except (FooException, BarException):
        pass
This version allows statement execution as well:
from contextlib import contextmanager
from functools import partial

@contextmanager
def exec_silent(exc=(StandardError,)):
    try:
        yield
    except exc:
        pass

silent_foobar = partial(exec_silent, (FooException, BarException))

with silent_foobar():
    print 'foo'
    foo()

with silent_foobar():
    print 'bar'
    bar()
