There are ways a programmer can make programming and refactoring easier and simpler, and Python is very good in this area.
I'm curious whether there is a more elegant way to solve my problem than brute-force writing the same code over and over.
Situation:
I'm writing some code in which the same method is called many times in sequence, each time with different arguments.
For example - I have this code:
...
...
my_method(1)
my_method(2)
my_method(3)
my_method(4)
...
my_method(10)
...
So I have this code written and everything works fine, but suddenly I find out that I need to produce a log file, so I have to wrap every one of these calls in a try-except, and the code ends up looking like this:
...
...
try:
    my_method(3)
except Exception as e:
    print_to_file('log.txt', str(e))
...
...
try:
    my_method(8)
except Exception as e:
    print_to_file('log.txt', str(e))
...
...
Do I have a better option than changing every my_method(x) call and wrapping it in a try-except clause? I know it is a mistake of the programmer, who should have thought about this at the beginning, but these situations happen.
EDIT: To address the answers - the code above is a simplified example. In the real code the arguments are not ints but dates, and there is no logic to them, so I can't generate them in a loop. Assume that the arguments can't be generated.
If you're using the logger supplied by Python, you can redirect exception output to the log instead of having to put a ton of try blocks everywhere:
import os, sys
import logging

logger = logging.getLogger(__name__)
handler = logging.StreamHandler(stream=sys.stdout)
logger.addHandler(handler)

def handle_exception(exc_type, exc_value, exc_traceback):
    if issubclass(exc_type, KeyboardInterrupt):
        sys.__excepthook__(exc_type, exc_value, exc_traceback)
        return
    logger.error("Uncaught exception", exc_info=(exc_type, exc_value, exc_traceback))

sys.excepthook = handle_exception

if __name__ == "__main__":
    raise RuntimeError("Test unhandled")
Now if an exception is thrown, you won't need a try block; it will be written to the log regardless.
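If you also want those uncaught exceptions in a file such as the log.txt from the question, you can attach a FileHandler to the same logger. A minimal sketch, assuming log.txt is the file you want:

import logging

logger = logging.getLogger(__name__)
logger.addHandler(logging.FileHandler("log.txt"))  # in addition to, or instead of, the StreamHandler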
You can take advantage of the fact that a function in Python is an object like any other, and write a function that takes in another function, runs it, and logs any exceptions:
import logging

def sloppyRun(func, *args, **kwargs):
    """Runs a function, catching all exceptions
    and writing them to a log file."""
    try:
        return func(*args, **kwargs)  # running the function here
    except:
        logging.exception(func.__name__ + str(args) + str(kwargs))
        # incidentally, the logging module is wonderful. I'd recommend using it.
        # It'll even write the traceback to a file.
And then you can write something like
sloppyRun(my_method, 8) #note the lack of parens for my_method
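The comment above notes that logging can also write the traceback to a file; for that, point the root logger at a file first. A minimal sketch, assuming log.txt is the target file:

import logging

# Send log records (including the tracebacks from logging.exception) to log.txt.
logging.basicConfig(filename="log.txt", level=logging.ERROR)

sloppyRun(my_method, 8)  # any exception is now logged with its traceback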
You could have a context manager or a decorator log what you need, when you need to. If you intend to always log an exception when you use that function, I would suggest going with the simple decorator route, or even a try-except inside that function. If the functions are not in your code, or you don't want them to always log, then I would use a context manager (invoked as with ...:).
A context manager example:
import functools

class LoggerContext():
    def __enter__(self):
        # function that is called on entering the with context
        # we don't need this
        pass

    def __exit__(self, type, value, traceback):
        # If there was an exception, it will be passed to the
        # exit function.
        # type = type of exception
        # value = the string arg of the exception
        # traceback = traceback object for you to extract the traceback if you need to
        if traceback:
            # do something with the exception, like log it etc.
            print(type, value, traceback)
        # If the return value of the exit function is not True, the Python
        # interpreter re-raises the exception. We don't want to re-raise
        # the exception.
        return True

    def __call__(self, f):
        # this is just to make the context manager a decorator,
        # so that you can use @LoggerContext() on a function
        @functools.wraps(f)
        def decorated(*args, **kwds):
            with self:
                return f(*args, **kwds)
        return decorated

@LoggerContext()
def myMethod(test):
    raise FileNotFoundError(test)

def myMethod2(test):
    raise TypeError(test)

myMethod('asdf')

with LoggerContext():
    myMethod2('asdf')
A simple decorator example:
import functools

def LoggerDecorator(f):
    @functools.wraps(f)
    def decorated(*args, **kwds):
        try:
            return f(*args, **kwds)
        except Exception as e:
            # do something with the exception
            print('Exception:', e)
    return decorated

@LoggerDecorator
def myMethod3(test):
    raise IOError(test)

myMethod3('asdf')
I have created the following decorator by looking at some online resources:
import logging

def exception_handler(func):
    def inner_function(*args, **kwargs):
        try:
            output = func(*args, **kwargs)
            return output
        except Exception:
            logging.error('Error in "{}" function'.format(func.__name__))
    return inner_function
Applied to a function, this decorator tries to execute it and, in case of an exception, writes an entry to the log.
Example of use:
@exception_handler
def sum_2_numbers(x, y):
    return x + y

sum_2_numbers(3, 'a')
Is it also possible to allow some action to be executed after the exception, depending on the function to which the decorator is applied?
I mean I want to add some instructions if the exception is raised (for example a rollback after the exception).
Any help?
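One way to do this (a sketch only, along the lines of the capture_exception decorator shown further down; the on_error parameter and the rollback function are hypothetical names) is to let the decorator accept a callback that runs when an exception is caught:

import logging

def exception_handler(on_error=None):
    """Decorator factory: log the exception, then run an optional callback."""
    def decorator(func):
        def inner_function(*args, **kwargs):
            try:
                return func(*args, **kwargs)
            except Exception:
                logging.error('Error in "{}" function'.format(func.__name__))
                if on_error is not None:
                    on_error()  # e.g. a rollback
        return inner_function
    return decorator

def rollback():
    print("rolling back...")

@exception_handler(on_error=rollback)
def sum_2_numbers(x, y):
    return x + y

sum_2_numbers(3, 'a')  # logs the error, then calls rollback()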
I have a class with plenty of static methods using Tornado's coroutine decorator, and I want to add another decorator to catch exceptions and write them to a file:
# my decorator
def lifesaver(func):
    def silenceit(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except Exception as ex:
            # collect info and format it
            res = ' ... '
            # writelog(res)
            print(res)
            return None
    return silenceit
However, it doesn't work with the gen.coroutine decorator:
class SomeClass:
    # This doesn't work!
    # I tried to pass the decorators in different orders,
    # but got no result.
    @staticmethod
    @lifesaver
    @gen.coroutine
    @lifesaver
    def dosomething1():
        raise Exception("Test error!")

    # My decorator works well
    # if it is used without gen.coroutine.
    @staticmethod
    @gen.coroutine
    def dosomething2():
        SomeClass.dosomething3()

    @staticmethod
    @lifesaver
    def dosomething3():
        raise Exception("Test error!")
I understand that Tornado uses the raise Return(...) approach, which is probably based on exceptions, and maybe it somehow blocks the try-except of other decorators... So, how can I use my decorator to handle exceptions with Tornado coroutines?
The answer
Thanks to Martijn Pieters, I got this code working:
def lifesaver(func):
    def silenceit(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except (gen.Return, StopIteration):
            raise
        except Exception as ex:
            # collect info and format it
            res = ' ... '
            # writelog(res)
            print(res)
            raise gen.Return(b"")
    return silenceit
So I only needed to let Tornado's Return exception through. I tried adding the @gen.coroutine decorator to the silenceit function and using yield in it, but that leads to Future objects of Future objects and other strange, unpredictable behaviour.
You are decorating the output of gen.coroutine, because decorators are applied from bottom to top (as they are nested inside one another from top to bottom).
Rather than decorate the coroutine, decorate your function and apply the gen.coroutine decorator to that result:
@gen.coroutine
@lifesaver
def dosomething1():
    raise Exception("Test error!")
Your decorator can't really handle the output that a @gen.coroutine decorated function produces. Tornado relies on exceptions to communicate results (because in Python 2, generators can't use return to return results). You need to make sure you pass through the exceptions Tornado relies on. You should also re-wrap your wrapper function:
from tornado import gen

def lifesaver(func):
    @gen.coroutine
    def silenceit(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except (gen.Return, StopIteration):
            raise
        except Exception as ex:
            # collect info and format it
            res = ' ... '
            # writelog(res)
            print(res)
            raise gen.Return(b"")
    return silenceit
On exception, an empty Return() object is raised; adjust this as needed.
Do yourself a favour and don't use a class that just holds staticmethod functions. Put those functions at the top level of the module instead. Classes are there to combine methods and shared state, not to create a namespace; use modules to create namespaces.
I was wondering, is there a simple magic method in Python that allows customization of the behaviour of an exception-derived object when it is raised? I'm looking for something like __raise__, if that exists. If no such magic method exists, is there any way I could do something like the following (it's just an example to prove my point)?
class SpecialException(Exception):
    def __raise__(self):
        print('Error!')

raise SpecialException()  # this is the part of the code that must stay
Is it possible?
I don't know about such a magic method, but even if it existed, it would just be some piece of code that gets executed before actually raising the exception object. Assuming that it's good practice to raise exception objects that are instantiated in place, you can put such code into the __init__ of the exception. Another workaround: instead of raising your exception directly, you call an error handling method/function that executes the special code and then finally raises the exception.
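A minimal sketch of the first workaround (the print is only a stand-in for whatever the special behaviour should be):

class SpecialException(Exception):
    def __init__(self, *args):
        print('Error!')  # runs whenever the exception object is created
        super().__init__(*args)

raise SpecialException()  # this line stays exactly as in the question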
import time
from functools import wraps

def capture_exception(callback=None, *c_args, **c_kwargs):
    """Run a callback function after an exception has been caught."""
    assert callable(callback), "callback must be callable"
    def _out(func):
        @wraps(func)
        def _inner(*args, **kwargs):
            try:
                res = func(*args, **kwargs)
                return res
            except Exception as e:
                callback(*c_args, **c_kwargs)
                raise e
        return _inner
    return _out

def send_warning():
    print("warning message..............")

class A(object):
    @capture_exception(callback=send_warning)
    def run(self):
        print('run')
        raise SystemError("test the exception-capture callback")
        time.sleep(0.2)

if __name__ == '__main__':
    a = A()
    a.run()
I'm writing a program in Python, and nearly every method in my class is written like this:
def someMethod(self):
    try:
        ...  # the method's real work goes here
    except someException:
        # in case of an exception, do something here,
        # e.g. display a dialog box to inform the user
        # that he has done something wrong
        pass
As the class grows, it is a little bit annoying to write the same try-except block over and over. Is it possible to create some sort of 'global' exception for the whole class? What's the recommended way in Python to deal with this?
Write one or more exception handler functions that, given a function and the exception raised in it, do what you want to do (e.g. display an alert). If you need more than one, write them.
def message(func, e):
    print "Exception", type(e).__name__, "in", func.__name__
    print str(e)
Now write a decorator that applies a given handler to a called function:
import functools

def handle_with(handler, *exceptions):
    try:
        handler, cleanup = handler
    except TypeError:
        cleanup = lambda f, e: None
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            caught = None  # remembered for the cleanup handler; stays None if the call succeeds
            try:
                return func(*args, **kwargs)
            except exceptions or Exception as e:
                caught = e
                return handler(func, e)
            finally:
                cleanup(func, caught)
        return wrapper
    return decorator
This only captures the exceptions you specify. If you don't specify any, Exception is caught. Additionally, the first argument can be a tuple (or other sequence) of two handler functions; the second handler, if given, is called in a finally clause. The value returned from the primary handler is returned as the value of the function call.
Now, given the above, you can write:
@handle_with(message, TypeError, ValueError)
def add(x, y):
    return x + y
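To use the cleanup slot described above, pass a pair of handlers as the first argument. A short sketch (the cleanup function here is just an illustrative name):

def cleanup(func, e):
    # runs in the finally clause; e is None when the call succeeded
    print "Cleaning up after", func.__name__

@handle_with((message, cleanup), TypeError, ValueError)
def add(x, y):
    return x + y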
You could also do this with a context manager:
from contextlib import contextmanager

@contextmanager
def handler(handler, *exceptions):
    try:
        handler, cleanup = handler
    except TypeError:
        cleanup = lambda e: None
    try:
        yield
    except exceptions or Exception as e:
        handler(e)
    else:
        e = None
    finally:
        cleanup(e)
Now you can write:
def message(e):
    print "Exception", type(e).__name__
    print str(e)

def add(x, y):
    with handler(message, TypeError, ValueError):
        return x + y
Note that the context manager doesn't know what function it's in (you can find this out, sorta, using inspect, though this is "magic" so I didn't do it) so it gives you a little less useful information. Also, the context manager doesn't give you the opportunity to return anything in your handler.
I can think of two options:
Write a decorator that wraps each method in a try block.
Write a "dispatcher" method that calls the appropriate method inside a try block, then call that method instead of the individual ones. That is, instead of calling obj.someMethod() or obj.otherMethod(), you call obj.dispatch('someMethod') or obj.dispatch('otherMethod'), where dispatch is a wrapper that contains the try block (a sketch follows below).
Your approach seems like a bit of a strange design, though. It might make more sense to have the dialog-box stuff in some other part of the code, some higher-level event loop that catches errors and displays messages about them.
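A minimal sketch of the dispatcher idea from option 2 (the method and handler names here are just illustrations):

class MyClass(object):
    def dispatch(self, method_name, *args, **kwargs):
        """Call the named method inside a single shared try block."""
        try:
            return getattr(self, method_name)(*args, **kwargs)
        except Exception as e:
            self.show_error_dialog(e)  # hypothetical handler

    def show_error_dialog(self, e):
        print('Something went wrong: {}'.format(e))

    def someMethod(self):
        raise ValueError('bad input')

obj = MyClass()
obj.dispatch('someMethod')  # instead of obj.someMethod()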
In my code, I need to be able to open and close a device properly, and therefore see the need to use a context manager. While a context manager is usually defined as a class with __enter__ and __exit__ methods, there also seems to be the possibility to decorate a function for use as a context manager (see a recent post and another nice example here).
In the following (working) code snippet, I have implemented the two possibilities; one just needs to swap the commented line with the other one:
import time
import contextlib

def device():
    return 42

@contextlib.contextmanager
def wrap():
    print("open")
    yield device
    print("close")
    return

class Wrap(object):
    def __enter__(self):
        print("open")
        return device

    def __exit__(self, type, value, traceback):
        print("close")

#with wrap() as mydevice:
with Wrap() as mydevice:
    while True:
        time.sleep(1)
        print mydevice()
What I try to do is run the code and stop it with CTRL-C. When I use the Wrap class as the context manager, the __exit__ method is called as expected (the text 'close' is printed in the terminal), but when I try the same thing with the wrap function, the text 'close' is not printed to the terminal.
My question: Is there a problem with the code snippet, am I missing something, or why is the line print("close") not called with the decorated function?
The example in the documentation for contextmanager is somewhat misleading. The portion of the function after yield does not really correspond to the __exit__ of the context manager protocol. The key point in the documentation is this:
If an unhandled exception occurs in the block, it is reraised inside the generator at the point where the yield occurred. Thus, you can use a try...except...finally statement to trap the error (if any), or ensure that some cleanup takes place.
So if you want to handle an exception in your contextmanager-decorated function, you need to write your own try that wraps the yield and handle the exceptions yourself, executing cleanup code in a finally clause (or just swallow the exception in except and execute your cleanup after the try/except). For example:
@contextlib.contextmanager
def cm():
    print "before"
    exc = None
    try:
        yield
    except Exception, exc:
        print "Exception was caught"
    print "after"
    if exc is not None:
        raise exc
>>> with cm():
...     print "Hi!"
before
Hi!
after
>>> with cm():
...     print "Hi!"
...     1/0
before
Hi!
Exception was caught
after
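Applied to the wrap() generator from the original question, the cleanup therefore has to sit in a finally clause so that 'close' is printed even when the loop is interrupted with CTRL-C. A sketch:

import contextlib

def device():
    return 42

@contextlib.contextmanager
def wrap():
    print("open")
    try:
        yield device
    finally:
        # runs on normal exit, on exceptions, and on KeyboardInterrupt (CTRL-C)
        print("close")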