Writing Python decorators with "with"

I'm trying to write a decorator that can be used with the with keyword:
# regular code ...
with my_exception_handler():
    # dangerous code ...
# regular code ...
And my_exception_handler would receive a function and wrap it in a huge try/except.
I want to make it a decorator/wrapper because it's a lot of code that I don't want to copy-paste, but I can't figure out where to start. I wrote a regular decorator, and it works on functions, but not on intermediate chunks of code.

The thing that you use with with is a context manager, not a decorator; those are two completely different things.
See http://docs.python.org/release/2.5/whatsnew/pep-343.html for how with and context managers work.
See https://wiki.python.org/moin/PythonDecorators for decorators.
EDIT: see kindall's post for a good example of how to write a simple context manager without having to use a full-fledged class; I didn't have time to amend my answer with such an example :)
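For reference, a minimal class-based version of such a context manager might look like this (an illustrative sketch; returning True from __exit__ is what suppresses the exception):

class my_exception_handler:
    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        if exc_type is not None:
            # the huge try-except body goes here
            print("handled:", exc_value)
        return True  # suppress the exception; drop this to re-raise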

You need to write a context manager, not a decorator. You can very easily do what you want to do using the contextlib.contextmanager decorator.
from contextlib import contextmanager

@contextmanager
def my_exception_handler():
    try:
        yield  # execute the code in the "with" block
    except Exception as e:
        # your exception-handling code goes here
        print(e)

# try it out
with my_exception_handler():
    raise ValueError("this error has value")
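Note that since Python 3.2, context managers built with @contextmanager inherit from contextlib.ContextDecorator, so the very same handler also works as a plain function decorator:

@my_exception_handler()
def risky():
    raise ValueError("also handled")

risky()  # prints the error instead of raising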

After learning about context managers, I took an extended traceback function and turned it into both a decorator and a context manager, with the few snippets below:
# print_exc_plus is the extended-traceback printer this is based on,
# assumed to be defined or imported elsewhere
def traceback_decorator(function):
    def wrap(*args, **kwargs):
        try:
            return function(*args, **kwargs)
        except:
            print_exc_plus()
    return wrap

def traceback_wrapper(function=None, *args, **kwargs):
    context = _TracebackContext()
    if function is None:
        return context
    with context:
        function(*args, **kwargs)

class _TracebackContext(object):
    def __enter__(self):
        pass

    def __exit__(self, exc_type, exc_value, traceback):
        if exc_type:
            print_exc_plus()
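Usage might look like this (assuming print_exc_plus, the extended-traceback printer mentioned above, is defined in scope):

@traceback_decorator
def broken():
    raise RuntimeError("boom")

broken()  # prints the extended traceback and returns None

with traceback_wrapper():  # returns a _TracebackContext
    pass  # exceptions raised here are printed, then re-raised,
          # since __exit__ returns None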

Related

Can we mix contextmanager decorator with __enter__() and __exit__() methods in another class inside the same with statement?

In Python 3.8, I'm very familiar with the traditional __enter__ and __exit__ magic methods but new to the @contextlib.contextmanager decorator. Is it possible to mix the two patterns inside a single with statement?
The following (highly contrived) script should explain the problem more clearly. Is there a definition of ContextClass.enter_context_function() and ContextClass.exit_context_function() (I imagine something needs to change inside __init__ as well) that only uses the context_function() function and makes the unit tests pass? Or are these patterns mutually exclusive?
import contextlib

NUMBERS = []

@contextlib.contextmanager
def context_function():
    NUMBERS.append(3)
    yield
    NUMBERS.append(5)

class ContextClass:
    def __init__(self):
        self.numbers = NUMBERS
        self.numbers.append(1)

    def __enter__(self):
        self.numbers.append(2)
        self.enter_context_function()  # should append 3
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        self.exit_context_function()  # should append 5
        self.numbers.append(6)

    def function_call(self):
        self.numbers.append(4)

    def enter_context_function(self):
        # FIX ME!
        pass

    def exit_context_function(self):
        # FIX ME!
        pass

if __name__ == "__main__":
    import unittest

    class TestContextManagerFunctionAndClass(unittest.TestCase):
        def test_context_function_and_class(self):
            with ContextClass() as cc:
                cc.function_call()
            self.assertEqual(NUMBERS, [1, 2, 3, 4, 5, 6])

    unittest.main()
I understand there are better ways to solve a similar problem (specifically, rewriting context_function as a class with its own __enter__ and __exit__ methods), but I'm trying to better understand exactly how the contextmanager decorator works.
No change in the __init__ is necessary. The manual way which "makes the unit tests pass" would be:
def enter_context_function(self):
    self._context_mgr = context_function()
    self._context_mgr.__enter__()

def exit_context_function(self):
    self._context_mgr.__exit__(None, None, None)
However, it's kind of missing the point of context-managers. They're intended to be used in a with-statement.
Also note that, as written, the NUMBERS.append(5) line (the "teardown") may not be reached if the code after yielding raises. It should be written like this:
@contextlib.contextmanager
def context_function():
    NUMBERS.append(3)
    try:
        yield
    finally:
        NUMBERS.append(5)
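If you'd rather not call the dunder methods by hand, contextlib.ExitStack can hold the entered context manager for you; a minimal sketch (it does add a field in __init__, everything else stays as in the question):

import contextlib

class ContextClass:
    def __init__(self):
        self.numbers = NUMBERS
        self.numbers.append(1)
        self._stack = contextlib.ExitStack()

    # __enter__, __exit__, and function_call stay exactly as in the question

    def enter_context_function(self):
        # enter context_function() and keep it open on the stack
        self._stack.enter_context(context_function())

    def exit_context_function(self):
        # closing the stack runs context_function's teardown
        self._stack.close()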

How to define a global error handler in gRPC python

I'm trying to catch any exception that is raised in any servicer, so I can make sure that I only propagate known exceptions and not unexpected ones like ValueError, TypeError, etc.
I'd like to be able to catch any raised error and format it or convert it to another error, to better control the information that is exposed.
I don't want to have to enclose every servicer method in try/except.
I've tried with an interceptor, but I'm not able to catch the errors there.
Is there a way to specify an error handler for the gRPC server, like you can do with Flask or any other HTTP server?
gRPC Python currently doesn't support a server-side global error handler. The interceptor won't execute the server handler inside the intercept_service function, so there is no way to wrap it in try/except.
Also, I found that the gRPC Python server interceptor implementation is different from what was originally proposed in L13-Python-Interceptors.md#server-interceptors. If the implementation had stuck to the original design, we could easily use an interceptor as a global error handler, since it would receive the handler and the request/request_iterator.
# Current Implementation
intercept_service(self, continuation, handler_call_details)
# Original Design
intercept_unary_unary_handler(self, handler, method, request, servicer_context)
intercept_unary_stream_handler(self, handler, method, request, servicer_context)
intercept_stream_unary_handler(self, handler, method, request_iterator, servicer_context)
intercept_stream_stream_handler(self, handler, method, request_iterator, servicer_context)
Please submit a feature request issue to https://github.com/grpc/grpc/issues.
Maybe this will help you :)
import logging
import grpc

logger = logging.getLogger(__name__)

def _wrap_rpc_behavior(handler, fn):
    if handler is None:
        return None

    if handler.request_streaming and handler.response_streaming:
        behavior_fn = handler.stream_stream
        handler_factory = grpc.stream_stream_rpc_method_handler
    elif handler.request_streaming and not handler.response_streaming:
        behavior_fn = handler.stream_unary
        handler_factory = grpc.stream_unary_rpc_method_handler
    elif not handler.request_streaming and handler.response_streaming:
        behavior_fn = handler.unary_stream
        handler_factory = grpc.unary_stream_rpc_method_handler
    else:
        behavior_fn = handler.unary_unary
        handler_factory = grpc.unary_unary_rpc_method_handler

    return handler_factory(fn(behavior_fn,
                              handler.request_streaming,
                              handler.response_streaming),
                           request_deserializer=handler.request_deserializer,
                           response_serializer=handler.response_serializer)

class TracebackLoggerInterceptor(grpc.ServerInterceptor):
    def intercept_service(self, continuation, handler_call_details):
        def latency_wrapper(behavior, request_streaming, response_streaming):
            def new_behavior(request_or_iterator, servicer_context):
                try:
                    return behavior(request_or_iterator, servicer_context)
                except Exception as err:
                    # logged but swallowed here; re-raise or set an error
                    # status on servicer_context if the client should see it
                    logger.exception(err, exc_info=True)
            return new_behavior

        return _wrap_rpc_behavior(continuation(handler_call_details), latency_wrapper)
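To put the interceptor to work, pass it to grpc.server when constructing the server:

from concurrent import futures
import grpc

server = grpc.server(
    futures.ThreadPoolExecutor(max_workers=10),
    interceptors=(TracebackLoggerInterceptor(),),
)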
As some of the previous comments suggested, I tried the metaclass approach, which works quite well.
Attached is a simple example to demonstrate how to intercept the grpc calls.
You could extend this by providing the metaclass with a list of decorators to apply to each function.
Also, it would be wise to be more selective about which methods you apply the wrapper to. A good option is to list the methods of the autogenerated base class and only wrap those (see the sketch after the example below).
from types import FunctionType
from functools import wraps

def wrapper(method):
    @wraps(method)
    def wrapped(*args, **kwargs):
        # do stuff here
        return method(*args, **kwargs)
    return wrapped

class ServicerMiddlewareClass(type):
    def __new__(meta, classname, bases, class_dict):
        new_class_dict = {}
        for attribute_name, attribute in class_dict.items():
            if isinstance(attribute, FunctionType):
                # replace it with a wrapped version
                attribute = wrapper(attribute)
            new_class_dict[attribute_name] = attribute
        return type.__new__(meta, classname, bases, new_class_dict)

# In order to use:
class MyGrpcService(grpc.MyGrpcServicer, metaclass=ServicerMiddlewareClass):
    ...
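A sketch of the selective variant suggested above; my_pb2_grpc and MyGrpcServicer are placeholders for your generated module and servicer base class, and wrapper/FunctionType are the same as in the example:

import my_pb2_grpc  # hypothetical generated module

# only the RPC methods defined on the autogenerated base class
RPC_METHODS = {
    name for name in vars(my_pb2_grpc.MyGrpcServicer)
    if not name.startswith("_")
}

class SelectiveServicerMiddleware(type):
    def __new__(meta, classname, bases, class_dict):
        for name, attribute in list(class_dict.items()):
            if name in RPC_METHODS and isinstance(attribute, FunctionType):
                class_dict[name] = wrapper(attribute)  # wrapper() from above
        return type.__new__(meta, classname, bases, class_dict)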

Python, Tornado: gen.coroutine decorator breaks try-catch in another decorator

I have a class with plenty of static methods using Tornado's coroutine decorator. And I want to add another decorator to catch exceptions and write them to a file:
# my decorator
def lifesaver(func):
    def silenceit(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except Exception as ex:
            # collect info and format it
            res = ' ... '
            # writelog(res)
            print(res)
            return None
    return silenceit
However, it doesn't work with gen.coroutine decorator:
class SomeClass:
    # This doesn't work!
    # I tried to pass decorators in different orders,
    # but got no result.
    @staticmethod
    @lifesaver
    @gen.coroutine
    @lifesaver
    def dosomething1():
        raise Exception("Test error!")

    # My decorator works well
    # if it is used without gen.coroutine.
    @staticmethod
    @gen.coroutine
    def dosomething2():
        SomeClass.dosomething3()

    @staticmethod
    @lifesaver
    def dosomething3():
        raise Exception("Test error!")
I understand that Tornado uses the raise Return(...) approach, which is based on exceptions, and maybe it somehow bypasses the try/except blocks of other decorators... So, how can I use my decorator to handle exceptions with Tornado coroutines?
The answer
Thanks to Martijn Pieters, I got this code working:
def lifesaver(func):
    def silenceit(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except (gen.Return, StopIteration):
            raise
        except Exception as ex:
            # collect info and format it
            res = ' ... '
            # writelog(res)
            print(res)
            raise gen.Return(b"")
    return silenceit
So I only needed to pass through Tornado's Return. I tried adding the @gen.coroutine decorator to the silenceit function and using yield in it, but that leads to Future objects of Future objects and some other strange, unpredictable behaviour.
You are decorating the output of gen.coroutine, because decorators are applied from bottom to top (as they are nested inside one another from top to bottom).
Rather than decorate the coroutine, decorate your function and apply the gen.coroutine decorator to that result:
@gen.coroutine
@lifesaver
def dosomething1():
    raise Exception("Test error!")
Your decorator can't really handle the output that a @gen.coroutine-decorated function produces. Tornado relies on exceptions to communicate results (because in Python 2, generators can't use return to return results). You need to make sure you pass through the exceptions Tornado relies on. You should also re-wrap your wrapper function:
from tornado import gen

def lifesaver(func):
    @gen.coroutine
    def silenceit(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except (gen.Return, StopIteration):
            raise
        except Exception as ex:
            # collect info and format it
            res = ' ... '
            # writelog(res)
            print(res)
            raise gen.Return(b"")
    return silenceit
On exception, an empty Return() object is raised; adjust this as needed.
Do yourself a favour and don't use a class just to put staticmethod functions in it; put those functions at the top level of the module instead. Classes are there to combine methods and shared state, not to create a namespace. Use modules to create namespaces instead.
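As an aside, on Python 3.5+ with native coroutines (which replace gen.coroutine in modern Tornado), a rough analogue of the same wrapper simply awaits the function; a sketch:

import functools
import logging

def lifesaver(func):
    @functools.wraps(func)
    async def silenceit(*args, **kwargs):
        try:
            return await func(*args, **kwargs)
        except Exception:
            # collect info, log it, and swallow the error
            logging.exception("silenced error in %s", func.__name__)
            return b""
    return silenceit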

Do the same change (try-except) on multiple lines

There are ways a programmer can make programming and refactoring easier and simpler, and Python is very good in this area.
I'm curious whether there is a more elegant way to solve my problem than brute-force writing the same code multiple times, again and again.
Situation:
I'm writing code in which the same method is called sequentially many times with different arguments.
For example, I have this code:
...
...
my_method(1)
my_method(2)
my_method(3)
my_method(4)
...
my_method(10)
...
So I have this code written and everything works fine, but suddenly I find out that I need to write a log file, so I have to put try/except around every one of these calls, and the code will look like this:
...
...
try:
    my_method(3)
except Exception as e:
    print_to_file('log.txt', str(e))
...
...
try:
    my_method(8)
except Exception as e:
    print_to_file('log.txt', str(e))
...
...
Do I have a better option than changing every my_method(x) call and putting it into a try/except clause? I know it is a mistake by the programmer, who should have thought about this at the beginning, but these situations happen.
EDIT: In response to the answers: the code above is a simplified example. In the real code the arguments are not ints but dates that follow no pattern, so I can't put the calls into a loop. Assume that the arguments can't be generated.
If you're using the logging module supplied by Python, you can redirect uncaught-exception output to the log instead of having to put a ton of try blocks everywhere:
import sys
import logging

logger = logging.getLogger(__name__)
handler = logging.StreamHandler(stream=sys.stdout)
logger.addHandler(handler)

def handle_exception(exc_type, exc_value, exc_traceback):
    if issubclass(exc_type, KeyboardInterrupt):
        sys.__excepthook__(exc_type, exc_value, exc_traceback)
        return
    logger.error("Uncaught exception", exc_info=(exc_type, exc_value, exc_traceback))

sys.excepthook = handle_exception

if __name__ == "__main__":
    raise RuntimeError("Test unhandled")
Now if an exception is thrown, you won't need a try block; it will be written to the log regardless. (Note that sys.excepthook only fires for exceptions that would otherwise terminate the program, so execution does not continue past the failing call.)
You can take advantage of the fact that a function in Python is an object, and write a function that takes in another function, runs it, and logs any exceptions:
import logging

def sloppyRun(func, *args, **kwargs):
    """Runs a function, catching all exceptions
    and writing them to a log file."""
    try:
        return func(*args, **kwargs)  # running the function here
    except Exception:
        logging.exception(func.__name__ + str(args) + str(kwargs))
        # incidentally, the logging module is wonderful; I'd recommend using it.
        # It'll even write the traceback to a file.
And then you can write something like
sloppyRun(my_method, 8) #note the lack of parens for my_method
You could use either a context manager or a decorator to log what you need, when you need to. If you intend to always log an exception when you use a function, I would suggest the simple decorator route, or even a try/except inside that function itself. If the functions are not in your code, or you don't want them to always log, then I would use a context manager (invoked as with ...:).
A context manager example:
import functools

class LoggerContext():
    def __enter__(self):
        # called on entering the with context;
        # we don't need to do anything here
        pass

    def __exit__(self, type, value, traceback):
        # If there was an exception, it is passed to the exit function:
        # type = type of the exception
        # value = the string arg of the exception
        # traceback = traceback object, in case you need to extract it
        if traceback:
            # do something with the exception, e.g. log it
            print(type, value, traceback)
        # If the return value of the exit function is not True, the Python
        # interpreter re-raises the exception. We don't want to re-raise it.
        return True

    def __call__(self, f):
        # this is just to make the context manager usable as a decorator,
        # so that you can put @LoggerContext() on a function
        @functools.wraps(f)
        def decorated(*args, **kwds):
            with self:
                return f(*args, **kwds)
        return decorated

@LoggerContext()
def myMethod(test):
    raise FileNotFoundError(test)

def myMethod2(test):
    raise TypeError(test)

myMethod('asdf')

with LoggerContext():
    myMethod2('asdf')
A simple decorator example:
import functools

def LoggerDecorator(f):
    @functools.wraps(f)
    def decorated(*args, **kwds):
        try:
            return f(*args, **kwds)
        except Exception as e:
            # do something with the exception
            print('Exception:', e)
    return decorated

@LoggerDecorator
def myMethod3(test):
    raise IOError(test)

myMethod3('asdf')

python custom exception handler for a class

I want to build my application with a Redis cache, but Redis may not be available all the time in our case.
So I hope that if Redis works well, we use it; if it doesn't work, we just log that and ignore it this time.
for example:
try:
    conn.sadd('s', *array)
except:
    ...
Since there are many places where I will run some conn.{rediscommand}, I don't want to use try/except in every place.
So the solution might be:
class softcache(redis.StrictRedis):
    def sadd(self, key, *p):
        try:
            super(softcache, self).sadd(key, *p)
        except:
            ...  # log it
But since Redis has many commands, I would have to wrap them one by one.
Is it possible to define a custom exception handler for a class that handles all the exceptions coming from this class?
Silencing all exceptions by default is probably the worst thing you can do.
Anyway, for your problem you can write a generic wrapper that just redirects to the connection object.
class ReddisWrapper(object):
    conn = conn  # here your Redis connection object

    def __getattr__(self, attr):
        def wrapper(*args, **kwargs):
            # Get the real Redis function
            fn = getattr(self.conn, attr)
            # Execute the function, catching exceptions
            try:
                return fn(*args, **kwargs)
            # Specify here the exceptions you expect
            except:
                log(...)
        return wrapper
And then you would call like this:
reddis = ReddisWrapper()
reddis.do_something(4)
This has not been tested, and will only work with methods. For properties you would need to detect that the attribute is not callable and react appropriately.
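A slightly tightened sketch of the same idea, catching only redis-py's base exception class (redis.RedisError) and taking the connection as a constructor argument; the names here are illustrative:

import logging
import redis

class SoftRedis(object):
    def __init__(self, conn):
        self.conn = conn  # a redis.StrictRedis instance

    def __getattr__(self, attr):
        fn = getattr(self.conn, attr)  # AttributeError for unknown commands

        def wrapper(*args, **kwargs):
            try:
                return fn(*args, **kwargs)
            except redis.RedisError:
                # Redis is down or misbehaving: log it and carry on
                logging.exception("redis command %r failed", attr)

        return wrapper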
Is it always the same exception?
If so, you could write a custom exception-catching and logging decorator.
Something like the following:
def exception_catcher(fn):
    def wrapper(*args, **kwargs):
        try:
            return fn(*args, **kwargs)
        except Exception as e:
            log(e)
    return wrapper
Then just use it around your code (applied at call time, since decorator syntax only works on function definitions):
exception_catcher(conn.sadd)('s', *array)
The comment and link to Exceptions for the whole class suggested by @idanshmu offer more detailed handling of different exceptions per method.
